Sample records for execution monitoring framework

  1. Arcade: A Web-Java Based Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  2. Toward a More Flexible Web-Based Framework for Multidisciplinary Design

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.; Salas, A. O.

    1999-01-01

    In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary design, is defined as a hardware-software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, monitoring, controlling, and displaying the design process. The objective of this research is to explore how Web technology can improve these areas of weakness and lead toward a more flexible framework. This article describes a Web-based system that optimizes and controls the execution sequence of design processes in addition to monitoring the project status and displaying the design results.

  3. A Web-Based Monitoring System for Multidisciplinary Design Projects

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Salas, Andrea O.; Weston, Robert P.

    1998-01-01

    In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary computational environments, is defined as a hardware and software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, displaying, monitoring, and controlling the design process. The objective of this research is to explore how Web technology, integrated with an existing framework, can improve these areas of weakness. This paper describes a Web-based system that optimizes and controls the execution sequence of design processes; and monitors the project status and results. The three-stage evolution of the system with increasingly complex problems demonstrates the feasibility of this approach.

  4. Rule-Based Runtime Verification

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2003-01-01

    We present a rule-based framework for defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time logics, interval logics, forms of quantified temporal logics, and so on. Our logic, EAGLE, is implemented as a Java library and involves novel techniques for rule definition, manipulation and execution. Monitoring is done on a state-by-state basis, without storing the execution trace.
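
    The state-by-state monitoring idea above can be illustrated with a small formula-progression sketch: the current formula is rewritten against each incoming state, so only the residual obligation, never the trace, is kept. This is a minimal illustration of the general technique, not the EAGLE library; the formula classes, the progression rules, and the example property are our own simplifications.

    ```python
    # Minimal sketch (not the EAGLE library): monitor a future-time temporal
    # formula state by state via formula progression, never storing the trace.
    from dataclasses import dataclass

    class F: pass                                  # base class for formulas

    @dataclass(frozen=True)
    class Atom(F): name: str

    @dataclass(frozen=True)
    class Not(F): f: F

    @dataclass(frozen=True)
    class And(F): left: F; right: F

    @dataclass(frozen=True)
    class Or(F): left: F; right: F

    @dataclass(frozen=True)
    class Always(F): f: F

    @dataclass(frozen=True)
    class Eventually(F): f: F

    TRUE, FALSE = Atom("__true__"), Atom("__false__")

    def prog(f, state):
        """Rewrite formula f against one state; the result is the obligation on the rest of the trace."""
        if f in (TRUE, FALSE):
            return f
        if isinstance(f, Atom):
            return TRUE if state.get(f.name, False) else FALSE
        if isinstance(f, Not):                     # only applied to atoms in this sketch
            return FALSE if prog(f.f, state) == TRUE else TRUE
        if isinstance(f, (And, Or)):
            l, r = prog(f.left, state), prog(f.right, state)
            if isinstance(f, And):
                if FALSE in (l, r): return FALSE
                return r if l == TRUE else l if r == TRUE else And(l, r)
            if TRUE in (l, r): return TRUE
            return r if l == FALSE else l if r == FALSE else Or(l, r)
        if isinstance(f, Always):                  # []g  =  g  and  next []g
            now = prog(f.f, state)
            return FALSE if now == FALSE else f if now == TRUE else And(now, f)
        if isinstance(f, Eventually):              # <>g  =  g  or  next <>g
            now = prog(f.f, state)
            return TRUE if now == TRUE else f if now == FALSE else Or(now, f)

    def monitor(formula, trace):
        for i, state in enumerate(trace):          # one state at a time
            formula = prog(formula, state)
            if formula == FALSE:
                return f"violated at state {i}"
        return "no violation observed"

    # Example property: whenever 'request' holds, 'grant' must eventually hold.
    prop = Always(Or(Not(Atom("request")), Eventually(Atom("grant"))))
    print(monitor(prop, [{"request": True}, {}, {"grant": True}, {}]))
    ```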

  5. The SERENITY Runtime Monitoring Framework

    NASA Astrophysics Data System (ADS)

    Spanoudakis, George; Kloukinas, Christos; Mahbub, Khaled

    This chapter describes SERENITY’s approach to runtime monitoring and the framework that has been developed to support it. Runtime monitoring is required in SERENITY in order to check for violations of security and dependability properties which are necessary for the correct operation of the security and dependability solutions that are available from the SERENITY framework. This chapter discusses how such properties are specified and monitored. The chapter focuses on the activation and execution of monitoring activities using S&D Patterns and the actions that may be undertaken following the detection of property violations. The approach is demonstrated in reference to one of the industrial case studies of the SERENITY project.

  6. The SERENITY Runtime Framework

    NASA Astrophysics Data System (ADS)

    Crespo, Beatriz Gallego-Nicasio; Piñuela, Ana; Soria-Rodriguez, Pedro; Serrano, Daniel; Maña, Antonio

    The SERENITY Runtime Framework (SRF) provides support for applications at runtime, by managing S&D Solutions and monitoring the systems’ context. The main functionality of the SRF, amongst others, is to provide S&D Solutions, by means of Executable Components, in response to applications security requirements. Runtime environment is defined in SRF through the S&D Library and Context Manager components. S&D Library is a local S&D Artefact repository, and stores S&D Classes, S&D Patterns and S&D Implementations. The Context Manager component is in charge of storing and management of the information used by the SRF to select the most appropriate S&D Pattern for a given scenario. The management of the execution of the Executable Component, as running realizations of the S&D Patterns, including instantiation, de-activation and control, as well as providing communication and monitoring mechanisms, besides the recovery and reconfiguration aspects, complete the list of tasks performed by the SRF.

  7. A lightweight messaging-based distributed processing and workflow execution framework for real-time and big data analysis

    NASA Astrophysics Data System (ADS)

    Laban, Shaban; El-Desouky, Aly

    2014-05-01

    To achieve rapid, simple, and reliable parallel processing of different types of tasks and big data on any compute cluster, a lightweight messaging-based distributed processing and workflow execution framework is proposed. The framework is based on Apache ActiveMQ and the Simple (or Streaming) Text Oriented Message Protocol (STOMP). ActiveMQ, a popular and powerful open-source persistence messaging and integration-patterns server with scheduler capabilities, acts as the message broker in the framework. STOMP provides an interoperable wire format that allows framework programs to communicate with each other and with ActiveMQ easily. To use the message broker efficiently, a unified message and topic naming pattern is employed. Only three Python programs and a simple library, which unifies and simplifies the use of ActiveMQ and the STOMP protocol, are needed to use the framework. A watchdog program monitors, removes, adds, starts, and stops any machine and/or its tasks when necessary. On every machine, a single dedicated zookeeper program starts the functions or tasks, including the stompShell program, needed to execute the user's workflow. The stompShell instances execute workflow jobs based on received messages. A well-defined, simple, and flexible message structure based on JavaScript Object Notation (JSON) is used to build complex workflow systems; JSON is also used for configuration and for communication between machines and programs. The framework is platform independent, and although it is built in Python, the actual workflow programs or jobs can be implemented in any programming language. The generic framework can be used in small national data centres for processing seismological and radionuclide data received from the International Data Centre (IDC) of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). It could also be extended to monitor the IDC pipeline. The detailed design, implementation, conclusions, and future work of the proposed framework are presented.
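
    As a concrete illustration of the wire-level interaction described above, the sketch below publishes one JSON-encoded task message to an ActiveMQ broker over STOMP 1.2 using only the standard library. It assumes a broker listening on localhost:61613; the credentials, destination name, and message fields are illustrative, not the paper's actual naming pattern or schema.

    ```python
    # Hedged sketch: send one JSON workflow task to ActiveMQ over STOMP 1.2.
    import json
    import socket

    def stomp_frame(command, headers, body=""):
        # STOMP frame: command line, header lines, blank line, body, NUL byte.
        head = "".join(f"{k}:{v}\n" for k, v in headers.items())
        return (f"{command}\n{head}\n{body}\x00").encode("utf-8")

    task = {
        "workflow": "seismic-daily",                  # hypothetical workflow name
        "task": "filter-waveforms",
        "machine": "node01",
        "args": {"station": "ABCD", "band": "0.5-4Hz"},
    }

    with socket.create_connection(("localhost", 61613), timeout=10) as sock:
        sock.sendall(stomp_frame("CONNECT", {
            "accept-version": "1.2", "host": "localhost",
            "login": "admin", "passcode": "admin"}))   # assumed credentials
        print(sock.recv(4096).decode("utf-8", "replace"))   # expect a CONNECTED frame
        sock.sendall(stomp_frame("SEND", {
            "destination": "/topic/workflow.node01.tasks",  # assumed naming pattern
            "content-type": "application/json"},
            json.dumps(task)))
        sock.sendall(stomp_frame("DISCONNECT", {}))
    ```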

  8. Sequentially Executed Model Evaluation Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-20

    Provides a message-passing framework between generic input, model, and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers, which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in the development of models that operate on sequential information, such as time series, where each evaluation is based on prior results combined with new data for the current iteration. It has applications in quality monitoring and was developed as part of the CANARY-EDS software, where real-time water quality data are analyzed for anomalies.
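
    A minimal sketch of the driver pattern described in this record is given below: generic input, model, and output drivers behind a tiny API, stepped through a time series by a batch controller. The class and method names and the 3-sigma anomaly rule are illustrative assumptions, not the SeMe or CANARY-EDS API.

    ```python
    # Hedged sketch of input/model/output drivers stepped by a batch controller.
    from statistics import mean, pstdev

    class ListInputDriver:
        """Feeds one observation per step from an in-memory series."""
        def __init__(self, series): self.series = iter(series)
        def read(self):
            return next(self.series, None)          # None signals end of data

    class AnomalyModelDriver:
        """Flags a value as anomalous if it deviates > 3 sigma from the history so far."""
        def __init__(self): self.history = []
        def step(self, value):
            flagged = (len(self.history) >= 5 and
                       abs(value - mean(self.history)) > 3 * (pstdev(self.history) or 1e-9))
            self.history.append(value)              # prior results + new data, per the abstract
            return {"value": value, "anomaly": flagged}

    class PrintOutputDriver:
        def write(self, step, result): print(f"t={step}: {result}")

    def run_batch(inp, model, out):
        """Batch controller: step input -> model -> output through the discrete domain."""
        step = 0
        while (value := inp.read()) is not None:
            out.write(step, model.step(value))
            step += 1

    run_batch(ListInputDriver([1.0, 1.1, 0.9, 1.0, 1.05, 1.02, 9.7, 1.0]),
              AnomalyModelDriver(), PrintOutputDriver())
    ```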

  9. The Use of Executive Control Processes in Engineering Design by Engineering Students and Professional Engineers

    ERIC Educational Resources Information Center

    Dixon, Raymond A.; Johnson, Scott D.

    2012-01-01

    A cognitive construct that is important when solving engineering design problems is executive control process, or metacognition. It is a central feature of human consciousness that enables one "to be aware of, monitor, and control mental processes." The framework for this study was conceptualized by integrating the model for creative design, which…

  10. Component Framework for Loosely Coupled High Performance Integrated Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Elwasif, W. R.; Bernholdt, D. E.; Shet, A. G.; Batchelor, D. B.; Foley, S.

    2010-11-01

    We present the design and implementation of a component-based simulation framework for the execution of coupled time-dependent plasma modeling codes. The Integrated Plasma Simulator (IPS) provides a flexible, lightweight component model that streamlines the integration of stand-alone codes into coupled simulations. Stand-alone codes are adapted to the IPS component interface specification using a thin wrapping layer implemented in the Python programming language. The framework provides services for inter-component method invocation, configuration, task, and data management, asynchronous event management, simulation monitoring, and checkpoint/restart capabilities. Services are invoked, as needed, by the computational components to coordinate the execution of different aspects of coupled simulations on Massively Parallel Processing (MPP) machines. A common plasma state layer serves as the foundation for inter-component, file-based data exchange. The IPS design principles, implementation details, and execution model will be presented, along with an overview of several use cases.
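
    The sketch below illustrates the thin-wrapper idea described above: a stand-alone code is adapted to a small component interface (init/step/finalize) and asks a framework service to launch its executable, exchanging data only through a common plasma-state file. The interface, service names, and configuration keys are illustrative assumptions, not the actual IPS API.

    ```python
    # Hedged sketch of wrapping a stand-alone solver as a framework component.
    import subprocess

    class FrameworkServices:
        """Stand-in for the framework's task-management service."""
        def launch_task(self, nproc, binary, *args):
            # A real framework would schedule this on MPP resources; here we just run it.
            return subprocess.run([binary, *map(str, args)], check=False).returncode

    class PlasmaComponent:
        """Thin wrapper adapting a hypothetical stand-alone solver to the component interface."""
        def __init__(self, services, config):
            self.services, self.config = services, config
        def init(self, t0):
            self.state_file = self.config["plasma_state"]     # shared plasma-state file
        def step(self, t):
            rc = self.services.launch_task(self.config["nproc"],
                                           self.config["binary"], self.state_file, t)
            if rc != 0:
                raise RuntimeError(f"solver failed at t={t}")
        def finalize(self): pass

    # A coupling driver would alternate calls to several such components,
    # exchanging data only through the common plasma-state file.
    comp = PlasmaComponent(FrameworkServices(),
                           {"binary": "/bin/true", "nproc": 4, "plasma_state": "state.nc"})
    comp.init(0.0)
    comp.step(0.1)
    comp.finalize()
    ```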

  11. Design and implementation of monitoring studies to evaluate the success of ecological restoration on wildlife

    Treesearch

    William M. Block; Alan B. Franklin; James P. Ward; Joseph L. Ganey; Gary C. White

    2001-01-01

    Restoration projects are often developed with little consideration for understanding their effects on wildlife. We contend, however, that monitoring treatment effects on wildlife should be an integral component of the design and execution of any management activity, including restoration. Thus, we provide a conceptual framework for the design and implementation of...

  12. Kepler Science Operations Center Pipeline Framework

    NASA Technical Reports Server (NTRS)

    Klaus, Todd C.; McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Wohler, Bill; Allen, Christopher; Middour, Christopher; Caldwell, Douglas A.; Jenkins, Jon M.

    2010-01-01

    The Kepler mission is designed to continuously monitor up to 170,000 stars at a 30 minute cadence for 3.5 years searching for Earth-size planets. The data are processed at the Science Operations Center (SOC) at NASA Ames Research Center. Because of the large volume of data and the memory and CPU-intensive nature of the analysis, significant computing hardware is required. We have developed generic pipeline framework software that is used to distribute and synchronize the processing across a cluster of CPUs and to manage the resulting products. The framework is written in Java and is therefore platform-independent, and scales from a single, standalone workstation (for development and research on small data sets) to a full cluster of homogeneous or heterogeneous hardware with minimal configuration changes. A plug-in architecture provides customized control of the unit of work without the need to modify the framework itself. Distributed transaction services provide for atomic storage of pipeline products for a unit of work across a relational database and the custom Kepler DB. Generic parameter management and data accountability services are provided to record the parameter values, software versions, and other meta-data used for each pipeline execution. A graphical console allows for the configuration, execution, and monitoring of pipelines. An alert and metrics subsystem is used to monitor the health and performance of the pipeline. The framework was developed for the Kepler project based on Kepler requirements, but the framework itself is generic and could be used for a variety of applications where these features are needed.

  13. Man-Robot Symbiosis: A Framework For Cooperative Intelligence And Control

    NASA Astrophysics Data System (ADS)

    Parker, Lynne E.; Pin, Francois G.

    1988-10-01

    The man-robot symbiosis concept has the fundamental objective of bridging the gap between fully human-controlled and fully autonomous systems to achieve true man-robot cooperative control and intelligence. Such a system would allow improved speed, accuracy, and efficiency of task execution, while retaining the man in the loop for innovative reasoning and decision-making. The symbiont would have capabilities for supervised and unsupervised learning, allowing an increase of expertise in a wide task domain. This paper describes a robotic system architecture facilitating the symbiotic integration of teleoperative and automated modes of task execution. The architecture reflects a unique blend of many disciplines of artificial intelligence into a working system, including job or mission planning, dynamic task allocation, man-robot communication, automated monitoring, and machine learning. These disciplines are embodied in five major components of the symbiotic framework: the Job Planner, the Dynamic Task Allocator, the Presenter/Interpreter, the Automated Monitor, and the Learning System.

  14. An active monitoring method for flood events

    NASA Astrophysics Data System (ADS)

    Chen, Zeqiang; Chen, Nengcheng; Du, Wenying; Gong, Jianya

    2018-07-01

    Timely and active detection and monitoring of a flood event are critical for a quick response, effective decision-making, and disaster reduction. To this end, this paper proposes an active service framework for flood monitoring based on Sensor Web services and an active model for the concrete implementation of the framework. The framework consists of two core components: active warning and active planning. The active warning component is based on a publish-subscribe mechanism implemented by the Sensor Event Service. The active planning component employs the Sensor Planning Service to control the execution of the schemes and models and to plan the model input data. The active model, called SMDSA, defines the quantitative calculation method for five elements (scheme, model, data, sensor, and auxiliary information) as well as their associations. Experimental monitoring of the Liangzi Lake flood in the summer of 2010 is conducted to test the proposed framework and model. The results show that (1) the proposed active service framework is efficient for timely and automated flood monitoring; (2) the active model, SMDSA, provides a quantitative calculation method that moves flood monitoring from manual intervention to automatic computation; and (3) as much preliminary work as possible should be done to take full advantage of the active service framework and the active model.

  15. Mission Data System Java Edition Version 7

    NASA Technical Reports Server (NTRS)

    Reinholtz, William K.; Wagner, David A.

    2013-01-01

    The Mission Data System framework defines closed-loop control system abstractions from State Analysis including interfaces for state variables, goals, estimators, and controllers that can be adapted to implement a goal-oriented control system. The framework further provides an execution environment that includes a goal scheduler, execution engine, and fault monitor that support the expression of goal network activity plans. Using these frameworks, adapters can build a goal-oriented control system where activity coordination is verified before execution begins (plan time), and continually during execution. Plan failures, including violations of safety constraints expressed in the plan, can be handled through automatic re-planning. This version optimizes a number of key interfaces and features to minimize dependencies and performance overhead and to improve reliability. Fault diagnosis and real-time projection capabilities are incorporated. This version enhances earlier versions primarily through optimizations and quality improvements that raise the technology readiness level. Goals explicitly constrain system states over explicit time intervals to eliminate ambiguity about intent, as compared to command-oriented control that only implies persistent intent until another command is sent. A goal network scheduling and verification process ensures that all goals in the plan are achievable before starting execution. Goal failures at runtime can be detected (including predicted failures) and handled by adapted response logic. Responses can include plan repairs (trying an alternate tactic to achieve the same goal), goal shedding, ignoring the fault, cancelling the plan, or safing the system.
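
    A minimal sketch of the plan-time verification idea is shown below: each goal constrains one state variable over an explicit time interval, and the network is checked by rejecting overlapping goals whose constraints cannot hold simultaneously. The data model and the example goals are ours, not the Mission Data System's.

    ```python
    # Hedged sketch: check a goal network for conflicting constraints before execution.
    from dataclasses import dataclass

    @dataclass
    class Goal:
        state_var: str
        t_start: float
        t_end: float
        allowed: set          # values the state variable may take during the interval

    def overlaps(a, b):
        return a.state_var == b.state_var and a.t_start < b.t_end and b.t_start < a.t_end

    def verify_goal_network(goals):
        """Return conflicts: overlapping goals on one state variable with disjoint allowed values."""
        conflicts = []
        for i, a in enumerate(goals):
            for b in goals[i + 1:]:
                if overlaps(a, b) and not (a.allowed & b.allowed):
                    conflicts.append((a, b))
        return conflicts

    plan = [Goal("camera_power", 0, 20, {"on"}),
            Goal("camera_power", 15, 30, {"off"}),      # conflicts with the goal above
            Goal("heater_mode", 0, 30, {"auto", "on"})]
    for a, b in verify_goal_network(plan):
        print("cannot schedule:", a, "vs", b)
    ```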

  16. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Babik, M.; Bhatt, K.; Chand, P.; Collados, D.; Duggal, V.; Fuente, P.; Hayashi, S.; Imamagic, E.; Joshi, P.; Kalmady, R.; Karnani, U.; Kumar, V.; Lapka, W.; Quick, R.; Tarragon, J.; Teige, S.; Triantafyllidis, C.

    2012-12-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO[1] managers, service managers, management), from different middleware providers (ARC[2], dCache[3], gLite[4], UNICORE[5] and VDT[6]), consortiums (WLCG[7], EMI[11], EGI[15], OSG[13]), and operational teams (GOC[16], OMB[8], OTAG[9], CSIRT[10]). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG[27] portal where it is exposed to other clients. This monitoring workflow profits from the interoperability established between the SAM[19] and RSV[20] frameworks. We show how these two distributed structures are capable of uniting technologies and hiding the complexity around them, making them easy to be used by the community. Finally, the different supported deployment strategies, tailored not only for monitoring the entire infrastructure but also for monitoring sites and virtual organizations, are presented and the associated operational benefits highlighted.

  17. A Web-Based System for Monitoring and Controlling Multidisciplinary Design Projects

    NASA Technical Reports Server (NTRS)

    Salas, Andrea O.; Rogers, James L.

    1997-01-01

    In today's competitive environment, both industry and government agencies are under enormous pressure to reduce the time and cost of multidisciplinary design projects. A number of frameworks have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. An examination of current frameworks reveals weaknesses in various areas such as sequencing, displaying, monitoring, and controlling the design process. The objective of this research is to explore how Web technology, in conjunction with an existing framework, can improve these areas of weakness. This paper describes a system that executes a sequence of programs, monitors and controls the design process through a Web-based interface, and visualizes intermediate and final results through the use of Java™ applets. A small sample problem, which includes nine processes with two analysis programs that are coupled to an optimizer, is used to demonstrate the feasibility of this approach.

  18. Grid Task Execution

    NASA Technical Reports Server (NTRS)

    Hu, Chaumin

    2007-01-01

    IPG Execution Service is a framework that reliably executes complex jobs on a computational grid, and is part of the IPG service architecture designed to support location-independent computing. The new grid service enables users to describe the platform on which they need a job to run, which allows the service to locate the desired platform, configure it for the required application, and execute the job. After a job is submitted, users can monitor it through periodic notifications, or through queries. Each job consists of a set of tasks that performs actions such as executing applications and managing data. Each task is executed based on a starting condition that is an expression of the states of other tasks. This formulation allows tasks to be executed in parallel, and also allows a user to specify tasks to execute when other tasks succeed, fail, or are canceled. The two core components of the Execution Service are the Task Database, which stores tasks that have been submitted for execution, and the Task Manager, which executes tasks in the proper order, based on the user-specified starting conditions, and avoids overloading local and remote resources while executing tasks.
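
    The task model described above can be sketched as follows: each task starts only when a user-supplied condition over the states of other tasks holds, which permits parallel branches as well as clean-up tasks that run on failure. The condition style and task names are illustrative, not the IPG Execution Service's actual interface.

    ```python
    # Hedged sketch: execute tasks whose starting conditions are expressions
    # over the states (pending/succeeded/failed) of other tasks.
    def run_tasks(tasks):
        """tasks: {name: (start_condition(states) -> bool, action() -> bool)}"""
        states = {name: "pending" for name in tasks}
        progress = True
        while progress:
            progress = False
            for name, (condition, action) in tasks.items():
                if states[name] == "pending" and condition(states):
                    states[name] = "succeeded" if action() else "failed"
                    progress = True
        return states

    tasks = {
        "stage_input":  (lambda s: True,
                         lambda: True),
        "run_solver":   (lambda s: s["stage_input"] == "succeeded",
                         lambda: False),                        # pretend the solver fails
        "archive":      (lambda s: s["run_solver"] == "succeeded",
                         lambda: True),
        "notify_admin": (lambda s: s["run_solver"] == "failed",  # runs only on failure
                         lambda: True),
    }
    print(run_tasks(tasks))   # archive stays pending; notify_admin succeeds
    ```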

  19. Command and Control Rapid Prototyping Continuum (C2RPC): The Framework for Achieving a New C2 Strategy

    DTIC Science & Technology

    2011-06-01

    [Record excerpt contains briefing-slide and glossary fragments only: Sync Matrix (assessing), J/ADOCS (fires), TBMCS (ATO), executing and monitoring (SA); C2 Strategy Objectives include providing expanded mission management. Glossary entries: T&E, Test and Evaluation; PMW150, Command and Control program office; TBMCS, Theater Battle Management Core System; POR.]

  20. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    NASA Astrophysics Data System (ADS)

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-12-01

    Grids allow users flexible, on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research (CERN). Physicists can submit jobs to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges: they are attractive targets for attackers seeking large computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required, and currently there is no integrated solution for this requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also monitors process behavior with a machine-learning approach to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware.

  1. The Dynamic Multiprocess Framework: Evidence from Prospective Memory with Contextual Variability

    PubMed Central

    Scullin, Michael K.; McDaniel, Mark A.; Shelton, Jill Talley

    2013-01-01

    The ability to remember to execute delayed intentions is referred to as prospective memory. Previous theoretical and empirical work has focused on isolating whether a particular prospective memory task is supported either by effortful monitoring processes or by cue-driven spontaneous processes. In the present work, we advance the Dynamic Multiprocess Framework, which contends that both monitoring and spontaneous retrieval may be utilized dynamically to support prospective remembering. To capture the dynamic interplay between monitoring and spontaneous retrieval we had participants perform many ongoing tasks and told them that their prospective memory cue may occur in any context. Following either a 20-min or a 12-hr retention interval, the prospective memory cues were presented infrequently across three separate ongoing tasks. The monitoring patterns (measured as ongoing task cost relative to a between-subjects control condition) were consistent and robust across the three contexts. There was no evidence for monitoring prior to the initial prospective memory cue; however, individuals who successfully spontaneously retrieved the prospective memory intention, thereby realizing that prospective memory cues could be expected within that context, subsequently monitored. These data support the Dynamic Multiprocess Framework, which contends that individuals will engage monitoring when prospective memory cues are expected, disengage monitoring when cues are not expected, and that when monitoring is disengaged, a probabilistic spontaneous retrieval mechanism can support prospective remembering. PMID:23916951

  2. A framework for addressing implementation gap in global drowning prevention interventions: experiences from Bangladesh.

    PubMed

    Hyder, Adnan A; Alonge, Olakunle; He, Siran; Wadhwaniya, Shirin; Rahman, Fazlur; El Arifeen, Shams

    2014-12-01

    Drowning is the commonest cause of injury-related deaths among under-five children worldwide, and 95% of these deaths occur in low- and middle-income countries (LMICs), where there are implementation gaps in drowning prevention interventions. This article reviews common interventions for drowning prevention, introduces a framework for effective implementation of such interventions, and describes the Saving of Lives from Drowning (SoLiD) Project in Bangladesh, which is based on this framework. A review of the systematic reviews on drowning interventions was conducted, and original research articles were pulled and summarized into broad prevention categories. The implementation framework builds upon two existing frameworks and categorizes the implementation process for drowning prevention interventions into four phases: planning, engaging, executing, and evaluating. Eleven key characteristics are mapped onto these phases. The framework was applied to drowning prevention projects that have been undertaken in some LMICs to illustrate major challenges to implementation. The implementation process for the SoLiD Project in Bangladesh is used as an example to illustrate the practical utilization of the framework. Drowning interventions, such as pool fencing and covering of water hazards, are effective in high-income countries; however, most of these interventions have not been tested in LMICs. The critical components of the four phases of implementing drowning prevention interventions may include: (i) planning: global funding, political will, scale, sustainability, and capacity building; (ii) engaging: coordination and involvement of appropriate individuals; (iii) executing: focused action, multisectoral actions, and quality of execution; and (iv) evaluating: rigorous monitoring and evaluation. Some of the challenges to implementing drowning prevention interventions in LMICs include insufficient funds, lack of technical capacity, and limited coordination among stakeholders and implementers. The SoLiD Project in Bangladesh incorporates some of these lessons and key features of the proposed framework. The framework presented in this paper was a useful tool for implementing drowning prevention interventions in Bangladesh and may be useful for adaptation in drowning and injury prevention programmes of other LMIC settings.

  3. Intelligent sensor and controller framework for the power grid

    DOEpatents

    Akyol, Bora A.; Haack, Jereme Nathan; Craig, Jr., Philip Allen; Tews, Cody William; Kulkarni, Anand V.; Carpenter, Brandon J.; Maiden, Wendy M.; Ciraci, Selim

    2015-07-28

    Disclosed below are representative embodiments of methods, apparatus, and systems for monitoring and using data in an electric power grid. For example, one disclosed embodiment comprises a sensor for measuring an electrical characteristic of a power line, electrical generator, or electrical device; a network interface; a processor; and one or more computer-readable storage media storing computer-executable instructions. In this embodiment, the computer-executable instructions include instructions for implementing an authorization and authentication module for validating a software agent received at the network interface; instructions for implementing one or more agent execution environments for executing agent code that is included with the software agent and that causes data from the sensor to be collected; and instructions for implementing an agent packaging and instantiation module for storing the collected data in a data container of the software agent and for transmitting the software agent, along with the stored data, to a next destination.

  4. Intelligent sensor and controller framework for the power grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akyol, Bora A.; Haack, Jereme Nathan; Craig, Jr., Philip Allen

    Disclosed below are representative embodiments of methods, apparatus, and systems for monitoring and using data in an electric power grid. For example, one disclosed embodiment comprises a sensor for measuring an electrical characteristic of a power line, electrical generator, or electrical device; a network interface; a processor; and one or more computer-readable storage media storing computer-executable instructions. In this embodiment, the computer-executable instructions include instructions for implementing an authorization and authentication module for validating a software agent received at the network interface; instructions for implementing one or more agent execution environments for executing agent code that is included with the software agent and that causes data from the sensor to be collected; and instructions for implementing an agent packaging and instantiation module for storing the collected data in a data container of the software agent and for transmitting the software agent, along with the stored data, to a next destination.

  5. A Collaborative Extensible User Environment for Simulation and Knowledge Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freedman, Vicky L.; Lansing, Carina S.; Porter, Ellen A.

    2015-06-01

    In scientific simulation, scientists use measured data to create numerical models, execute simulations and analyze results from advanced simulators executing on high performance computing platforms. This process usually requires a team of scientists collaborating on data collection, model creation and analysis, and on authorship of publications and data. This paper shows that scientific teams can benefit from a user environment called Akuna that permits subsurface scientists in disparate locations to collaborate on numerical modeling and analysis projects. The Akuna user environment is built on the Velo framework that provides both a rich client environment for conducting and analyzing simulations and a Web environment for data sharing and annotation. Akuna is an extensible toolset that integrates with Velo, and is designed to support any type of simulator. This is achieved through data-driven user interface generation, use of a customizable knowledge management platform, and an extensible framework for simulation execution, monitoring and analysis. This paper describes how the customized Velo content management system and the Akuna toolset are used to integrate and enhance an effective collaborative research and application environment. The extensible architecture of Akuna is also described and demonstrates its usage for creation and execution of a 3D subsurface simulation.

  6. Security and Dependability Solutions for Web Services and Workflows

    NASA Astrophysics Data System (ADS)

    Kokolakis, Spyros; Rizomiliotis, Panagiotis; Benameur, Azzedine; Sinha, Smriti Kumar

    In this chapter we present an innovative approach towards the design and application of Security and Dependability (S&D) solutions for Web services and service-based workflows. Recently, several standards have been published that prescribe S&D solutions for Web services, e.g. OASIS WS-Security. However, the application of these solutions in specific contexts has proven problematic. We propose a new framework for the application of such solutions based on the SERENITY S&D Pattern concept. An S&D Pattern comprises all the necessary information for the implementation, verification, deployment, and active monitoring of an S&D Solution. Thus, system developers may rely on proven solutions that are dynamically deployed and monitored by the SERENITY Runtime Framework. Finally, we further extend this approach to cover the case of executable workflows which are realised through the orchestration of Web services.

  7. Dual pathways to prospective remembering

    PubMed Central

    McDaniel, Mark A.; Umanath, Sharda; Einstein, Gilles O.; Waldum, Emily R.

    2015-01-01

    According to the multiprocess framework (McDaniel and Einstein, 2000), the cognitive system can support prospective memory (PM) retrieval through two general pathways. One pathway depends on top–down attentional control processes that maintain activation of the intention and/or monitor the environment for the triggering or target cues that indicate that the intention should be executed. A second pathway depends on (bottom–up) spontaneous retrieval processes, processes that are often triggered by a PM target cue; critically, spontaneous retrieval is assumed not to require monitoring or active maintenance of the intention. Given demand characteristics associated with experimental settings, however, participants are often inclined to monitor, thereby potentially masking discovery of bottom–up spontaneous retrieval processes. In this article, we discuss parameters of laboratory PM paradigms to discourage monitoring and review recent behavioral evidence from such paradigms that implicate spontaneous retrieval in PM. We then re-examine the neuro-imaging evidence from the lens of the multiprocess framework and suggest some critical modifications to existing neuro-cognitive interpretations of the neuro-imaging results. These modifications illuminate possible directions and refinements for further neuro-imaging investigations of PM. PMID:26236213

  8. XML-Based Visual Specification of Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad

    2001-01-01

    The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.

  9. Thinking as the control of imagination: a conceptual framework for goal-directed systems.

    PubMed

    Pezzulo, Giovanni; Castelfranchi, Cristiano

    2009-07-01

    This paper offers a conceptual framework which (re)integrates goal-directed control, motivational processes, and executive functions, and suggests a developmental pathway from situated action to higher level cognition. We first illustrate a basic computational (control-theoretic) model of goal-directed action that makes use of internal modeling. We then show that by adding the problem of selection among multiple action alternatives motivation enters the scene, and that the basic mechanisms of executive functions, such as inhibition, the monitoring of progress, and working memory, are required for this system to work. Further, we elaborate on the idea that the off-line re-enactment of anticipatory mechanisms used for action control gives rise to (embodied) mental simulations, and propose that thinking consists essentially in controlling mental simulations rather than directly controlling behavior and perceptions. We conclude by sketching an evolutionary perspective of this process, proposing that anticipation leveraged cognition, and by highlighting specific predictions of our model.

  10. A Cloud-Based Simulation Architecture for Pandemic Influenza Simulation

    PubMed Central

    Eriksson, Henrik; Raciti, Massimiliano; Basile, Maurizio; Cunsolo, Alessandro; Fröberg, Anders; Leifler, Ola; Ekberg, Joakim; Timpka, Toomas

    2011-01-01

    High-fidelity simulations of pandemic outbreaks are resource consuming. Cluster-based solutions have been suggested for executing such complex computations. We present a cloud-based simulation architecture that utilizes computing resources both locally available and dynamically rented online. The approach uses the Condor framework for job distribution and management of the Amazon Elastic Computing Cloud (EC2) as well as local resources. The architecture has a web-based user interface that allows users to monitor and control simulation execution. In a benchmark test, the best cost-adjusted performance was recorded for the EC2 H-CPU Medium instance, while a field trial showed that the job configuration had significant influence on the execution time and that the network capacity of the master node could become a bottleneck. We conclude that it is possible to develop a scalable simulation environment that uses cloud-based solutions, while providing an easy-to-use graphical user interface. PMID:22195089

  11. EAGLE Monitors by Collecting Facts and Generating Obligations

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2003-01-01

    We present a rule-based framework, called EAGLE, that has been shown to be capable of defining and implementing a range of finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time and metric temporal logics, interval logics, forms of quantified temporal logics, and so on. A monitor for an EAGLE formula checks if a finite trace of states satisfies the given formula. We present, in detail, an algorithm for the synthesis of monitors for EAGLE. The algorithm is implemented as a Java application and involves novel techniques for rule definition, manipulation, and execution. Monitoring is achieved on a state-by-state basis, avoiding any need to store the input trace of states. Our initial experiments have been successful, as EAGLE detected a previously unknown bug while testing a planetary rover controller.

  12. The Design and Development of an Intelligent Planning Aid

    DTIC Science & Technology

    1986-07-01

    [OCR-damaged record excerpt] ... reasons why widening the scope of TACPLAK's applicability make sense. First, plan execution and monitoring (and the re-planning that then occurs) are ... (report keywords: decision making, tactical planning, taxonomy, problem solving) ... planning aid. It documents the development of a decision-making, planning, and decision-aiding analytical framework comprising a set of models, a generic ...

  13. DCF(Registered)-A JAUS and TENA Compliant Agent-Based Framework for Test and Evaluation of Unmanned Vehicles

    DTIC Science & Technology

    2011-03-01

    functions of the vignette editor include visualizing the state of the UAS team, creating T&E scenarios, monitoring the UAS team performance, and ... These behaviors are then executed by the robot sequentially (Figure 2). A state machine mission editor allows mission builders to use behaviors from the ... include control, robotics, distributed applications, multimedia applications, databases, design patterns, and software engineering. Mr. Lenzi is the ...

  14. A Framework for Resource and Execution Management

    DTIC Science & Technology

    2011-06-01

    [Record excerpt contains report-documentation-page fields only] Topic 4: Information and Knowledge Exploitation. Authors: Abdeslem Boukhtouta, Mohamad Allouche. Performing organization: Defence R&D Canada - Valcartier, 2459 Pie-XI North, Québec, QC, Canada, G3J 1X5.

  15. Processing of the WLCG monitoring data using NoSQL

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.
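
    The kind of computation being moved into the NoSQL layer can be sketched as a map-reduce aggregation of raw job-monitoring records into per-site success rates, as below. The record fields and the aggregation are illustrative; this is not the Experiment Dashboard code.

    ```python
    # Hedged sketch: map-reduce style aggregation of job-monitoring records.
    from collections import defaultdict

    records = [
        {"site": "CERN-PROD", "status": "done"},
        {"site": "CERN-PROD", "status": "failed"},
        {"site": "FZK-LCG2",  "status": "done"},
        {"site": "FZK-LCG2",  "status": "done"},
    ]

    def map_phase(record):
        # emit (key, value) pairs: one completed job, one success flag
        yield record["site"], (1, 1 if record["status"] == "done" else 0)

    def reduce_phase(key, values):
        total = sum(v[0] for v in values)
        done = sum(v[1] for v in values)
        return {"site": key, "jobs": total, "success_rate": done / total}

    grouped = defaultdict(list)                 # shuffle/group step
    for record in records:
        for key, value in map_phase(record):
            grouped[key].append(value)

    for key, values in grouped.items():
        print(reduce_phase(key, values))
    ```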

  16. Automatic Imitation in Rhythmical Actions: Kinematic Fidelity and the Effects of Compatibility, Delay, and Visual Monitoring

    PubMed Central

    Eaves, Daniel L.; Turgeon, Martine; Vogt, Stefan

    2012-01-01

    We demonstrate that observation of everyday rhythmical actions biases subsequent motor execution of the same and of different actions, using a paradigm where the observed actions were irrelevant for action execution. The cycle time of the distractor actions was subtly manipulated across trials, and the cycle time of motor responses served as the main dependent measure. Although distractor frequencies reliably biased response cycle times, this imitation bias was only a small fraction of the modulations in distractor speed, as well as of the modulations produced when participants intentionally imitated the observed rhythms. Importantly, this bias was not only present for compatible actions, but was also found, though numerically reduced, when distractor and executed actions were different (e.g., tooth brushing vs. window wiping), or when the dominant plane of movement was different (horizontal vs. vertical). In addition, these effects were equally pronounced for execution at 0, 4, and 8 s after action observation, a finding that contrasts with the more short-lived effects reported in earlier studies. The imitation bias was also unaffected when vision of the hand was occluded during execution, indicating that this effect most likely resulted from visuomotor interactions during distractor observation, rather than from visual monitoring and guidance during execution. Finally, when the distractor was incompatible in both dimensions (action type and plane) the imitation bias was not reduced further, in an additive way, relative to the single-incompatible conditions. This points to a mechanism whereby the observed action’s impact on motor processing is generally reduced whenever this is not useful for motor planning. We interpret these findings in the framework of biased competition, where intended and distractor actions can be represented as competing and quasi-encapsulated sensorimotor streams. PMID:23071623

  17. ClusterControl: a web interface for distributing and monitoring bioinformatics applications on a Linux cluster.

    PubMed

    Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko

    2004-03-22

    ClusterControl is a web interface that simplifies distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables integration of command-line-oriented programs into the application framework of ClusterControl. The system facilitates integration of different applications accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies such as Apache as web server, PHP as server-side scripting language, and OpenPBS as queuing system, and is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl

  18. Online production validation in a HEP environment

    NASA Astrophysics Data System (ADS)

    Harenberg, T.; Kuhl, T.; Lang, N.; Mättig, P.; Sandhoff, M.; Schwanenberger, C.; Volkmer, F.

    2017-03-01

    In high energy physics (HEP) event simulations, petabytes of data are processed and stored, requiring millions of CPU-years. This enormous demand for computing resources is handled by centers distributed worldwide, which form part of the LHC computing grid. The consumption of such a large amount of resources demands efficient production of simulations and early detection of potential errors. In this article we present a new monitoring framework for grid environments, which polls a measure of data quality during job execution. This online monitoring facilitates the early detection of configuration errors (especially in simulation parameters), and may thus contribute to significant savings in computing resources.

  19. Plan Execution Interchange Language (PLEXIL)

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Jonsson, Ari; Pasareanu, Corina; Simmons, Reid; Tso, Kam; Verma, Vandi

    2006-01-01

    Plan execution is a cornerstone of spacecraft operations, irrespective of whether the plans to be executed are generated on board the spacecraft or on the ground. Plan execution frameworks vary greatly, due to both different capabilities of the execution systems, and relations to associated decision-making frameworks. The latter dependency has made the reuse of execution and planning frameworks more difficult, and has all but precluded information sharing between different execution and decision-making systems. As a step in the direction of addressing some of these issues, a general plan execution language, called the Plan Execution Interchange Language (PLEXIL), is being developed. PLEXIL is capable of expressing concepts used by many high-level automated planners and hence provides an interface to multiple planners. PLEXIL includes a domain description that specifies command types, expansions, constraints, etc., as well as feedback to the higher-level decision-making capabilities. This document describes the grammar and semantics of PLEXIL. It includes a graphical depiction of this grammar and illustrative rover scenarios. It also outlines ongoing work on implementing a universal execution system, based on PLEXIL, using state-of-the-art rover functional interfaces and planners as test cases.

  20. AliEn—ALICE environment on the GRID

    NASA Astrophysics Data System (ADS)

    Saiz, P.; Aphecetche, L.; Bunčić, P.; Piskač, R.; Revsbech, J.-E.; Šego, V.; Alice Collaboration

    2003-04-01

    AliEn (ALICE Environment, http://alien.cern.ch) is a Grid framework built on top of the latest Internet standards for information exchange and authentication (SOAP, PKI) and common Open Source components. AliEn provides a virtual file catalogue that allows transparent access to distributed datasets and a number of collaborating Web services which implement the authentication, job execution, file transport, performance monitor and event logging. In the paper we will present the architecture and components of the system.

  1. Telerobotic management system: coordinating multiple human operators with multiple robots

    NASA Astrophysics Data System (ADS)

    King, Jamie W.; Pretty, Raymond; Brothers, Brendan; Gosine, Raymond G.

    2003-09-01

    This paper describes an application called the Tele-robotic management system (TMS) for coordinating multiple operators with multiple robots for applications such as underground mining. TMS utilizes several graphical interfaces to allow the user to define a partially ordered plan for multiple robots. This plan is then converted to a Petri net for execution and monitoring. TMS uses a distributed framework to allow robots and operators to easily integrate with the applications. This framework allows robots and operators to join the network and advertise their capabilities through services. TMS then decides whether tasks should be dispatched to a robot or a remote operator based on the services offered by the robots and operators.
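
    The Petri-net execution step described above can be sketched as follows: transitions fire when all of their input places hold tokens, so independent branches of a partially ordered two-robot plan proceed concurrently. The example net (places and transitions) is an illustration, not taken from TMS.

    ```python
    # Hedged sketch: execute a partially ordered plan as a simple Petri net.
    net = {  # transition: (input places, output places)
        "r1_drive_to_face": ({"start_r1"}, {"r1_at_face"}),
        "r2_drive_to_face": ({"start_r2"}, {"r2_at_face"}),
        "drill":            ({"r1_at_face"}, {"hole_ready"}),
        "load_truck":       ({"hole_ready", "r2_at_face"}, {"done"}),
    }
    marking = {"start_r1": 1, "start_r2": 1}

    def enabled(transition):
        inputs, _ = net[transition]
        return all(marking.get(p, 0) > 0 for p in inputs)

    def fire(transition):
        inputs, outputs = net[transition]
        for p in inputs:  marking[p] -= 1
        for p in outputs: marking[p] = marking.get(p, 0) + 1
        print("fired", transition, "->", {p: n for p, n in marking.items() if n})

    while True:   # execution/monitoring loop: fire any enabled transition until quiescent
        ready = [t for t in net if enabled(t)]
        if not ready:
            break
        fire(ready[0])
    ```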

  2. Monitoring of services with non-relational databases and map-reduce framework

    NASA Astrophysics Data System (ADS)

    Babik, M.; Souto, F.

    2012-12-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.

  3. A modular telerobotic task execution system

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Tso, Kam S.; Hayati, Samad; Lee, Thomas S.

    1990-01-01

    A telerobot task execution system is proposed to provide a general parametrizable task execution capability. The system includes communication with the calling system, e.g., a task planning system, and single- and dual-arm sensor-based task execution with monitoring and reflexing. A specific task is described by specifying the parameters to various available task execution modules including trajectory generation, compliance control, teleoperation, monitoring, and sensor fusion. Reflex action is achieved by finding the corresponding reflex action in a reflex table when an execution event has been detected with a monitor.
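
    A minimal sketch of the monitoring-and-reflex mechanism follows: monitors watch sensor values during execution, and when one fires, the corresponding reflex action is looked up in a reflex table. The thresholds, events, and actions are illustrative assumptions, not the paper's parameter set.

    ```python
    # Hedged sketch: execution monitors plus a reflex table.
    monitors = {
        "force_limit":  lambda s: s["wrist_force"] > 40.0,       # newtons (assumed unit)
        "joint_limit":  lambda s: abs(s["joint3_angle"]) > 2.6,  # radians
    }
    reflex_table = {
        "force_limit": lambda: print("reflex: switch to compliance control, back off 5 mm"),
        "joint_limit": lambda: print("reflex: halt trajectory, hand control to operator"),
    }

    def execute_with_monitoring(trajectory_samples):
        for sample in trajectory_samples:
            for event, check in monitors.items():
                if check(sample):
                    reflex_table[event]()          # reflex action for the detected event
                    return "reflexed:" + event
        return "completed"

    print(execute_with_monitoring([
        {"wrist_force": 12.0, "joint3_angle": 0.4},
        {"wrist_force": 55.0, "joint3_angle": 0.5},   # triggers the force monitor
    ]))
    ```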

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Amjad Majid; Albert, Don; Andersson, Par

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.
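
    The allocate/execute/monitor workflow described above can be driven from Python through SLURM's standard command-line tools, as in the hedged sketch below. It assumes sbatch and squeue are on the PATH and that a default partition exists; the batch script contents and polling interval are illustrative, and error handling is minimal.

    ```python
    # Hedged sketch: submit a batch job with sbatch and poll its state with squeue.
    import subprocess, tempfile, time

    batch_script = """#!/bin/bash
    #SBATCH --job-name=demo
    #SBATCH --ntasks=4
    #SBATCH --time=00:05:00
    srun hostname
    """

    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(batch_script)
        script_path = f.name

    out = subprocess.run(["sbatch", script_path], capture_output=True, text=True, check=True)
    job_id = out.stdout.split()[-1]          # sbatch prints "Submitted batch job <id>"
    print("submitted job", job_id)

    while True:                              # poll the queue until the job leaves it
        q = subprocess.run(["squeue", "-h", "-j", job_id, "-o", "%T"],
                           capture_output=True, text=True)
        state = q.stdout.strip()
        if not state:                        # no longer queued: finished or failed
            break
        print("job state:", state)
        time.sleep(10)
    ```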

  5. Improving Air Force Command and Control Through Enhanced Agile Combat Support Planning, Execution, Monitoring, and Control Processes

    DTIC Science & Technology

    2012-01-01

    Tripp, Lionel A. Galway, Timothy L. Ramey, Mahyar A. Amouzegar, and Eric Peltz (MR-1179-AF), 2000. This report describes a vision for the combat ... Postures, Lionel A. Galway, Robert S. Tripp, Timothy L. Ramey, and John G. Drew (MR-1075-AF), 2000. This report describes how alternative resourcing of ... Aerospace Forces: An Integrated Strategic Agile Combat Support Planning Framework, Robert S. Tripp, Lionel A. Galway, Paul Killingsworth, Eric Peltz

  6. Self-assembled software and method of overriding software execution

    DOEpatents

    Bouchard, Ann M.; Osbourn, Gordon C.

    2013-01-08

    A computer-implemented software self-assembled system and method for providing an external override and monitoring capability to dynamically self-assembling software containing machines that self-assemble execution sequences and data structures. The method provides an external override machine that can be introduced into a system of self-assembling machines while the machines are executing such that the functionality of the executing software can be changed or paused without stopping the code execution and modifying the existing code. Additionally, a monitoring machine can be introduced without stopping code execution that can monitor specified code execution functions by designated machines and communicate the status to an output device.

  7. The “Common Solutions” Strategy of the Experiment Support group at CERN for the LHC Experiments

    NASA Astrophysics Data System (ADS)

    Girone, M.; Andreeva, J.; Barreiro Megino, F. H.; Campana, S.; Cinquilli, M.; Di Girolamo, A.; Dimou, M.; Giordano, D.; Karavakis, E.; Kenyon, M. J.; Kokozkiewicz, L.; Lanciotti, E.; Litmaath, M.; Magini, N.; Negri, G.; Roiser, S.; Saiz, P.; Saiz Santos, M. D.; Schovancova, J.; Sciabà, A.; Spiga, D.; Trentadue, R.; Tuckett, D.; Valassi, A.; Van der Ster, D. C.; Shiers, J. D.

    2012-12-01

    After two years of LHC data taking, processing and analysis and with numerous changes in computing technology, a number of aspects of the experiments’ computing, as well as WLCG deployment and operations, need to evolve. As part of the activities of the Experiment Support group in CERN's IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also offer lower long-term maintenance and support costs. The main areas cover Distributed Data Management, Data Analysis, Monitoring and the LCG Persistency Framework. Specific tools have been developed including the HammerCloud framework, automated services for data placement, data cleaning and data integrity (such as the data popularity service for CMS, the common Victor cleaning agent for ATLAS and CMS and tools for catalogue/storage consistency), the Dashboard Monitoring framework (job monitoring, data management monitoring, File Transfer monitoring) and the Site Status Board. This talk focuses primarily on the strategic aspects of providing such common solutions and how this relates to the overall goals of long-term sustainability and the relationship to the various WLCG Technical Evolution Groups. The success of the service components has given us confidence in the process, and has developed the trust of the stakeholders. We are now attempting to expand the development of common solutions into the more critical workflows. The first is a feasibility study of common analysis workflow execution elements between ATLAS and CMS. We look forward to additional common development in the future.

  8. An Autonomous Control System for an Intra-Vehicular Spacecraft Mobile Monitor Prototype

    NASA Technical Reports Server (NTRS)

    Dorais, Gregory A.; Desiano, Salvatore D.; Gawdiak, Yuri; Nicewarner, Keith

    2003-01-01

    This paper presents an overview of an ongoing research and development effort at the NASA Ames Research Center to create an autonomous control system for an internal spacecraft autonomous mobile monitor. Its primary functions are to provide crew support and to perform intra-vehicular sensing activities by autonomously navigating onboard the International Space Station. We describe the mission roles and high-level functional requirements for an autonomous mobile monitor. The mobile monitor prototypes (two operational and one actively being designed), the physical test facilities used for ground testing, including a 3D micro-gravity test facility, and the simulators are briefly described. We provide an overview of the autonomy framework and describe each of its components, including those used for automated planning, goal-oriented task execution, diagnosis, and fault recovery. A sample mission test scenario is also described.

  9. Executive Function in Preschoolers: A Review Using an Integrative Framework

    ERIC Educational Resources Information Center

    Garon, Nancy; Bryson, Susan E.; Smith, Isabel M.

    2008-01-01

    During the last 2 decades, major advances have been made in understanding the development of executive functions (EFs) in early childhood. This article reviews the EF literature during the preschool period using an integrative framework. The framework adopted considers EF to be a unitary construct with partially dissociable components (A. Miyake…

  10. Framework for Integrating Science Data Processing Algorithms Into Process Control Systems

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Crichton, Daniel J.; Chang, Albert Y.; Foster, Brian M.; Freeborn, Dana J.; Woollard, David M.; Ramirez, Paul M.

    2011-01-01

    A software framework called PCS Task Wrapper is responsible for standardizing the setup, process initiation, execution, and file management tasks surrounding the execution of science data algorithms, which are referred to by NASA as Product Generation Executives (PGEs). PGEs codify a scientific algorithm, some step in the overall scientific process involved in a mission science workflow. The PCS Task Wrapper provides a stable operating environment to the underlying PGE during its execution lifecycle. If the PGE requires a file, or metadata regarding the file, the PCS Task Wrapper is responsible for delivering that information to the PGE in a manner that meets its requirements. If the PGE requires knowledge of upstream or downstream PGEs in a sequence of executions, that information is also made available. Finally, if information regarding disk space, or node information such as CPU availability, etc., is required, the PCS Task Wrapper provides this information to the underlying PGE. After this information is collected, the PGE is executed, and its output Product file and Metadata generation is managed via the PCS Task Wrapper framework. The innovation is responsible for marshalling output Products and Metadata back to a PCS File Management component for use in downstream data processing and pedigree. In support of this, the PCS Task Wrapper leverages the PCS Crawler Framework to ingest (during pipeline processing) the output Product files and Metadata produced by the PGE. The architectural components of the PCS Task Wrapper framework include PGE Task Instance, PGE Config File Builder, Config File Property Adder, Science PGE Config File Writer, and PCS Met file Writer. This innovative framework is really the unifying bridge between the execution of a step in the overall processing pipeline, and the available PCS component services as well as the information that they collectively manage.
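
    A rough sketch of the general task-wrapper pattern described above: stage the inputs a science executable needs, run it, then register its output products and metadata for downstream steps. All names and paths are illustrative and are not the actual PCS Task Wrapper API.

    ```python
    import pathlib, subprocess

    def run_pge(pge_cmd, inputs, workdir, catalog):
        """Stage inputs, execute the science executable, register its products."""
        work = pathlib.Path(workdir)
        work.mkdir(parents=True, exist_ok=True)

        # 1. Deliver required input files to the executable's working directory.
        for src in inputs:
            src = pathlib.Path(src)
            (work / src.name).write_bytes(src.read_bytes())

        # 2. Execute the wrapped program in its prepared environment.
        subprocess.run(pge_cmd, cwd=work, check=True)

        # 3. "Crawl" the output products and register them for downstream steps.
        for product in sorted(work.glob("*.out")):
            catalog.append({"file": str(product), "metadata": {"pge": pge_cmd[0]}})

    catalog = []
    # run_pge(["./my_pge"], ["input.dat"], "work/pge1", catalog)   # example invocation
    ```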

  11. Bonsai: an event-based framework for processing and controlling data streams

    PubMed Central

    Lopes, Gonçalo; Bonacchi, Niccolò; Frazão, João; Neto, Joana P.; Atallah, Bassam V.; Soares, Sofia; Moreira, Luís; Matias, Sara; Itskov, Pavel M.; Correia, Patrícia A.; Medina, Roberto E.; Calcaterra, Lorenza; Dreosti, Elena; Paton, Joseph J.; Kampff, Adam R.

    2015-01-01

    The design of modern scientific experiments requires the control and monitoring of many different data streams. However, the serial execution of programming instructions in a computer makes it a challenge to develop software that can deal with the asynchronous, parallel nature of scientific data. Here we present Bonsai, a modular, high-performance, open-source visual programming framework for the acquisition and online processing of data streams. We describe Bonsai's core principles and architecture and demonstrate how it allows for the rapid and flexible prototyping of integrated experimental designs in neuroscience. We specifically highlight some applications that require the combination of many different hardware and software components, including video tracking of behavior, electrophysiology and closed-loop control of stimulation. PMID:25904861

  12. Runtime verification of embedded real-time systems.

    PubMed

    Reinbacher, Thomas; Függer, Matthias; Brauer, Jörg

    We present a runtime verification framework that allows on-line monitoring of past-time Metric Temporal Logic (ptMTL) specifications in a discrete time setting. We design observer algorithms for the time-bounded modalities of ptMTL, which take advantage of the highly parallel nature of hardware designs. The algorithms can be translated into efficient hardware blocks, which are designed for reconfigurability, thus, facilitate applications of the framework in both a prototyping and a post-deployment phase of embedded real-time systems. We provide formal correctness proofs for all presented observer algorithms and analyze their time and space complexity. For example, for the most general operator considered, the time-bounded Since operator, we obtain a time complexity that is doubly logarithmic both in the point in time the operator is executed and the operator's time bounds. This result is promising with respect to a self-contained, non-interfering monitoring approach that evaluates real-time specifications in parallel to the system-under-test. We implement our framework on a Field Programmable Gate Array platform and use extensive simulation and logic synthesis runs to assess the benefits of the approach in terms of resource usage and operating frequency.
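
    The paper's observers are hardware blocks for time-bounded ptMTL; as a simplified software analogue, the classic online recurrence for the untimed Since operator can be written as follows (the time-bounded version additionally requires the timestamp bookkeeping the paper develops).

    ```python
    def since_monitor():
        """Online monitor for 'p Since q': q held at some past point and p has
        held at every step after it, up to and including the current step."""
        prev = False
        def step(p, q):
            nonlocal prev
            prev = q or (p and prev)   # standard past-time recurrence
            return prev
        return step

    m = since_monitor()
    trace = [(True, False), (True, True), (True, False), (False, False)]
    print([m(p, q) for p, q in trace])   # [False, True, True, False]
    ```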

  13. Web Program for Development of GUIs for Cluster Computers

    NASA Technical Reports Server (NTRS)

    Czikmantory, Akos; Cwik, Thomas; Klimeck, Gerhard; Hua, Hook; Oyafuso, Fabiano; Vinyard, Edward

    2003-01-01

    WIGLAF (a Web Interface Generator and Legacy Application Facade) is a computer program that provides a Web-based, distributed, graphical-user-interface (GUI) framework that can be adapted to any of a broad range of application programs, written in any programming language, that are executed remotely on any cluster computer system. WIGLAF enables the rapid development of a GUI for controlling and monitoring a specific application program running on the cluster and for transferring data to and from the application program. The only prerequisite for the execution of WIGLAF is a Web-browser program on a user's personal computer connected with the cluster via the Internet. WIGLAF has a client/server architecture: The server component is executed on the cluster system, where it controls the application program and serves data to the client component. The client component is an applet that runs in the Web browser. WIGLAF utilizes the Extensible Markup Language to hold all data associated with the application software, Java to enable platform-independent execution on the cluster system and the display of a GUI generator through the browser, and the Java Remote Method Invocation software package to provide simple, effective client/server networking.

  14. Monitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY

    NASA Astrophysics Data System (ADS)

    Bystritskaya, Elena; Fomenko, Alexander; Gogitidze, Nelly; Lobodzinski, Bogdan

    2014-06-01

    The H1 Virtual Organization (VO), as one of the small VOs, employs most components of the EMI or gLite Middleware. In this framework, a monitoring system is designed for the H1 Experiment to identify and recognize within the GRID the most suitable resources for execution of CPU-time consuming Monte Carlo (MC) simulation tasks (jobs). Monitored resources are Computer Elements (CEs), Storage Elements (SEs), WMS-servers (WMSs), CernVM File System (CVMFS) available to the VO HONE and local GRID User Interfaces (UIs). The general principle of monitoring GRID elements is based on the execution of short test jobs on different CE queues using submission through various WMSs and directly to the CREAM-CEs as well. Real H1 MC Production jobs with a small number of events are used to perform the tests. Test jobs are periodically submitted into GRID queues, the status of these jobs is checked, output files of completed jobs are retrieved, the result of each job is analyzed and the waiting time and run time are derived. Using this information, the status of the GRID elements is estimated and the most suitable ones are included in the automatically generated configuration files for use in the H1 MC production. The monitoring system allows for identification of problems in the GRID sites and for prompt reaction to them (for example by sending GGUS (Global Grid User Support) trouble tickets). The system can easily be adapted to identify the optimal resources for tasks other than MC production, simply by changing to the relevant test jobs. The monitoring system is written mostly in Python and Perl with the insertion of a few shell scripts. In addition to the test monitoring system, we use information from real production jobs to monitor the availability and quality of the GRID resources. The monitoring tools register the number of job resubmissions, the percentage of failed and finished jobs relative to all jobs on the CEs and determine the average values of waiting and running time for the involved GRID queues. CEs which do not meet the set criteria can be removed from the production chain by including them in an exception table. All of these monitoring actions lead to a more reliable and faster execution of MC requests.
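
    A schematic version of the test-job approach, assuming hypothetical submit_test_job and poll_status helpers in place of the actual gLite/CREAM submission tools: probe each queue with a small job, measure turnaround, and keep only the queues that meet the criteria.

    ```python
    import time

    MAX_TURNAROUND_S = 3600   # acceptance criterion for a queue (illustrative)

    def probe_queue(queue, submit_test_job, poll_status):
        """Submit a small real MC job to a queue and measure its turnaround."""
        job = submit_test_job(queue)
        t0 = time.time()
        status = poll_status(job)
        while status in ("SUBMITTED", "WAITING", "SCHEDULED", "RUNNING"):
            time.sleep(60)
            status = poll_status(job)
        return {"queue": queue, "status": status, "turnaround_s": time.time() - t0}

    def select_resources(results):
        """Keep only the queues suitable for the generated production configuration."""
        return [r["queue"] for r in results
                if r["status"] == "DONE" and r["turnaround_s"] <= MAX_TURNAROUND_S]
    ```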

  15. Development and Implementation of a Telecommuting Evaluation Framework, and Modeling the Executive Telecommuting Adoption Process

    NASA Astrophysics Data System (ADS)

    Vora, V. P.; Mahmassani, H. S.

    2002-02-01

    This work proposes and implements a comprehensive evaluation framework to document the telecommuter, organizational, and societal impacts of telecommuting through telecommuting programs. Evaluation processes and materials within the outlined framework are also proposed and implemented. As the first component of the evaluation process, the executive survey is administered within a public sector agency. The survey data is examined through exploratory analysis and is compared to a previous survey of private sector executives. The ordinal probit, dynamic probit, and dynamic generalized ordinal probit (DGOP) models of telecommuting adoption are calibrated to identify factors which significantly influence executive adoption preferences and to test the robustness of such factors. The public sector DGOP model of executive willingness to support telecommuting under different program scenarios is compared with an equivalent private sector DGOP model. Through the telecommuting program, a case study of telecommuting travel impacts is performed to further substantiate research.

  16. Timing divided attention.

    PubMed

    Hogendoorn, Hinze; Carlson, Thomas A; VanRullen, Rufin; Verstraten, Frans A J

    2010-11-01

    Visual attention can be divided over multiple objects or locations. However, there is no single theoretical framework within which the effects of dividing attention can be interpreted. In order to develop such a model, here we manipulated the stage of visual processing at which attention was divided, while simultaneously probing the costs of dividing attention on two dimensions. We show that dividing attention incurs dissociable time and precision costs, which depend on whether attention is divided during monitoring or during access. Dividing attention during monitoring resulted in progressively delayed access to attended locations as additional locations were monitored, as well as a one-off precision cost. When dividing attention during access, time costs were systematically lower at one of the accessed locations than at the other, indicating that divided attention during access, in fact, involves rapid sequential allocation of undivided attention. We propose a model in which divided attention is understood as the simultaneous parallel preparation and subsequent sequential execution of multiple shifts of undivided attention. This interpretation has the potential to bring together diverse findings from both the divided-attention and saccade preparation literature and provides a framework within which to integrate the broad spectrum of divided-attention methodologies.

  17. Composable Framework Support for Software-FMEA Through Model Execution

    NASA Astrophysics Data System (ADS)

    Kocsis, Imre; Patricia, Andras; Brancati, Francesco; Rossi, Francesco

    2016-08-01

    Performing Failure Modes and Effect Analysis (FMEA) during software architecture design is becoming a basic requirement in an increasing number of domains; however, due to the lack of standardized early design phase model execution, classic SW-FMEA approaches carry significant risks and are human effort-intensive even in processes that use Model-Driven Engineering. Recently, modelling languages with standardized executable semantics have emerged. Building on earlier results, this paper describes framework support for generating executable error propagation models from such models during software architecture design. The approach carries the promise of increased precision, decreased risk and more automated execution for SW-FMEA during dependability-critical system development.
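
    As a hedged illustration of error propagation over an executable architecture model (not the paper's framework), failure effects can be traced along component connections; the components and links below are hypothetical.

    ```python
    CONNECTIONS = {                 # directed links: component -> downstream components
        "sensor": ["filter"],
        "filter": ["controller"],
        "controller": ["actuator"],
    }

    def propagate(failure_origin):
        """Return every component reachable from the failing one."""
        affected, frontier = set(), [failure_origin]
        while frontier:
            comp = frontier.pop()
            for nxt in CONNECTIONS.get(comp, []):
                if nxt not in affected:
                    affected.add(nxt)
                    frontier.append(nxt)
        return affected

    print(propagate("sensor"))      # {'filter', 'controller', 'actuator'}
    ```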

  18. Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M.

    2009-09-09

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.

  19. An architecture for heuristic control of real-time processes

    NASA Technical Reports Server (NTRS)

    Raulefs, P.; Thorndyke, P. W.

    1987-01-01

    Process management combines complementary approaches of heuristic reasoning and analytical process control. Management of a continuous process requires monitoring the environment and the controlled system, assessing the ongoing situation, developing and revising planned actions, and controlling the execution of the actions. For knowledge-intensive domains, process management entails the potentially time-stressed cooperation among a variety of expert systems. By redesigning a blackboard control architecture in an object-oriented framework, researchers obtain an approach to process management that considerably extends blackboard control mechanisms and overcomes limitations of blackboard systems.
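
    A toy sketch of the blackboard-style control loop referenced above: knowledge sources watch a shared blackboard and post contributions, and a simple control cycle lets each applicable source act. The names and rules are illustrative only.

    ```python
    blackboard = {"sensor_reading": 87.0, "situation": None, "plan": None}

    def assess(bb):
        # situation-assessment knowledge source
        if bb["sensor_reading"] > 80:
            bb["situation"] = "overpressure"

    def plan(bb):
        # planning knowledge source, triggered by the assessed situation
        if bb["situation"] == "overpressure" and bb["plan"] is None:
            bb["plan"] = "open relief valve"

    KNOWLEDGE_SOURCES = [assess, plan]

    # control cycle: monitor the blackboard and let each applicable source contribute
    for _ in range(3):
        for ks in KNOWLEDGE_SOURCES:
            ks(blackboard)

    print(blackboard)
    ```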

  20. Unified Desktop for Monitoring & Control Applications - The Open Navigator Framework Applied for Control Centre and EGSE Applications

    NASA Astrophysics Data System (ADS)

    Brauer, U.

    2007-08-01

    The Open Navigator Framework (ONF) was developed to provide a unified and scalable platform for user interface integration. The main objective for the framework was to raise the usability of monitoring and control consoles and to provide reuse of software components in different application areas. ONF is currently applied for the Columbus onboard crew interface, the commanding application for the Columbus Control Centre, the Columbus user facilities' specialized user interfaces, the Mission Execution Crew Assistant (MECA) study and EADS Astrium internal R&D projects. ONF provides a well documented and proven middleware for GUI components (Java plugin interface, simplified concept similar to Eclipse). The overall application configuration is performed within a graphical user interface for layout and component selection; the end-user does not have to work in the underlying XML configuration files. ONF was optimized to provide harmonized user interfaces for monitoring and command consoles. It provides many convenience functions designed together with flight controllers and onboard crew:
    - user-defined workspaces, including support for multiple screens
    - an efficient communication mechanism between the components
    - integrated web browsing and documentation search and viewing
    - consistent and integrated menus and shortcuts
    - common logging and application configuration (properties)
    - a supervision interface for remote plugin GUI access (web based)
    A large number of operationally proven ONF components have been developed:
    - Command Stack & History: release of commands and follow-up of command acknowledgements
    - System Message Panel: browse, filter and search system messages/events
    - Unified Synoptic System: generic synoptic display system
    - Situational Awareness: show overall subsystem status based on monitoring of key parameters
    - System Model Browser: browse mission database definitions (measurements, commands, events)
    - Flight Procedure Executor: execute checklist and logical-flow interactive procedures
    - Web Browser: integrated browsing of reference documentation and operations data
    - Timeline Viewer: view the master timeline as a Gantt chart
    - Search: local search of operations products (e.g. documentation, procedures, displays)
    All GUI components access the underlying spacecraft data (commanding, reporting data, events, command history) via a common library providing adaptors for the current deployments (Columbus MCS, Columbus onboard Data Management System, Columbus Trainer raw packet protocol). New adaptors are easy to develop. Currently an adaptor to SCOS 2000 is being developed as part of a study for the ESTEC standardization section ("USS for ESTEC Reference Facility").

  1. An ontological knowledge framework for adaptive medical workflow.

    PubMed

    Dang, Jiangbo; Hedayati, Amir; Hampel, Ken; Toklu, Candemir

    2008-10-01

    As emerging technologies, the semantic Web and SOA (Service-Oriented Architecture) allow a BPMS (Business Process Management System) to automate business processes that can be described as services, which in turn can be used to wrap existing enterprise applications. A BPMS provides tools and methodologies to compose Web services that can be executed as business processes and monitored by BPM (Business Process Management) consoles. Ontologies are a formal declarative knowledge representation model; they provide a foundation upon which machine-understandable knowledge can be obtained and, as a result, make machine intelligence possible. Healthcare systems can adopt these technologies to become ubiquitous, adaptive, and intelligent, and thus serve patients better. This paper presents an ontological knowledge framework that covers the healthcare domains a hospital encompasses, from medical and administrative tasks to hospital assets, medical insurance, patient records, drugs, and regulations. Therefore, our ontology makes our vision of personalized healthcare possible by capturing all necessary knowledge for a complex personalized healthcare scenario involving patient care, insurance policies, drug prescriptions, and compliance. For example, our ontology enables a workflow management system to allow users, from physicians to administrative assistants, to manage, and even create, context-aware medical workflows and execute them on the fly.

  2. Plan execution monitoring with distributed intelligent agents for battle command

    NASA Astrophysics Data System (ADS)

    Allen, James P.; Barry, Kevin P.; McCormick, John M.; Paul, Ross A.

    2004-07-01

    As military tactics evolve toward execution-centric operations, the ability to analyze vast amounts of mission-relevant data is essential to command and control decision making. To maintain operational tempo and achieve information superiority we have developed Vigilant Advisor, a mobile agent-based distributed Plan Execution Monitoring system. It provides military commanders with continuous contingency monitoring tailored to their preferences while overcoming the network bandwidth problem often associated with traditional remote data querying. This paper presents an overview of Plan Execution Monitoring as well as a detailed view of the Vigilant Advisor system including key features and statistical analysis of resource savings provided by its mobile agent-based approach.

  3. JTSA: an open source framework for time series abstractions.

    PubMed

    Sacchi, Lucia; Capozzi, Davide; Bellazzi, Riccardo; Larizza, Cristiana

    2015-10-01

    The evaluation of the clinical status of a patient is frequently based on the temporal evolution of some parameters, making the detection of temporal patterns a priority in data analysis. Temporal abstraction (TA) is a methodology widely used in medical reasoning for summarizing and abstracting longitudinal data. This paper describes JTSA (Java Time Series Abstractor), a framework including a library of algorithms for time series preprocessing and abstraction and an engine to execute a workflow for temporal data processing. The JTSA framework is grounded on a comprehensive ontology that models temporal data processing both from the data storage and the abstraction computation perspective. The JTSA framework is designed to allow users to build their own analysis workflows by combining different algorithms. Thanks to the modular structure of a workflow, simple to highly complex patterns can be detected. The JTSA framework has been developed in Java 1.7 and is distributed under GPL as a jar file. JTSA provides: a collection of algorithms to perform temporal abstraction and preprocessing of time series, a framework for defining and executing data analysis workflows based on these algorithms, and a GUI for workflow prototyping and testing. The whole JTSA project relies on a formal model of the data types and of the algorithms included in the library. This model is the basis for the design and implementation of the software application. Taking into account this formalized structure, the user can easily extend the JTSA framework by adding new algorithms. Results are shown in the context of the EU project MOSAIC, where JTSA is used to extract relevant patterns from data related to the long-term monitoring of diabetic patients. The proof that JTSA is a versatile tool that can be adapted to different needs is given by its possible uses, both as a standalone tool for data summarization and as a module to be embedded into other architectures to select specific phenotypes based on TAs in a large dataset. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
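
    As a minimal illustration of the kind of state temporal abstraction such a workflow computes, raw samples can be mapped to qualitative labels and merged into intervals; the thresholds and labels below are invented and not taken from JTSA.

    ```python
    def state_abstraction(samples, low=70, high=180):
        """samples: list of (timestamp, value). Returns (start, end, label) intervals."""
        def label(v):
            return "LOW" if v < low else "HIGH" if v > high else "NORMAL"
        intervals = []
        for t, v in samples:
            lab = label(v)
            if intervals and intervals[-1][2] == lab:
                intervals[-1] = (intervals[-1][0], t, lab)   # extend current interval
            else:
                intervals.append((t, t, lab))
        return intervals

    print(state_abstraction([(0, 60), (1, 65), (2, 120), (3, 200), (4, 210)]))
    # [(0, 1, 'LOW'), (2, 2, 'NORMAL'), (3, 4, 'HIGH')]
    ```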

  4. DALiuGE: A graph execution framework for harnessing the astronomical data deluge

    NASA Astrophysics Data System (ADS)

    Wu, C.; Tobar, R.; Vinsen, K.; Wicenec, A.; Pallot, D.; Lao, B.; Wang, R.; An, T.; Boulton, M.; Cooper, I.; Dodson, R.; Dolensky, M.; Mei, Y.; Wang, F.

    2017-07-01

    The Data Activated Liu Graph Engine - DALiuGE- is an execution framework for processing large astronomical datasets at a scale required by the Square Kilometre Array Phase 1 (SKA1). It includes an interface for expressing complex data reduction pipelines consisting of both datasets and algorithmic components and an implementation run-time to execute such pipelines on distributed resources. By mapping the logical view of a pipeline to its physical realisation, DALiuGE separates the concerns of multiple stakeholders, allowing them to collectively optimise large-scale data processing solutions in a coherent manner. The execution in DALiuGE is data-activated, where each individual data item autonomously triggers the processing on itself. Such decentralisation also makes the execution framework very scalable and flexible, supporting pipeline sizes ranging from less than ten tasks running on a laptop to tens of millions of concurrent tasks on the second fastest supercomputer in the world. DALiuGE has been used in production for reducing interferometry datasets from the Karl E. Jansky Very Large Array and the Mingantu Ultrawide Spectral Radioheliograph; and is being developed as the execution framework prototype for the Science Data Processor (SDP) consortium of the Square Kilometre Array (SKA) telescope. This paper presents a technical overview of DALiuGE and discusses case studies from the CHILES and MUSER projects that use DALiuGE to execute production pipelines. In a companion paper, we provide in-depth analysis of DALiuGE's scalability to very large numbers of tasks on two supercomputing facilities.
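
    A very small sketch of the data-activated idea: each data item knows its consumers and triggers them as soon as it is marked complete, so no central scheduler is needed. This mirrors the concept only and is not the DALiuGE API.

    ```python
    class DataDrop:
        """A data item that activates its consumers when it becomes complete."""
        def __init__(self, name):
            self.name, self.consumers, self.data = name, [], None

        def add_consumer(self, func):
            self.consumers.append(func)

        def complete(self, data):
            self.data = data
            for consumer in self.consumers:      # completion triggers processing
                consumer(self)

    raw = DataDrop("raw_visibilities")
    raw.add_consumer(lambda drop: print("calibrating", drop.name, "->", len(drop.data), "samples"))
    raw.complete([0.1, 0.2, 0.3])
    ```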

  5. Runtime Verification of C Programs

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus

    2008-01-01

    We present in this paper a framework, RMOR, for monitoring the execution of C programs against state machines, expressed in a textual (nongraphical) format in files separate from the program. The state machine language has been inspired by a graphical state machine language RCAT recently developed at the Jet Propulsion Laboratory, as an alternative to using Linear Temporal Logic (LTL) for requirements capture. Transitions between states are labeled with abstract event names and Boolean expressions over such. The abstract events are connected to code fragments using an aspect-oriented pointcut language similar to ASPECTJ's or ASPECTC's pointcut language. The system is implemented in the C analysis and transformation package CIL, and is programmed in OCAML, the implementation language of CIL. The work is closely related to the notion of stateful aspects within aspect-oriented programming, where pointcut languages are extended with temporal assertions over the execution trace.
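
    In the spirit of the approach above (though RMOR itself instruments C programs via pointcuts), monitoring an abstract event trace against a state machine can be sketched as follows; the transition table is a made-up example.

    ```python
    TRANSITIONS = {                      # (state, event) -> next state
        ("closed", "open"):  "opened",
        ("opened", "write"): "opened",
        ("opened", "close"): "closed",
    }
    ERROR = "error"

    def monitor(trace, start="closed"):
        """Drive the monitor state machine with abstract events from the trace."""
        state = start
        for event in trace:
            state = TRANSITIONS.get((state, event), ERROR)
            if state == ERROR:
                return f"violation at event '{event}'"
        return "trace accepted"

    print(monitor(["open", "write", "close"]))   # trace accepted
    print(monitor(["write", "open"]))            # violation at event 'write'
    ```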

  6. A Framework for Addressing Implementation Gap in Global Drowning Prevention Interventions: Experiences from Bangladesh

    PubMed Central

    Alonge, Olakunle; He, Siran; Wadhwaniya, Shirin; Rahman, Fazlur; Rahman, Aminur; Arifeen, Shams El

    2014-01-01

    ABSTRACT Drowning is the commonest cause of injury-related deaths among under-five children worldwide, and 95% of deaths occur in low- and middle-income countries (LMICs) where there are implementation gaps in the drowning prevention interventions. This article reviews common interventions for drowning prevention, introduces a framework for effective implementation of such interventions, and describes the Saving of Lives from Drowning (SoLiD) Project in Bangladesh, which is based on this framework. A review of the systematic reviews on drowning interventions was conducted, and original research articles were pulled and summarized into broad prevention categories. The implementation framework builds upon two existing frameworks and categorizes the implementing process for drowning prevention interventions into four phases: planning, engaging, executing, and evaluating. Eleven key characteristics are mapped in these phases. The framework was applied to drowning prevention projects that have been undertaken in some LMICs to illustrate major challenges to implementation. The implementation process for the SoLiD Project in Bangladesh is used as an example to illustrate the practical utilization of the framework. Drowning interventions, such as pool fencing and covering of water hazards, are effective in high-income countries; however, most of these interventions have not been tested in LMICs. The critical components of the four phases of implementing drowning prevention interventions may include: (i) planning—global funding, political will, scale, sustainability, and capacity building; (ii) engaging—coordination, involvement of appropriate individuals; (iii) executing—focused action, multisectoral actions, quality of execution; and (iv) evaluating—rigorous monitoring and evaluation. Some of the challenges to implementing drowning prevention interventions in LMICs include insufficient funds, lack of technical capacity, and limited coordination among stakeholders and implementers. The SoLiD Project in Bangladesh incorporates some of these lessons and key features of the proposed framework. The framework presented in this paper was a useful tool for implementing drowning prevention interventions in Bangladesh and may be useful for adaptation in drowning and injury prevention programmes of other LMIC settings. PMID:25895188

  7. Age-Related Differences in Multiple Task Monitoring

    PubMed Central

    Todorov, Ivo; Del Missier, Fabio; Mäntylä, Timo

    2014-01-01

    Coordinating multiple tasks with narrow deadlines is particularly challenging for older adults because of age related decline in cognitive control functions. We tested the hypothesis that multiple task performance reflects age- and gender-related differences in executive functioning and spatial ability. Young and older adults completed a multitasking session with four monitoring tasks as well as separate tasks measuring executive functioning and spatial ability. For both age groups, men exceeded women in multitasking, measured as monitoring accuracy. Individual differences in executive functioning and spatial ability were independent predictors of young adults' monitoring accuracy, but only spatial ability was related to sex differences. For older adults, age and executive functioning, but not spatial ability, predicted multitasking performance. These results suggest that executive functions contribute to multiple task performance across the adult life span and that reliance on spatial skills for coordinating deadlines is modulated by age. PMID:25215609

  8. Age-related differences in multiple task monitoring.

    PubMed

    Todorov, Ivo; Del Missier, Fabio; Mäntylä, Timo

    2014-01-01

    Coordinating multiple tasks with narrow deadlines is particularly challenging for older adults because of age related decline in cognitive control functions. We tested the hypothesis that multiple task performance reflects age- and gender-related differences in executive functioning and spatial ability. Young and older adults completed a multitasking session with four monitoring tasks as well as separate tasks measuring executive functioning and spatial ability. For both age groups, men exceeded women in multitasking, measured as monitoring accuracy. Individual differences in executive functioning and spatial ability were independent predictors of young adults' monitoring accuracy, but only spatial ability was related to sex differences. For older adults, age and executive functioning, but not spatial ability, predicted multitasking performance. These results suggest that executive functions contribute to multiple task performance across the adult life span and that reliance on spatial skills for coordinating deadlines is modulated by age.

  9. Metacognitive Monitoring of Executive Control Engagement during Childhood

    ERIC Educational Resources Information Center

    Chevalier, Nicolas; Blaye, Agnès

    2016-01-01

    Emerging executive control supports greater autonomy and increasingly adaptive behavior during childhood. The present study addressed whether children's greater monitoring of how they engage control drives executive control development. Gaze position was recorded while twenty-five 6-year-olds and twenty-eight 10-year-olds performed a self-paced…

  10. Framework for End-User Programming of Cross-Smart Space Applications

    PubMed Central

    Palviainen, Marko; Kuusijärvi, Jarkko; Ovaska, Eila

    2012-01-01

    Cross-smart space applications are specific types of software services that enable users to share information, monitor the physical and logical surroundings and control it in a way that is meaningful for the user's situation. For developing cross-smart space applications, this paper makes two main contributions: it introduces (i) a component design and scripting method for end-user programming of cross-smart space applications and (ii) a backend framework of components that interwork to support the brunt of the RDFScript translation, and the use and execution of ontology models. Before end-user programming activities, the software professionals must develop easy-to-apply Driver components for the APIs of existing software systems. Thereafter, end-users are able to create applications from the commands of the Driver components with the help of the provided toolset. The paper also introduces the reference implementation of the framework, tools for the Driver component development and end-user programming of cross-smart space applications and the first evaluation results on their application. PMID:23202169

  11. Development Context Driven Change Awareness and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Sarma, Anita; Branchaud, Josh; Dwyer, Matthew B.; Person, Suzette; Rungta, Neha

    2014-01-01

    Recent work on workspace monitoring allows conflict prediction early in the development process, however, these approaches mostly use syntactic differencing techniques to compare different program versions. In contrast, traditional change-impact analysis techniques analyze related versions of the program only after the code has been checked into the master repository. We propose a novel approach, DeCAF (Development Context Analysis Framework), that leverages the development context to scope a change impact analysis technique. The goal is to characterize the impact of each developer on other developers in the team. There are various client applications such as task prioritization, early conflict detection, and providing advice on testing that can benefit from such a characterization. The DeCAF framework leverages information from the development context to bound the iDiSE change impact analysis technique to analyze only the parts of the code base that are of interest. Bounding the analysis can enable DeCAF to efficiently compute the impact of changes using a combination of program dependence and symbolic execution based approaches.

  12. Development Context Driven Change Awareness and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Sarma, Anita; Branchaud, Josh; Dwyer, Matthew B.; Person, Suzette; Rungta, Neha; Wang, Yurong; Elbaum, Sebastian

    2014-01-01

    Recent work on workspace monitoring allows conflict prediction early in the development process, however, these approaches mostly use syntactic differencing techniques to compare different program versions. In contrast, traditional change-impact analysis techniques analyze related versions of the program only after the code has been checked into the master repository. We propose a novel approach, DeCAF (Development Context Analysis Framework), that leverages the development context to scope a change impact analysis technique. The goal is to characterize the impact of each developer on other developers in the team. There are various client applications such as task prioritization, early conflict detection, and providing advice on testing that can benefit from such a characterization. The DeCAF framework leverages information from the development context to bound the iDiSE change impact analysis technique to analyze only the parts of the code base that are of interest. Bounding the analysis can enable DeCAF to efficiently compute the impact of changes using a combination of program dependence and symbolic execution based approaches.

  13. A Distributed Computing Framework for Real-Time Detection of Stress and of Its Propagation in a Team.

    PubMed

    Pandey, Parul; Lee, Eun Kyung; Pompili, Dario

    2016-11-01

    Stress is one of the key factors that impact the quality of our daily life: from the productivity and efficiency of production processes to the ability of (civilian and military) individuals to make rational decisions. Stress can also propagate from one individual to others working in close proximity or toward a common goal, e.g., in a military operation or workforce. Real-time assessment of the stress of individuals alone is, however, not sufficient, as understanding its source and the direction in which it propagates in a group of people is equally, if not more, important. A continuous, near real-time, in situ personal stress monitoring system to quantify the level of stress of individuals and its direction of propagation in a team is envisioned. However, stress monitoring of an individual via his/her mobile device may not always be possible for extended periods of time due to the limited battery capacity of these devices. To overcome this challenge, a novel distributed mobile computing framework is proposed to organize the resources in the vicinity and form a mobile device cloud that enables offloading of computation tasks in the stress detection algorithm from resource-constrained devices (low residual battery, limited CPU cycles) to resource-rich devices. The framework also supports computation parallelization and workflows, which define how data and tasks are divided and assigned among the entities of the framework. The direction of propagation and the magnitude of influence of stress in a group of individuals are studied by applying real-time, in situ analysis of Granger causality. Tangible benefits (in terms of energy expenditure and execution time) of the proposed framework in comparison to a centralized framework are presented via thorough simulations and real experiments.

  14. RBioCloud: A Light-Weight Framework for Bioconductor and R-based Jobs on the Cloud.

    PubMed

    Varghese, Blesson; Patel, Ishan; Barker, Adam

    2015-01-01

    Large-scale ad hoc analytics of genomic data is commonly performed using the R programming language, supported by over 700 software packages provided by Bioconductor. More recently, analytical jobs have been benefitting from on-demand computing and storage, their scalability and their low maintenance cost, all of which are offered by the cloud. While biologists and bioinformaticists can take an analytical job and execute it on their personal workstations, it remains challenging to seamlessly execute the job on the cloud infrastructure without extensive knowledge of the cloud dashboard. This paper explores how analytical jobs can be executed on the cloud with minimum effort, and how both the resources and the data required by a job can be managed. An open-source light-weight framework for executing R-scripts using Bioconductor packages, referred to as `RBioCloud', is designed and developed. RBioCloud offers a set of simple command-line tools for managing the cloud resources, the data and the execution of the job. Three biological test cases validate the feasibility of RBioCloud. The framework is available from http://www.rbiocloud.com.

  15. UBioLab: a web-LABoratory for Ubiquitous in-silico experiments.

    PubMed

    Bartocci, E; Di Berardini, M R; Merelli, E; Vito, L

    2012-03-01

    The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available nowadays on the Internet represents a big challenge for biologists, who must manage and visualize them, and for bioinformaticians, who need to rapidly create and execute in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming at integrating such resources as in a physical laboratory must tackle, and possibly handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype with this objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in this direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows and an intelligent agent-based technology for their distributed execution allows UBioLab to be a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.

  16. Dual compile strategy for parallel heterogeneous execution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Tyler Barratt; Perry, James Thomas

    2012-06-01

    The purpose of the Dual Compile Strategy is to increase our trust in the Compute Engine during its execution of instructions. This is accomplished by introducing a heterogeneous Monitor Engine that checks the execution of the Compute Engine. This leads to the production of a second and custom set of instructions designed for monitoring the execution of the Compute Engine at runtime. This use of multiple engines differs from redundancy in that one engine is working on the application while the other engine is monitoring and checking in parallel instead of both applications (and engines) performing the same work at the same time.

  17. Adaptive runtime for a multiprocessing API

    DOEpatents

    Antao, Samuel F.; Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.

    2016-11-15

    A computer-implemented method includes selecting a runtime for executing a program. The runtime includes a first combination of feature implementations, where each feature implementation implements a feature of an application programming interface (API). Execution of the program is monitored, and the execution uses the runtime. Monitor data is generated based on the monitoring. A second combination of feature implementations are selected, by a computer processor, where the selection is based at least in part on the monitor data. The runtime is modified by activating the second combination of feature implementations to replace the first combination of feature implementations.
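
    A schematic of the adaptive idea in the claim, with invented feature names and selection heuristic: run with one combination of feature implementations, collect monitor data, then activate another combination based on what was observed.

    ```python
    IMPLEMENTATIONS = {
        "barrier": {
            "spin": lambda: "spin-wait barrier",
            "blocking": lambda: "blocking barrier",
        },
    }

    def select_combination(monitor_data):
        """Pick a feature implementation from monitor data (invented heuristic)."""
        choice = "blocking" if monitor_data.get("avg_wait_ms", 0) > 5 else "spin"
        return {"barrier": IMPLEMENTATIONS["barrier"][choice]}

    runtime = select_combination({})                        # first combination
    print(runtime["barrier"]())                             # spin-wait barrier
    runtime = select_combination({"avg_wait_ms": 12.0})     # adapt using monitor data
    print(runtime["barrier"]())                             # blocking barrier
    ```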

  18. Adaptive runtime for a multiprocessing API

    DOEpatents

    Antao, Samuel F.; Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.

    2016-10-11

    A computer-implemented method includes selecting a runtime for executing a program. The runtime includes a first combination of feature implementations, where each feature implementation implements a feature of an application programming interface (API). Execution of the program is monitored, and the execution uses the runtime. Monitor data is generated based on the monitoring. A second combination of feature implementations are selected, by a computer processor, where the selection is based at least in part on the monitor data. The runtime is modified by activating the second combination of feature implementations to replace the first combination of feature implementations.

  19. Towards computerizing intensive care sedation guidelines: design of a rule-based architecture for automated execution of clinical guidelines

    PubMed Central

    2010-01-01

    Background: Computerized ICUs rely on software services to convey the medical condition of their patients as well as assisting the staff in taking treatment decisions. Such services are useful for following clinical guidelines quickly and accurately. However, the development of services is often time-consuming and error-prone. Consequently, many care-related activities are still conducted based on manually constructed guidelines. These are often ambiguous, which leads to unnecessary variations in treatments and costs. The goal of this paper is to present a semi-automatic verification and translation framework capable of turning manually constructed diagrams into ready-to-use programs. This framework combines the strengths of the manual and service-oriented approaches while decreasing their disadvantages. The aim is to close the gap in communication between the IT and the medical domain. This leads to a less time-consuming and error-prone development phase and a shorter clinical evaluation phase.
    Methods: A framework is proposed that semi-automatically translates a clinical guideline, expressed as an XML-based flow chart, into a Drools Rule Flow by employing semantic technologies such as ontologies and SWRL. An overview of the architecture is given and all the technology choices are thoroughly motivated. Finally, it is shown how this framework can be integrated into a service-oriented architecture (SOA).
    Results: The applicability of the Drools Rule language to express clinical guidelines is evaluated by translating an example guideline, namely the sedation protocol used for the anaesthetization of patients, to a Drools Rule Flow and executing and deploying this Rule-based application as a part of a SOA. The results show that the performance of Drools is comparable to other technologies such as Web Services and increases with the number of decision nodes present in the Rule Flow. Most delays are introduced by loading the Rule Flows.
    Conclusions: The framework is an effective solution for computerizing clinical guidelines as it allows for quick development, evaluation and human-readable visualization of the Rules and has a good performance. By monitoring the parameters of the patient to automatically detect exceptional situations and problems and by notifying the medical staff of tasks that need to be performed, the computerized sedation guideline improves the execution of the guideline. PMID:20082700

  20. Towards computerizing intensive care sedation guidelines: design of a rule-based architecture for automated execution of clinical guidelines.

    PubMed

    Ongenae, Femke; De Backere, Femke; Steurbaut, Kristof; Colpaert, Kirsten; Kerckhove, Wannes; Decruyenaere, Johan; De Turck, Filip

    2010-01-18

    Computerized ICUs rely on software services to convey the medical condition of their patients as well as assisting the staff in taking treatment decisions. Such services are useful for following clinical guidelines quickly and accurately. However, the development of services is often time-consuming and error-prone. Consequently, many care-related activities are still conducted based on manually constructed guidelines. These are often ambiguous, which leads to unnecessary variations in treatments and costs. The goal of this paper is to present a semi-automatic verification and translation framework capable of turning manually constructed diagrams into ready-to-use programs. This framework combines the strengths of the manual and service-oriented approaches while decreasing their disadvantages. The aim is to close the gap in communication between the IT and the medical domain. This leads to a less time-consuming and error-prone development phase and a shorter clinical evaluation phase. A framework is proposed that semi-automatically translates a clinical guideline, expressed as an XML-based flow chart, into a Drools Rule Flow by employing semantic technologies such as ontologies and SWRL. An overview of the architecture is given and all the technology choices are thoroughly motivated. Finally, it is shown how this framework can be integrated into a service-oriented architecture (SOA). The applicability of the Drools Rule language to express clinical guidelines is evaluated by translating an example guideline, namely the sedation protocol used for the anaesthetization of patients, to a Drools Rule Flow and executing and deploying this Rule-based application as a part of a SOA. The results show that the performance of Drools is comparable to other technologies such as Web Services and increases with the number of decision nodes present in the Rule Flow. Most delays are introduced by loading the Rule Flows. The framework is an effective solution for computerizing clinical guidelines as it allows for quick development, evaluation and human-readable visualization of the Rules and has a good performance. By monitoring the parameters of the patient to automatically detect exceptional situations and problems and by notifying the medical staff of tasks that need to be performed, the computerized sedation guideline improves the execution of the guideline.

  1. A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.

    The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis is discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
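
    A small illustration of the graph-synthesis step described above: job records become nodes and weighted edges express relationships such as shared compute nodes, temporal proximity, or a common user. The job records are fabricated sample data.

    ```python
    jobs = [
        {"id": "j1", "user": "alice", "nodes": {"n01", "n02"}, "start": 100},
        {"id": "j2", "user": "bob",   "nodes": {"n02"},        "start": 130},
        {"id": "j3", "user": "alice", "nodes": {"n07"},        "start": 9000},
    ]

    edges = {}
    for i, a in enumerate(jobs):
        for b in jobs[i + 1:]:
            weight = len(a["nodes"] & b["nodes"])                        # shared compute nodes
            weight += 1 if abs(a["start"] - b["start"]) < 300 else 0     # close in time
            weight += 1 if a["user"] == b["user"] else 0                 # same user
            if weight:
                edges[(a["id"], b["id"])] = weight

    print(edges)   # {('j1', 'j2'): 2, ('j1', 'j3'): 1}
    ```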

  2. Executive Function in Education: From Theory to Practice

    ERIC Educational Resources Information Center

    Meltzer, Lynn, Ed.

    2007-01-01

    This uniquely integrative book brings together leading researchers and practitioners from education, neuroscience, and psychology. It presents a theoretical framework for understanding executive function difficulties together with a range of effective approaches to assessment and instruction. Coverage includes executive function processes in…

  3. Symbolic Constraint Maintenance Grid

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Version 3.1 of Symbolic Constraint Maintenance Grid (SCMG) is a software system that provides a general conceptual framework for utilizing pre-existing programming techniques to perform symbolic transformations of data. SCMG also provides a language (and an associated communication method and protocol) for representing constraints on the original non-symbolic data. SCMG provides a facility for exchanging information between numeric and symbolic components without knowing the details of the components themselves. In essence, it integrates symbolic software tools (for diagnosis, prognosis, and planning) with non-artificial-intelligence software. SCMG executes a process of symbolic summarization and monitoring of continuous time series data that are being abstractly represented as symbolic templates of information exchange. This summarization process enables such symbolic- reasoning computing systems as artificial- intelligence planning systems to evaluate the significance and effects of channels of data more efficiently than would otherwise be possible. As a result of the increased efficiency in representation, reasoning software can monitor more channels and is thus able to perform monitoring and control functions more effectively.
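
    A hedged sketch of the kind of symbolic summarization described above: a continuous channel is reduced to a short template of symbols that a reasoning system can evaluate. The bins and symbols are invented for the example.

    ```python
    def summarize(channel, bins=((-float("inf"), 0.3, "LOW"),
                                 (0.3, 0.7, "NOMINAL"),
                                 (0.7, float("inf"), "HIGH"))):
        """Map each sample to a symbol and collapse consecutive repeats."""
        symbols = []
        for value in channel:
            for lo, hi, sym in bins:
                if lo <= value < hi:
                    if not symbols or symbols[-1] != sym:
                        symbols.append(sym)
                    break
        return symbols

    print(summarize([0.1, 0.2, 0.5, 0.9, 0.95]))   # ['LOW', 'NOMINAL', 'HIGH']
    ```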

  4. 17 CFR 37.400 - Core Principle 4-Monitoring of trading and trade processing.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... trading and trade processing. 37.400 Section 37.400 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION SWAP EXECUTION FACILITIES Monitoring of Trading and Trade Processing § 37.400 Core Principle 4—Monitoring of trading and trade processing. The swap execution facility shall: (a) Establish and...

  5. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resources limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resources utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resources utilization and therefore offers a lightweight solution for computational offloading in MCC.
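
    As a back-of-the-envelope illustration of the kind of offloading decision such frameworks make (not the paper's algorithm), a component is offloaded only when the estimated remote execution plus transfer time beats local execution, or when energy must be saved.

    ```python
    def should_offload(local_time_s, remote_time_s, data_mb, bandwidth_mbps, battery_low):
        """Offload when it saves time, or when the device needs to save energy."""
        transfer_s = (data_mb * 8) / bandwidth_mbps      # time to ship the data
        saves_time = remote_time_s + transfer_s < local_time_s
        return saves_time or battery_low

    print(should_offload(local_time_s=20, remote_time_s=4, data_mb=5,
                         bandwidth_mbps=10, battery_low=False))   # True
    ```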

  6. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC, wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by comparing computational offloading with the proposed framework against the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  7. Using SCOR as a Supply Chain Management Framework for Government Agency Contract Requirements

    NASA Technical Reports Server (NTRS)

    Paxton, Joe

    2010-01-01

    Enterprise Supply Chain Management consists of: specifying suppliers to support inter-program and inter-agency efforts; optimizing inventory levels and locations throughout the supply chain; executing corrective actions to improve quality and lead time issues throughout the supply chain; processing reported data to calculate and make visible supply chain performance (provide information for decisions and actions); ensuring the right hardware and information is provided at the right time and in the right place; monitoring the industrial base while developing, producing, operating and retiring a system; and seeing performance deep in the supply chain that could indicate issues affecting system availability and readiness.

  8. blend4php: a PHP API for galaxy

    PubMed Central

    Wytko, Connor; Soto, Brian; Ficklin, Stephen P.

    2017-01-01

    Galaxy is a popular framework for the execution of complex analytical pipelines, typically over large data sets, and is commonly used for (but not limited to) genomic, genetic and related biological analysis. It provides a web front-end and integrates with high performance computing resources. Here we report the development of the blend4php library that wraps Galaxy’s RESTful API into a PHP-based library. PHP-based web applications can use blend4php to automate execution, monitoring and management of a remote Galaxy server, including its users, workflows, jobs and more. The blend4php library was specifically developed for the integration of Galaxy with Tripal, the open-source toolkit for the creation of online genomic and genetic web sites. However, it was designed as an independent library for use by any application, and is freely available under version 3 of the GNU Lesser General Public License (LGPL v3.0) at https://github.com/galaxyproject/blend4php. Database URL: https://github.com/galaxyproject/blend4php PMID:28077564
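
    blend4php itself is a PHP library, but the kind of REST call it wraps can be sketched in a few lines of Python. The endpoint and header below follow Galaxy's documented API conventions; the server URL and API key are placeholders.

        # Minimal sketch in Python (blend4php itself is PHP): the kind of REST call
        # the library wraps. The /api/workflows endpoint and key-based auth follow
        # Galaxy's REST API conventions; server URL and API key are placeholders.
        import requests

        GALAXY_URL = "https://galaxy.example.org"   # placeholder server
        API_KEY = "YOUR_API_KEY"                    # placeholder credential

        def list_workflows():
            """List workflows visible to the authenticated Galaxy user."""
            resp = requests.get(
                f"{GALAXY_URL}/api/workflows",
                headers={"x-api-key": API_KEY},
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()

        if __name__ == "__main__":
            for wf in list_workflows():
                print(wf.get("id"), wf.get("name"))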

  9. Crisis management: an extended reference framework for decision makers.

    PubMed

    Carone, Alessandro; Iorio, Luigi Di

    2013-01-01

    The paper discusses a reference framework for capabilities supporting effective crisis management. This framework has been developed by joining experiences in the field with knowledge of organisational models for crisis management and of executives' empowerment, coaching and behavioural analysis. The paper is aimed at offering further insight to executives on critical success factors and means for managing crisis situations, extending the scope of analysis to human behaviour, to emotions and fears, and to their correlation with decision making. It is further intended to help familiarise executives with these factors and to facilitate their approach to a path towards emotional awareness.

  10. SCaLeM: A Framework for Characterizing and Analyzing Execution Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Manzano Franco, Joseph B.; Krishnamoorthy, Sriram

    2014-10-13

    As scalable parallel systems evolve towards more complex nodes with many-core architectures and larger trans-petascale & upcoming exascale deployments, there is a need to understand, characterize and quantify the underlying execution models being used on such systems. Execution models are a conceptual layer between applications & algorithms and the underlying parallel hardware and systems software on which those applications run. This paper presents the SCaLeM (Synchronization, Concurrency, Locality, Memory) framework for characterizing and analyzing execution models. SCaLeM consists of three basic elements: attributes, compositions and mapping of these compositions to abstract parallel systems. The fundamental Synchronization, Concurrency, Locality and Memory attributes are used to characterize each execution model, while the combinations of those attributes in the form of compositions are used to describe the primitive operations of the execution model. The mapping of the execution model's primitive operations, described by compositions, to an underlying abstract parallel system can be evaluated quantitatively to determine its effectiveness. Finally, SCaLeM also enables the representation and analysis of applications in terms of execution models, for the purpose of evaluating the effectiveness of such mapping.
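
    A minimal sketch (an illustrative Python encoding, not the SCaLeM implementation) of how an execution model might be described by its four attributes and by compositions over them:

        # Minimal sketch (an assumption, not the SCaLeM implementation): represent an
        # execution model as a set of Synchronization/Concurrency/Locality/Memory
        # attribute choices plus named compositions of those attributes.
        from dataclasses import dataclass, field

        ATTRIBUTES = {"synchronization", "concurrency", "locality", "memory"}

        @dataclass
        class ExecutionModel:
            name: str
            attributes: dict = field(default_factory=dict)    # attribute -> chosen value
            compositions: dict = field(default_factory=dict)  # primitive op -> attribute subset

            def add_composition(self, op, attrs):
                unknown = set(attrs) - ATTRIBUTES
                if unknown:
                    raise ValueError(f"unknown attributes: {unknown}")
                self.compositions[op] = set(attrs)

        if __name__ == "__main__":
            mpi = ExecutionModel("two-sided message passing",
                                 {"memory": "distributed", "concurrency": "SPMD"})
            mpi.add_composition("send/recv", {"synchronization", "locality"})
            print(mpi.compositions)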

  11. Applying an Integrative Framework of Executive Function to Preschoolers With Specific Language Impairment.

    PubMed

    Kapa, Leah L; Plante, Elena; Doubleday, Kevin

    2017-08-16

    The first goal of this research was to compare verbal and nonverbal executive function abilities between preschoolers with and without specific language impairment (SLI). The second goal was to assess the group differences on 4 executive function components in order to determine if the components may be hierarchically related as suggested within a developmental integrative framework of executive function. This study included 26 4- and 5-year-olds diagnosed with SLI and 26 typically developing age- and sex-matched peers. Participants were tested on verbal and nonverbal measures of sustained selective attention, working memory, inhibition, and shifting. The SLI group performed worse compared with typically developing children on both verbal and nonverbal measures of sustained selective attention and working memory, the verbal inhibition task, and the nonverbal shifting task. Comparisons of standardized group differences between executive function measures revealed a linear increase with the following order: working memory, inhibition, shifting, and sustained selective attention. The pattern of results suggests that preschoolers with SLI have deficits in executive functioning compared with typical peers, and deficits are not limited to verbal tasks. A significant linear relationship between group differences across executive function components supports the possibility of a hierarchical relationship between executive function skills.

  12. UBioLab: a web-laboratory for ubiquitous in-silico experiments.

    PubMed

    Bartocci, Ezio; Cacciagrano, Diletta; Di Berardini, Maria Rita; Merelli, Emanuela; Vito, Leonardo

    2012-07-09

    The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available nowadays on the Internet represents a big challenge for biologists, with regard to their management and visualization, and for bioinformaticians, with regard to rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming at integrating such resources as in a physical laboratory has to tackle, and possibly handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype with this objective in mind. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in this direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows and an intelligent agent-based technology for their distributed execution allows UBioLab to be a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.

  13. Executive Functions Development and Playing Games

    ERIC Educational Resources Information Center

    Petty, Ana Lucia; de Souza, Maria Thereza C. Coelho

    2012-01-01

    The aim of this paper is to discuss executive functions and playing games, considering Piaget's work (1967) and the neuropsychological framework (Barkley, 1997, 2000; Cypel, 2007). Two questions guide the discussion: What are the intersections between playing games and the development of executive functions? Can we stimulate children with learning…

  14. Executive Leadership in School Improvement Networks: A Conceptual Framework and Agenda for Research

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Gumus, Emine

    2011-01-01

    The purpose of this analysis is to improve understanding of executive leadership in school improvement networks: for example, networks supported by comprehensive school reform providers, charter management organizations, and education management organizations. In this analysis, we review the literature on networks and executive leadership. We draw…

  15. A Technical/Strategic Paradigm for Online Executive Education

    ERIC Educational Resources Information Center

    Smith, Marlene A.; Keaveney, Susan M.

    2017-01-01

    This article discusses the development and delivery of online courses for the executive education audience. The goal is to introduce a new framework, the technical/strategic paradigm, that will help educators to identify the pedagogical needs of disparate executive groups and adjust their online course development plans accordingly. We describe…

  16. A Framework for Modeling Workflow Execution by an Interdisciplinary Healthcare Team.

    PubMed

    Kezadri-Hamiaz, Mounira; Rosu, Daniela; Wilk, Szymon; Kuziemsky, Craig; Michalowski, Wojtek; Carrier, Marc

    2015-01-01

    The use of business workflow models in healthcare is limited because they insufficiently capture the complexities associated with the behavior of the interdisciplinary healthcare teams that execute healthcare workflows. In this paper we present a novel framework that builds on the well-founded business workflow model formalism and related infrastructures and introduces a formal semantic layer that describes selected aspects of team dynamics and supports their real-time operationalization.

  17. Effects of two types of intra-team feedback on developing a shared mental model in Command & Control teams.

    PubMed

    Rasker, P C; Post, W M; Schraagen, J M

    2000-08-01

    In two studies, the effect of two types of intra-team feedback on developing a shared mental model in Command & Control teams was investigated. A distinction is made between performance monitoring and team self-correction. Performance monitoring is the ability of team members to monitor each other's task execution and give feedback during task execution. Team self-correction is the process in which team members engage in evaluating their performance and in determining their strategies after task execution. In two experiments the opportunity to engage in performance monitoring, respectively team self-correction, was varied systematically. Both performance monitoring as well as team self-correction appeared beneficial in the improvement of team performance. Teams that had the opportunity to engage in performance monitoring, however, performed better than teams that had the opportunity to engage in team self-correction.

  18. Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)

    NASA Technical Reports Server (NTRS)

    Gray, Justin S.; Briggs, Jeffery L.

    2008-01-01

    The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself. By doing so, ROSE frees the modeler to develop a library of standard modeling processes, such as Design of Experiments, optimizers, parameter studies, and sensitivity studies, which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well-defined API and object structure. Both the API and object structure are presented here with enough detail to implement ROSE in any object-oriented language or modeling tool.
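
    A minimal sketch (hypothetical names, not the ROSE API) of the core idea of divorcing the execution process from the model: a generic parameter-study driver that can run any model exposing a simple call interface.

        # Minimal sketch (hypothetical names, not the ROSE API): keep the execution
        # process (here, a parameter study) separate from the model, so the same
        # driver can run any model that exposes a simple call interface.

        def parameter_study(model, name, values, baseline):
            """Run `model` once per value of input `name`, holding other inputs at baseline."""
            results = []
            for v in values:
                inputs = dict(baseline, **{name: v})
                results.append((v, model(inputs)))
            return results

        def drag_model(inputs):
            # stand-in disciplinary model: dynamic pressure times reference area
            return 0.5 * inputs["rho"] * inputs["velocity"] ** 2 * inputs["area"]

        if __name__ == "__main__":
            base = {"rho": 1.2, "velocity": 50.0, "area": 2.0}
            for v, out in parameter_study(drag_model, "velocity", [40, 50, 60], base):
                print(v, out)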

  19. Applying the results based management framework to the CERCA multi-component project in adolescent sexual and reproductive health: a retrospective analysis.

    PubMed

    Cordova-Pozo, Kathya; Hoopes, Andrea J; Cordova, Freddy; Vega, Bernardo; Segura, Zoyla; Hagens, Arnold

    2018-02-08

    Adolescent sexual and reproductive health (SRH) problems, such as unplanned pregnancies, are complex and multifactorial, thus requiring multifaceted prevention interventions. Evaluating the impact of such interventions is important to ensure efficiency, effectiveness and accountability for project funders and community members. In this study, we propose Results Based Management (RBM) as a framework for project management, using the Community Embedded Reproductive Health Care for Adolescents (CERCA) as a case study for RBM. The CERCA Project (2010-2014) tested interventions to reduce adolescent pregnancy in three Latin American countries: Bolivia, Ecuador and Nicaragua. Activities were designed to increase adolescent SRH behaviors in four domains: communication with parents, partners and peers; access to SRH information; access to SRH services; and use of contraception. When the project ended, the outcome evaluation showed limited impact, with concerns about the accuracy of monitoring and attrition of participants. We reviewed and analyzed a series of CERCA documents and related data sources. Key findings from these documents were organized within an RBM framework (planning, monitoring, and impact evaluation) to understand how CERCA's methodology and performance might have yielded improved results. Strengths and weaknesses were identified in all three elements of the RBM framework. In Planning, the proposed Theory of Change (ToC) differed from that which was carried out in the intervention package. Each country implemented a different intervention package without articulated assumptions on how the intervention activities would bring about change. In Monitoring, the project oversight was mainly based on administrative and financial requirements rather than on monitoring the fidelity and quality of intervention activities. In Impact Evaluation, the original CERCA evaluation assessed intervention effects among adolescents without identifying success and failure factors related to the outcomes, the nature of the outcomes, or the cost-effectiveness of interventions. This analysis showed that multi-country projects are complex, entail risks in execution and require robust project management. RBM can be a useful tool to ensure a systematic approach at different phases within a multi-country setting.

  20. Resource-Aware Mobile-Based Health Monitoring.

    PubMed

    Masud, Mohammad M; Adel Serhani, Mohamed; Navaz, Alramzana Nujum

    2017-03-01

    Monitoring heart diseases often requires frequent measurements of electrocardiogram (ECG) signals at different periods of the day and in different situations (e.g., traveling and exercising). This can only be implemented using mobile devices in order to cope with the mobility of patients under monitoring, thus supporting continuous monitoring practices. However, these devices are energy-constrained, have limited computing resources (e.g., CPU speed and memory), and might lose network connectivity, which makes it very challenging to maintain the continuity of a monitoring episode. In this paper, we propose a mobile monitoring solution that copes with these challenges by accounting, on the fly, for resource availability, battery level, and network intermittence. In order to solve this problem, we first divide the whole process into several subtasks such that each subtask can be executed sequentially either on the server or on the mobile device, or in parallel on both devices. Then, we develop a mathematical model that considers all the constraints and finds a dynamic programming solution to obtain the best execution path (i.e., which substep should be done where). The solution guarantees an optimum execution time, while considering device battery availability, execution and transmission time, and network availability. We conducted a series of experiments to evaluate our proposed approach using some key monitoring tasks, from preprocessing to classification and prediction. The results we obtained show that our approach gives the best (lowest) running time for any combination of factors, including processing speed, input size, and network bandwidth. Compared to several greedy but nonoptimal solutions, our approach was at least 10 times faster and consumed 90% less energy.
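
    The placement problem described above can be sketched as a small dynamic program (the cost model and numbers below are illustrative, not the paper's): each subtask runs on the mobile device or the server, and switching devices between consecutive subtasks adds a transfer cost.

        # Minimal sketch (illustrative cost model, not the paper's): dynamic program
        # over a sequence of subtasks, each runnable on the MOBILE or the SERVER, with
        # a transfer cost whenever consecutive subtasks run on different devices.

        def best_placement(exec_time, transfer_cost):
            """exec_time: list of (mobile_time, server_time) per subtask."""
            INF = float("inf")
            n = len(exec_time)
            # cost[i][loc]: best total time through subtask i ending at loc (0=mobile, 1=server)
            cost = [[INF, INF] for _ in range(n)]
            cost[0] = list(exec_time[0])
            for i in range(1, n):
                for loc in (0, 1):
                    for prev in (0, 1):
                        move = 0 if prev == loc else transfer_cost
                        cost[i][loc] = min(cost[i][loc],
                                           cost[i - 1][prev] + move + exec_time[i][loc])
            return min(cost[-1])

        if __name__ == "__main__":
            # (mobile, server) seconds for preprocessing, feature extraction, classification
            tasks = [(1.0, 0.3), (4.0, 0.5), (2.0, 0.4)]
            print(best_placement(tasks, transfer_cost=0.8))  # 1.2 (all on the server)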

  1. Applying an Integrative Framework of Executive Function to Preschoolers With Specific Language Impairment

    PubMed Central

    Plante, Elena; Doubleday, Kevin

    2017-01-01

    Purpose The first goal of this research was to compare verbal and nonverbal executive function abilities between preschoolers with and without specific language impairment (SLI). The second goal was to assess the group differences on 4 executive function components in order to determine if the components may be hierarchically related as suggested within a developmental integrative framework of executive function. Method This study included 26 4- and 5-year-olds diagnosed with SLI and 26 typically developing age- and sex-matched peers. Participants were tested on verbal and nonverbal measures of sustained selective attention, working memory, inhibition, and shifting. Results The SLI group performed worse compared with typically developing children on both verbal and nonverbal measures of sustained selective attention and working memory, the verbal inhibition task, and the nonverbal shifting task. Comparisons of standardized group differences between executive function measures revealed a linear increase with the following order: working memory, inhibition, shifting, and sustained selective attention. Conclusion The pattern of results suggests that preschoolers with SLI have deficits in executive functioning compared with typical peers, and deficits are not limited to verbal tasks. A significant linear relationship between group differences across executive function components supports the possibility of a hierarchical relationship between executive function skills. PMID:28724132

  2. A Rewriting-Based Approach to Trace Analysis

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We present a rewriting-based algorithm for efficiently evaluating future time Linear Temporal Logic (LTL) formulae on finite execution traces online. While the standard models of LTL are infinite traces, finite traces appear naturally when testing and/or monitoring real applications that only run for limited time periods. The presented algorithm is implemented in the Maude executable specification language and essentially consists of a set of equations establishing an executable semantics of LTL using a simple formula transforming approach. The algorithm is further improved to build automata on-the-fly from formulae, using memoization. The result is a very efficient and small Maude program that can be used to monitor program executions. We furthermore present an alternative algorithm for synthesizing provably minimal observer finite state machines (or automata) from LTL formulae, which can be used to analyze execution traces without the need for a rewriting system, and can hence be used by observers written in conventional programming languages. The presented work is part of an ambitious runtime verification and monitoring project at NASA Ames, called PATHEXPLORER, and demonstrates that rewriting can be a tractable and attractive means for experimenting with and implementing program monitoring logics.
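
    A minimal Python sketch of the formula-rewriting idea (the paper's implementation is in Maude; negation and until are omitted here for brevity): each event rewrites the LTL formula into the obligation on the rest of the trace, and leftover obligations are resolved at the end of the finite trace.

        # Minimal sketch (not the Maude implementation): rewrite-based evaluation of a
        # future-time LTL formula over a finite trace. Formulas are nested tuples;
        # events are sets of atomic propositions.

        def conj(a, b):
            if a is True:  return b
            if b is True:  return a
            if a is False or b is False:  return False
            return ("and", a, b)

        def disj(a, b):
            if a is False:  return b
            if b is False:  return a
            if a is True or b is True:  return True
            return ("or", a, b)

        def step(f, event):
            """Rewrite f into the obligation on the rest of the trace after `event`."""
            if isinstance(f, bool):
                return f
            op = f[0]
            if op == "atom":
                return f[1] in event
            if op == "and":
                return conj(step(f[1], event), step(f[2], event))
            if op == "or":
                return disj(step(f[1], event), step(f[2], event))
            if op == "next":                       # X p  ->  p
                return f[1]
            if op == "eventually":                 # F p  ->  p or X F p
                return disj(step(f[1], event), f)
            if op == "always":                     # G p  ->  p and X G p
                return conj(step(f[1], event), f)
            raise ValueError(f"unknown operator: {op}")

        def finalize(f):
            """Resolve leftover obligations at the end of a finite trace."""
            if isinstance(f, bool):
                return f
            op = f[0]
            if op == "always":
                return True                        # nothing left to violate
            if op == "and":
                return finalize(f[1]) and finalize(f[2])
            if op == "or":
                return finalize(f[1]) or finalize(f[2])
            return False                           # atom / next / eventually unmet

        def monitor(formula, trace):
            for event in trace:
                formula = step(formula, event)
                if isinstance(formula, bool):
                    return formula
            return finalize(formula)

        if __name__ == "__main__":
            spec = ("and", ("eventually", ("atom", "ack")), ("always", ("atom", "alive")))
            print(monitor(spec, [{"alive"}, {"alive", "ack"}, {"alive"}]))  # True
            print(monitor(spec, [{"alive"}, {"alive"}]))                    # False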

  3. Automatic Earth observation data service based on reusable geo-processing workflow

    NASA Astrophysics Data System (ADS)

    Chen, Nengcheng; Di, Liping; Gong, Jianya; Yu, Genong; Min, Min

    2008-12-01

    A common Sensor Web data service framework for Geo-Processing Workflow (GPW) is presented as part of the NASA Sensor Web project. This framework consists of a data service node, a data processing node, a data presentation node, a Catalogue Service node, and a BPEL engine. An abstract model designer is used to design the top-level GPW model, a model instantiation service is used to generate the concrete BPEL, and a BPEL execution engine is adopted. The framework is used to generate several kinds of data: raw data from live sensors, coverage or feature data, geospatial products, or sensor maps. A scenario for an EO-1 Sensor Web data service for fire classification is used to test the feasibility of the proposed framework. The execution time and influence of the service framework are evaluated. The experiments show that this framework can improve the quality of services for sensor data retrieval and processing.

  4. Extending the mirror neuron system model, II: what did I just do? A new role for mirror neurons.

    PubMed

    Bonaiuto, James; Arbib, Michael A

    2010-04-01

    A mirror system is active both when an animal executes a class of actions (self-actions) and when it sees another execute an action of that class. Much attention has been given to the possible roles of mirror systems in responding to the actions of others but there has been little attention paid to their role in self-actions. In the companion article (Bonaiuto et al. Biol Cybern 96:9-38, 2007) we presented MNS2, an extension of the Mirror Neuron System model of the monkey mirror system trained to recognize the external appearance of its own actions as a basis for recognizing the actions of other animals when they perform similar actions. Here we further extend the study of the mirror system by introducing the novel hypotheses that a mirror system may additionally help in monitoring the success of a self-action and may also be activated by recognition of one's own apparent actions as well as efference copy from one's intended actions. The framework for this computational demonstration is a model of action sequencing, called augmented competitive queuing, in which action choice is based on the desirability of executable actions. We show how this "what did I just do?" function of mirror neurons can contribute to the learning of both executability and desirability which in certain cases supports rapid reorganization of motor programs in the face of disruptions.

  5. Applying the Model of the Interrelationship of Leadership Environments and Outcomes for Nurse Executives: a community hospital's exemplar in developing staff nurse engagement through documentation improvement initiatives.

    PubMed

    Adams, Jeffrey M; Denham, Debra; Neumeister, Irene Ramirez

    2010-01-01

    The Model of the Interrelationship of Leadership, Environments & Outcomes for Nurse Executives (MILE ONE) was developed on the basis of existing literature related to identifying strategies for simultaneous improvement of leadership, professional practice/work environments (PPWE), and outcomes. Through existing evidence, the MILE ONE identifies the continuous and dependent interrelationship of 3 distinct concept areas: (1) nurse executives influence PPWE, (2) PPWE influence patient and organizational outcomes, and (3) patient and organizational outcomes influence nurse executives. This article highlights the application of the MILE ONE framework to a community district hospital's clinical documentation performance improvement projects. Results suggest that the MILE ONE is a valid and useful framework yielding both anticipated and unexpected enhancements to leaders, environments, and outcomes.

  6. Deconstructing the Associations Between Executive Functioning, Problematic Alcohol Use, and Intimate Partner Aggression: A Dyadic Analysis

    PubMed Central

    Parrott, Dominic J.; Swartout, Kevin M.; Eckhardt, Christopher I.; Subramani, Olivia S.

    2016-01-01

    Introduction and Aims Problematic drinking and executive functioning deficits are two known risk factors for intimate partner aggression (IPA). However, executive functioning is a multifaceted construct, and it is not clear whether deficits in specific components of executive functioning are differentially associated with IPA perpetration generally and within the context of problematic alcohol use. To address this question, the present study investigated the effects of problematic drinking and components of executive functioning on physical IPA perpetration within a dyadic framework. Design and Methods Participants were 582 heavy drinking couples (total N = 1,164) with a recent history of psychological and/or physical IPA recruited from two metropolitan cities in the United States. Multilevel models were used to examine effects within an Actor-Partner Interdependence framework. Results The highest levels of physical IPA were observed among Actors who reported everyday consequences of executive functioning deficits related to emotional dysregulation whose partners were problematic drinkers. However, the association between executive functioning deficits related to emotional dysregulation and IPA was stronger toward partners who were non-problematic drinkers relative to partners who were problematic drinkers. No such effect was found for executive functioning deficits related to behavioral regulation. Discussion and Conclusions Results provide insight into how problematic drinking and specific executive functioning deficits interact dyadically in relation to physical IPA perpetration. PMID:28116760

  7. Generalized Symbolic Execution for Model Checking and Testing

    NASA Technical Reports Server (NTRS)

    Khurshid, Sarfraz; Pasareanu, Corina; Visser, Willem; Kofmeyer, David (Technical Monitor)

    2003-01-01

    Modern software systems, which often are concurrent and manipulate complex data structures, must be extremely reliable. We present a novel framework based on symbolic execution for the automated checking of such systems. We provide a two-fold generalization of traditional symbolic execution based approaches: one, we define a program instrumentation, which enables standard model checkers to perform symbolic execution; two, we give a novel symbolic execution algorithm that handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity of lists), data (e.g., integers and strings) and concurrency. The program instrumentation enables a model checker to automatically explore program heap configurations (using a systematic treatment of aliasing) and manipulate logical formulae on program data values (using a decision procedure). We illustrate two applications of our framework: checking correctness of multi-threaded programs that take inputs from unbounded domains with complex structure and generation of non-isomorphic test inputs that satisfy a testing criterion. Our implementation for Java uses the Java PathFinder model checker.
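
    A minimal sketch of the underlying idea (not Java PathFinder; the paths of the toy program are written out by hand, whereas a real engine derives them automatically): each path through a branching program is a conjunction of constraints over symbolic inputs, and solving a path condition yields a concrete test input for that path.

        # Minimal sketch (not Java PathFinder): path conditions of a tiny branching
        # program, each solved for a concrete witness by brute-force search over a
        # small integer domain. A real symbolic executor derives the paths itself.
        from itertools import product

        # program under test:
        #   if x > y: return "big" if x > 10 else "mid"
        #   else:     return "small"
        def run_concrete(x, y):
            if x > y:
                return "big" if x > 10 else "mid"
            return "small"

        # symbolic exploration result: each path is a list of predicates over (x, y)
        PATHS = [
            ([lambda x, y: x > y,  lambda x, y: x > 10],  "big"),
            ([lambda x, y: x > y,  lambda x, y: x <= 10], "mid"),
            ([lambda x, y: x <= y],                       "small"),
        ]

        def witness(path_condition, domain=range(-5, 16)):
            """Find concrete inputs satisfying every predicate on the path, if any."""
            for x, y in product(domain, repeat=2):
                if all(pred(x, y) for pred in path_condition):
                    return x, y
            return None

        if __name__ == "__main__":
            for pc, expected in PATHS:
                inputs = witness(pc)
                assert inputs is not None and run_concrete(*inputs) == expected
                print(expected, "<-", inputs)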

  8. CAreDroid: Adaptation Framework for Android Context-Aware Applications

    PubMed Central

    Elmalaki, Salma; Wanner, Lucas; Srivastava, Mani

    2015-01-01

    Context-awareness is the ability of software systems to sense and adapt to their physical environment. Many contemporary mobile applications adapt to changing locations, connectivity states, available computational and energy resources, and proximity to other users and devices. Nevertheless, there is little systematic support for context-awareness in contemporary mobile operating systems. Because of this, application developers must build their own context-awareness adaptation engines, dealing directly with sensors and polluting application code with complex adaptation decisions. In this paper, we introduce CAreDroid, which is a framework that is designed to decouple the application logic from the complex adaptation decisions in Android context-aware applications. In this framework, developers are required only to focus on the application logic by providing a list of methods that are sensitive to certain contexts along with the permissible operating ranges under those contexts. At run time, CAreDroid monitors the context of the physical environment and intercepts calls to sensitive methods, activating only the blocks of code that best fit the current physical context. CAreDroid is implemented as part of the Android runtime system. By pushing context monitoring and adaptation into the runtime system, CAreDroid eases the development of context-aware applications and increases their efficiency. In particular, case study applications implemented using CAreDroid are shown to require (1) at least 50% fewer lines of code and (2) at least 10× less execution time compared to equivalent context-aware applications that use only standard Android APIs. PMID:26834512
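
    A minimal Python analogue (not the CAreDroid runtime, which lives inside Android) of the adaptation idea: several implementations of one context-sensitive operation are registered with the context ranges they tolerate, and calls are dispatched to the variant that fits the current context.

        # Minimal sketch (a Python analogue, not the CAreDroid runtime): register
        # several implementations of one "sensitive" operation with the context ranges
        # they tolerate, and dispatch to the best fit at call time. The battery-level
        # context and thresholds are illustrative.

        class ContextDispatcher:
            def __init__(self, read_context):
                self.read_context = read_context
                self.variants = []                      # (lo, hi, fn)

            def register(self, lo, hi):
                def wrap(fn):
                    self.variants.append((lo, hi, fn))
                    return fn
                return wrap

            def __call__(self, *args, **kwargs):
                level = self.read_context()
                for lo, hi, fn in self.variants:
                    if lo <= level <= hi:
                        return fn(*args, **kwargs)
                raise RuntimeError("no variant fits the current context")

        battery = {"level": 0.8}
        sample_location = ContextDispatcher(lambda: battery["level"])

        @sample_location.register(0.5, 1.0)
        def gps_fix():
            return "high-accuracy GPS fix"

        @sample_location.register(0.0, 0.5)
        def network_fix():
            return "coarse network-based fix"

        if __name__ == "__main__":
            print(sample_location())        # high-accuracy GPS fix
            battery["level"] = 0.2
            print(sample_location())        # coarse network-based fix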

  9. CAreDroid: Adaptation Framework for Android Context-Aware Applications.

    PubMed

    Elmalaki, Salma; Wanner, Lucas; Srivastava, Mani

    2015-09-01

    Context-awareness is the ability of software systems to sense and adapt to their physical environment. Many contemporary mobile applications adapt to changing locations, connectivity states, available computational and energy resources, and proximity to other users and devices. Nevertheless, there is little systematic support for context-awareness in contemporary mobile operating systems. Because of this, application developers must build their own context-awareness adaptation engines, dealing directly with sensors and polluting application code with complex adaptation decisions. In this paper, we introduce CAreDroid, which is a framework that is designed to decouple the application logic from the complex adaptation decisions in Android context-aware applications. In this framework, developers are required only to focus on the application logic by providing a list of methods that are sensitive to certain contexts along with the permissible operating ranges under those contexts. At run time, CAreDroid monitors the context of the physical environment and intercepts calls to sensitive methods, activating only the blocks of code that best fit the current physical context. CAreDroid is implemented as part of the Android runtime system. By pushing context monitoring and adaptation into the runtime system, CAreDroid eases the development of context-aware applications and increases their efficiency. In particular, case study applications implemented using CAreDroid are shown to require (1) at least 50% fewer lines of code and (2) at least 10× less execution time compared to equivalent context-aware applications that use only standard Android APIs.

  10. Development of the safety control framework for shield tunneling in close proximity to the operational subway tunnels: case studies in mainland China.

    PubMed

    Li, Xinggao; Yuan, Dajun

    2016-01-01

    China's largest cities like Beijing and Shanghai have seen a sharp increase in subway network development as a result of the rapid urbanization of the last decade. The cities are still expanding their subway networks, and many shield tunnels are being or will be constructed in close proximity to existing operational subway tunnels. The execution plans for new nearby shield tunnel construction call for the development of a safety control framework: a set of control standards and best practices to help organizations manage the risks involved. Typical case studies and relevant key technical parameters are presented with a view to presenting the resulting safety control framework. The framework, created through collaboration among the relevant parties, addresses and manages the risks in a systematic way based on the actual conditions of each tunnel crossing construction. The framework consists of six parts: (1) inspecting the operational subway tunnels; (2) deciding the allowed movements of the existing tunnels and tracks; (3) simulating the effects of the shield tunneling on the existing tunnels; (4) doing preparation work; (5) monitoring design and information management; and (6) measures and activation mechanisms of the countermeasures. The six components are explained and demonstrated in detail. Finally, the discussion covers construction and post-construction settlement of the operational tunnel, application of remedial grouting to rectify excessive settlements of the operational tunnel, and use of the innovative tool of optical fiber measurement for tunnel movement monitoring. It is concluded that the construction movement of the tunnel can be controlled within 15 mm when the shield machine is <7 m in excavation diameter. The post-construction settlement of a tunnel buried in very soft ground is much greater than its construction settlement, and lasts several years until reaching a final stable state. Two cases are outlined to demonstrate the feasibility of using remedial grouting to reduce the long-term settlement of the operational tunnels. As more and more segmental tunnels are constructed, there will be an increasing need for optical fiber measurement for tunnel movement monitoring in the near future.

  11. Choking under monitoring pressure: being watched by the experimenter reduces executive attention.

    PubMed

    Belletier, Clément; Davranche, Karen; Tellier, Idriss S; Dumas, Florence; Vidal, Franck; Hasbroucq, Thierry; Huguet, Pascal

    2015-10-01

    Performing more poorly than one's skill level would predict ("choking") is likely in situations that offer an incentive if a certain outcome is achieved (outcome pressure) or when one is being watched by others, especially when one's performance is being evaluated (monitoring pressure). According to the choking literature, outcome pressure is associated with reduced executive control of attention, whereas monitoring pressure is associated with increased, yet counterproductive, attention to skill processes. Here, we show the first evidence that monitoring pressure (being watched by the experimenter) may lead individuals with higher working memory to choke on a classic measure of executive control, precisely the task effect thought to result from outcome pressure. Not only does this finding help refine our understanding of the processes underlying choking under monitoring pressure, but it also leads to a new look at classic audience effects, with an important implication for experimental psychology.

  12. SERENITY in e-Business and Smart Item Scenarios

    NASA Astrophysics Data System (ADS)

    Benameur, Azzedine; Khoury, Paul El; Seguran, Magali; Sinha, Smriti Kumar

    SERENITY artefacts, such as Classes, Patterns, Implementations and Executable Components for Security & Dependability (S&D), in addition to the Serenity Runtime Framework (SRF), are discussed in previous chapters. Here we discuss how to integrate these artefacts with applications in the Serenity approach, using two scenarios. The e-Business scenario is a standard loan origination process in a bank. The Smart Item scenario is an ambient intelligence case study in which we take advantage of Smart Items to provide an electronic healthcare infrastructure for remote healthcare assistance. In both cases, we detail how the prototype implementations of the scenarios select proper executable components through the Serenity Runtime Framework and then demonstrate how these executable components of the S&D Patterns are deployed.

  13. Applying an Integrative Framework of Executive Function to Preschoolers with Specific Language Impairment

    ERIC Educational Resources Information Center

    Kapa, Leah L.; Plante, Elena; Doubleday, Kevin

    2017-01-01

    Purpose: The first goal of this research was to compare verbal and nonverbal executive function abilities between preschoolers with and without specific language impairment (SLI). The second goal was to assess the group differences on 4 executive function components in order to determine if the components may be hierarchically related as suggested…

  14. Pipelines to Leadership: Aspirations of Executive-Level Community College Leaders to Ascend to the Presidency

    ERIC Educational Resources Information Center

    Waggoner, Reneau

    2016-01-01

    One of the challenges facing community colleges in the United States is the looming retirements of executive/senior-level leadership, particularly the president, on a wide scale. This study explored the career aspirations of executive-level leaders within the community college using Social Cognitive Career Theory as the conceptual framework.…

  15. Age-related differences in strategic monitoring during arithmetic problem solving.

    PubMed

    Geurten, Marie; Lemaire, Patrick

    2017-10-01

    We examined the role of metacognitive monitoring in strategic behavior during arithmetic problem solving, a process that is expected to shed light on age-related differences in strategy selection. Young and older adults accomplished better strategy-judgment, better strategy-selection, and strategy-execution tasks. Data showed that participants made better strategy judgments when problems had homogeneous unit digits (i.e., both unit digits smaller or both larger than 5; e.g., 31×62) relative to problems with heterogeneous unit digits (i.e., one unit digit smaller and one larger than 5; e.g., 31×67) and when the better strategy was cued on rounding-up problems (e.g., 68×23) compared to rounding-down problems (e.g., 36×53). Results also indicated higher rates of better strategy judgment in young than in older adults. These aging effects differed across problem types. Older adults made more accurate judgments on rounding-up problems than on rounding-down problems when the cued strategy was rounding-up, while young adults did not show such problem-related differences. Moreover, strategy selection correlated with strategy judgment, and even more so in older adults than in young adults. To discuss the implications of these findings, we propose a theoretical framework of how strategy judgments occur in young and older adults and discuss how this framework enables us to understand the relationships between metacognitive monitoring and strategic behaviors when participants solve arithmetic problems. Copyright © 2017 Elsevier B.V. All rights reserved.
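
    Purely as an illustration of the strategy names used above (the definitions follow the usual computational-estimation literature and are this note's assumption, not taken from the article): rounding-down rounds both operands down to the nearest decade before multiplying, rounding-up rounds both up.

        # Illustration only (assumed definitions of the two estimation strategies):
        # compute both estimates and their absolute errors for a two-digit product.

        def round_down_estimate(a, b):
            """Round both operands down to the nearest decade, then multiply."""
            return (a // 10) * 10 * ((b // 10) * 10)

        def round_up_estimate(a, b):
            """Round both operands up to the nearest decade, then multiply."""
            return (-(-a // 10) * 10) * (-(-b // 10) * 10)

        if __name__ == "__main__":
            for a, b in [(68, 23), (36, 53)]:
                exact = a * b
                down, up = round_down_estimate(a, b), round_up_estimate(a, b)
                print(f"{a}x{b}: exact={exact}, down={down} (err {abs(exact - down)}), "
                      f"up={up} (err {abs(exact - up)})")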

  16. A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints.

    PubMed

    Sundharam, Sakthivel Manikandan; Navet, Nicolas; Altmeyer, Sebastian; Havet, Lionel

    2018-02-20

    Model-Driven Engineering (MDE) is widely applied in the industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he neglects or considers less importantly the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation, software design verified by a novel schedulability analysis, and the run-time verification by monitoring the execution of the models on target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model-interpretation, which enforces a timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system.
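
    The paper's schedulability analysis is its own contribution; as a stand-in that shows the kind of check meant here, the classic rate-monotonic utilization bound can be sketched in a few lines (task parameters are illustrative).

        # Minimal sketch (the classic rate-monotonic utilization bound, used only as a
        # stand-in for the kind of schedulability check such a framework performs; the
        # paper's own analysis is more refined). The task set is illustrative.

        def rm_schedulable(tasks):
            """tasks: list of (worst_case_exec_time, period); sufficient RM test."""
            n = len(tasks)
            utilization = sum(c / t for c, t in tasks)
            bound = n * (2 ** (1 / n) - 1)
            return utilization, bound, utilization <= bound

        if __name__ == "__main__":
            controller_tasks = [(1.0, 10.0), (2.0, 20.0), (3.0, 40.0)]
            print(rm_schedulable(controller_tasks))   # (0.275, ~0.78, True)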

  17. Autonomic Management of Application Workflows on Hybrid Computing Infrastructure

    DOE PAGES

    Kim, Hyunjoo; el-Khamra, Yaakoub; Rodero, Ivan; ...

    2011-01-01

    In this paper, we present a programming and runtime framework that enables the autonomic management of complex application workflows on hybrid computing infrastructures. The framework is designed to address system and application heterogeneity and dynamics to ensure that application objectives and constraints are satisfied. The need for such autonomic system and application management is becoming critical as computing infrastructures become increasingly heterogeneous, integrating different classes of resources from high-end HPC systems to commodity clusters and clouds. For example, the framework presented in this paper can be used to provision the appropriate mix of resources based on application requirements and constraints. The framework also monitors the system/application state and adapts the application and/or resources to respond to changing requirements or environment. To demonstrate the operation of the framework and to evaluate its ability, we employ a workflow used to characterize an oil reservoir executing on a hybrid infrastructure composed of TeraGrid nodes and Amazon EC2 instances of various types. Specifically, we show how different application objectives such as acceleration, conservation and resilience can be effectively achieved while satisfying deadline and budget constraints, using an appropriate mix of dynamically provisioned resources. Our evaluations also demonstrate that public clouds can be used to complement and reinforce the scheduling and usage of traditional high performance computing infrastructure.
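
    A toy sketch of the provisioning decision described above (rates, costs and the brute-force search are illustrative assumptions, not the paper's autonomic manager): choose the cheapest mix of HPC nodes and cloud instances that still meets a deadline.

        # Toy sketch (illustrative only, not the paper's autonomic manager): pick the
        # cheapest mix of HPC nodes and cloud instances that meets a deadline, by
        # brute force over small allocations. Rates and costs are made up.
        from itertools import product

        WORK_UNITS = 120            # total work to finish
        DEADLINE_H = 10             # hours

        RESOURCES = {               # units of work per hour, cost per hour
            "hpc_node":  {"rate": 20, "cost": 3.0},
            "ec2_large": {"rate": 8,  "cost": 1.0},
        }

        def best_mix(max_each=8):
            best = None
            for n_hpc, n_ec2 in product(range(max_each + 1), repeat=2):
                rate = (n_hpc * RESOURCES["hpc_node"]["rate"]
                        + n_ec2 * RESOURCES["ec2_large"]["rate"])
                if rate == 0:
                    continue
                hours = WORK_UNITS / rate
                if hours > DEADLINE_H:
                    continue
                cost = hours * (n_hpc * RESOURCES["hpc_node"]["cost"]
                                + n_ec2 * RESOURCES["ec2_large"]["cost"])
                if best is None or cost < best[0]:
                    best = (cost, n_hpc, n_ec2, hours)
            return best   # (cost, hpc nodes, ec2 instances, hours)

        if __name__ == "__main__":
            print(best_mix())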

  18. A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints

    PubMed Central

    Navet, Nicolas; Havet, Lionel

    2018-01-01

    Model-Driven Engineering (MDE) is widely applied in the industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he neglects or considers less importantly the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation, software design verified by a novel schedulability analysis, and the run-time verification by monitoring the execution of the models on target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model-interpretation, which enforces a timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system. PMID:29461489

  19. H.U.B city steps: methods and early findings from a community-based participatory research trial to reduce blood pressure among african americans

    PubMed Central

    2011-01-01

    Background Community-based participatory research (CBPR) has been recognized as an important approach to develop and execute health interventions among marginalized populations, and a key strategy to translate research into practice to help reduce health disparities. Despite growing interest in the CBPR approach, CBPR initiatives rarely use experimental or other rigorous research designs to evaluate health outcomes. This behavioral study describes the conceptual frameworks, methods, and early findings related to the reach, adoption, implementation, and effectiveness on primary blood pressure outcomes. Methods The CBPR, social support, and motivational interviewing frameworks are applied to test treatment effects of a two-phased CBPR walking intervention, including a 6-month active intervention quasi-experimental phase and a 12-month maintenance randomized controlled trial phase to test dose effects of motivational interviewing. A community advisory board helped develop and execute the culturally-appropriate intervention components, which included social support walking groups led by peer coaches, pedometer diary self-monitoring, monthly diet and physical activity education sessions, and individualized motivational interviewing sessions. Although the study is on-going, three-month data are available and reported. Analyses include descriptive statistics and paired t tests. Results Of 269 enrolled participants, most were African American (94%) females (85%) with a mean age of 43.8 (SD = 12.1) years. Across the 3 months, 90% of all possible pedometer diaries were submitted. Attendance at the monthly education sessions was approximately 33%. At the 3-month follow-up 227 (84%) participants were retained. From baseline to 3 months, systolic BP [126.0 (SD = 19.1) to 120.3 (SD = 17.9) mmHg; p < 0.001] and diastolic BP [83.2 (SD = 12.3) to 80.2 (SD = 11.6) mmHg; p < 0.001] were significantly reduced. Conclusions This CBPR study highlights implementation factors and signifies the community's active participation in the development and execution of this study. Reach and representativeness of enrolled participants are discussed. Adherence to pedometer diary self-monitoring was better than education session participation. Significant decreases in the primary blood pressure outcomes demonstrate early effectiveness. Importantly, future analyses will evaluate the long-term effectiveness of this CBPR behavioral intervention on health outcomes, and help inform the translational capabilities of CBPR efforts. PMID:21663652

  20. BioASF: a framework for automatically generating executable pathway models specified in BioPAX.

    PubMed

    Haydarlou, Reza; Jacobsen, Annika; Bonzanni, Nicola; Feenstra, K Anton; Abeln, Sanne; Heringa, Jaap

    2016-06-15

    Biological pathways play a key role in most cellular functions. To better understand these functions, diverse computational and cell biology researchers use biological pathway data for various analysis and modeling purposes. For specifying these biological pathways, a community of researchers has defined BioPAX and provided various tools for creating, validating and visualizing BioPAX models. However, a generic software framework for simulating BioPAX models is missing. Here, we attempt to fill this gap by introducing a generic simulation framework for BioPAX. The framework explicitly separates the execution model from the model structure as provided by BioPAX, with the advantage that the modelling process becomes more reproducible and intrinsically more modular; this ensures natural biological constraints are satisfied upon execution. The framework is based on the principles of discrete event systems and multi-agent systems, and is capable of automatically generating a hierarchical multi-agent system for a given BioPAX model. To demonstrate the applicability of the framework, we simulated two types of biological network models: a gene regulatory network modeling the haematopoietic stem cell regulators and a signal transduction network modeling the Wnt/β-catenin signaling pathway. We observed that the results of the simulations performed using our framework were entirely consistent with the simulation results reported by the researchers who developed the original models in a proprietary language. The framework, implemented in Java, is open source and its source code, documentation and tutorial are available at http://www.ibi.vu.nl/programs/BioASF. Contact: j.heringa@vu.nl. © The Author 2016. Published by Oxford University Press.
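
    A minimal Python analogue (not BioASF, which is a Java framework driven by BioPAX models) of executing a pathway as event-driven agents: each reaction fires once its input species are present and adds its products, possibly enabling further reactions.

        # Minimal sketch (a Python analogue, not BioASF itself): execute a tiny pathway
        # as a discrete-event simulation in which each reaction "agent" fires when its
        # input species are present. The two-reaction pathway is invented for the example.
        from collections import deque

        REACTIONS = [
            {"name": "bind",     "inputs": {"ligand", "receptor"}, "outputs": {"complex"}},
            {"name": "activate", "inputs": {"complex"},            "outputs": {"tf_active"}},
        ]

        def simulate(initial_species):
            present = set(initial_species)
            events = deque([r for r in REACTIONS if r["inputs"] <= present])
            fired = []
            while events:
                r = events.popleft()
                fired.append(r["name"])
                present |= r["outputs"]
                for cand in REACTIONS:
                    if cand["name"] not in fired and cand not in events and cand["inputs"] <= present:
                        events.append(cand)
            return fired, present

        if __name__ == "__main__":
            order, species = simulate({"ligand", "receptor"})
            print(order)     # ['bind', 'activate']
            print(species)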

  1. Data Automata in Scala

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus

    2014-01-01

    The field of runtime verification has during the last decade seen a multitude of systems for monitoring event sequences (traces) emitted by a running system. The objective is to ensure correctness of a system by checking its execution traces against formal specifications representing requirements. A special challenge is data-parameterized events, where monitors have to keep track of the combination of control states as well as data constraints, relating events and the data they carry across time points. This poses a challenge with respect to the efficiency of monitors, as well as the expressiveness of logics. Data automata are a form of automata in which states are parameterized with data, supporting the monitoring of data-parameterized events. We describe the full details of a very simple API in the Scala programming language, an internal DSL (Domain-Specific Language), implementing data automata. The small implementation suggests a design pattern. Data automata allow transition conditions to refer to other states than the source state, and allow target states of transitions to be inlined, offering a temporal logic flavored notation. An embedding of a logic in a high-level language like Scala in addition allows monitors to be programmed using all of Scala's language constructs, offering the full flexibility of a programming language. The framework is demonstrated on an XML processing scenario previously addressed in related work.
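
    A minimal Python analogue of a data-parameterized monitor (the paper's contribution is a Scala DSL; this is not that API): the monitor keeps one piece of state per data value, here checking that every opened file is eventually closed and never written while not open.

        # Minimal sketch (a Python analogue, not the paper's Scala DSL): a monitor whose
        # states are parameterized by data, here checking that every opened file is
        # eventually closed and never written while not open.

        class OpenCloseMonitor:
            def __init__(self):
                self.state = {}          # file id -> "open" | "closed"
                self.errors = []

            def handle(self, event, fid):
                s = self.state.get(fid)
                if event == "open":
                    self.state[fid] = "open"
                elif event == "write" and s != "open":
                    self.errors.append(f"write to {fid} while not open")
                elif event == "close":
                    if s != "open":
                        self.errors.append(f"close of {fid} that is not open")
                    else:
                        self.state[fid] = "closed"

            def end_of_trace(self):
                self.errors += [f"{fid} never closed"
                                for fid, s in self.state.items() if s == "open"]
                return self.errors

        if __name__ == "__main__":
            m = OpenCloseMonitor()
            trace = [("open", "a"), ("write", "a"), ("close", "a"),
                     ("write", "a"), ("open", "b")]
            for ev, fid in trace:
                m.handle(ev, fid)
            print(m.end_of_trace())   # write-after-close on 'a', and 'b' never closed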

  2. Contemporary nurse executive practice: one framework, one dozen cautions.

    PubMed

    Fralic, Maryann F

    2010-03-01

    How does today's nurse executive function effectively within an incredibly complex health care environment? Does it require different skills, new competencies, new behaviors? Can nurse executives, irrespective of setting, who have always been successful in the past, move forward with the same strategic and operational behaviors? Is there "new work" associated with a new context for executive practice? To answer these questions, this article considers key contemporary issues. Copyright 2010 Elsevier Inc. All rights reserved.

  3. From intentions to actions: Neural oscillations encode motor processes through phase, amplitude and phase-amplitude coupling.

    PubMed

    Combrisson, Etienne; Perrone-Bertolotti, Marcela; Soto, Juan Lp; Alamian, Golnoush; Kahane, Philippe; Lachaux, Jean-Philippe; Guillot, Aymeric; Jerbi, Karim

    2017-02-15

    Goal-directed motor behavior is associated with changes in patterns of rhythmic neuronal activity across widely distributed brain areas. In particular, movement initiation and execution are mediated by patterns of synchronization and desynchronization that occur concurrently across distinct frequency bands and across multiple motor cortical areas. To date, motor-related local oscillatory modulations have been predominantly examined by quantifying increases or suppressions in spectral power. However, beyond signal power, spectral properties such as phase and phase-amplitude coupling (PAC) have also been shown to carry information with regards to the oscillatory dynamics underlying motor processes. Yet, the distinct functional roles of phase, amplitude and PAC across the planning and execution of goal-directed motor behavior remain largely elusive. Here, we address this question with unprecedented resolution thanks to multi-site intracerebral EEG recordings in human subjects while they performed a delayed motor task. To compare the roles of phase, amplitude and PAC, we monitored intracranial brain signals from 748 sites across six medically intractable epilepsy patients at movement execution, and during the delay period where motor intention is present but execution is withheld. In particular, we used a machine-learning framework to identify the key contributions of various neuronal responses. We found a high degree of overlap between brain network patterns observed during planning and those present during execution. Prominent amplitude increases in the delta (2-4Hz) and high gamma (60-200Hz) bands were observed during both planning and execution. In contrast, motor alpha (8-13Hz) and beta (13-30Hz) power were suppressed during execution, but enhanced during the delay period. Interestingly, single-trial classification revealed that low-frequency phase information, rather than spectral power change, was the most discriminant feature in dissociating action from intention. Additionally, despite providing weaker decoding, PAC features led to statistically significant classification of motor states, particularly in anterior cingulate cortex and premotor brain areas. These results advance our understanding of the distinct and partly overlapping involvement of phase, amplitude and the coupling between them, in the neuronal mechanisms underlying motor intentions and executions. Copyright © 2016 Elsevier Inc. All rights reserved.
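
    For readers unfamiliar with PAC, a generic mean-vector-length estimator can be sketched as follows (this is a standard textbook estimator with illustrative frequency bands and a synthetic signal, not the study's analysis pipeline).

        # Minimal sketch (a generic mean-vector-length PAC estimator, not the study's
        # pipeline): extract low-frequency phase and high-frequency amplitude with
        # band-pass filtering plus the Hilbert transform, then measure their coupling.
        # Frequency bands and the synthetic signal are illustrative.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def bandpass(x, fs, lo, hi, order=4):
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        def pac_mvl(x, fs, phase_band=(2, 4), amp_band=(60, 100)):
            """Mean vector length of amplitude-weighted phases (higher = stronger PAC)."""
            phase = np.angle(hilbert(bandpass(x, fs, *phase_band)))
            amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))
            return np.abs(np.mean(amp * np.exp(1j * phase)))

        if __name__ == "__main__":
            fs, t = 1000, np.arange(0, 10, 1 / 1000)
            slow = np.sin(2 * np.pi * 3 * t)
            coupled = (1 + slow) * np.sin(2 * np.pi * 80 * t)     # gamma riding on delta
            noise = np.random.randn(t.size)
            print(pac_mvl(slow + coupled + 0.5 * noise, fs))      # elevated coupling
            print(pac_mvl(np.random.randn(t.size), fs))           # near-zero coupling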

  4. 17 CFR 151.11 - Designated contract market and swap execution facility position limits and accountability rules.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Designated contract market and swap execution facility position limits and accountability rules. (a) Spot... rules and procedures for monitoring and enforcing spot-month position limits set at levels no greater... monitoring and enforcing spot-month position limits set at levels no greater than 25 percent of estimated...

  5. 17 CFR 151.11 - Designated contract market and swap execution facility position limits and accountability rules.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Designated contract market and swap execution facility position limits and accountability rules. (a) Spot... rules and procedures for monitoring and enforcing spot-month position limits set at levels no greater... monitoring and enforcing spot-month position limits set at levels no greater than 25 percent of estimated...

  6. The Source of Execution-Related Dual-Task Interference: Motor Bottleneck or Response Monitoring?

    ERIC Educational Resources Information Center

    Bratzke, Daniel; Rolke, Bettina; Ulrich, Rolf

    2009-01-01

    The present study assessed the underlying mechanism of execution-related dual-task interference in the psychological refractory period (PRP) paradigm. The motor bottleneck hypothesis attributes this interference to a processing limitation at the motor level. By contrast, the response monitoring hypothesis attributes it to a bottleneck process that…

  7. (abstract) An Ada Language Modular Telerobot Task Execution System

    NASA Technical Reports Server (NTRS)

    Backes, Paul; Long, Mark; Steele, Robert

    1993-01-01

    A telerobotic task execution system is described which has been developed for space flight applications. The Modular Telerobot Task Execution System (MOTES) provides the remote site task execution capability in a local-remote telerobotic system. The system provides supervised autonomous control, shared control, and teleoperation for a redundant manipulator. The system is capable of nominal task execution as well as monitoring and reflex motion.

  8. Deconstructing the associations between executive functioning, problematic alcohol use and intimate partner aggression: A dyadic analysis.

    PubMed

    Parrott, Dominic J; Swartout, Kevin M; Eckhardt, Christopher I; Subramani, Olivia S

    2017-01-01

    Problematic drinking and executive functioning deficits are two known risk factors for intimate partner aggression (IPA). However, executive functioning is a multifaceted construct, and it is not clear whether deficits in specific components of executive functioning are differentially associated with IPA perpetration generally and within the context of problematic alcohol use. To address this question, the present study investigated the effects of problematic drinking and components of executive functioning on physical IPA perpetration within a dyadic framework. Participants were 582 heavy drinking couples (total n = 1164) with a recent history of psychological and/or physical IPA recruited from two metropolitan cities in the USA. Multilevel models were used to examine effects within an actor-partner interdependence framework. The highest levels of physical IPA were observed among actors who reported everyday consequences of executive functioning deficits related to emotional dysregulation whose partners were problematic drinkers. However, the association between executive functioning deficits related to emotional dysregulation and IPA was stronger towards partners who were non-problematic drinkers relative to partners who were problematic drinkers. No such effect was found for executive functioning deficits related to behavioural regulation. Results provide insight into how problematic drinking and specific executive functioning deficits interact dyadically in relation to physical IPA perpetration. [Parrott DJ, Swartout KM, Eckhardt CI, Subramani OS. Deconstructing the associations between executive functioning, problematic alcohol use and intimate partner aggression: A dyadic analysis. Drug Alcohol Rev 2017;36:88-96]. © 2017 Australasian Professional Society on Alcohol and other Drugs.

  9. 78 FR 16699 - National Maritime Security Advisory Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... Executive Order \\1\\ to strengthen the cybersecurity of critical infrastructure by increasing information sharing and by jointly developing and implementing a framework of cybersecurity practices with our...-press-office/2013/02/12/executive-order-improving-critical-infrastructure-cybersecurity . (2...

  10. Verification of Java Programs using Symbolic Execution and Invariant Generation

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina; Visser, Willem

    2004-01-01

    Software verification is recognized as an important and difficult problem. We present a novel framework, based on symbolic execution, for the automated verification of software. The framework uses annotations in the form of method specifications and loop invariants. We present a novel iterative technique that uses invariant strengthening and approximation for discovering these loop invariants automatically. The technique handles different types of data (e.g. boolean and numeric constraints, dynamically allocated structures and arrays) and it allows for checking universally quantified formulas. Our framework is built on top of the Java PathFinder model checking toolset and it was used for the verification of several non-trivial Java programs.
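
    The core obligation behind automated invariant discovery is checking that a candidate invariant holds on loop entry and is preserved by each iteration. The sketch below is not Java PathFinder's API; it uses the z3 SMT solver as an assumed stand-in constraint back-end, and the loop and candidate invariant are illustrative.

    ```python
    # Minimal sketch of the check an invariant-generation framework must discharge:
    # a candidate invariant must hold on loop entry (initiation) and be preserved
    # by one iteration under the loop guard (consecution). Uses the z3 SMT solver
    # as an assumed back-end; the loop and the invariant are illustrative.
    from z3 import Int, And, Implies, Not, Solver, unsat

    # Loop under analysis:  i = 0; while i < n: i = i + 2
    i, i2, n = Int("i"), Int("i2"), Int("n")

    def inv(v):
        # Candidate invariant: the counter stays non-negative and even.
        return And(v >= 0, v % 2 == 0)

    def proved(formula):
        # A formula is valid iff its negation is unsatisfiable.
        s = Solver()
        s.add(Not(formula))
        return s.check() == unsat

    initiation = Implies(i == 0, inv(i))
    consecution = Implies(And(inv(i), i < n, i2 == i + 2), inv(i2))

    print("initiation:", proved(initiation))    # expected: True
    print("consecution:", proved(consecution))  # expected: True
    ```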

  11. 17 CFR 151.11 - Designated contract market and swap execution facility position limits and accountability rules.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    .... (a) Spot-month limits. (1) For all Referenced Contracts executed pursuant to their rules, swap..., establish rules and procedures for monitoring and enforcing spot-month position limits set at levels no... monitoring and enforcing spot-month position limits set at levels no greater than 25 percent of estimated...

  12. The office of strategy management.

    PubMed

    Kaplan, Robert S; Norton, David P

    2005-10-01

    There is a disconnect in most companies between strategy formulation and strategy execution. On average, 95% of a company's employees are unaware of, or do not understand, its strategy. If employees are unaware of the strategy, they surely cannot help the organization implement it effectively. It doesn't have to be like this. For the past 15 years, the authors have studied companies that achieved performance breakthroughs by adopting the Balanced Scorecard and its associated tools to help them better communicate strategy to their employees and to guide and monitor the execution of that strategy. Some companies, of course, have achieved better, longer-lasting improvements than others. The organizations that have managed to sustain their strategic focus have typically established a new corporate-level unit to oversee all activities related to strategy: an office of strategy management (OSM). The OSM, in effect, acts as the CEO's chief of staff. It coordinates an array of tasks: communicating corporate strategy; ensuring that enterprise-level plans are translated into the plans of the various units and departments; executing strategic initiatives to deliver on the grand design; aligning employees' plans for competency development with strategic objectives; and testing and adapting the strategy to stay abreast of the competition. The OSM does not do all the work, but it facilitates the processes so that strategy is executed in an integrated fashion across the enterprise. Although the companies that Kaplan and Norton studied use the Balanced Scorecard as the framework for their strategy management systems, the authors say the lessons of the OSM are applicable even to companies that do not use it.

  13. A Categorization of Dynamic Analyzers

    NASA Technical Reports Server (NTRS)

    Lujan, Michelle R.

    1997-01-01

    Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on the syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model and monitor programs provides a framework for understanding the program's dynamic behavior from different perspectives through analysis of the input/output data.

  14. Key principles for a national clinical decision support knowledge sharing framework: synthesis of insights from leading subject matter experts.

    PubMed

    Kawamoto, Kensaku; Hongsermeier, Tonya; Wright, Adam; Lewis, Janet; Bell, Douglas S; Middleton, Blackford

    2013-01-01

    To identify key principles for establishing a national clinical decision support (CDS) knowledge sharing framework. As part of an initiative by the US Office of the National Coordinator for Health IT (ONC) to establish a framework for national CDS knowledge sharing, key stakeholders were identified. Stakeholders' viewpoints were obtained through surveys and in-depth interviews, and findings and relevant insights were summarized. Based on these insights, key principles were formulated for establishing a national CDS knowledge sharing framework. Nineteen key stakeholders were recruited, including six executives from electronic health record system vendors, seven executives from knowledge content producers, three executives from healthcare provider organizations, and three additional experts in clinical informatics. Based on these stakeholders' insights, five key principles were identified for effectively sharing CDS knowledge nationally. These principles are (1) prioritize and support the creation and maintenance of a national CDS knowledge sharing framework; (2) facilitate the development of high-value content and tooling, preferably in an open-source manner; (3) accelerate the development or licensing of required, pragmatic standards; (4) acknowledge and address medicolegal liability concerns; and (5) establish a self-sustaining business model. Based on the principles identified, a roadmap for national CDS knowledge sharing was developed through the ONC's Advancing CDS initiative. The study findings may serve as a useful guide for ongoing activities by the ONC and others to establish a national framework for sharing CDS knowledge and improving clinical care.

  15. 78 FR 19277 - National Maritime Security Advisory Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-29

    ... Obama signed an Executive Order to strengthen the cybersecurity of critical infrastructure by increasing information sharing and by jointly developing and implementing a framework of cybersecurity practices with our...-press-office/2013/02/12/executive-order-improving-critical-infrastructure-cybersecurity . (2...

  16. Architecture for reactive planning of robot actions

    NASA Astrophysics Data System (ADS)

    Riekki, Jukka P.; Roening, Juha

    1995-01-01

    In this article, a reactive system for planning robot actions is described. The described hierarchical control system architecture consists of planning-executing-monitoring-modelling elements (PEMM elements). A PEMM element is a goal-oriented, combined processing and data element. It includes a planner, an executor, a monitor, a modeler, and a local model. The elements form a tree-like structure. An element receives tasks from its ancestor and sends subtasks to its descendants. The model knowledge is distributed into the local models, which are connected to each other. The elements can be synchronized. The PEMM architecture is strictly hierarchical. It integrates planning, sensing, and modelling into a single framework. A PEMM-based control system is reactive, as it can cope with asynchronous events and operate under time constraints. The control system is intended to be used primarily to control mobile robots and robot manipulators in dynamic and partially unknown environments. It is suitable especially for applications consisting of physically separated devices and computing resources.
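
    The element structure described above can be sketched as a small class: each node plans a task into subtasks, delegates to its descendants, executes at the leaves, and monitors results into its local model. This is an illustrative sketch under assumed names, not the authors' implementation.

    ```python
    # Illustrative sketch of a PEMM element: plan a task into subtasks, delegate
    # them to child elements, execute at the leaves, and monitor the outcome into
    # the local model. Class and method names are hypothetical.
    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class PEMMElement:
        name: str
        children: List["PEMMElement"] = field(default_factory=list)
        local_model: Dict[str, bool] = field(default_factory=dict)

        def plan(self, task: str) -> List[str]:
            # Trivial planner: one subtask per descendant, or the task itself at a leaf.
            if not self.children:
                return [task]
            return [f"{task}/{child.name}" for child in self.children]

        def act(self, subtask: str) -> str:
            # Leaf-level executor; a real element would drive actuators here.
            return f"done:{subtask}"

        def execute(self, task: str) -> List[str]:
            results: List[str] = []
            if self.children:
                for subtask, child in zip(self.plan(task), self.children):
                    results.extend(child.execute(subtask))   # delegate downwards
            else:
                results.append(self.act(task))               # execute at the leaf
            self.monitor(task, results)
            return results

        def monitor(self, task: str, results: List[str]) -> None:
            # Record the observed outcome of the task in the local model.
            self.local_model[task] = all(r.startswith("done") for r in results)


    robot = PEMMElement("robot", children=[PEMMElement("base"), PEMMElement("arm")])
    print(robot.execute("move_to_target"))
    print(robot.local_model)
    ```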

  17. Volunteer water monitoring: A guide for state managers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1990-08-01

    Contents: executive summary; volunteers in water monitoring; planning a volunteer monitoring program; implementing a volunteer monitoring program; providing credible information; costs and funding; and descriptions of five successful programs.

  18. Movement Education Framework (MEF) Made EZ!

    ERIC Educational Resources Information Center

    Weiller-Abels, Karen; Bridges, Jennifer

    2011-01-01

    All physical educators want to provide lessons that foster success. Particularly essential to the movement education framework is not only providing lessons that foster motor success, but also to develop knowledge about movement to help the learner develop skill in executing all different types of movement. The framework and examples provided in…

  19. Conceptual Model-Based Systems Biology: Mapping Knowledge and Discovering Gaps in the mRNA Transcription Cycle

    PubMed Central

    Somekh, Judith; Choder, Mordechai; Dori, Dov

    2012-01-01

    We propose a Conceptual Model-based Systems Biology framework for qualitative modeling, executing, and eliciting knowledge gaps in molecular biology systems. The framework is an adaptation of Object-Process Methodology (OPM), a graphical and textual executable modeling language. OPM enables concurrent representation of the system's structure—the objects that comprise the system, and behavior—how processes transform objects over time. Applying a top-down approach of recursively zooming into processes, we model a case in point—the mRNA transcription cycle. Starting with this high level cell function, we model increasingly detailed processes along with participating objects. Our modeling approach is capable of modeling molecular processes such as complex formation, localization and trafficking, molecular binding, enzymatic stimulation, and environmental intervention. At the lowest level, similar to the Gene Ontology, all biological processes boil down to three basic molecular functions: catalysis, binding/dissociation, and transporting. During modeling and execution of the mRNA transcription model, we discovered knowledge gaps, which we present and classify into various types. We also show how model execution enhances a coherent model construction. Identification and pinpointing knowledge gaps is an important feature of the framework, as it suggests where research should focus and whether conjectures about uncertain mechanisms fit into the already verified model. PMID:23308089

  20. Source Monitoring and Executive Function in 2.5- to 3-Year-Olds

    ERIC Educational Resources Information Center

    Hala, Suzanne; McKay, Lee-Ann; Brown, Alisha M. B.; San Juan, Valerie

    2016-01-01

    Hala, Brown, McKay, and San Juan (2013) found that children as young as 2.5 years of age demonstrated high levels of accuracy when asked to recall whether they or the experimenter had carried out a particular action. In the research reported here, we examined the relation of early-emerging source monitoring to executive function abilities.…

  1. Reconfiguration in Robust Distributed Real-Time Systems Based on Global Checkpoints

    DTIC Science & Technology

    1991-12-01

    achieved by utilizing distributed systems in which a single application program executes on multiple processors, connected to a network. The distributed nature of such systems makes it possible to ...resident at every node. However, the responsibility for execution of a particular function is assigned to only one node in this framework. This function

  2. 78 FR 55089 - National Maritime Security Advisory Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-09

    ... February 12, 2013, President Barack Obama signed an Executive Order \\1\\ to strengthen the cybersecurity of... framework of cybersecurity practices with our industry partners. This is a continuation of a discussion held.../02/12/executive-order-improving-critical-infrastructure-cybersecurity . (2) Presidential Policy...

  3. Executive functions as predictors of visual-motor integration in children with intellectual disability.

    PubMed

    Memisevic, Haris; Sinanovic, Osman

    2013-12-01

    The goal of this study was to assess the relationship between visual-motor integration and executive functions, and in particular, the extent to which executive functions can predict visual-motor integration skills in children with intellectual disability. The sample consisted of 90 children (54 boys, 36 girls; M age = 11.3 yr., SD = 2.7, range 7-15) with intellectual disabilities of various etiologies. The measure of executive functions were 8 subscales of the Behavioral Rating Inventory of Executive Function (BRIEF) consisting of Inhibition, Shifting, Emotional Control, Initiating, Working memory, Planning, Organization of material, and Monitoring. Visual-motor integration was measured with the Acadia test of visual-motor integration (VMI). Regression analysis revealed that BRIEF subscales explained 38% of the variance in VMI scores. Of all the BRIEF subscales, only two were statistically significant predictors of visual-motor integration: Working memory and Monitoring. Possible implications of this finding are further elaborated.

  4. ADAMS executive and operating system

    NASA Technical Reports Server (NTRS)

    Pittman, W. D.

    1981-01-01

    The ADAMS Executive and Operating System is described: a multitasking environment under which a variety of data reduction, display, and utility programs are executed, and which provides a high level of isolation between programs, allowing them to be developed and modified independently. The Airborne Data Analysis/Monitor System (ADAMS) was developed to provide a real-time data monitoring and analysis capability onboard Boeing commercial airplanes during flight testing. It inputs sensor data from an airplane, derives performance data by applying transforms to the collected sensor data, and presents this data to test personnel via various display media. Current utilization and future development are addressed.

  5. Towards a Decision Support System for Space Flight Operations

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Hogle, Charles; Ruszkowski, James

    2013-01-01

    The Mission Operations Directorate (MOD) at the Johnson Space Center (JSC) has put in place a Model Based Systems Engineering (MBSE) technological framework for the development and execution of the Flight Production Process (FPP). This framework has provided much added value and return on investment to date. This paper describes a vision for a model based Decision Support System (DSS) for the development and execution of the FPP and its design and development process. The envisioned system extends the existing MBSE methodology and technological framework which is currently in use. The MBSE technological framework currently in place enables the systematic collection and integration of data required for building an FPP model for a diverse set of missions. This framework includes the technology, people and processes required for rapid development of architectural artifacts. It is used to build a feasible FPP model for the first flight of spacecraft and for recurrent flights throughout the life of the program. This model greatly enhances our ability to effectively engage with a new customer. It provides a preliminary work breakdown structure, data flow information and a master schedule based on its existing knowledge base. These artifacts are then refined and iterated upon with the customer for the development of a robust end-to-end, high-level integrated master schedule and its associated dependencies. The vision is to enhance this framework to enable its application for uncertainty management, decision support and optimization of the design and execution of the FPP by the program. Furthermore, this enhanced framework will enable the agile response and redesign of the FPP based on observed system behavior. The discrepancy between the anticipated system behavior and the observed behavior may be due to the processing of tasks internally, or due to external factors such as changes in program requirements or conditions associated with other organizations that are outside of MOD. The paper provides a roadmap for the three increments of this vision. These increments include (1) hardware and software system components and interfaces with the NASA ground system, (2) uncertainty management and (3) re-planning and automated execution. Each of these increments provides value independently, but some may also enable the building of a subsequent increment.

  6. AdaFF: Adaptive Failure-Handling Framework for Composite Web Services

    NASA Astrophysics Data System (ADS)

    Kim, Yuna; Lee, Wan Yeon; Kim, Kyong Hoon; Kim, Jong

    In this paper, we propose a novel Web service composition framework which dynamically accommodates various failure recovery requirements. In the proposed framework, called Adaptive Failure-handling Framework (AdaFF), failure-handling submodules are prepared during the design of a composite service, and some of them are systematically selected and automatically combined with the composite Web service at service instantiation in accordance with the requirements of individual users. In contrast, existing frameworks cannot adapt the failure-handling behaviors to users' requirements. AdaFF rapidly delivers a composite service supporting the requirement-matched failure handling without manual development, and contributes to a flexible composite Web service design in that service architects need not be concerned with failure handling or the variable requirements of users. For proof of concept, we implement a prototype system of the AdaFF, which automatically generates a composite service instance with Web Services Business Process Execution Language (WS-BPEL) according to the user's requirements specified in XML format and executes the generated instance on the ActiveBPEL engine.

  7. A Program Management Framework for Facilities Managers

    ERIC Educational Resources Information Center

    King, Dan

    2012-01-01

    The challenge faced by senior facility leaders is not how to execute a single project, but rather, how to successfully execute a large program consisting of hundreds of projects. Senior facilities officers at universities, school districts, hospitals, airports, and other organizations with extensive facility inventories, typically manage project…

  8. Effectiveness of Therapeutic Programs for Students with ADHD with Executive Function Deficits

    ERIC Educational Resources Information Center

    Chaimaha, Napalai; Sriphetcharawut, Sarinya; Lersilp, Suchitporn; Chinchai, Supaporn

    2017-01-01

    The purpose of this study was to investigate the effectiveness of therapeutic programs, an executive function training program and a collaborative program, for students with attention-deficit/hyperactivity disorder (ADHD) with executive function deficits (EFDs), especially regarding working memory, planning, and monitoring. The participants were…

  9. Moles: Tool-Assisted Environment Isolation with Closures

    NASA Astrophysics Data System (ADS)

    de Halleux, Jonathan; Tillmann, Nikolai

    Isolating test cases from environment dependencies is often desirable, as it increases test reliability and reduces test execution time. However, code that calls non-virtual methods or consumes sealed classes is often impossible to test in isolation. Moles is a new lightweight framework which addresses this problem. For any .NET method, Moles allows test-code to provide alternative implementations, given as .NET delegates, for which C# provides very concise syntax while capturing local variables in a closure object. Using code instrumentation, the Moles framework will redirect calls to provided delegates instead of the original methods. The Moles framework is designed to work together with the dynamic symbolic execution tool Pex to enable automated test generation. In a case study, testing code programmed against the Microsoft SharePoint Foundation API, we achieved full code coverage while running tests in isolation without an actual SharePoint server. The Moles framework integrates with .NET and Visual Studio.
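
    Moles' delegate redirection is specific to .NET and C#. Purely as an analogue of the same isolation idea, the hedged Python sketch below redirects an environment-dependent call to a test-supplied closure; the module and function names are hypothetical and this is not Moles' API.

    ```python
    # Analogue of the isolation idea in Python (not Moles' API): redirect an
    # environment-dependent call to a test-supplied replacement so the test runs
    # without the real backend. Module and function names are hypothetical.
    import unittest
    from unittest import mock


    def fetch_document(doc_id: str) -> str:
        # Stands in for a call that would normally hit a remote server.
        raise RuntimeError("no server available in the test environment")


    def document_title(doc_id: str) -> str:
        return fetch_document(doc_id).splitlines()[0]


    class TitleTest(unittest.TestCase):
        def test_title_without_server(self):
            canned = "Quarterly report\nbody text"
            # Redirect fetch_document to a closure over canned test data,
            # much as Moles detours a .NET method to a delegate.
            with mock.patch(f"{__name__}.fetch_document", lambda doc_id: canned):
                self.assertEqual(document_title("doc-1"), "Quarterly report")


    if __name__ == "__main__":
        unittest.main()
    ```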

  10. CMS users data management service integration and first experiences with its NoSQL data storage

    NASA Astrophysics Data System (ADS)

    Riahi, H.; Spiga, D.; Boccali, T.; Ciangottini, D.; Cinquilli, M.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Santocchia, A.

    2014-06-01

    The distributed data analysis workflow in CMS assumes that jobs run in a different location from where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This service was originally developed to address the inefficiency of using CMS computing resources when analysis job outputs are transferred synchronously from the execution node to the remote site as soon as they are produced. The AsyncStageOut is designed as a thin application relying only on the NoSQL database (CouchDB) as input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all user file handling steps, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/Atlas Analysis Framework. It is expected to manage nearly 200k user files per day from close to 1000 individual users per month with minimal delays, and to provide real-time monitoring and reports to users and service operators, while being highly available. The associated data volume represents a new set of challenges in the areas of database scalability and service performance and efficiency. In this paper, we present an overview of the AsyncStageOut model and the integration strategy with the Common Analysis Framework. The motivations for using the NoSQL technology are also presented, as well as data design and the techniques used for efficient indexing and monitoring of the data. We describe the deployment model for the high availability and scalability of the service. We also discuss the hardware requirements and the results achieved as they were determined by testing with actual data and realistic loads during the commissioning and the initial production phase with the Common Analysis Framework.
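
    The "thin application over CouchDB" design described above boils down to storing and querying JSON documents through CouchDB's HTTP API. The sketch below illustrates that interaction with Python's requests library; the host, database name, credentials and document fields are assumptions for illustration, not AsyncStageOut's actual schema.

    ```python
    # Minimal sketch of the kind of CouchDB interaction a thin service like
    # AsyncStageOut relies on: each transfer request is a JSON document stored
    # and queried over CouchDB's HTTP API. Host, database name, credentials and
    # document fields are assumptions for illustration, not the real schema.
    import requests

    COUCH = "http://localhost:5984"      # placeholder CouchDB endpoint
    DB = "asyncstageout_test"            # placeholder database name
    AUTH = ("admin", "secret")           # assumed credentials

    # Create the database if it does not exist (HTTP 412/409 means it already does).
    requests.put(f"{COUCH}/{DB}", auth=AUTH)

    # Store one transfer-request document.
    doc_id = "transfer-job-42"
    doc = {
        "user": "alice",
        "source_lfn": "/store/user/alice/output_42.root",
        "destination": "T2_EXAMPLE_Site",
        "state": "new",
    }
    resp = requests.put(f"{COUCH}/{DB}/{doc_id}", json=doc, auth=AUTH)
    print(resp.status_code, resp.json())

    # Read it back, e.g. for monitoring.
    fetched = requests.get(f"{COUCH}/{DB}/{doc_id}", auth=AUTH).json()
    print(fetched["state"], fetched["destination"])
    ```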

  11. FRIEDA: Flexible Robust Intelligent Elastic Data Management Framework

    DOE PAGES

    Ghoshal, Devarshi; Hendrix, Valerie; Fox, William; ...

    2017-02-01

    Scientific applications are increasingly using cloud resources for their data analysis workflows. However, managing data effectively and efficiently over these cloud resources is challenging due to the myriad storage choices with different performance, cost trade-offs, complex application choices and complexity associated with elasticity, failure rates in these environments. The different data access patterns for data-intensive scientific applications require a more flexible and robust data management solution than the ones currently in existence. FRIEDA is a Flexible Robust Intelligent Elastic Data Management framework that employs a range of data management strategies in cloud environments. FRIEDA can manage storage and data lifecycle of applications in cloud environments. There are four different stages in the data management lifecycle of FRIEDA – (i) storage planning, (ii) provisioning and preparation, (iii) data placement, and (iv) execution. FRIEDA defines a data control plane and an execution plane. The data control plane defines the data partition and distribution strategy, whereas the execution plane manages the execution of the application using a master-worker paradigm. FRIEDA also provides different data management strategies, either to partition the data in real-time, or predetermine the data partitions prior to application execution.
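
    The control-plane/execution-plane split described above can be illustrated with a short master-worker sketch: a planning step pre-determines data partitions, and a worker pool executes them. All names and the toy workload are assumptions, not FRIEDA's interfaces.

    ```python
    # Illustrative sketch of the control-plane / execution-plane split described
    # above: the control plane decides how the input data is partitioned, and the
    # execution plane runs those partitions through a master-worker pool. All
    # names and the toy workload are assumptions, not FRIEDA's API.
    from multiprocessing import Pool
    from typing import Callable, List, Sequence


    def plan_partitions(records: Sequence[int], n_parts: int) -> List[List[int]]:
        """Control plane: pre-determine the data partitions before execution."""
        return [list(records[i::n_parts]) for i in range(n_parts)]


    def analyze(partition: List[int]) -> int:
        """Worker task: a stand-in for the application's per-partition analysis."""
        return sum(x * x for x in partition)


    def execute(partitions: List[List[int]], worker: Callable, n_workers: int = 4):
        """Execution plane: the master farms partitions out to worker processes."""
        with Pool(processes=n_workers) as pool:
            return pool.map(worker, partitions)


    if __name__ == "__main__":
        data = list(range(1000))
        parts = plan_partitions(data, n_parts=4)   # storage/partition planning
        results = execute(parts, analyze)          # placement + execution
        print(sum(results))
    ```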

  12. FRIEDA: Flexible Robust Intelligent Elastic Data Management Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoshal, Devarshi; Hendrix, Valerie; Fox, William

    Scientific applications are increasingly using cloud resources for their data analysis workflows. However, managing data effectively and efficiently over these cloud resources is challenging due to the myriad storage choices with different performance, cost trade-offs, complex application choices and complexity associated with elasticity, failure rates in these environments. The different data access patterns for data-intensive scientific applications require a more flexible and robust data management solution than the ones currently in existence. FRIEDA is a Flexible Robust Intelligent Elastic Data Management framework that employs a range of data management strategies in cloud environments. FRIEDA can manage storage and data lifecycle of applications in cloud environments. There are four different stages in the data management lifecycle of FRIEDA – (i) storage planning, (ii) provisioning and preparation, (iii) data placement, and (iv) execution. FRIEDA defines a data control plane and an execution plane. The data control plane defines the data partition and distribution strategy, whereas the execution plane manages the execution of the application using a master-worker paradigm. FRIEDA also provides different data management strategies, either to partition the data in real-time, or predetermine the data partitions prior to application execution.

  13. Does predictability matter? Effects of cue predictability on neurocognitive mechanisms underlying prospective memory.

    PubMed

    Cona, Giorgia; Arcara, Giorgio; Tarantino, Vincenza; Bisiacchi, Patrizia S

    2015-01-01

    Prospective memory (PM) represents the ability to successfully realize intentions when the appropriate moment or cue occurs. In this study, we used event-related potentials (ERPs) to explore the impact of cue predictability on the cognitive and neural mechanisms supporting PM. Participants performed an ongoing task and, simultaneously, had to remember to execute a pre-specified action when they encountered the PM cues. The occurrence of the PM cues was predictable (being signaled by a warning cue) for some participants and was completely unpredictable for others. In the predictable cue condition, the behavioral and ERP correlates of strategic monitoring were observed mainly in the ongoing trials wherein the PM cue was expected. In the unpredictable cue condition they were instead shown throughout the whole PM block. This pattern of results suggests that, in the predictable cue condition, participants engaged monitoring only when subjected to a context wherein the PM cue was expected, and disengaged monitoring when the PM cue was not expected. Conversely, participants in the unpredictable cue condition distributed their resources for strategic monitoring in more continuous manner. The findings of this study support the most recent views-the "Dynamic Multiprocess Framework" and the "Attention to Delayed Intention" (AtoDI) model-confirming that strategic monitoring is a flexible mechanism that is recruited mainly when a PM cue is expected and that may interact with bottom-up spontaneous processes.

  14. Exploiting Vector and Multicore Parallelism for Recursive, Data- and Task-Parallel Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Bin; Krishnamoorthy, Sriram; Agrawal, Kunal

    Modern hardware contains parallel execution resources that are well-suited for data parallelism (vector units) and task parallelism (multicores). However, most work on parallel scheduling focuses on one type of hardware or the other. In this work, we present a scheduling framework that allows for a unified treatment of task- and data-parallelism. Our key insight is an abstraction, task blocks, that uniformly handles data-parallel iterations and task-parallel tasks, allowing them to be scheduled on vector units or executed independently on multicores. Our framework allows us to define schedulers that can dynamically select between executing task blocks on vector units or multicores. We show that these schedulers are asymptotically optimal, and deliver the maximum amount of parallelism available in computation trees. To evaluate our schedulers, we develop program transformations that can convert mixed data- and task-parallel programs into task block-based programs. Using a prototype instantiation of our scheduling framework, we show that, on an 8-core system, we can simultaneously exploit vector and multicore parallelism to achieve 14×-108× speedup over sequential baselines.
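
    The task-block abstraction can be illustrated with a short sketch in which one wrapper type holds either a data-parallel chunk of iterations or an independent task, and a single scheduler submits both to the same pool. This is an illustration of the abstraction under assumed names, not the paper's provably optimal scheduler.

    ```python
    # Sketch (under assumed names) of the task-block idea: a single abstraction
    # that wraps either a data-parallel chunk of iterations or an independent
    # task, so one scheduler can run both kinds of work on the same pool. An
    # illustration of the abstraction, not the paper's optimal scheduler.
    from concurrent.futures import ThreadPoolExecutor
    from dataclasses import dataclass
    from typing import Callable, List, Optional, Sequence


    @dataclass
    class TaskBlock:
        func: Callable
        chunk: Optional[Sequence] = None  # data-parallel iterations, if any

        def run(self):
            if self.chunk is not None:
                # Data-parallel block: apply func to every element of the chunk
                # (the place where a vector unit would be used).
                return [self.func(x) for x in self.chunk]
            # Task-parallel block: an independent unit of work.
            return self.func()


    def schedule(blocks: List[TaskBlock], workers: int = 4):
        # Uniform treatment: every block, data- or task-parallel, is simply
        # submitted to the same pool of cores.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(block.run) for block in blocks]
            return [f.result() for f in futures]


    blocks = [
        TaskBlock(lambda x: x * x, chunk=range(8)),   # data parallelism
        TaskBlock(lambda: sum(range(100))),           # task parallelism
        TaskBlock(lambda x: x + 1, chunk=[10, 20, 30]),
    ]
    print(schedule(blocks))
    ```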

  15. Key principles for a national clinical decision support knowledge sharing framework: synthesis of insights from leading subject matter experts

    PubMed Central

    Hongsermeier, Tonya; Wright, Adam; Lewis, Janet; Bell, Douglas S; Middleton, Blackford

    2013-01-01

    Objective To identify key principles for establishing a national clinical decision support (CDS) knowledge sharing framework. Materials and methods As part of an initiative by the US Office of the National Coordinator for Health IT (ONC) to establish a framework for national CDS knowledge sharing, key stakeholders were identified. Stakeholders' viewpoints were obtained through surveys and in-depth interviews, and findings and relevant insights were summarized. Based on these insights, key principles were formulated for establishing a national CDS knowledge sharing framework. Results Nineteen key stakeholders were recruited, including six executives from electronic health record system vendors, seven executives from knowledge content producers, three executives from healthcare provider organizations, and three additional experts in clinical informatics. Based on these stakeholders' insights, five key principles were identified for effectively sharing CDS knowledge nationally. These principles are (1) prioritize and support the creation and maintenance of a national CDS knowledge sharing framework; (2) facilitate the development of high-value content and tooling, preferably in an open-source manner; (3) accelerate the development or licensing of required, pragmatic standards; (4) acknowledge and address medicolegal liability concerns; and (5) establish a self-sustaining business model. Discussion Based on the principles identified, a roadmap for national CDS knowledge sharing was developed through the ONC's Advancing CDS initiative. Conclusion The study findings may serve as a useful guide for ongoing activities by the ONC and others to establish a national framework for sharing CDS knowledge and improving clinical care. PMID:22865671

  16. Evaluation of the Executive Information Requirements for the Market Research Process.

    ERIC Educational Resources Information Center

    Lanser, Michael A.

    A study examined the marketing research information required by those executives of Lakeshore Technical College (Wisconsin) whose decisions affect the college's direction. Data were gathered from the following sources: literature review; development of a data dictionary framework; analysis of the college's current information system through…

  17. The Political Communication of Strategic Nuclear Policy.

    ERIC Educational Resources Information Center

    Camden, Carl; Martin, Janet

    A study of the different perceptual frameworks of the major parties involved in strategic nuclear policy was conducted by examining the interaction between the Executive Branch, Congress, and the informed public. Public political communication data were gathered from public documents generated by Congress and the Executive branch, and by examining…

  18. THE EPA MULTIMEDIA INTEGRATED MODELING SYSTEM SOFTWARE SUITE

    EPA Science Inventory

    The U.S. EPA is developing a Multimedia Integrated Modeling System (MIMS) framework that will provide a software infrastructure or environment to support constructing, composing, executing, and evaluating complex modeling studies. The framework will include (1) common software ...

  19. Genetic variant for behavioral regulation factor of executive function and its possible brain mechanism in attention deficit hyperactivity disorder.

    PubMed

    Sun, Xiao; Wu, Zhaomin; Cao, Qingjiu; Qian, Ying; Liu, Yong; Yang, Binrang; Chang, Suhua; Yang, Li; Wang, Yufeng

    2018-05-16

    As a childhood-onset psychiatric disorder, attention deficit hyperactivity disorder (ADHD) is complicated by phenotypic and genetic heterogeneity. Lifelong executive function deficits in ADHD are described in many studies and have been proposed as endophenotypes of ADHD. However, its genetic basis is still elusive. In this study, we performed a genome-wide association study of executive function, rated with the Behavioral Rating Inventory of Executive Function (BRIEF), in ADHD children. We identified one significant variant (rs852004, P = 2.51e-08) for the overall score of BRIEF. The association analyses for each component of executive function found this locus was more associated with inhibit and monitor components. Further principal component analysis and confirmatory factor analysis provided an ADHD-specific executive function pattern including inhibit and monitor factors. SNP rs852004 was mainly associated with the Behavioral Regulation factor. Meanwhile, we found the significant locus was associated with ADHD symptoms. The Behavioral Regulation factor mediated its effect on ADHD symptoms. Functional magnetic resonance imaging (fMRI) analyses further showed evidence that this variant affected the activity of inhibition control related brain regions. It provided new insights into the genetic basis of executive function in ADHD.

  20. Turning great strategy into great performance.

    PubMed

    Mankins, Michael C; Steele, Richard

    2005-01-01

    Despite the enormous time and energy that goes into strategy development, many companies have little to show for their efforts. Indeed, research by the consultancy Marakon Associates suggests that companies on average deliver only 63% of the financial performance their strategies promise. In this article, Michael Mankins and Richard Steele of Marakon present the findings of this research. They draw on their experience with high-performing companies like Barclays, Cisco, Dow Chemical, 3M, and Roche to establish some basic rules for setting and delivering strategy: Keep it simple, make it concrete. Avoid long, drawn-out descriptions of lofty goals and instead stick to clear language describing what your company will and won't do. Debate assumptions, not forecasts. Create cross-functional teams drawn from strategy, marketing, and finance to ensure the assumptions underlying your long-term plans reflect both the real economics of your company's markets and its actual performance relative to competitors. Use a rigorous analytic framework. Ensure that the dialogue between the corporate center and the business units about market trends and assumptions is conducted within a rigorous framework, such as that of "profit pools". Discuss resource deployments early. Create more realistic forecasts and more executable plans by discussing up front the level and timing of critical deployments. Clearly identify priorities. Prioritize tactics so that employees have a clear sense of where to direct their efforts. Continuously monitor performance. Track resource deployment and results against plan, using continuous feedback to reset assumptions and reallocate resources. Reward and develop execution capabilities. Motivate and develop staff. Following these rules strictly can help narrow the strategy-to-performance gap.

  1. Creating personalised clinical pathways by semantic interoperability with electronic health records.

    PubMed

    Wang, Hua-Qiong; Li, Jing-Song; Zhang, Yi-Fan; Suzuki, Muneou; Araki, Kenji

    2013-06-01

    There is a growing realisation that clinical pathways (CPs) are vital for improving the treatment quality of healthcare organisations. However, treatment personalisation is one of the main challenges when implementing CPs, and the inadequate dynamic adaptability restricts the practicality of CPs. The purpose of this study is to improve the practicality of CPs using semantic interoperability between knowledge-based CPs and semantic electronic health records (EHRs). SPARQL (Simple Protocol and RDF Query Language) is used to gather patient information from semantic EHRs. The gathered patient information is entered into the CP ontology represented in the Web Ontology Language (OWL). Then, after reasoning over rules described in the Semantic Web Rule Language (SWRL) in the Jena semantic framework, we adjust the standardised CPs to meet different patients' practical needs. A CP for acute appendicitis is used as an example to illustrate how to achieve CP customisation based on the semantic interoperability between knowledge-based CPs and semantic EHRs. A personalised care plan is generated by comprehensively analysing the patient's personal allergy history and past medical history, which are stored in semantic EHRs. Additionally, by monitoring the patient's clinical information, an exception is recorded and handled during CP execution. The execution results of this example show that the solutions we present are technically feasible. This study contributes towards improving the clinical personalised practicality of standardised CPs. In addition, this study establishes the foundation for future work on the research and development of an independent CP system. Copyright © 2013 Elsevier B.V. All rights reserved.
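
    The interoperability step described above can be sketched with a SPARQL query over a toy semantic EHR graph whose result drives a personalised adjustment of one pathway step. The sketch uses the rdflib Python library; the EHR vocabulary, namespace, and the pathway rule are assumptions for illustration only.

    ```python
    # Minimal sketch of the interoperability step described above: a SPARQL query
    # pulls an allergy history out of a toy semantic EHR graph, and the result
    # drives a personalised adjustment of one clinical-pathway step. The EHR
    # vocabulary, namespace and pathway rule are assumptions for illustration.
    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/ehr#")

    ehr_ttl = """
    @prefix ex: <http://example.org/ehr#> .
    ex:patient42 ex:hasAllergy ex:penicillin ;
                 ex:hasDiagnosis ex:acuteAppendicitis .
    """

    g = Graph()
    g.parse(data=ehr_ttl, format="turtle")

    query = """
    PREFIX ex: <http://example.org/ehr#>
    SELECT ?allergy WHERE { ex:patient42 ex:hasAllergy ?allergy . }
    """
    allergies = {row.allergy for row in g.query(query)}

    # Toy pathway rule: flag the default antibiotic step if a penicillin allergy
    # is recorded in the patient's semantic EHR.
    if EX.penicillin in allergies:
        antibiotic_step = "order flagged: non-penicillin alternative, clinician review"
    else:
        antibiotic_step = "default antibiotic order"
    print(antibiotic_step)
    ```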

  2. 78 FR 50030 - Implementation of New Gulf Coast Ecosystem Restoration Science, Observation, Monitoring, and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-16

    ... issues, its experience working in the Gulf of Mexico, and its demonstrated ability to transfer research... Executive Oversight Board consisting of senior executives representing each of the NOAA Line Offices, as well as a senior executive from the US Fish and Wildlife Service, to oversee continuing development and...

  3. Evaluating the Theory of Executive Dysfunction in Autism

    ERIC Educational Resources Information Center

    Hill, Elisabeth L.

    2004-01-01

    In this paper studies of executive function in autism spectrum disorder are reviewed. Executive function is an umbrella term for functions such as planning, working memory, impulse control, inhibition, and shifting set, as well as for the initiation and monitoring of action. In this review, the focus will be on planning, inhibition, shifting set,…

  4. Geometry of behavioral spaces: A computational approach to analysis and understanding of agent based models and agent behaviors

    NASA Astrophysics Data System (ADS)

    Cenek, Martin; Dahl, Spencer K.

    2016-11-01

    Systems with non-linear dynamics frequently exhibit emergent system behavior, which is important to find and specify rigorously to understand the nature of the modeled phenomena. Through this analysis, it is possible to characterize phenomena such as how systems assemble or dissipate and what behaviors lead to specific final system configurations. Agent Based Modeling (ABM) is one of the modeling techniques used to study the interaction dynamics between a system's agents and its environment. Although the methodology of ABM construction is well understood and practiced, there are no computational, statistically rigorous, comprehensive tools to evaluate an ABM's execution. Often, a human has to observe an ABM's execution in order to analyze how the ABM functions, identify the emergent processes in the agent's behavior, or study a parameter's effect on the system-wide behavior. This paper introduces a new statistically based framework to automatically analyze agents' behavior, identify common system-wide patterns, and record the probability of agents changing their behavior from one pattern of behavior to another. We use network based techniques to analyze the landscape of common behaviors in an ABM's execution. Finally, we test the proposed framework with a series of experiments featuring increasingly emergent behavior. The proposed framework will allow computational comparison of ABM executions, exploration of a model's parameter configuration space, and identification of the behavioral building blocks in a model's dynamics.
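
    The statistical core of the proposed analysis, estimating how likely agents are to switch from one behavior pattern to another and treating the result as a weighted directed graph, can be sketched briefly. The behavior labels and traces below are toy data, and the code is an illustration rather than the authors' framework.

    ```python
    # Illustrative sketch of the statistical core described above: given labelled
    # per-step behaviors for each agent, estimate the probability of an agent
    # switching from one behavior pattern to another, and view the result as a
    # weighted directed graph. The behavior labels and traces are toy data.
    from collections import Counter, defaultdict
    import networkx as nx

    agent_traces = [
        ["wander", "wander", "flock", "flock", "flock", "disperse"],
        ["wander", "flock", "flock", "disperse", "disperse"],
        ["flock", "flock", "wander", "flock"],
    ]

    # Count observed transitions between consecutive time steps.
    counts = defaultdict(Counter)
    for trace in agent_traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1

    # Normalise counts into transition probabilities and build the graph.
    G = nx.DiGraph()
    for a, outgoing in counts.items():
        total = sum(outgoing.values())
        for b, c in outgoing.items():
            G.add_edge(a, b, weight=c / total)

    for a, b, d in G.edges(data=True):
        print(f"P({a} -> {b}) = {d['weight']:.2f}")
    ```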

  5. Geometry of behavioral spaces: A computational approach to analysis and understanding of agent based models and agent behaviors.

    PubMed

    Cenek, Martin; Dahl, Spencer K

    2016-11-01

    Systems with non-linear dynamics frequently exhibit emergent system behavior, which is important to find and specify rigorously to understand the nature of the modeled phenomena. Through this analysis, it is possible to characterize phenomena such as how systems assemble or dissipate and what behaviors lead to specific final system configurations. Agent Based Modeling (ABM) is one of the modeling techniques used to study the interaction dynamics between a system's agents and its environment. Although the methodology of ABM construction is well understood and practiced, there are no computational, statistically rigorous, comprehensive tools to evaluate an ABM's execution. Often, a human has to observe an ABM's execution in order to analyze how the ABM functions, identify the emergent processes in the agent's behavior, or study a parameter's effect on the system-wide behavior. This paper introduces a new statistically based framework to automatically analyze agents' behavior, identify common system-wide patterns, and record the probability of agents changing their behavior from one pattern of behavior to another. We use network based techniques to analyze the landscape of common behaviors in an ABM's execution. Finally, we test the proposed framework with a series of experiments featuring increasingly emergent behavior. The proposed framework will allow computational comparison of ABM executions, exploration of a model's parameter configuration space, and identification of the behavioral building blocks in a model's dynamics.

  6. Self-reflection and set-shifting mediate awareness in cognitively preserved schizophrenia patients.

    PubMed

    Gilleen, James; David, Anthony; Greenwood, Kathryn

    2016-05-01

    Poor insight in schizophrenia has been linked to poor cognitive functioning, psychological processes such as denial, or more recently with impaired metacognitive capacity. Few studies, however, have investigated the potential co-dependency of multiple factors in determining level of insight, but such a model is necessary in order to account for patients with good cognitive functioning who have very poor awareness. As evidence suggests that set-shifting and cognitive insight (self-reflection (SR) and self-certainty) are strong predictors of awareness we proposed that these factors are key mediators in the relationship between cognition and awareness. We hypothesised that deficits specifically in SR and set-shifting determine level of awareness in the context of good cognition. Thirty schizophrenia patients were stratified by high and low awareness of illness and executive functioning scores. Cognitive insight, cognition, mood and symptom measures were compared between sub-groups. A low insight/high executive functioning (LI-HE) group, a high insight/high executive functioning (HI-HE) group and a low insight/low executive functioning (LI-LE) group were revealed. As anticipated, the LI-HE patients showed significantly lower capacity for SR and set-shifting than the HI-HE patients. This study indicates that good cognitive functioning is necessary but not sufficient for good awareness; good awareness specifically demands preserved capacity to self-reflect and shift-set. Results support Nelson and Narens' [1990. Metamemory: A theoretical framework and new findings. The Psychology of Learning and Motivation, 26, 125-173] model of metacognition by which awareness is founded on control (set-shifting) and monitoring (SR) processes. These specific factors could be targeted to improve insight in patients with otherwise unimpaired cognitive function.

  7. An Experimental Framework for Executing Applications in Dynamic Grid Environments

    NASA Technical Reports Server (NTRS)

    Huedo, Eduardo; Montero, Ruben S.; Llorente, Ignacio M.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The Grid opens up opportunities for resource-starved scientists and engineers to harness highly distributed computing resources. A number of Grid middleware projects are currently available to support the simultaneous exploitation of heterogeneous resources distributed in different administrative domains. However, efficient job submission and management continue being far from accessible to ordinary scientists and engineers due to the dynamic and complex nature of the Grid. This report describes a new Globus framework that allows an easier and more efficient execution of jobs in a 'submit and forget' fashion. Adaptation to dynamic Grid conditions is achieved by supporting automatic application migration following performance degradation, 'better' resource discovery, requirement change, owner decision or remote resource failure. The report also includes experimental results of the behavior of our framework on the TRGP testbed.
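
    The "submit and forget" behaviour described above amounts to a monitor-and-migrate loop around job submission. The sketch below replaces the real Globus calls with stub functions and uses entirely hypothetical names; it only illustrates the adaptive control flow (resubmission on failure or performance degradation), not the framework's implementation.

    ```python
    # Sketch of the "submit and forget" loop described above: the framework keeps
    # monitoring a running job and resubmits it to another resource when it sees
    # failure or performance degradation. Real Globus calls are replaced by the
    # stub functions below; every name here is hypothetical.
    import random
    import time


    def submit(job, resource):
        print(f"submitting {job} to {resource}")
        return {"job": job, "resource": resource, "progress": 0.0}


    def poll(handle):
        # Stub monitor: random progress, with an occasional failure.
        handle["progress"] += random.choice([0.0, 0.2, 0.4])
        return "failed" if random.random() < 0.1 else "running"


    def degraded(handle, polls, expected_rate=0.05):
        # Simplistic check: progress lags well behind what is expected by now.
        return handle["progress"] < expected_rate * polls


    def run_adaptively(job, resources, poll_interval=0.1):
        candidates = list(resources)
        handle, polls = submit(job, candidates.pop(0)), 0
        while handle["progress"] < 1.0:
            time.sleep(poll_interval)
            polls += 1
            state = poll(handle)
            if (state == "failed" or degraded(handle, polls)) and candidates:
                # Migrate: discard the current allocation and resubmit elsewhere.
                handle, polls = submit(job, candidates.pop(0)), 0
            elif state == "failed" and not candidates:
                return "failed"
        return "done"


    print(run_adaptively("cfd_case_7", ["site-A", "site-B", "site-C"]))
    ```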

  8. Building Software Agents for Planning, Monitoring, and Optimizing Travel

    DTIC Science & Technology

    2004-01-01

    defined as plans in the Theseus Agent Execution language (Barish et al. 2002). In the Web environment, sources can be quite slow and the latencies of...executor is based on a dataflow paradigm, actions are executed as soon as the data becomes available. Second, Theseus performs the actions in a...while Theseus provides an expressive language for defining information gathering and monitoring plans. The Theseus language supports capabilities

  9. Bilingual experience and resting-state brain connectivity: Impacts of L2 age of acquisition and social diversity of language use on control networks.

    PubMed

    Gullifer, Jason W; Chai, Xiaoqian J; Whitford, Veronica; Pivneva, Irina; Baum, Shari; Klein, Denise; Titone, Debra

    2018-05-01

    We investigated the independent contributions of second language (L2) age of acquisition (AoA) and social diversity of language use on intrinsic brain organization using seed-based resting-state functional connectivity among highly proficient French-English bilinguals. There were two key findings. First, earlier L2 AoA related to greater interhemispheric functional connectivity between homologous frontal brain regions, and to decreased reliance on proactive executive control in an AX-Continuous Performance Task completed outside the scanner. Second, greater diversity in social language use in daily life related to greater connectivity between the anterior cingulate cortex and the putamen bilaterally, and to increased reliance on proactive control in the same task. These findings suggest that early vs. late L2 AoA links to a specialized neural framework for processing two languages that may engage a specific type of executive control (e.g., reactive control). In contrast, higher vs. lower degrees of diversity in social language use link to a broadly distributed set of brain networks implicated in proactive control and context monitoring. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. blend4php: a PHP API for galaxy.

    PubMed

    Wytko, Connor; Soto, Brian; Ficklin, Stephen P

    2017-01-01

    Galaxy is a popular framework for execution of complex analytical pipelines typically for large data sets, and is commonly used for (but not limited to) genomic, genetic and related biological analysis. It provides a web front-end and integrates with high performance computing resources. Here we report the development of the blend4php library that wraps Galaxy's RESTful API into a PHP-based library. PHP-based web applications can use blend4php to automate execution, monitoring and management of a remote Galaxy server, including its users, workflows, jobs and more. The blend4php library was specifically developed for the integration of Galaxy with Tripal, the open-source toolkit for the creation of online genomic and genetic web sites. However, it was designed as an independent library for use by any application, and is freely available under version 3 of the GNU Lesser General Public License (LGPL v3.0) at https://github.com/galaxyproject/blend4php. Database URL: https://github.com/galaxyproject/blend4php. © The Author(s) 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
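
    blend4php wraps Galaxy's RESTful API for PHP applications; the underlying HTTP interaction can be sketched in a few lines, shown here with Python's requests library. The server URL and API key are placeholders, and the endpoint paths, while typical of Galaxy's API, should be checked against the target server's documentation.

    ```python
    # Rough sketch of the kind of RESTful calls a wrapper like blend4php issues
    # against a Galaxy server, shown here with Python's requests library. The
    # server URL and API key are placeholders, and the endpoint paths, while
    # typical of Galaxy's API, should be checked against the target server's
    # documentation.
    import requests

    GALAXY_URL = "https://galaxy.example.org"   # placeholder server
    API_KEY = "REPLACE_WITH_USER_API_KEY"
    HEADERS = {"x-api-key": API_KEY}

    # List the workflows visible to this user.
    workflows = requests.get(f"{GALAXY_URL}/api/workflows", headers=HEADERS).json()
    for wf in workflows:
        print(wf.get("id"), wf.get("name"))

    # Check the state of jobs, e.g. to monitor a running analysis.
    jobs = requests.get(f"{GALAXY_URL}/api/jobs", headers=HEADERS).json()
    print("job states:", [job.get("state") for job in jobs])
    ```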

  11. Application of Lightweight Formal Methods to Software Security

    NASA Technical Reports Server (NTRS)

    Gilliam, David P.; Powell, John D.; Bishop, Matt

    2005-01-01

    Formal specification and verification of security has proven a challenging task. There is no single method that has proven feasible. Instead, an integrated approach which combines several formal techniques can increase the confidence in the verification of software security properties. Such an approach, which specifies security properties in a library that can be reused by two instruments, and the methodologies developed for the National Aeronautics and Space Administration (NASA) at the Jet Propulsion Laboratory (JPL), are described herein. The Flexible Modeling Framework (FMF) is a model-based verification instrument that uses Promela and the SPIN model checker. The Property Based Tester (PBT) uses TASPEC and a Text Execution Monitor (TEM). They are used to reduce vulnerabilities and unwanted exposures in software during the development and maintenance life cycles.

  12. Smokefree Policies in Latin America and the Caribbean: Making Progress

    PubMed Central

    Sebrié, Ernesto M.; Schoj, Verónica; Travers, Mark J.; McGaw, Barbara; Glantz, Stanton A.

    2012-01-01

    We reviewed the adoption and implementation of smokefree policies in all Latin American and the Caribbean (LAC) countries. Significant progress has been achieved among LAC countries since the WHO Framework Convention on Tobacco Control (FCTC) was adopted in 2005. Both national and sub-national legislation have provided effective mechanisms to increase the fraction of the population protected from secondhand tobacco smoke. Civil society has actively promoted these policies and played a main role in enacting them and monitoring their enforcement. The tobacco industry, while continuing to oppose the approval and regulation of the laws at legislative and executive levels, has gone a step further by litigating against them in the Courts. As in the US and elsewhere, this litigation has failed to stop the legislation. PMID:22754484

  13. The ASSERT Virtual Machine Kernel: Support for Preservation of Temporal Properties

    NASA Astrophysics Data System (ADS)

    Zamorano, J.; de la Puente, J. A.; Pulido, J. A.; Urueña

    2008-08-01

    A new approach to building embedded real-time software has been developed in the ASSERT project. One of its key elements is the concept of a virtual machine preserving the non-functional properties of the system, and especially real-time properties, all the way from high-level design models down to executable code. The paper describes one instance of the virtual machine concept that provides support for the preservation of temporal properties both at the source code level, by accepting only "legal" entities, i.e. software components with statically analysable real-time behaviour, and at run-time, by monitoring the temporal behaviour of the system. The virtual machine has been validated on several pilot projects carried out by aerospace companies in the framework of the ASSERT project.

  14. WRF4SG: A Scientific Gateway for climate experiment workflows

    NASA Astrophysics Data System (ADS)

    Blanco, Carlos; Cofino, Antonio S.; Fernandez-Quiruelas, Valvanuz

    2013-04-01

    The Weather Research and Forecasting model (WRF) is a community-driven and public-domain model widely used by the weather and climate communities. In contrast to other application-oriented models, WRF provides a flexible and computationally efficient framework which allows solving a variety of problems on different time scales, from weather forecasting to climate change projection. Furthermore, WRF is also widely used as a research tool in modeling physics, dynamics, and data assimilation by the research community. Climate experiment workflows based on WRF are nowadays among the most cutting-edge applications. These workflows are complex due to both the large storage requirements and the huge number of simulations executed. In order to manage this, we have developed a scientific gateway (SG) called WRF for Scientific Gateway (WRF4SG), based on the WS-PGRADE/gUSE and WRF4G frameworks, to help meet WRF users' needs (see [1] and [2]). WRF4SG provides services for different use cases that describe the interactions between WRF users and the WRF4SG interface and show how to run a climate experiment. As WS-PGRADE/gUSE uses portlets (see [1]) to interact with users, its portlets support these use cases. A typical experiment to be carried out by a WRF user consists of a high-resolution regional re-forecast. These re-forecasts are common experiments whose results are used as input data for wind power energy and natural hazard applications (wind and precipitation fields). In the use cases below, the user is able to access different resources, such as the Grid, because WRF needs a huge amount of computing resources in order to generate useful simulations: * Resource configuration and user authentication: The first step is to authenticate against the user's Grid resources by virtual organization. After login, the user is able to select which virtual organization is going to be used by the experiment. * Data assimilation: In order to assimilate the data sources, the user has to select them by browsing through the LFC Portlet. * Design experiment workflow: In order to configure the experiment, the user defines the type of experiment (i.e. re-forecast) and the attributes to simulate. In this case the main attributes are the field of interest (wind, precipitation, ...), the start and end dates of the simulation, and the requirements of the experiment. * Monitor workflow: In order to monitor the experiment, the user receives event-based notification messages and the gateway displays the progress of the experiment. * Data storage: As in the data assimilation case, the user is able to browse and view the output simulation data using the LFC Portlet. The objectives of WRF4SG can be described by two goals. The first goal is to show how WRF4SG facilitates executing, monitoring and managing climate workflows based on the WRF4G framework. The second goal is to help WRF users execute their experiment workflows concurrently using heterogeneous computing resources such as HPC and the Grid. [1] Kacsuk, P.: P-GRADE portal family for grid infrastructures. Concurrency and Computation: Practice and Experience. 23, 235-245 (2011). [2] http://www.meteo.unican.es/software/wrf4g

  15. The Automated Instrumentation and Monitoring System (AIMS) reference manual

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Hontalas, Philip; Listgarten, Sherry

    1993-01-01

    Whether a researcher is designing the 'next parallel programming paradigm,' another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of execution traces can help computer designers and software architects to uncover system behavior and to take advantage of specific application characteristics and hardware features. A software tool kit that facilitates performance evaluation of parallel applications on multiprocessors is described. The Automated Instrumentation and Monitoring System (AIMS) has four major software components: a source code instrumentor which automatically inserts active event recorders into the program's source code before compilation; a run-time performance-monitoring library, which collects performance data; a trace file animation and analysis tool kit which reconstructs program execution from the trace file; and a trace post-processor which compensates for data collection overhead. Besides being used as a prototype for developing new techniques for instrumenting, monitoring, and visualizing parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware test beds to evaluate their impact on user productivity. Currently, AIMS instrumentors accept FORTRAN and C parallel programs written for Intel's NX operating system on the iPSC family of multicomputers. A run-time performance-monitoring library for the iPSC/860 is included in this release. We plan to release monitors for other platforms (such as PVM and TMC's CM-5) in the near future. Performance data collected can be graphically displayed on workstations (e.g. Sun Sparc and SGI) supporting X-Windows (in particular, X11R5, Motif 1.1.3).
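
    As a loose analogue of what the instrumentor and run-time library do (not the AIMS implementation itself), the Python sketch below records timestamped entry and exit events for instrumented functions and shows a trivial post-processing step that subtracts an assumed per-event recording overhead.

      # Illustrative event recorder in the spirit of AIMS-style instrumentation:
      # a decorator appends entry/exit events to a trace, and a post-processing
      # step subtracts an (assumed) fixed per-event recording overhead.
      import functools
      import time

      TRACE = []                    # in-memory stand-in for a trace file
      EVENT_OVERHEAD_S = 2e-6       # assumed cost of recording one event

      def instrument(func):
          """Wrap a function so that its entry and exit are recorded in TRACE."""
          @functools.wraps(func)
          def wrapper(*args, **kwargs):
              TRACE.append(("enter", func.__name__, time.perf_counter()))
              try:
                  return func(*args, **kwargs)
              finally:
                  TRACE.append(("exit", func.__name__, time.perf_counter()))
          return wrapper

      def inclusive_times(trace, overhead=EVENT_OVERHEAD_S):
          """Compute per-function inclusive time, compensating for event overhead."""
          starts, totals = {}, {}
          for kind, name, ts in trace:
              if kind == "enter":
                  starts[name] = ts
              else:
                  raw = ts - starts.pop(name)
                  totals[name] = totals.get(name, 0.0) + max(raw - 2 * overhead, 0.0)
          return totals

      @instrument
      def work():
          time.sleep(0.01)

      if __name__ == "__main__":
          work()
          print(inclusive_times(TRACE))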

  16. Designing an Easy-to-use Executive Conference Room Control System

    NASA Astrophysics Data System (ADS)

    Back, Maribeth; Golovchinsky, Gene; Qvarfordt, Pernilla; van Melle, William; Boreczky, John; Dunnigan, Tony; Carter, Scott

    The Usable Smart Environment project (USE) aims at designing easy-to-use, highly functional, next-generation conference rooms. Our first design prototype focuses on creating a “no wizards” room for an American executive; that is, a room the executive could walk into and use by himself, without help from a technologist. A key idea in the USE framework is that customization is one of the best ways to create a smooth user experience. As the system needs to fit both with the personal leadership style of the executive and the corporation’s meeting culture, we began the design process by exploring the work flow in and around meetings attended by the executive.

  17. An Integrated Extravehicular Activity Research Plan

    NASA Technical Reports Server (NTRS)

    Abercromby, Andrew F. J.; Ross, Amy J.; Cupples, J. Scott

    2016-01-01

    Multiple organizations within NASA and outside of NASA fund and participate in research related to extravehicular activity (EVA). In October 2015, representatives of the EVA Office, the Crew and Thermal Systems Division (CTSD), and the Human Research Program (HRP) at NASA Johnson Space Center agreed on a formal framework to improve multi-year coordination and collaboration in EVA research. At the core of the framework is an Integrated EVA Research Plan and a process by which it will be annually reviewed and updated. The over-arching objective of the collaborative framework is to conduct multi-disciplinary cost-effective research that will enable humans to perform EVAs safely, effectively, comfortably, and efficiently, as needed to enable and enhance human space exploration missions. Research activities must be defined, prioritized, planned and executed to comprehensively address the right questions, avoid duplication, leverage other complementary activities where possible, and ultimately provide actionable evidence-based results in time to inform subsequent tests, developments and/or research activities. Representation of all appropriate stakeholders in the definition, prioritization, planning and execution of research activities is essential to accomplishing the over-arching objective. A formal review of the Integrated EVA Research Plan will be conducted annually. External peer review of all HRP EVA research activities including compilation and review of published literature in the EVA Evidence Book is already performed annually. Coordination with stakeholders outside of the EVA Office, CTSD, and HRP is already in effect on a study-by-study basis; closer coordination on multi-year planning with other EVA stakeholders including academia is being actively pursued. Details of the current Integrated EVA Research Plan are presented including description of ongoing and planned research activities in the areas of: Benchmarking; Anthropometry and Suit Fit; Sensors; Human-Suit Modeling; Suit Trauma Monitoring and Countermeasures; EVA Workload and Duration Effects; Decompression Sickness Risk Mitigation; Deconditioned EVA Performance; and Exploration EVA Concept of Operations.

  18. Performance Metrics for Monitoring Parallel Program Executions

    NASA Technical Reports Server (NTRS)

    Sarukkai, Sekkar R.; Gotwais, Jacob K.; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    Existing tools for debugging performance of parallel programs either provide graphical representations of program execution or profiles of program executions. However, for performance debugging tools to be useful, such information has to be augmented with information that highlights the cause of poor program performance. Identifying the cause of poor performance requires not only determining the significance of various performance problems for the execution time of the program, but also considering the effect of interprocessor communications involving individual source-level data structures. In this paper, we present a suite of normalized indices which provide a convenient mechanism for focusing on a region of code with poor performance and highlight the cause of the problem in terms of processors, procedures and data structure interactions. All the indices are generated from trace files augmented with data structure information. Further, we show with the help of examples from the NAS benchmark suite that the indices help in detecting potential causes of poor performance, based on augmented execution traces obtained by monitoring the program.
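
    The paper's indices themselves are not reproduced here, but the hedged Python sketch below illustrates the general idea: aggregate communication time from trace records by (procedure, data structure) and normalize by total execution time so that the largest contributors stand out. The trace record fields are assumptions for the example.

      # Illustrative normalized index: fraction of total execution time spent in
      # communication, broken down by (procedure, data structure). The record
      # layout is an assumption for this sketch, not the format used in the paper.
      from collections import defaultdict

      def communication_indices(trace_records, total_time):
          """Return {(procedure, data_structure): comm_time / total_time}."""
          comm = defaultdict(float)
          for rec in trace_records:
              if rec["event"] == "comm":
                  comm[(rec["procedure"], rec["data_structure"])] += rec["duration"]
          return {key: t / total_time for key, t in comm.items()}

      if __name__ == "__main__":
          trace = [
              {"event": "comm", "procedure": "exchange_halo", "data_structure": "u",   "duration": 1.2},
              {"event": "comm", "procedure": "exchange_halo", "data_structure": "rhs", "duration": 0.3},
              {"event": "comp", "procedure": "smooth",        "data_structure": "u",   "duration": 6.5},
          ]
          indices = communication_indices(trace, total_time=8.0)
          for key, idx in sorted(indices.items(), key=lambda kv: -kv[1]):
              print(key, f"{idx:.2%}")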

  19. Virtual machine-based simulation platform for mobile ad-hoc network-based cyber infrastructure

    DOE PAGES

    Yoginath, Srikanth B.; Perumalla, Kayla S.; Henz, Brian J.

    2015-09-29

    In modeling and simulating complex systems such as mobile ad-hoc networks (MANETs) in defense communications, it is a major challenge to reconcile multiple important considerations: the rapidity of unavoidable changes to the software (network layers and applications), the difficulty of modeling the critical, implementation-dependent behavioral effects, the need to sustain larger scale scenarios, and the desire for faster simulations. Here we present our approach in successfully reconciling them using a virtual time-synchronized virtual machine (VM)-based parallel execution framework that accurately lifts both the devices as well as the network communications to a virtual time plane while retaining full fidelity. At the core of our framework is a scheduling engine that operates at the level of a hypervisor scheduler, offering a unique ability to execute multi-core guest nodes over multi-core host nodes in an accurate, virtual time-synchronized manner. In contrast to other related approaches that suffer from either speed or accuracy issues, our framework provides MANET node-wise scalability, high fidelity of software behaviors, and time-ordering accuracy. The design and development of this framework is presented, and an actual implementation based on the widely used Xen hypervisor system is described. Benchmarks with synthetic and actual applications are used to identify the benefits of our approach. The time inaccuracy of traditional emulation methods is demonstrated, in comparison with the accurate execution of our framework verified by theoretically correct results expected from analytical models of the same scenarios. In the largest high fidelity tests, we are able to perform virtual time-synchronized simulation of 64-node VM-based full-stack, actual software behaviors of MANETs containing a mix of static and mobile (unmanned airborne vehicle) nodes, hosted on a 32-core host, with full fidelity of unmodified ad-hoc routing protocols, unmodified application executables, and user-controllable physical layer effects including inter-device wireless signal strength, reachability, and connectivity.

  1. Alerting, orienting or executive attention networks: differential patterns of pupil dilations

    PubMed Central

    Geva, Ronny; Zivan, Michal; Warsha, Aviv; Olchik, Dov

    2013-01-01

    Attention capacities, alerting responses, orienting to sensory stimulation, and executive monitoring of performance are considered independent yet interrelated systems. These operations play integral roles in regulating the behavior of diverse species along the evolutionary ladder. Each of the primary attention constructs—alerting, orienting, and executive monitoring—involves salient autonomic correlates as evidenced by changes in reactive pupil dilation (PD), heart rate, and skin conductance. Recent technological advances that use remote high-resolution recording may allow the discernment of temporo-spatial attributes of autonomic responses that characterize the alerting, orienting, and executive monitoring networks during free viewing, irrespective of voluntary performance. This may deepen the understanding of the roles of autonomic regulation in these mental operations and may deepen our understanding of behavioral changes in verbal as well as in non-verbal species. The aim of this study was to explore differences between psychosensory PD responses in alerting, orienting, and executive conflict monitoring tasks to generate estimates of concurrent locus coeruleus (LC) noradrenergic input trajectories in healthy human adults using the attention networks test (ANT). The analysis revealed a construct-specific pattern of pupil responses: alerting is characterized by an early component (Pa), its acceleration enables covert orienting, and executive control is evidenced by a prominent late component (Pe). PD characteristics seem to be task-sensitive, allowing exploration of mental operations irrespective of conscious voluntary responses. These data may facilitate development of studies designed to assess mental operations in diverse species using autonomic responses. PMID:24133422

  2. Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A)

    DTIC Science & Technology

    2016-03-01

    Document DAE - Defense Acquisition Executive DoD - Department of Defense DoDAF - DoD Architecture Framework FD - Full Deployment FDD - Full...Jun 2009 DT/OT Completion Jan 2015 Jan 2015 FDD Aug 2015 Oct 2015 FD TBD Oct 2015 Memo DCAPES is a National Security System. Acronyms and

  3. Measuring Progress Toward Universal Health Coverage: Does the Monitoring Framework of Bangladesh Need Further Improvement?

    PubMed

    Gupta, Rajat Das; Shahabuddin, Asm

    2018-01-08

    This review aimed to compare Bangladesh's Universal Health Coverage (UHC) monitoring framework with the global-level recommendations and to identify the existing gaps in Bangladesh's UHC monitoring framework compared with the global recommendations. In order to reach the aims of the review, we systematically searched two electronic databases - PubMed and Google Scholar - by using appropriate keywords to select articles that describe issues related to UHC and the monitoring framework of UHC applied globally and particularly in Bangladesh. Four relevant documents were found and synthesized. The review found that Bangladesh incorporated all of the recommendations suggested by the global monitoring framework regarding monitoring financial risk protection and the equity perspective. However, a significant gap in the monitoring framework related to service coverage was observed. Although Bangladesh has a significant burden of mental illnesses, cataract, and neglected tropical diseases, indicators related to these issues were absent in Bangladesh's UHC framework. Moreover, palliative-care-related indicators were completely missing in the framework. The results of this review suggest that Bangladesh should incorporate these indicators in its UHC monitoring framework in order to track the progress of the country toward UHC more efficiently and in a robust way.

  4. 78 FR 54454 - Open Meeting of the Information Security and Privacy Advisory Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-04

    ... include the following items: --Cybersecurity Executive Order 13636, Improving Critical Infrastructure Cybersecurity (78 FR 11737, February 19, 2013); Development of New Cybersecurity Framework; Request for Information (RFI)--Developing a Framework to Improve Critical Infrastructure Cybersecurity (78 FR 13024...

  5. An acceleration framework for synthetic aperture radar algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youngsoo; Gloster, Clay S.; Alexander, Winser E.

    2017-04-01

    Algorithms for radar signal processing, such as Synthetic Aperture Radar (SAR) are computationally intensive and require considerable execution time on a general purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude as compared to kernel execution on a general purpose processor. Specifically, Field Programmable Gate Arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration. We used SAR as a case study to illustrate the potential for algorithm acceleration offered by FPGAs. Initially, we profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup by adding reasonably small processing elements in Field Programmable Gate Array (FPGA) as opposed to using a software implementation running on a typical general purpose processor.

  6. Execution monitoring for a mobile robot system

    NASA Technical Reports Server (NTRS)

    Miller, David P.

    1990-01-01

    Due to sensor errors, uncertainty, incomplete knowledge, and a dynamic world, robot plans will not always be executed exactly as planned. This paper describes an implemented robot planning system that enhances the traditional sense-think-act cycle in ways that allow the robot system to monitor its behavior and react to emergencies in real time. A proposal on how robot systems can completely break away from the traditional three-step cycle is also made.
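
    To make the idea concrete, here is a minimal, hypothetical Python sketch of a sense-think-act loop augmented with execution monitoring: after each action the monitor compares the observed state with the planner's expectation and triggers a reaction when they diverge. It illustrates the general pattern only, not the implemented system described in the paper; all sensors, actions and thresholds are invented.

      # Minimal sketch of an execution-monitored sense-think-act loop. All sensor,
      # action, and threshold details are invented for illustration.
      def monitored_control_loop(plan, sense, act, expected, tolerance=0.1):
          """Execute a plan step by step, checking each outcome against expectations."""
          for step in plan:
              act(step)
              observed = sense()
              if abs(observed - expected[step]) > tolerance:
                  # Expectation violated: stop normal execution and react.
                  return ("replan", step, observed)
          return ("done", None, None)

      if __name__ == "__main__":
          # A toy 1-D robot: actions move it along a line, sensing returns position.
          state = {"x": 0.0}
          moves = {"forward": 1.0, "forward_again": 1.0}
          expected = {"forward": 1.0, "forward_again": 2.0}

          def act(step):
              # Simulated actuation error on the second move (e.g., wheel slip).
              state["x"] += moves[step] * (0.5 if step == "forward_again" else 1.0)

          def sense():
              return state["x"]

          print(monitored_control_loop(["forward", "forward_again"], sense, act, expected))
          # -> ('replan', 'forward_again', 1.5)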

  7. Investigating executive functions in children with severe speech and movement disorders using structured tasks.

    PubMed

    Stadskleiv, Kristine; von Tetzchner, Stephen; Batorowicz, Beata; van Balkom, Hans; Dahlgren-Sandberg, Annika; Renner, Gregor

    2014-01-01

    Executive functions are the basis for goal-directed activity and include planning, monitoring, and inhibition, and language seems to play a role in the development of these functions. There is a tradition of studying executive function in both typical and atypical populations, and the present study investigates executive functions in children with severe speech and motor impairments who are communicating using communication aids with graphic symbols, letters, and/or words. There are few neuropsychological studies of children in this group and little is known about their cognitive functioning, including executive functions. It was hypothesized that aided communication would tax executive functions more than speech. Twenty-nine children using communication aids and 27 naturally speaking children participated. Structured tasks resembling everyday activities, where the action goals had to be reached through communication with a partner, were used to get information about executive functions. The children (a) directed the partner to perform actions like building a Lego tower from a model the partner could not see and (b) gave information about an object without naming it to a person who had to guess what object it was. The executive functions of planning, monitoring, and impulse control were coded from the children's on-task behavior. Both groups solved most of the tasks correctly, indicating that aided communicators are able to use language to direct another person to do a complex set of actions. Planning and lack of impulsivity were positively related to task success in both groups. The aided group completed significantly fewer tasks, spent more time and showed more variation in performance than the comparison group. The aided communicators scored lower on planning and showed more impulsivity than the comparison group, while both groups showed an equal degree of monitoring of the work progress. The results are consistent with the hypothesis that aided language taxes executive functions more than speech. The results may also indicate that aided communicators have less experience with these kinds of play activities. The findings broaden the perspective on executive functions and have implications for interventions for motor-impaired children developing aided communication.

  8. Investigating executive functions in children with severe speech and movement disorders using structured tasks

    PubMed Central

    Stadskleiv, Kristine; von Tetzchner, Stephen; Batorowicz, Beata; van Balkom, Hans; Dahlgren-Sandberg, Annika; Renner, Gregor

    2014-01-01

    Executive functions are the basis for goal-directed activity and include planning, monitoring, and inhibition, and language seems to play a role in the development of these functions. There is a tradition of studying executive function in both typical and atypical populations, and the present study investigates executive functions in children with severe speech and motor impairments who are communicating using communication aids with graphic symbols, letters, and/or words. There are few neuropsychological studies of children in this group and little is known about their cognitive functioning, including executive functions. It was hypothesized that aided communication would tax executive functions more than speech. Twenty-nine children using communication aids and 27 naturally speaking children participated. Structured tasks resembling everyday activities, where the action goals had to be reached through communication with a partner, were used to get information about executive functions. The children (a) directed the partner to perform actions like building a Lego tower from a model the partner could not see and (b) gave information about an object without naming it to a person who had to guess what object it was. The executive functions of planning, monitoring, and impulse control were coded from the children's on-task behavior. Both groups solved most of the tasks correctly, indicating that aided communicators are able to use language to direct another person to do a complex set of actions. Planning and lack of impulsivity were positively related to task success in both groups. The aided group completed significantly fewer tasks, spent more time and showed more variation in performance than the comparison group. The aided communicators scored lower on planning and showed more impulsivity than the comparison group, while both groups showed an equal degree of monitoring of the work progress. The results are consistent with the hypothesis that aided language taxes executive functions more than speech. The results may also indicate that aided communicators have less experience with these kinds of play activities. The findings broaden the perspective on executive functions and have implications for interventions for motor-impaired children developing aided communication. PMID:25249999

  9. Kwf-Grid workflow management system for Earth science applications

    NASA Astrophysics Data System (ADS)

    Tran, V.; Hluchy, L.

    2009-04-01

    In this paper, we present a workflow management tool for Earth science applications in EGEE. The workflow management tool was originally developed within the K-wf Grid project for the GT4 middleware and has many advanced features such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the workflow management tool to the gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within the "Knowledge-based Workflow System for Grid Applications" project under the 6th Framework Programme. The workflow management system is intended to: semi-automatically compose a workflow of Grid services; execute the composed workflow application in a Grid computing environment; monitor the performance of the Grid infrastructure and the Grid applications; analyze the resulting monitoring information; capture the knowledge that is contained in the information by means of intelligent agents; and finally reuse the joined knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g. GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, which allows the system to manage and execute gLite jobs on the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite will allow EGEE users to use the system and benefit from its advanced features. The system is primarily tested and evaluated with applications from the ES clusters.

  10. Design and implementation of the GLIF3 guideline execution engine.

    PubMed

    Wang, Dongwen; Peleg, Mor; Tu, Samson W; Boxwala, Aziz A; Ogunyemi, Omolola; Zeng, Qing; Greenes, Robert A; Patel, Vimla L; Shortliffe, Edward H

    2004-10-01

    We have developed the GLIF3 Guideline Execution Engine (GLEE) as a tool for executing guidelines encoded in the GLIF3 format. In addition to serving as an interface to the GLIF3 guideline representation model to support the specified functions, GLEE provides defined interfaces to electronic medical records (EMRs) and other clinical applications to facilitate its integration with the clinical information system at a local institution. The execution model of GLEE takes the "system suggests, user controls" approach. A tracing system is used to record an individual patient's state when a guideline is applied to that patient. GLEE can also support an event-driven execution model once it is linked to the clinical event monitor in a local environment. Evaluation has shown that GLEE can be used effectively for proper execution of guidelines encoded in the GLIF3 format. When using it to execute each guideline in the evaluation, GLEE's performance duplicated that of the reference systems implementing the same guideline but taking different approaches. The execution flexibility and generality provided by GLEE, and its integration with a local environment, need to be further evaluated in clinical settings. Integration of GLEE with a specific event-monitoring and order-entry environment is the next step of our work to demonstrate its use for clinical decision support. Potential uses of GLEE also include quality assurance, guideline development, and medical education.
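
    As a rough, hypothetical illustration of the "system suggests, user controls" execution model (not GLEE's actual design or the GLIF3 schema), the Python sketch below walks a toy guideline whose decision steps are evaluated against patient data, suggests applicable steps, lets a caller confirm or decline them, and records every transition in a trace. Step structure and patient fields are invented for the example.

      # Toy guideline execution in the "system suggests, user controls" style.
      # Step structure and patient fields are invented; this is not GLIF3 or GLEE.
      def execute_guideline(steps, patient, confirm, trace):
          """Walk guideline steps, suggesting applicable ones and tracing decisions."""
          for step in steps:
              applicable = step["condition"](patient)
              suggested = step["action"] if applicable else None
              accepted = bool(suggested) and confirm(step["name"], suggested)
              trace.append({"step": step["name"], "suggested": suggested,
                            "accepted": accepted})
              if accepted:
                  patient = step["apply"](patient)
          return patient, trace

      if __name__ == "__main__":
          steps = [
              {"name": "check_bp",
               "condition": lambda p: p["systolic_bp"] >= 140,
               "action": "recommend antihypertensive therapy",
               "apply": lambda p: {**p, "on_therapy": True}},
          ]
          patient = {"systolic_bp": 152, "on_therapy": False}

          def always_accept(name, action):   # stands in for the clinician's decision
              return True

          final, trace = execute_guideline(steps, patient, always_accept, [])
          print(final, trace)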

  11. DAMT - DISTRIBUTED APPLICATION MONITOR TOOL (HP9000 VERSION)

    NASA Technical Reports Server (NTRS)

    Keith, B.

    1994-01-01

    Typical network monitors measure status of host computers and data traffic among hosts. A monitor to collect statistics about individual processes must be unobtrusive and possess the ability to locate and monitor processes, locate and monitor circuits between processes, and report traffic back to the user through a single application program interface (API). DAMT, Distributed Application Monitor Tool, is a distributed application program that will collect network statistics and make them available to the user. This distributed application has one component (i.e., process) on each host the user wishes to monitor as well as a set of components at a centralized location. DAMT provides the first known implementation of a network monitor at the application layer of abstraction. Potential users only need to know the process names of the distributed application they wish to monitor. The tool locates the processes and the circuit between them, and reports any traffic between them at a user-defined rate. The tool operates without the cooperation of the processes it monitors. Application processes require no changes to be monitored by this tool. Neither does DAMT require the UNIX kernel to be recompiled. The tool obtains process and circuit information by accessing the operating system's existing process database. This database contains all information available about currently executing processes. Expanding the information monitored by the tool can be done by utilizing more information from the process database. Traffic on a circuit between processes is monitored by a low-level LAN analyzer that has access to the raw network data. The tool also provides features such as dynamic event reporting and virtual path routing. A reusable object approach was used in the design of DAMT. The tool has four main components; the Virtual Path Switcher, the Central Monitor Complex, the Remote Monitor, and the LAN Analyzer. All of DAMT's components are independent, asynchronously executing processes. The independent processes communicate with each other via UNIX sockets through a Virtual Path router, or Switcher. The Switcher maintains a routing table showing the host of each component process of the tool, eliminating the need for each process to do so. The Central Monitor Complex provides the single application program interface (API) to the user and coordinates the activities of DAMT. The Central Monitor Complex is itself divided into independent objects that perform its functions. The component objects are the Central Monitor, the Process Locator, the Circuit Locator, and the Traffic Reporter. Each of these objects is an independent, asynchronously executing process. User requests to the tool are interpreted by the Central Monitor. The Process Locator identifies whether a named process is running on a monitored host and which host that is. The circuit between any two processes in the distributed application is identified using the Circuit Locator. The Traffic Reporter handles communication with the LAN Analyzer and accumulates traffic updates until it must send a traffic report to the user. The Remote Monitor process is replicated on each monitored host. It serves the Central Monitor Complex processes with application process information. The Remote Monitor process provides access to operating systems information about currently executing processes. It allows the Process Locator to find processes and the Circuit Locator to identify circuits between processes. 
It also provides lifetime information about currently monitored processes. The LAN Analyzer consists of two processes. Low-level monitoring is handled by the Sniffer. The Sniffer analyzes the raw data on a single, physical LAN. It responds to commands from the Analyzer process, which maintains the interface to the Traffic Reporter and keeps track of which circuits to monitor. DAMT is written in C-language for HP-9000 series computers running HP-UX and Sun 3 and 4 series computers running SunOS. DAMT requires 1Mb of disk space and 4Mb of RAM for execution. This package requires MIT's X Window System, Version 11 Revision 4, with OSF/Motif 1.1. The HP-9000 version (GSC-13589) includes sample HP-9000/375 and HP-9000/730 executables which were compiled under HP-UX, and the Sun version (GSC-13559) includes sample Sun3 and Sun4 executables compiled under SunOS. The standard distribution medium for the HP version of DAMT is a .25 inch HP pre-formatted streaming magnetic tape cartridge in UNIX tar format. It is also available on a 4mm magnetic tape in UNIX tar format. The standard distribution medium for the Sun version of DAMT is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. DAMT was developed in 1992.
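
    As a loose modern analogue of the Remote Monitor's use of the operating system's process database (not DAMT's actual implementation, which predates it), the Python sketch below uses the third-party psutil package to locate processes by name and list their network sockets, the raw material from which a circuit between two application processes could be identified. The process name is only an example.

      # Locate processes by name and list their sockets, roughly analogous to what
      # DAMT's Remote Monitor and Circuit Locator derive from the OS process
      # database. Requires the third-party psutil package.
      import psutil

      def find_pids(name):
          """Return the PIDs of all running processes whose name matches `name`."""
          return [p.info["pid"] for p in psutil.process_iter(["pid", "name"])
                  if p.info["name"] == name]

      def connections_for(pids):
          """Return (pid, local, remote, status) for inet sockets owned by `pids`."""
          rows = []
          for c in psutil.net_connections(kind="inet"):
              if c.pid in pids:
                  local = f"{c.laddr.ip}:{c.laddr.port}" if c.laddr else None
                  remote = f"{c.raddr.ip}:{c.raddr.port}" if c.raddr else None
                  rows.append((c.pid, local, remote, c.status))
          return rows

      if __name__ == "__main__":
          pids = find_pids("python3")              # example process name
          for row in connections_for(pids):
              print(row)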

  12. 78 FR 25254 - Announcing an Open Meeting of the Information Security and Privacy Advisory Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-30

    ... include the following items: --Cybersecurity Executive Order 13636, Improving Critical Infrastructure Cybersecurity (78 FR 11737, February 19, 2013); Development of New Cybersecurity Framework; Request for Information (RFI)--Developing a Framework to Improve Critical Infrastructure Cybersecurity (78 FR 13024...

  13. A journey of leadership: from bedside nurse to chief executive officer.

    PubMed

    Comack, Margret Tannis

    2012-01-01

    Understanding leadership from the inside out was a journey that spanned a 40-year career in health care. This article describes an individual's journey of becoming an effective executive leader using the LEADS in a caring environment--capabilities framework. This framework was recently developed in Canada and is now used broadly to understand the complexity and depth of health care leadership skills and challenges. The author utilizes the framework to explore leadership skill development from a personal perspective to a broader system transformation level. Challenges and successes along this journey are included to highlight the manner in which leadership evolves with experience, time, and determination. A retrospective view of a successful career in health care provides the model for others to consider a similar career path using a theoretical base and a thoughtful process of personal development.

  14. BioContainers: an open-source and community-driven framework for software standardization.

    PubMed

    da Veiga Leprevost, Felipe; Grüning, Björn A; Alves Aflitos, Saulo; Röst, Hannes L; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I; Perez-Riverol, Yasset

    2017-08-15

    BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt container frameworks, which allow software to be installed and executed under an isolated and controlled environment. It also provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). The software is freely available at github.com/BioContainers/. Contact: yperez@ebi.ac.uk. © The Author(s) 2017. Published by Oxford University Press.

  15. BioContainers: an open-source and community-driven framework for software standardization

    PubMed Central

    da Veiga Leprevost, Felipe; Grüning, Björn A.; Alves Aflitos, Saulo; Röst, Hannes L.; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C.; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I.; Perez-Riverol, Yasset

    2017-01-01

    Motivation: BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt container frameworks, which allow software to be installed and executed under an isolated and controlled environment. It also provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). Availability and Implementation: The software is freely available at github.com/BioContainers/. Contact: yperez@ebi.ac.uk. PMID:28379341

  16. An Extended Proof-Carrying Code Framework for Security Enforcement

    NASA Astrophysics Data System (ADS)

    Pirzadeh, Heidar; Dubé, Danny; Hamou-Lhadj, Abdelwahab

    The rapid growth of the Internet has resulted in increased attention to security to protect users from being victims of security threats. In this paper, we focus on security mechanisms that are based on Proof-Carrying Code (PCC) techniques. In a PCC system, a code producer sends a code along with its safety proof to the consumer. The consumer executes the code only if the proof is valid. Although PCC has been shown to be a useful security framework, it suffers from the sheer size of typical proofs: proofs of even small programs can be considerably large. In this paper, we propose an extended PCC framework (EPCC) in which, instead of the proof, a proof generator for the program in question is transmitted. This framework enables the execution of the proof generator and the recovery of the proof on the consumer's side in a secure manner using a newly created virtual machine called the VEP (Virtual Machine for Extended PCC).

  17. A Framework for Load Balancing of Tensor Contraction Expressions via Dynamic Task Partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Pai-Wei; Stock, Kevin; Rajbhandari, Samyam

    In this paper, we introduce the Dynamic Load-balanced Tensor Contractions (DLTC), a domain-specific library for efficient task-parallel execution of tensor contraction expressions, a class of computation encountered in quantum chemistry and physics. Our framework decomposes each contraction into smaller units of work (tasks), represented by an abstraction referred to as iterators. We exploit an extra level of parallelism by having tasks across independent contractions executed concurrently through a dynamic load balancing runtime. We demonstrate the improved performance, scalability, and flexibility for the computation of tensor contraction expressions on parallel computers using examples from coupled cluster methods.
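
    The Python sketch below gives a hedged, much-simplified picture of the approach: each contraction is split into independent row-block tasks, tasks from several independent contractions share one pool, and a thread pool acts as the dynamic load balancer. It uses NumPy and concurrent.futures rather than the DLTC library, whose API is not reproduced here; matrix shapes and block sizes are invented.

      # Simplified illustration of dynamic task partitioning for tensor contractions:
      # each contraction C = A @ B is split into row-block tasks, tasks from several
      # independent contractions share one queue, and a thread pool balances them.
      # This mimics the idea only; it is not the DLTC library.
      from concurrent.futures import ThreadPoolExecutor
      import numpy as np

      def make_tile_tasks(name, a, b, block=64):
          """Yield (name, row_slice, a_block, b) tasks for the contraction a @ b."""
          for start in range(0, a.shape[0], block):
              rows = slice(start, min(start + block, a.shape[0]))
              yield (name, rows, a[rows], b)

      def run_contractions(contractions, workers=4, block=64):
          """Execute tile tasks from all contractions concurrently, assemble results."""
          results = {name: np.empty((a.shape[0], b.shape[1]))
                     for name, (a, b) in contractions.items()}
          tasks = [t for name, (a, b) in contractions.items()
                   for t in make_tile_tasks(name, a, b, block)]

          def do(task):
              name, rows, a_block, b = task
              results[name][rows] = a_block @ b       # the actual tile contraction

          with ThreadPoolExecutor(max_workers=workers) as pool:
              list(pool.map(do, tasks))               # dynamic assignment of tasks
          return results

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          work = {"c1": (rng.random((256, 128)), rng.random((128, 64))),
                  "c2": (rng.random((192, 128)), rng.random((128, 32)))}
          out = run_contractions(work)
          print({k: v.shape for k, v in out.items()})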

  18. Strategic implementation and accountability: the case of the long-term care alliance.

    PubMed

    Seaman, Al; Elias, Maria; O'Neill, Bill; Yatabe, Karen

    2010-01-01

    A group of chief executives of long-term care homes formed an alliance in order to tap the resources residing within their management teams. Adopting a strategic implementation project based on a framework of accountability, the executives were able to better understand the uncertainties of the environment and potentially structure their strategic implementation to best use scarce resources. The framework of accountability allowed the homes to recognize the need for a strong business approach to long-term care. Communication improved throughout the organizations while systems and resources showed improved utilization. Quality became the driving force for all actions taken to move the organizations toward achieving their visions.

  19. Method and System for Controlling a Dexterous Robot Execution Sequence Using State Classification

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Quillin, Nathaniel (Inventor); Platt, Robert J., Jr. (Inventor); Pfeiffer, Joseph (Inventor); Permenter, Frank Noble (Inventor)

    2014-01-01

    A robotic system includes a dexterous robot and a controller. The robot includes a plurality of robotic joints, actuators for moving the joints, and sensors for measuring a characteristic of the joints, and for transmitting the characteristics as sensor signals. The controller receives the sensor signals, and is configured for executing instructions from memory, classifying the sensor signals into distinct classes via the state classification module, monitoring a system state of the robot using the classes, and controlling the robot in the execution of alternative work tasks based on the system state. A method for controlling the robot in the above system includes receiving the signals via the controller, classifying the signals using the state classification module, monitoring the present system state of the robot using the classes, and controlling the robot in the execution of alternative work tasks based on the present system state.
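
    A hedged, toy analogue of the claimed control flow (not the patented system itself) is sketched below in Python: joint-sensor signals are classified with a nearest-centroid rule into discrete system states, and the controller selects an alternative work task based on the current state. States, centroids, and task names are invented for the example.

      # Toy state-classification controller: classify a sensor vector into a discrete
      # system state by nearest centroid, then select a work task for that state.
      # States, centroids, and task names are invented for illustration.
      import math

      CENTROIDS = {                     # representative (grip_force, joint_torque)
          "object_grasped": (0.8, 0.6),
          "hand_empty":     (0.1, 0.2),
          "jammed":         (0.9, 1.5),
      }

      TASK_FOR_STATE = {
          "object_grasped": "place_object",
          "hand_empty":     "reach_and_grasp",
          "jammed":         "retract_and_recover",
      }

      def classify(signal):
          """Return the state whose centroid is nearest to the sensor signal."""
          def dist(state):
              return math.dist(signal, CENTROIDS[state])
          return min(CENTROIDS, key=dist)

      def select_task(signal):
          """Monitor the system state and choose the next work task accordingly."""
          state = classify(signal)
          return state, TASK_FOR_STATE[state]

      if __name__ == "__main__":
          print(select_task((0.75, 0.55)))   # -> ('object_grasped', 'place_object')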

  20. Maintaining the Health of Software Monitors

    NASA Technical Reports Server (NTRS)

    Person, Suzette; Rungta, Neha

    2013-01-01

    Software health management (SWHM) techniques complement the rigorous verification and validation processes that are applied to safety-critical systems prior to their deployment. These techniques are used to monitor deployed software in its execution environment, serving as the last line of defense against the effects of a critical fault. SWHM monitors use information from the specification and implementation of the monitored software to detect violations, predict possible failures, and help the system recover from faults. Changes to the monitored software, such as adding new functionality or fixing defects, therefore, have the potential to impact the correctness of both the monitored software and the SWHM monitor. In this work, we describe how the results of a software change impact analysis technique, Directed Incremental Symbolic Execution (DiSE), can be applied to monitored software to identify the potential impact of the changes on the SWHM monitor software. The results of DiSE can then be used by other analysis techniques, e.g., testing, debugging, to help preserve and improve the integrity of the SWHM monitor as the monitored software evolves.

  1. Repositioning Your EMBA Program and Reinventing Your Brand: A Case Study Analysis

    ERIC Educational Resources Information Center

    Petit, Francis

    2009-01-01

    The purpose of this research is to illustrate how Fordham University, the Jesuit University of New York, repositioned its Executive MBA Program and reinvented its brand, over a ten year period. More specifically, this research will analyze the current state of the Executive MBA market and will discuss the best practices and frameworks implemented…

  2. The Turnaround Mindset: Aligning Leadership for Student Success

    ERIC Educational Resources Information Center

    Fairchild, Tierney Temple; DeMary, Jo Lynne

    2011-01-01

    This book provides a valuable balance between what one must know and what one must do to turn around low-performing schools. The 3-E framework simplifies this complex process by focusing resources on the environment, the executive, and the execution of the turnaround plan. Central to each of these components is a spotlight on the values supporting…

  3. Job Loss at Mid-Life: Managers and Executives Face the "New Risk Economy"

    ERIC Educational Resources Information Center

    Mendenhall, Ruby; Kalil, Ariel; Spindel, Laurel J.; Hart, Cassandra M. D.

    2008-01-01

    We use a life course framework to examine how the "new risk economy" has left middle-age professionals, managers and executives more vulnerable to job loss and unemployment despite high levels of human capital. Using in-depth qualitative data from 77 recently-unemployed white-collar workers, we examine perceptions of macro-economic…

  4. 78 FR 5525 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-25

    ... with another complex order on the Exchange. \\3\\ See ISE Rule 722(a)(1). Rule 720 provides a framework... deemed to have occurred when the execution price of a transaction is higher or lower than the theoretical... criteria when determining the theoretical price of an options execution, which is enumerated in ISE Rule...

  5. Assessing corporate restructurings in the electric utility industry: A framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malko, J.R.

    1996-12-31

    Corporate restructurings of electric utilities in the United States have become an important and controversial issue during the 1980s. Regulators and electric utility executives have different perspectives concerning corporate restructurings associated with diversification, mergers, and functional separation of generation, transmission, and distribution. Regulators attempt to regulate electric utilities effectively in order to assure that adequate electricity services are provided at reasonable cost and to protect the public interest, which includes considering choices and risks to customers. Regulators are considering and developing new regulatory approaches in order to address corporate restructurings and balance regulation and competitive pressures. Electric utility executives typically view corporate restructurings as a potential partial solution to financial challenges and problems and are analyzing corporate restructuring activities within the framework of the corporate strategic planning process. Executives attempt to find new sources of economic value and consider risks and potential returns to investors in an increasingly competitive environment. The parent holding company is generally used as the basic corporate form for restructuring activities in the electric utility industry. However, the wholly-owned utility subsidiary structure remains in use for some restructurings. The primary purpose of this paper is to propose a framework to assess corporate restructurings in the electric utility industry from a public policy perspective. This paper is organized in the following manner. First, different types of corporate restructurings in the electric utility industry are examined. Second, reasons for corporate restructuring activities are presented. Third, a framework for assessing corporate restructuring activities is proposed. Fourth, the application of the framework is discussed.

  6. Measuring Progress Toward Universal Health Coverage: Does the Monitoring Framework of Bangladesh Need Further Improvement?

    PubMed Central

    Shahabuddin, ASM

    2018-01-01

    This review aimed to compare Bangladesh’s Universal Health Coverage (UHC) monitoring framework with the global-level recommendations and to identify the existing gaps in Bangladesh’s UHC monitoring framework compared with the global recommendations. In order to reach the aims of the review, we systematically searched two electronic databases - PubMed and Google Scholar - by using appropriate keywords to select articles that describe issues related to UHC and the monitoring framework of UHC applied globally and particularly in Bangladesh. Four relevant documents were found and synthesized. The review found that Bangladesh incorporated all of the recommendations suggested by the global monitoring framework regarding monitoring financial risk protection and the equity perspective. However, a significant gap in the monitoring framework related to service coverage was observed. Although Bangladesh has a significant burden of mental illnesses, cataract, and neglected tropical diseases, indicators related to these issues were absent in Bangladesh’s UHC framework. Moreover, palliative-care-related indicators were completely missing in the framework. The results of this review suggest that Bangladesh should incorporate these indicators in its UHC monitoring framework in order to track the progress of the country toward UHC more efficiently and in a robust way. PMID:29541562

  7. TOPPE: A framework for rapid prototyping of MR pulse sequences.

    PubMed

    Nielsen, Jon-Fredrik; Noll, Douglas C

    2018-06-01

    To introduce a framework for rapid prototyping of MR pulse sequences. We propose a simple file format, called "TOPPE", for specifying all details of an MR imaging experiment, such as gradient and radiofrequency waveforms and the complete scan loop. In addition, we provide a TOPPE file "interpreter" for GE scanners, which is a binary executable that loads TOPPE files and executes the sequence on the scanner. We also provide MATLAB scripts for reading and writing TOPPE files and previewing the sequence prior to hardware execution. With this setup, the task of the pulse sequence programmer is reduced to creating TOPPE files, eliminating the need for hardware-specific programming. No sequence-specific compilation is necessary; the interpreter only needs to be compiled once (for every scanner software upgrade). We demonstrate TOPPE in three different applications: k-space mapping, non-Cartesian PRESTO whole-brain dynamic imaging, and myelin mapping in the brain using inhomogeneous magnetization transfer. We successfully implemented and executed the three example sequences. By simply changing the various TOPPE sequence files, a single binary executable (interpreter) was used to execute several different sequences. The TOPPE file format is a complete specification of an MR imaging experiment, based on arbitrary sequences of a (typically small) number of unique modules. Along with the GE interpreter, TOPPE comprises a modular and flexible platform for rapid prototyping of new pulse sequences. Magn Reson Med 79:3128-3134, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  8. Building a Framework that Supports Project Teams: An Example from a University/Community Collaborative Initiative

    ERIC Educational Resources Information Center

    Kolb, Judith A.; Sandmeyer, Louise E.

    2007-01-01

    In the university initiative described in this article, a series of project teams were funded to work on a variety of collaborative projects. The focus of this piece is on the framework that was developed and executed to select, support, and evaluate these teams. The framework is explained and described using data gathered throughout the study and…

  9. Educational Objectives: The Why Matters

    DTIC Science & Technology

    2013-03-01

    Management Science 18, no. 2, (Oct., 1971): 28-30; Manfred F. R. Kets de Vries , “Decoding the Team Conundrum: The Eight Roles Executives Play...Examples of the behavioral role frameworks are Minzberg’s, Kets De Vries and Hart and Quinn’s.70 Examples of preferential role frameworks are...frameworks Minzburg Kets de Vries Hart & Quinn Belbin Keirsey Von Oech Figurehead Strategist Vision Setter Plant Rational Explorer Leader Change

  10. Organizational Use of a Framework for Innovation Adoption

    DTIC Science & Technology

    2011-09-01

    in current processes , the eight practices identified by Denning and Dunham’s The Innovator’s Way, Essential Practices For Successful Innovation (2010...framework for identifying gaps in current processes , the eight practices identified by Denning and Dunham’s The Innovator’s Way, Essential Practices For...60 2. Methods to Use within the Eight Practice Framework ..................63 a. Marine Corps Planning Process (MCPP) for Executing

  11. Executive High School Internship Program.

    ERIC Educational Resources Information Center

    Duperrault, JoAnn Hunter

    1992-01-01

    The Executive High School Internship Program in Tampa, Florida, involves gifted and talented high school seniors working for a semester as nonpaid administrative assistants in public or private sector organizations. The program's history, recruitment policies, placement practices, and monitoring are reviewed. (DB)

  12. The Automated Instrumentation and Monitoring System (AIMS): Design and Architecture. 3.2

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Schmidt, Melisa; Schulbach, Cathy; Bailey, David (Technical Monitor)

    1997-01-01

    Whether a researcher is designing the 'next parallel programming paradigm', another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of such information can help computer and software architects to capture, and therefore exploit, behavioral variations among and within various parallel programs to take advantage of specific hardware characteristics. A software tool-set that facilitates performance evaluation of parallel applications on multiprocessors has been put together at NASA Ames Research Center under the sponsorship of NASA's High Performance Computing and Communications Program over the past five years. The Automated Instrumentation and Monitoring System (AIMS) has three major software components: a source code instrumentor which automatically inserts active event recorders into program source code before compilation; a run-time performance monitoring library which collects performance data; and a visualization tool-set which reconstructs program execution based on the data collected. Besides being used as a prototype for developing new techniques for instrumenting, monitoring and presenting parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Currently, the execution of FORTRAN and C programs on the Intel Paragon and PALM workstations can be automatically instrumented and monitored. Performance data thus collected can be displayed graphically on various workstations. The process of performance tuning with AIMS will be illustrated using various NAS Parallel Benchmarks. This report includes a description of the internal architecture of AIMS and a listing of the source code.

  13. Age-Related Differences in Listening Effort During Degraded Speech Recognition.

    PubMed

    Ward, Kristina M; Shen, Jing; Souza, Pamela E; Grieco-Calub, Tina M

    The purpose of the present study was to quantify age-related differences in executive control as it relates to dual-task performance, which is thought to represent listening effort, during degraded speech recognition. Twenty-five younger adults (YA; 18-24 years) and 21 older adults (OA; 56-82 years) completed a dual-task paradigm that consisted of a primary speech recognition task and a secondary visual monitoring task. Sentence material in the primary task was either unprocessed or spectrally degraded into 8, 6, or 4 spectral channels using noise-band vocoding. Performance on the visual monitoring task was assessed by the accuracy and reaction time of participants' responses. Performance on the primary and secondary task was quantified in isolation (i.e., single task) and during the dual-task paradigm. Participants also completed a standardized psychometric measure of executive control, including attention and inhibition. Statistical analyses were implemented to evaluate changes in listeners' performance on the primary and secondary tasks (1) per condition (unprocessed vs. vocoded conditions); (2) per task (single task vs. dual task); and (3) per group (YA vs. OA). Speech recognition declined with increasing spectral degradation for both YA and OA when they performed the task in isolation or concurrently with the visual monitoring task. OA were slower and less accurate than YA on the visual monitoring task when performed in isolation, which paralleled age-related differences in standardized scores of executive control. When compared with single-task performance, OA experienced greater declines in secondary-task accuracy, but not reaction time, than YA. Furthermore, results revealed that age-related differences in executive control significantly contributed to age-related differences on the visual monitoring task during the dual-task paradigm. OA experienced significantly greater declines in secondary-task accuracy during degraded speech recognition than YA. These findings are interpreted as suggesting that OA expended greater listening effort than YA, which may be partially attributed to age-related differences in executive control.

  14. 5 CFR 430.306 - Monitoring performance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Monitoring performance. 430.306 Section 430.306 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERFORMANCE MANAGEMENT Managing Senior Executive Performance § 430.306 Monitoring performance. (a) Supervisors must...

  15. 5 CFR 430.306 - Monitoring performance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Monitoring performance. 430.306 Section 430.306 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERFORMANCE MANAGEMENT Managing Senior Executive Performance § 430.306 Monitoring performance. (a) Supervisors must...

  16. 5 CFR 430.306 - Monitoring performance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Monitoring performance. 430.306 Section 430.306 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERFORMANCE MANAGEMENT Managing Senior Executive Performance § 430.306 Monitoring performance. (a) Supervisors must...

  17. 5 CFR 430.306 - Monitoring performance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Monitoring performance. 430.306 Section 430.306 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERFORMANCE MANAGEMENT Managing Senior Executive Performance § 430.306 Monitoring performance. (a) Supervisors must...

  18. 5 CFR 430.306 - Monitoring performance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Monitoring performance. 430.306 Section 430.306 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERFORMANCE MANAGEMENT Managing Senior Executive Performance § 430.306 Monitoring performance. (a) Supervisors must...

  19. Academic procrastination in college students: the role of self-reported executive function.

    PubMed

    Rabin, Laura A; Fogel, Joshua; Nutter-Upham, Katherine E

    2011-03-01

    Procrastination, or the intentional delay of due tasks, is a widespread phenomenon in college settings. Because procrastination can negatively impact learning, achievement, academic self-efficacy, and quality of life, research has sought to understand the factors that produce and maintain this troublesome behavior. Procrastination is increasingly viewed as involving failures in self-regulation and volition, processes commonly regarded as executive functions. The present study was the first to investigate subcomponents of self-reported executive functioning associated with academic procrastination in a demographically diverse sample of college students aged 30 years and below (n = 212). We included each of nine aspects of executive functioning in multiple regression models that also included various demographic and medical/psychiatric characteristics, estimated IQ, depression, anxiety, neuroticism, and conscientiousness. The executive function domains of initiation, plan/organize, inhibit, self-monitor, working memory, task monitor, and organization of materials were significant predictors of academic procrastination in addition to increased age and lower conscientiousness. Results enhance understanding of the neuropsychological correlates of procrastination and may lead to practical suggestions or interventions to reduce its harmful effects on students' academic performance and well-being.

  20. Parallelization of the TRIGRS model for rainfall-induced landslides using the message passing interface

    USGS Publications Warehouse

    Alvioli, M.; Baum, R.L.

    2016-01-01

    We describe a parallel implementation of TRIGRS, the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Model for the timing and distribution of rainfall-induced shallow landslides. We have parallelized the four time-demanding execution modes of TRIGRS, namely both the saturated and unsaturated model with finite and infinite soil depth options, within the Message Passing Interface framework. In addition to new features of the code, we outline details of the parallel implementation and show the performance gain with respect to the serial code. Results are obtained both on commercial hardware and on a high-performance multi-node machine, showing the different limits of applicability of the new code. We also discuss the implications for the application of the model on large-scale areas and as a tool for real-time landslide hazard monitoring.
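
    The sketch below shows the general MPI domain-decomposition pattern the abstract refers to, using Python and mpi4py with an invented per-cell function (stability_index) and cell count; TRIGRS itself is a Fortran code and its actual decomposition may differ.

        # Illustrative grid decomposition across MPI ranks; run with e.g. `mpiexec -n 4 python script.py`.
        from mpi4py import MPI

        def stability_index(cell):
            # Placeholder for the per-cell infiltration and slope-stability computation.
            return 1.0 / (1.0 + cell)

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Each rank computes a contiguous slice of the grid cells.
        n_cells = 100_000
        lo = rank * n_cells // size
        hi = (rank + 1) * n_cells // size
        local = [stability_index(cell) for cell in range(lo, hi)]

        # Gather the partial results on rank 0 for output.
        chunks = comm.gather(local, root=0)
        if rank == 0:
            factor_of_safety = [value for chunk in chunks for value in chunk]
            print(f"computed {len(factor_of_safety)} cells on {size} ranks")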

  1. 8. The operational processes.

    PubMed

    2014-05-01

    There are two principal directions that disaster studies pursue: (1) interventional; and (2) noninterventional. Interventional studies are used to evaluate specific responses as to their effectiveness in meeting their respective objectives, their contribution to the overarching goal, the efficiency with which they are able to achieve their objectives, other effects created, and their respective costs. On the other hand, noninterventional studies examine the epidemiology of disasters and for the most part are observational. Both interventional and noninterventional studies require data/information obtained from assessments. This section of these Guidelines examines the operational framework used to study interventions/responses and includes the following processes: (1) assessments; (2) identification of needs; (3) strategic planning; (4) selection of intervention(s); (5) operational planning; (6) execution of interventions; and (7) monitoring and evaluation of effects and changes in levels of functions resulting from the intervention(s) being studied.

  2. Event-Driven Technology to Generate Relevant Collections of Near-Realtime Data

    NASA Astrophysics Data System (ADS)

    Graves, S. J.; Keiser, K.; Nair, U. S.; Beck, J. M.; Ebersole, S.

    2017-12-01

    Getting the right data when it is needed continues to be a challenge for researchers and decision makers. Event-Driven Data Delivery (ED3), funded by the NASA Applied Science program, is a technology that allows researchers and decision makers to pre-plan what data, information and processes they need to have collected or executed in response to future events. The Information Technology and Systems Center at the University of Alabama in Huntsville (UAH) has developed the ED3 framework in collaboration with atmospheric scientists at UAH, scientists at the Geological Survey of Alabama, and other federal, state and local stakeholders to meet the data preparedness needs for research, decisions and situational awareness. The ED3 framework exposes an API that supports the addition of loosely coupled, distributed event handlers and data processes. This approach allows the easy addition of new events and data processes so the system can scale to support virtually any type of event or data process. Using ED3's underlying services, applications have been developed that monitor for alerts of registered event types and automatically trigger subscriptions that match new events, providing users with a living "album" of results that can continue to be curated as more information for an event becomes available. This capability allows users to improve their capacity for the collection, creation, and use of data and real-time processes (data access, model execution, product generation, sensor tasking, social media filtering, etc.) in response to disaster (and other) events by preparing in advance for the data and information needs of future events. This presentation will provide an update on the ED3 developments and deployments, and further explain the applicability for utilizing near-realtime data in hazards research, response and situational awareness.
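
    A minimal sketch of the register-in-advance, trigger-on-event idea described above; the class and field names (EventDrivenDelivery, Subscription, Album) are hypothetical and do not come from the actual ED3 API.

        # Pre-planned subscriptions are matched against incoming event alerts; not ED3 code.
        from dataclasses import dataclass, field

        @dataclass
        class Subscription:
            event_type: str
            region: str
            data_processes: list          # callables to run when a matching event arrives

        @dataclass
        class Album:
            event: dict
            results: list = field(default_factory=list)

        class EventDrivenDelivery:
            def __init__(self):
                self.subscriptions = []

            def register(self, subscription):
                self.subscriptions.append(subscription)

            def on_event(self, event):
                """Match a new event alert against pre-planned subscriptions and run their processes."""
                albums = []
                for sub in self.subscriptions:
                    if sub.event_type == event["type"] and sub.region == event["region"]:
                        album = Album(event=event)
                        for process in sub.data_processes:   # data access, model run, product generation...
                            album.results.append(process(event))
                        albums.append(album)
                return albums

        if __name__ == "__main__":
            ed3 = EventDrivenDelivery()
            ed3.register(Subscription("wildfire", "AL",
                                      [lambda e: f"imagery request queued for {e['id']}"]))
            for album in ed3.on_event({"type": "wildfire", "region": "AL", "id": "evt-001"}):
                print(album.results)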

  3. Sustained Engagement with a Single Community Partner

    ERIC Educational Resources Information Center

    Lear, Darcy W.; Sanchez, Alejandro

    2013-01-01

    As scholarly work has recently turned its attention to the role of the community partner in Community Service-Learning (CSL) relationships, empirical frameworks for describing and executing community partnerships have emerged. This article applies those frameworks to one such partnership, which is presented from the perspective of both the…

  4. The ATLAS PanDA Monitoring System and its Evolution

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.

    2011-12-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.
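
    The snippet below sketches the separation of data preparation from presentation that motivated the migration: the server reduces raw job records to a JSON summary that a JSON/AJAX front end can render. It uses only the Python standard library and invented field names; it is not PanDA monitor code, and in a real deployment the same payload would be returned from a Django view.

        # Data preparation kept separate from presentation; the browser receives JSON only.
        import json
        from collections import Counter

        def summarize_jobs(job_rows):
            """Data preparation: reduce raw job records to per-state counts."""
            states = Counter(row["state"] for row in job_rows)
            return {"total": len(job_rows), "by_state": dict(states)}

        def json_response(payload):
            """Presentation boundary: serialize the summary for an AJAX front end."""
            return json.dumps(payload)

        if __name__ == "__main__":
            rows = [{"id": 1, "state": "running"},
                    {"id": 2, "state": "finished"},
                    {"id": 3, "state": "failed"}]
            print(json_response(summarize_jobs(rows)))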

  5. IDEA: Planning at the Core of Autonomous Reactive Agents

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Several successful autonomous systems are separated into technologically diverse functional layers operating at different levels of abstraction. This diversity makes them difficult to implement and validate. In this paper, we present IDEA (Intelligent Distributed Execution Architecture), a unified planning and execution framework. In IDEA a layered system can be implemented as separate agents, one per layer, each representing its interactions with the world in a model. At all levels, the model representation primitives and their semantics are the same. Moreover, each agent relies on a single model, plan database, plan runner, and on a variety of planners, both reactive and deliberative. The framework allows the specification of agents that operate within a guaranteed reaction time and supports flexible specification of reactive vs. deliberative agent behavior. Within the IDEA framework we are working to fully duplicate the functionalities of the DS1 Remote Agent and extend it to domains of higher complexity than autonomous spacecraft control.
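
    A highly simplified sketch of the per-agent ingredients listed above (a model, a plan database, a plan-runner cycle, and reactive plus deliberative planners); the class names and the 50 ms deliberation budget are invented for illustration and are not the IDEA implementation.

        # One agent layer with reactive and deliberative planning; illustrative only.
        class Agent:
            def __init__(self, model, reactive_planner, deliberative_planner):
                self.model = model                        # the agent's representation of its world
                self.plan_database = []                   # tokens/activities scheduled for execution
                self.reactive = reactive_planner          # fast, bounded-latency decisions
                self.deliberative = deliberative_planner  # slower look-ahead planning

            def step(self, observation, deadline_ms):
                """One plan-runner cycle: react within the guaranteed latency, deliberate if time allows."""
                action = self.reactive(self.model, observation)
                self.plan_database.append(action)
                if deadline_ms > 50:                      # assumed budget left for deliberation
                    self.plan_database.extend(self.deliberative(self.model, self.plan_database))
                return action

        if __name__ == "__main__":
            model = {"battery": 0.8}
            react = lambda m, obs: "safe_stop" if obs == "obstacle" else "continue"
            deliberate = lambda m, plan: ["recharge"] if m["battery"] < 0.2 else []
            agent = Agent(model, react, deliberate)
            print(agent.step("obstacle", deadline_ms=100))   # -> safe_stop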

  6. Selecting Senior Civilian Leaders in the Army

    DTIC Science & Technology

    1992-04-01

    to be successful. The Office of Personnel Management and the Army Research Institute have both been working in this area. The study recommends the...the manager-subordinate relationship. 1992 Executive Research Project S43, Selecting Senior Civilian Leaders in the Army, Barbara Heffernan, Department...Table-of-contents excerpt: The Office of Personnel Management; The Management Excellence Framework; The Management Excellence Inventory; Executive Development Programs

  7. Back from the Brink: How a Bold Vision and a Focus on Resources Can Drive System Improvement. Executive Summary

    ERIC Educational Resources Information Center

    Education Resource Strategies, 2015

    2015-01-01

    This executive summary describes a case study that implemented the framework School System 20/20 to examine how Lawrence Public Schools (Massachusetts) is transforming its policies and structures to better align resources with student and teacher needs. Education Resource Strategies' (ERS') School System 20/20 is a set of conditions and practices…

  8. Why information security belongs on the CFO's agenda.

    PubMed

    Quinnild, James; Fusile, Jeff; Smith, Cindy

    2006-02-01

    Healthcare financial executives need to understand the complex and growing role of information security in supporting the business of health care. The biggest security gaps in healthcare organizations occur in strategy and centralization, business executive preparation, and protected health information. CFOs should collaborate with the CIO in engaging a comprehensive framework to develop, implement, communicate, and maintain an enterprisewide information security strategy.

  9. 75 FR 74755 - Self-Regulatory Organizations; Notice of Filing of Proposed Rule Change by NASDAQ OMX PHLX LLC...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-01

    ... contingent orders and the rules that apply to such executions. Rule 1092 provides a framework for reviewing... execution price of a transaction is higher or lower than the theoretical price for a series by a certain amount depending on the type of option. OEOs use one of three criteria when determining the theoretical...

  10. Renewal and change for health care executives.

    PubMed

    Burke, G C; Bice, M O

    1991-01-01

    Health care executives must consider renewal and change within their own lives if they are to breathe life into their own institutions. Yet numerous barriers to executive renewal exist, including time pressures, fatigue, cultural factors, and trustee attitudes. This essay discusses such barriers and suggests approaches that health care executives may consider for programming renewal into their careers. These include self-assessment for professional and personal goals, career or job change, process vs. outcome considerations, solitude, networking, lifelong education, surrounding oneself with change agents, business travel and sabbaticals, reading outside the field, physical exercise, mentoring, learning from failures, a sense of humor, spiritual reflection, and family and friends. Renewal is a continuous, lifelong process requiring constant learning. Individual executives would do well to develop a framework for renewal in their careers and organizations.

  11. Does Mind Wandering Reflect Executive Function or Executive Failure? Comment on Smallwood and Schooler (2006) and Watkins (2008)

    PubMed Central

    McVay, Jennifer C.; Kane, Michael J.

    2010-01-01

    In this Comment, we contrast different conceptions of mind wandering that were presented in two recent theoretical reviews: Smallwood and Schooler (2006) and Watkins (2008). We also introduce a new perspective on the role of executive control in mind wandering by integrating empirical evidence presented in Smallwood and Schooler (2006) with two theoretical frameworks: Watkins’s (2008) elaborated control theory and Klinger’s (1971; 2009) current concerns theory. In contrast to the Smallwood-Schooler claim that mind-wandering recruits executive resources, we argue that mind wandering represents a failure of executive control and that it is dually determined by the presence of automatically generated thoughts in response to environmental and mental cues and the ability of the executive-control system to deal with this interference. We present empirical support for this view from experimental, neuroimaging, and individual-differences research. PMID:20192557

  12. Intelligent Rover Execution for Detecting Life in the Atacama Desert

    NASA Technical Reports Server (NTRS)

    Baskaran, Vijayakumar; Muscettola, Nicola; Rijsman, David; Plaunt, Chris; Fry, Chuck

    2006-01-01

    On-board supervisory execution is crucial for the deployment of more capable and autonomous remote explorers. Planetary science is considering robotic explorers operating for long periods of time without ground supervision while interacting with a changing and often hostile environment. Effective and robust operations require on-board supervisory control with a high level of awareness of the principles of functioning of the environment and of the numerous internal subsystems that need to be coordinated. We describe an on-board rover executive that was deployed on a rover as part of the "Limits of Life in the Atacama Desert (LITA)" field campaign sponsored by the NASA ASTEP program. The executive was built using the Intelligent Distributed Execution Architecture (IDEA), an execution framework that uses model-based and plan-based supervisory control as its fundamental computational paradigm. We present the results of the third field experiment conducted in the Atacama desert (Chile) in August - October 2005.

  13. How do emotion and motivation direct executive control?

    PubMed

    Pessoa, Luiz

    2009-04-01

    Emotion and motivation have crucial roles in determining human behavior. Yet, how they interact with cognitive control functions is less understood. Here, the basic elements of a conceptual framework for understanding how they interact are introduced. More broadly, the 'dual competition' framework proposes that emotion and motivation affect both perceptual and executive competition. In particular, the anterior cingulate cortex is hypothesized to be engaged in attentional/effortful control mechanisms and to interact with several other brain structures, including the amygdala and nucleus accumbens, in integrating affectively significant signals with control signals in prefrontal cortex. An implication of the proposal is that emotion and motivation can either enhance or impair behavioral performance depending on how they interact with control functions.

  14. FPGA implemented testbed in 8-by-8 and 2-by-2 OFDM-MIMO channel estimation and design of baseband transceiver.

    PubMed

    Ramesh, S; Seshasayanan, R

    2016-01-01

    In this study, a baseband OFDM-MIMO system with channel estimation and timing synchronization is designed and implemented using FPGA technology. The system is prototyped based on the IEEE 802.11a standard, and the signals are transmitted and received using a bandwidth of 20 MHz. With QPSK modulation, the system can achieve a throughput of 24 Mbps. In addition, the LS algorithm is implemented and the estimation of a frequency-selective fading channel is demonstrated. For coarse timing estimation, the MNC scheme is examined and implemented. First, the whole system is modeled in MATLAB and a floating-point model is established. Then, the fixed-point model is created with the help of Simulink and Xilinx's System Generator for DSP. The system is subsequently synthesized and implemented within Xilinx's ISE tools and targeted to a Xilinx Virtex 5 board. In addition, a hardware co-simulation is devised to reduce the processing time when computing the BER of the fixed-point model. The work represents a first step toward further investigation of novel channel estimation strategies for applications in fourth-generation (4G) mobile communication systems.

  15. Working Memory in Children With Neurocognitive Effects From Sickle Cell Disease: Contributions of the Central Executive and Processing Speed

    PubMed Central

    Smith, Kelsey E.; Schatz, Jeffrey

    2017-01-01

    Children with sickle cell disease (SCD) are at risk for working memory deficits due to multiple disease processes. We assessed working memory abilities and related functions in 32 school-age children with SCD and 85 matched comparison children using Baddeley’s working memory model as a framework. Children with SCD performed worse than controls for working memory, central executive function, and processing/rehearsal speed. Central executive function was found to mediate the relationship between SCD status and working memory, but processing speed did not. Cognitive remediation strategies that focus on central executive processes may be important for remediating working memory deficits in SCD. PMID:27759435

  16. A task scheduler framework for self-powered wireless sensors.

    PubMed

    Nordman, Mikael M

    2003-10-01

    The cost and inconvenience of cabling is a factor limiting widespread use of intelligent sensors. Recent developments in short-range, low-power radio seem to provide an opening to this problem, making development of wireless sensors feasible. However, for these sensors the energy availability is a main concern. The common solution is either to use a battery or to harvest ambient energy. The benefit of harvested ambient energy is that the energy feeder can be considered as lasting a lifetime, thus saving the user from concerns related to energy management. The problem is, however, the unpredictability and unsteady behavior of ambient energy sources. This becomes a main concern for sensors that run multiple tasks at different priorities. This paper proposes a new scheduler framework that enables the reliable assignment of task priorities and scheduling in sensors powered by ambient energy. The framework, based on environment parameters, virtual queues, and a state machine with transition conditions, dynamically manages task execution according to priorities. The framework is assessed in a test system powered by a solar panel. The results show the functionality of the framework and how task execution is reliably handled without violating the priority scheme that has been assigned to it.
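
    A minimal sketch of an energy-state machine gating a priority queue, in the spirit of the framework described above; the thresholds, task costs, and three-level state table are invented for the example.

        # Energy-aware priority scheduling sketch; not the paper's framework.
        import heapq

        # State machine: available energy selects which priority levels may run.
        STATES = [            # (minimum stored energy, lowest priority allowed to run)
            (5.0, 3),         # plenty of energy: run everything (priorities 1..3)
            (2.0, 2),         # moderate energy: only priorities 1..2
            (0.5, 1),         # scarce energy: critical tasks (priority 1) only
        ]

        def allowed_priority(stored_energy):
            for threshold, lowest in STATES:
                if stored_energy >= threshold:
                    return lowest
            return 0          # below every threshold: run nothing

        def run_cycle(stored_energy, tasks):
            """tasks: list of (priority, energy cost, name); lower priority number = more important."""
            queue = list(tasks)
            heapq.heapify(queue)
            executed = []
            while queue:
                priority, cost, name = queue[0]
                if priority > allowed_priority(stored_energy) or cost > stored_energy:
                    break                      # never violate the priority/energy scheme
                heapq.heappop(queue)
                stored_energy -= cost
                executed.append(name)
            return executed, stored_energy

        if __name__ == "__main__":
            tasks = [(1, 0.2, "report_alarm"), (2, 0.5, "sample_sensor"), (3, 1.5, "radio_sync")]
            print(run_cycle(stored_energy=1.0, tasks=tasks))   # -> (['report_alarm'], 0.8)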

  17. 17 CFR 37.406 - Trade reconstruction.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 17 Commodity and Securities Exchanges 1 2014-04-01 2014-04-01 false Trade reconstruction. 37.406 Section 37.406 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION SWAP EXECUTION FACILITIES Monitoring of Trading and Trade Processing § 37.406 Trade reconstruction. The swap execution...

  18. Auditory Hallucinations in Schizophrenia and Nonschizophrenia Populations: A Review and Integrated Model of Cognitive Mechanisms

    PubMed Central

    Waters, Flavie; Allen, Paul; Aleman, André; Fernyhough, Charles; Woodward, Todd S.; Badcock, Johanna C.; Barkus, Emma; Johns, Louise; Varese, Filippo; Menon, Mahesh; Vercammen, Ans; Larøi, Frank

    2012-01-01

    While the majority of cognitive studies on auditory hallucinations (AHs) have been conducted in schizophrenia (SZ), an increasing number of researchers are turning their attention to different clinical and nonclinical populations, often using SZ findings as a model for research. Recent advances derived from SZ studies can therefore be utilized to make substantial progress on AH research in other groups. The objectives of this article were to (1) present an up-to-date review regarding the cognitive mechanisms of AHs in SZ, (2) review findings from cognitive research conducted in other clinical and nonclinical groups, and (3) integrate these recent findings into a cohesive framework. First, SZ studies show that the cognitive underpinnings of AHs include self-source-monitoring deficits and executive and inhibitory control dysfunctions as well as distortions in top-down mechanisms, perceptual and linguistic processes, and emotional factors. Second, consistent with SZ studies, findings in other population groups point to the role of top-down processing, abnormalities in executive inhibition, and negative emotions. Finally, we put forward an integrated model of AHs that incorporates the above findings. We suggest that AHs arise from an interaction between abnormal neural activation patterns that produce salient auditory signals and top-down mechanisms that include signal detection errors, executive and inhibition deficits, a tapestry of expectations and memories, and state characteristics that influence how these experiences are interpreted. Emotional factors play a particularly prominent role at all levels of this hierarchy. Our model is distinctively powerful in explaining a range of phenomenological characteristics of AH across a spectrum of disorders. PMID:22446568

  19. Mastering the management system.

    PubMed

    Kaplan, Robert S; Norton, David P

    2008-01-01

    Companies have always found it hard to balance pressing operational concerns with long-term strategic priorities. The tension is critical: World-class processes won't lead to success without the right strategic direction, and the best strategy in the world will get nowhere without strong operations to execute it. In this article, Kaplan, of Harvard Business School, and Norton, founder and director of the Palladium Group, explain how to effectively manage both strategy and operations by linking them tightly in a closed-loop management system. The system comprises five stages, beginning with strategy development, which springs from a company's mission, vision, and value statements, and from an analysis of its strengths, weaknesses, and competitive environment. In the next stage, managers translate the strategy into objectives and initiatives with strategy maps, which organize objectives by themes, and balanced scorecards, which link objectives to performance metrics. Stage three involves creating an operational plan to accomplish the objectives and initiatives; it includes targeting process improvements and preparing sales, resource, and capacity plans and dynamic budgets. Managers then put plans into action, monitoring their effectiveness in stage four. They review operational, environmental, and competitive data; assess progress; and identify barriers to execution. In the final stage, they test the strategy, analyzing cost, profitability, and correlations between strategy and performance. If their underlying assumptions appear faulty, they update the strategy, beginning another loop. The authors present not only a comprehensive blueprint for successful strategy execution but also a managerial tool kit, illustrated with examples from HSBC Rail, Cigna Property and Casualty, and Store 24. The kit incorporates leading management experts' frameworks, outlining where they fit into the management cycle.

  20. A Generalized-Compliant-Motion Primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.

    1993-01-01

    Computer program bridges gap between planning and execution of compliant robotic motions; developed and installed in the control system of a telerobot. Called the "generalized-compliant-motion primitive," it is one of several task-execution-primitive computer programs that receive commands from higher-level task-planning programs and execute them by generating the required trajectories and applying appropriate control laws. Program comprises four parts corresponding to nominal motion, compliant motion, ending motion, and monitoring. Written in C language.
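
    The original primitive is written in C and runs inside a telerobot control system; the Python skeleton below only mirrors the four parts named above (nominal motion, compliant motion, ending motion, and monitoring), with an invented command format and a stand-in monitor.

        # Four-part execution primitive skeleton; illustrative only.
        def generalized_compliant_motion(command, monitor):
            # Nominal motion: generate the commanded trajectory.
            start, goal = command["start"], command["goal"]
            trajectory = [start + i * (goal - start) / 10 for i in range(11)]

            executed = []
            for point in trajectory:
                # Compliant motion: apply a control-law correction (here a trivial offset).
                point += command.get("force_correction", 0.0)
                # Monitoring: abort the motion if any monitored condition trips.
                if not monitor(point):
                    return "aborted", executed
                executed.append(point)

            # Ending motion: settle exactly on the goal before reporting completion.
            executed.append(goal)
            return "done", executed

        if __name__ == "__main__":
            within_limits = lambda p: abs(p) < 10.0     # stand-in force/position monitor
            print(generalized_compliant_motion({"start": 0.0, "goal": 1.0}, within_limits))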

  1. Self-reports of executive dysfunction in current ecstasy/polydrug Users.

    PubMed

    Hadjiefthyvoulou, Florentia; Fisk, John E; Montgomery, Catharine; Bridges, Nikola

    2012-09-01

    Ecstasy/polydrug users have exhibited deficits in executive functioning in laboratory tests. We sought to extend these findings by investigating the extent to which ecstasy/polydrug users manifest executive deficits in everyday life. Forty-two current ecstasy/polydrug users, 18 previous (abstinent for at least 6 months) ecstasy/polydrug users, and 50 non-users of ecstasy (including both non-users of any illicit drug and some cannabis-only users) completed the self-report Behavior Rating Inventory of Executive Function-Adult Version (BRIEF-A) measure. Current ecstasy/polydrug users performed significantly worse than previous users and non-users on subscales measuring inhibition, self-monitoring, initiating action, working memory, planning, monitoring ongoing task performance, and organizational ability. Previous ecstasy/polydrug users did not differ significantly from non-users. In regression analyses, although the current frequency of ecstasy use accounted for statistically significant unique variance on 3 of the 9 BRIEF-A subscales, daily cigarette consumption was the main predictor in 6 of the subscales. Current ecstasy/polydrug users report more executive dysfunction than do previous users and non-users. This finding appears to relate to some aspect of ongoing ecstasy use and seems largely unrelated to the use of other illicit drugs. An unexpected finding was the association of current nicotine consumption with executive dysfunction.

  2. Exploring Two Approaches for an End-to-End Scientific Analysis Workflow

    NASA Astrophysics Data System (ADS)

    Dodelson, Scott; Kent, Steve; Kowalkowski, Jim; Paterno, Marc; Sehrish, Saba

    2015-12-01

    The scientific discovery process can be advanced by the integration of independently-developed programs run on disparate computing facilities into coherent workflows usable by scientists who are not experts in computing. For such advancement, we need a system which scientists can use to formulate analysis workflows, to integrate new components to these workflows, and to execute different components on resources that are best suited to run those components. In addition, we need to monitor the status of the workflow as components get scheduled and executed, and to access the intermediate and final output for visual exploration and analysis. Finally, it is important for scientists to be able to share their workflows with collaborators. We have explored two approaches for such an analysis framework for the Large Synoptic Survey Telescope (LSST) Dark Energy Science Collaboration (DESC); the first one is based on the use and extension of Galaxy, a web-based portal for biomedical research, and the second one is based on a programming language, Python. In this paper, we present a brief description of the two approaches, describe the kinds of extensions to the Galaxy system we have found necessary in order to support the wide variety of scientific analysis in the cosmology community, and discuss how similar efforts might be of benefit to the HEP community.

  3. Executive functions in adults with developmental dyslexia.

    PubMed

    Smith-Spark, James H; Henry, Lucy A; Messer, David J; Edvardsdottir, Elisa; Zięcik, Adam P

    2016-01-01

    Executive functioning (EF) deficits are well recognized in developmental dyslexia, yet the majority of studies have concerned children rather than adults, ignored the subjective experience of the individual with dyslexia (with regard to their own EFs), and have not followed current theoretical perspectives on EFs. The current study addressed these shortfalls by administering a self-report measure of EF (BRIEF-A; Roth, Isquith, & Gioia, 2005) and experimental tasks to IQ-matched groups of adults with and without dyslexia. The laboratory-based tasks tested the three factors constituting the framework of EF proposed by Miyake et al. (2000). In comparison to the group without dyslexia, the participants with dyslexia self-reported more frequent EF problems in day-to-day life, with these difficulties centering on metacognitive processes (working memory, planning, task monitoring, and organization) rather than on the regulation of emotion and behaviour. The participants with dyslexia showed significant deficits in EF (inhibition, set shifting, and working memory). The findings indicated that dyslexia-related problems have an impact on the daily experience of adults with the condition. Further, EF difficulties are present in adulthood across a range of laboratory-based measures, and, given the nature of the experimental tasks presented, extend beyond difficulties related solely to phonological processing. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Architectures of small satellite programs in developing countries

    NASA Astrophysics Data System (ADS)

    Wood, Danielle; Weigel, Annalisa

    2014-04-01

    Global participation in space activity is growing as satellite technology matures and spreads. Countries in Africa, Asia and Latin America are creating or reinvigorating national satellite programs. These countries are building local capability in space through technological learning. This paper analyzes implementation approaches in small satellite programs within developing countries. The study addresses diverse examples of approaches used to master, adapt, diffuse and apply satellite technology in emerging countries. The work focuses on government programs that represent the nation and deliver services that provide public goods such as environmental monitoring. An original framework developed by the authors examines implementation approaches and contextual factors using the concept of Systems Architecture. The Systems Architecture analysis defines the satellite programs as systems within a context which execute functions via forms in order to achieve stakeholder objectives. These Systems Architecture definitions are applied to case studies of six satellite projects executed by countries in Africa and Asia. The architectural models used by these countries in various projects reveal patterns in the areas of training, technical specifications and partnership style. Based on these patterns, three Archetypal Project Architectures are defined which link the contextual factors to the implementation approaches. The three Archetypal Project Architectures lead to distinct opportunities for training, capability building and end user services.

  5. Executing CLIPS expert systems in a distributed environment

    NASA Technical Reports Server (NTRS)

    Taylor, James; Myers, Leonard

    1990-01-01

    This paper describes a framework for running cooperating agents in a distributed environment to support the Intelligent Computer Aided Design System (ICADS), a project in progress at the CAD Research Unit of the Design Institute at the California Polytechnic State University. Currently, the system aids an architectural designer in creating a floor plan that satisfies some general architectural constraints and project specific requirements. At the core of ICADS is the Blackboard Control System. Connected to the blackboard are any number of domain experts called Intelligent Design Tools (IDT). The Blackboard Control System monitors the evolving design as it is being drawn and helps resolve conflicts from the domain experts. The user serves as a partner in this system by manipulating the floor plan in the CAD system and validating recommendations made by the domain experts. The primary components of the Blackboard Control System are two expert systems executed by a modified CLIPS shell. The first is the Message Handler. The second is the Conflict Resolver. The Conflict Resolver synthesizes the suggestions made by the domain experts, which can be either CLIPS expert systems or compiled C programs. In DEMO1, the current ICADS prototype, the CLIPS domain expert systems are Acoustics, Lighting, Structural, and Thermal; the compiled C domain experts are the CAD system and the User Interface.
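
    The sketch below shows the blackboard pattern the abstract describes (domain experts post suggestions, a conflict resolver synthesizes them); it is written in Python with invented expert rules and a trivial resolution policy, whereas the real system uses a modified CLIPS shell and compiled C programs.

        # Blackboard pattern sketch; not ICADS code.
        class Blackboard:
            def __init__(self):
                self.design = {}            # the evolving floor plan (attribute -> value)
                self.suggestions = []       # (expert name, attribute, proposed value)

        class DomainExpert:
            def __init__(self, name, rule):
                self.name, self.rule = name, rule

            def review(self, blackboard):
                for attribute, value in self.rule(blackboard.design):
                    blackboard.suggestions.append((self.name, attribute, value))

        def resolve_conflicts(blackboard):
            """Trivial resolution policy for the sketch: the last suggestion per attribute wins."""
            for _, attribute, value in blackboard.suggestions:
                blackboard.design[attribute] = value
            blackboard.suggestions.clear()

        if __name__ == "__main__":
            bb = Blackboard()
            bb.design["window_area_m2"] = 2.0
            lighting = DomainExpert("Lighting",
                                    lambda d: [("window_area_m2", max(d.get("window_area_m2", 0), 4.0))])
            acoustics = DomainExpert("Acoustics", lambda d: [("wall_rating_dB", 45)])
            for expert in (lighting, acoustics):
                expert.review(bb)
            resolve_conflicts(bb)
            print(bb.design)    # -> {'window_area_m2': 4.0, 'wall_rating_dB': 45}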

  6. Operationalizing the Space Weather Modeling Framework: Challenges and Resolutions

    NASA Astrophysics Data System (ADS)

    Welling, D. T.; Gombosi, T. I.; Toth, G.; Singer, H. J.; Millward, G. H.; Balch, C. C.; Cash, M. D.

    2016-12-01

    Predicting ground-based magnetic perturbations is a critical step towards specifying and predicting geomagnetically induced currents (GICs) in high voltage transmission lines. Currently, the Space Weather Modeling Framework (SWMF), a flexible modeling framework for simulating the multi-scale space environment, is being transitioned from research to operational use (R2O) by NOAA's Space Weather Prediction Center. Upon completion of this transition, the SWMF will provide localized time-varying magnetic field (dB/dt) predictions using real-time solar wind observations from L1 and the F10.7 proxy for EUV as model input. This presentation chronicles the challenges encountered during the R2O transition of the SWMF. Because operations relies on frequent calculations of global surface dB/dt, new optimizations were required to keep the model running faster than real time. Additionally, several singular situations arose during the 30-day robustness test that required immediate attention. Solutions and strategies for overcoming these issues will be presented. This includes new failsafe options for code execution, new physics and coupling parameters, and the development of an automated validation suite that allows us to monitor performance with code evolution. Finally, the operations-to-research (O2R) impact on SWMF-related research is presented. The lessons learned from this work are valuable and instructive for the space weather community as further R2O progress is made.

  7. MONITORING AND ASSESSING THE CONDITION OF AQUATIC RESOURCES: ROLE OF COMPLEX SURVEY DESIGN AND ANALYSIS

    EPA Science Inventory

    The National Water Quality Monitoring Council (NWQMC) developed a common framework for aquatic resource monitoring. The framework is described in a series of articles published in Water Resources IMPACT, September, 2003. One objective of the framework is to encourage consistenc...

  8. Flexible Delivery. Will a Client Focus System Mean Better Learning?

    ERIC Educational Resources Information Center

    Misko, Josie

    This paper outlines and examines the implications of the main points of the national framework for flexible delivery of vocational education in Australia's technical and further education (TAFE) colleges. Endorsed by the National TAFE Chief Executives Committee in 1992, the framework establishes specific plans of action to be achieved by 1995. The…

  9. Apollo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckingsal, David; Gamblin, Todd

    Modern performance portability frameworks provide application developers with a flexible way to determine how to run application kernels; however, they provide no guidance as to the best configuration for a given kernel. Apollo provides a model-generation framework that, when integrated with the RAJA library, uses lightweight decision tree models to select the fastest execution configuration on a per-kernel basis.
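
    As an illustration of the approach rather than Apollo itself (which is C++ and integrates with RAJA), the sketch below trains a lightweight decision tree that maps per-kernel features to the fastest execution policy; the features, labels, and policy encoding are made up, and it requires scikit-learn.

        # Learn a per-kernel policy selector from (features -> fastest policy) samples.
        from sklearn.tree import DecisionTreeClassifier

        # Training data: (iteration count, bytes touched per iteration) -> fastest policy,
        # e.g. 0 = sequential, 1 = threaded, 2 = GPU offload.  Values are illustrative.
        features = [[1_000, 8], [10_000, 8], [1_000_000, 8], [1_000_000, 256], [50_000_000, 64]]
        fastest_policy = [0, 0, 1, 1, 2]

        model = DecisionTreeClassifier(max_depth=3).fit(features, fastest_policy)

        def select_policy(iterations, bytes_per_iter):
            """Per-kernel selection at run time: cheap enough to evaluate on every launch."""
            return int(model.predict([[iterations, bytes_per_iter]])[0])

        if __name__ == "__main__":
            print(select_policy(2_000_000, 16))   # expected to pick a parallel policy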

  10. 76 FR 18505 - Fisheries of the Northeastern United States; Northeast Skate Complex Fishery; Framework Adjustment 1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-04

    ... alternatives for each of three principal management measures: (1) The primary possession limit affecting the... Management Plan (Skate FMP). Framework Adjustment 1 was developed by the New England Fishery Management... (IRFA), are available on request from Paul J. Howard, Executive Director, New England Fishery Management...

  11. Specification and Error Pattern Based Program Monitoring

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Johnson, Scott; Rosu, Grigore; Clancy, Daniel (Technical Monitor)

    2001-01-01

    We briefly present Java PathExplorer (JPaX), a tool developed at NASA Ames for monitoring the execution of Java programs. JPaX can be used not only during program testing to reveal subtle errors, but can also be applied during operation to survey safety-critical systems. The tool facilitates automated instrumentation of a program in order to properly observe its execution. The instrumentation can be either at the bytecode level or at the source level when the source code is available. JPaX is an instance of a more general project, called PathExplorer (PAX), which is a basis for experiments rather than a fixed system, capable of monitoring various programming languages and experimenting with other logics and analysis techniques.
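
    JPaX instruments Java programs; as a language-neutral illustration of the same finite-trace monitoring idea, the Python sketch below checks a simple temporal property ("every open is eventually followed by a matching close") over an observed event trace. The event vocabulary is invented.

        # Finite-trace monitor for an eventually-closed property; not JPaX code.
        def monitor(trace):
            open_resources = set()
            violations = []
            for step, (event, resource) in enumerate(trace):
                if event == "open":
                    open_resources.add(resource)
                elif event == "close":
                    if resource not in open_resources:
                        violations.append((step, f"close without open: {resource}"))
                    open_resources.discard(resource)
            # End of finite trace: anything still open violates the property.
            violations.extend((len(trace), f"never closed: {r}") for r in sorted(open_resources))
            return violations

        if __name__ == "__main__":
            trace = [("open", "log"), ("open", "socket"), ("close", "log")]
            print(monitor(trace))        # -> [(3, 'never closed: socket')]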

  12. Checkpoint triggering in a computer system

    DOEpatents

    Cher, Chen-Yong

    2016-09-06

    According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
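
    A minimal sketch of the claimed control loop, with an invented metric, a fixed threshold (the abstract describes the threshold as being determined from the metric value), and a JSON checkpoint format chosen only for the example.

        # Periodically read a monitor and create a checkpoint when the metric crosses a threshold.
        import json
        import random

        def read_monitor():
            """Stand-in for reading a hardware/software monitor (e.g. a correctable-error count)."""
            return random.randint(0, 120)

        def create_checkpoint(task_state, path="checkpoint.json"):
            with open(path, "w") as fh:
                json.dump(task_state, fh)          # state data needed to restart the task

        def run_task(steps, read_interval=5, threshold=100):
            task_state = {"step": 0, "partial_sum": 0}
            for step in range(1, steps + 1):
                task_state["step"] = step
                task_state["partial_sum"] += step          # the task being executed
                if step % read_interval == 0:              # is it time to read the monitor?
                    metric = read_monitor()
                    if metric > threshold:                 # the metric crossed the threshold
                        create_checkpoint(task_state)
            return task_state

        if __name__ == "__main__":
            print(run_task(steps=50))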

  13. The extended fronto-striatal model of obsessive compulsive disorder: convergence from event-related potentials, neuropsychology and neuroimaging

    PubMed Central

    Melloni, Margherita; Urbistondo, Claudia; Sedeño, Lucas; Gelormini, Carlos; Kichic, Rafael; Ibanez, Agustin

    2012-01-01

    In this work, we explored convergent evidence supporting the fronto-striatal model of obsessive-compulsive disorder (FSMOCD) and the contribution of event-related potential (ERP) studies to this model. First, we considered minor modifications to the FSMOCD model based on neuroimaging and neuropsychological data. We noted the brain areas most affected in this disorder -anterior cingulate cortex (ACC), basal ganglia (BG), and orbito-frontal cortex (OFC) and their related cognitive functions, such as monitoring and inhibition. Then, we assessed the ERPs that are directly related to the FSMOCD, including the error-related negativity (ERN), N200, and P600. Several OCD studies present enhanced ERN and N2 responses during conflict tasks as well as an enhanced P600 during working memory (WM) tasks. Evidence from ERP studies (especially regarding ERN and N200 amplitude enhancement), neuroimaging and neuropsychological findings suggests abnormal activity in the OFC, ACC, and BG in OCD patients. Moreover, additional findings from these analyses suggest dorsolateral prefrontal and parietal cortex involvement, which might be related to executive function (EF) deficits. Thus, these convergent results suggest the existence of a self-monitoring imbalance involving inhibitory deficits and executive dysfunctions. OCD patients present an impaired ability to monitor, control, and inhibit intrusive thoughts, urges, feelings, and behaviors. In the current model, this imbalance is triggered by an excitatory role of the BG (associated with cognitive or motor actions without volitional control) and inhibitory activity of the OFC as well as excessive monitoring of the ACC to block excitatory impulses. This imbalance would interact with the reduced activation of the parietal-DLPC network, leading to executive dysfunction. ERP research may provide further insight regarding the temporal dynamics of action monitoring and executive functioning in OCD. PMID:23015786

  14. Application of Executable Architecture in Early Concept Evaluation using the DoD Architecture Framework

    DTIC Science & Technology

    2016-09-15

    Table-of-contents excerpt: Methodology Overview; III. Methodology; Overview of Research Methodology; Implementation of Methodology

  15. Combining qualitative and quantitative spatial and temporal information in a hierarchical structure: Approximate reasoning for plan execution monitoring

    NASA Technical Reports Server (NTRS)

    Hoebel, Louis J.

    1993-01-01

    The problem of plan generation (PG) and the problem of plan execution monitoring (PEM), including updating, queries, and resource-bounded replanning, have different reasoning and representation requirements. PEM requires the integration of qualitative and quantitative information. PEM is the receiving of data about the world in which a plan or agent is executing. The problem is to quickly determine the relevance of the data, the consistency of the data with respect to the expected effects, and if execution should continue. Only spatial and temporal aspects of the plan are addressed for relevance in this work. Current temporal reasoning systems are deficient in computational aspects or expressiveness. This work presents a hybrid qualitative and quantitative system that is fully expressive in its assertion language while offering certain computational efficiencies. In order to proceed, methods incorporating approximate reasoning using hierarchies, notions of locality, constraint expansion, and absolute parameters need be used and are shown to be useful for the anytime nature of PEM.
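
    The sketch below illustrates only the relevance/consistency check described above: incoming observations are tested against quantitative time windows attached to expected plan effects. The data structures are invented, and the qualitative-reasoning layer of the paper is not represented.

        # Relevance and consistency check for plan execution monitoring; illustrative only.
        from dataclasses import dataclass

        @dataclass
        class ExpectedEffect:
            name: str
            earliest: float      # quantitative bounds, seconds from plan start
            latest: float

        def check_observation(expected, name, timestamp):
            """Return 'consistent', 'irrelevant', or 'violation' for one incoming observation."""
            matches = [e for e in expected if e.name == name]
            if not matches:
                return "irrelevant"                      # data not about any expected effect
            effect = matches[0]
            if effect.earliest <= timestamp <= effect.latest:
                return "consistent"                      # execution may continue
            return "violation"                           # trigger resource-bounded replanning

        if __name__ == "__main__":
            expected = [ExpectedEffect("arrive_waypoint_1", 10.0, 30.0)]
            print(check_observation(expected, "arrive_waypoint_1", 42.0))   # -> violation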

  16. The Further Development of the Conceptual Model and Operational Dimensions of the AASA National Academy for School Executives. Final Report.

    ERIC Educational Resources Information Center

    Curtis, William H.; And Others

    The main purpose of this project was to develop a blueprint for the future growth of the AASA-National Academy for School Executives. The resulting comprehensive model is displayed in outline form through the use of a conceptual framework that includes three major processes -- program planning and development, implementation, and evaluation. Each…

  17. Quantum Virtual Machine (QVM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alexander J.

    There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low level quantum assembly codes and returns the results of such executions.
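
    A hypothetical sketch of the "pluggable QPU" idea: a common execute() interface that either a virtual simulator or a physical backend could implement. The interface, the toy assembly dialect (X, MEASURE), and the class names are invented and are not the QVM's actual API.

        # Pluggable virtual/physical quantum processing unit sketch; illustrative only.
        from abc import ABC, abstractmethod

        class QPU(ABC):
            """Plug-in point: virtual or physical quantum processing units implement execute()."""
            @abstractmethod
            def execute(self, assembly: str) -> dict:
                ...

        class VirtualQPU(QPU):
            """A one-qubit simulator that understands a toy assembly dialect."""
            def execute(self, assembly: str) -> dict:
                qubit = 0
                for line in assembly.strip().splitlines():
                    op = line.split()[0]
                    if op == "X":                 # bit flip
                        qubit ^= 1
                    elif op == "MEASURE":
                        return {"q0": qubit}
                return {"q0": qubit}

        def run(program: str, backend: QPU) -> dict:
            """Framework entry point: route low-level code to whichever QPU is plugged in."""
            return backend.execute(program)

        if __name__ == "__main__":
            print(run("X 0\nMEASURE 0", VirtualQPU()))   # -> {'q0': 1}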

  18. How music training enhances working memory: a cerebrocerebellar blending mechanism that can lead equally to scientific discovery and therapeutic efficacy in neurological disorders.

    PubMed

    Vandervert, Larry

    2015-01-01

    Following in the vein of studies that concluded that music training resulted in plastic changes in Einstein's cerebral cortex, controlled research has shown that music training (1) enhances central executive attentional processes in working memory, and (2) has also been shown to be of significant therapeutic value in neurological disorders. Within this framework of music training-induced enhancement of central executive attentional processes, the purpose of this article is to argue that: (1) The foundational basis of the central executive begins in infancy as attentional control during the establishment of working memory, (2) In accordance with Akshoomoff, Courchesne and Townsend's and Leggio and Molinari's cerebellar sequence detection and prediction models, the rigorous volitional-control demands of music training can enhance voluntary manipulation of information in thought and movement, (3) The music training-enhanced blending of cerebellar internal models in working memory can be experienced as intuition in scientific discovery (as Einstein often indicated) or, equally, as moments of therapeutic advancement toward goals in the development of voluntary control in neurological disorders, and (4) The blending of internal models as in (3) thus provides a mechanism by which music training enhances central executive processes in working memory that can lead to scientific discovery and improved therapeutic outcomes in neurological disorders. Within the framework of Leggio and Molinari's cerebellar sequence detection model, it is determined that intuitive steps forward that occur in both scientific discovery and during therapy in those with neurological disorders operate according to the same mechanism of adaptive error-driven blending of cerebellar internal models. It is concluded that the entire framework of the central executive structure of working memory is a product of the cerebrocerebellar system which can, through the learning of internal models, incorporate the multi-dimensional rigor and volitional-control demands of music training and, thereby, enhance voluntary control. It is further concluded that this cerebrocerebellar view of the music training-induced enhancement of central executive control in working memory provides a needed mechanism to explain both the highest level of scientific discovery and the efficacy of music training in the remediation of neurological impairments.

  19. VOLUNTEER ESTUARY MONITORING: A METHOD MANUAL

    EPA Science Inventory

    Executive Summary: This manual focuses on volunteer estuary monitoring. As concern over the well-being of the environment has increased during the past couple of decades, volunteer monitoring has become an integral part of the effort to assess the health of our nation’s waters. G...

  20. A Conceptual Framework for Monitoring Children's Services. Discussion Draft.

    ERIC Educational Resources Information Center

    Fiene, Richard

    This discussion draft of a conceptual framework for monitoring children's services was prepared by Peat, Marwick and Co. for the Children's Services Monitoring Transfer Consortium (CFMCS), an organization spanning five states: California, Michigan, Pennsylvania, Texas, and West Virginia. The primary purpose of this conceptual framework was to…

  1. Apparent impact: the hidden cost of one-shot trades

    NASA Astrophysics Data System (ADS)

    Mastromatteo, Iacopo

    2015-06-01

    We study the problem of the execution of a moderate-size order in an illiquid market within the framework of a solvable Markovian model. We suppose that in order to avoid impact costs, a trader decides to execute her order through a unique trade, waiting for enough liquidity to accumulate at the best quote. We find that despite the absence of a proper price impact, such a trader faces an execution cost arising from a non-vanishing correlation between the volume at the best quotes and price changes. We characterize analytically the statistics of the execution time and its cost by mapping the problem to the simpler one of calculating a set of first-passage probabilities on a semi-infinite strip. We finally argue that price impact cannot be completely avoided by conditioning the execution of an order on a more favorable liquidity scenario.
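
    For orientation only, a standard first-passage result of the kind such a calculation builds on (the first-passage time of a driftless diffusion with volatility sigma to a level a > 0); this is textbook material, not the paper's specific semi-infinite-strip computation.

        % First-passage time T_a of dX_t = \sigma\, dW_t, X_0 = 0, to the level a > 0:
        f_{T_a}(t) = \frac{a}{\sigma\sqrt{2\pi t^{3}}}\exp\!\left(-\frac{a^{2}}{2\sigma^{2}t}\right),
        \qquad t > 0, \qquad \mathbb{E}[T_a] = \infty .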

  2. Helicopter In-Flight Monitoring System Second Generation (HIMS II).

    DTIC Science & Technology

    1983-08-01

    acquisition cycle. B. Computer Chassis CPU (DEC LSI-11/2) -- Executes instructions contained in the memory. 32K memory (DEC MSV11-DD) -- Contains program...when the operator executes command #2, 3, or 5 (display data). New cartridges can be inserted as required for truly unlimited, continuous data...is called bootstrapping. The software, which is stored on a tape cartridge, is loaded into memory by execution of a small program stored in read-only

  3. Robust, Multi-layered Plan Execution and Revision for Operation of a Network of Communication Antennas

    NASA Technical Reports Server (NTRS)

    Chien, S. A.; Hill, R. W., Jr.; Govindjee, A.; Wang, X.; Estlin, T.; Griesel, M. A.; Lam, R.; Fayyad, K. V.

    1996-01-01

    This paper describes a hierarchical scheduling, planning, control, and execution monitoring architecture for automating operations of a worldwide network of communications antennas. The purpose of this paper is to describe an architecture for automating the process of capturing spacecraft data.

  4. A framework for air quality monitoring based on free public data and open source tools

    NASA Astrophysics Data System (ADS)

    Nikolov, Hristo; Borisova, Denitsa

    2014-10-01

    In recent years, space agencies (e.g., NASA, ESA) have increasingly adopted a policy of providing Earth observation (EO) data and end products concerning air quality, especially in large urban areas, at no cost to researchers and SMEs. These EO data are complemented by an increasing amount of in-situ data, also provided at no cost either by national authorities or from crowdsourced origins. This accessibility, together with the increased processing capabilities of free and open source software, is a prerequisite for the creation of a solid framework for air quality modeling in support of decision making at medium and large scales. An essential part of this framework is a web-based GIS mapping tool responsible for dissemination of the generated output. In this research an attempt is made to establish a running framework based solely on openly accessible air quality data and on a set of freely available software tools for processing and modeling, taking into account the present status quo in Bulgaria. Among the primary sources of data, especially for larger urban areas and for different types of gases and dust particles, are the National Institute of Meteorology and Hydrology of Bulgaria (NIMH) and the National System for Environmental Monitoring managed by the Bulgarian Executive Environmental Agency (ExEA). Both authorities provide concentration data for several gases, including CO, CO2, NO2, and SO2, and for fine suspended dust (PM10, PM2.5) on a monthly (for some data, daily) basis. In the proposed framework these data will complement data from satellite-based sensors such as the OMI instrument aboard the EOS-Aura satellite and the TROPOMI instrument payload of the future ESA Sentinel-5P mission. An integral part of the framework is the current land use/land cover map provided by the EEA through the GIO Land CORINE initiative; this map is also a product of EO data distributed at the European level. First and above all, our effort is focused on providing the wider public living in urbanized areas with one reliable source of information on present air quality conditions. This information might also be used as an indicator of acid rain in agricultural areas close to industrial or electricity plants. Its availability on a regular basis makes such information a valuable source in case of man-made industrial disasters or incidents such as forest fires. A key issue in developing this framework is to ensure the delivery of reliable air quality data products at a larger scale than those available at the moment.

  5. Monitoring, metacognition, and executive function: elucidating the role of self-reflection in the development of self-regulation.

    PubMed

    Lyons, Kristen E; Zelazo, Philip David

    2011-01-01

    While an abundance of research has investigated the development of the automatic and controlled processes through which individuals control their thoughts, emotions, and actions, less research has emphasized the role of the self in self-regulation. This chapter synthesizes four literatures that have examined the mechanisms through which the individual acts in a managerial role, evaluating the current status of the system and initiating regulatory actions as necessary. Taken together, these literatures (on executive function, error monitoring, metacognition, and uncertainty monitoring) suggest that self-reflection plays a critical role in self-regulation, and that developmental improvements in self-reflection (via increasing levels of conscious awareness and enhanced calibration of monitoring systems) may serve as driving forces underlying developmental improvement (and temperamental individual differences) in children's ability to control their thoughts and actions.

  6. Preliminary geologic framework developed for a proposed environmental monitoring study of a deep, unconventional Marcellus Shale drill site, Washington County, Pennsylvania

    USGS Publications Warehouse

    Stamm, Robert G.

    2018-06-08

    Background: In the fall of 2011, the U.S. Geological Survey (USGS) was afforded an opportunity to participate in an environmental monitoring study of the potential impacts of a deep, unconventional Marcellus Shale hydraulic fracturing site. The drill site of the prospective case study is the “Range Resources MCC Partners L.P. Units 1-5H” location (also referred to as the “RR–MCC” drill site), located in Washington County, southwestern Pennsylvania. Specifically, the USGS was approached to provide a geologic framework that would (1) provide geologic parameters for the proposed area of a localized groundwater circulation model, and (2) provide potential information for the siting of both shallow and deep groundwater monitoring wells located near the drill pad and the deviated drill legs. The lead organization of the prospective case study of the RR–MCC drill site was the Groundwater and Ecosystems Restoration Division (GWERD) of the U.S. Environmental Protection Agency. Aside from the USGS, additional partners/participants were to include the Department of Energy, the Pennsylvania Geological Survey, the Pennsylvania Department of Environmental Protection, and the developer Range Resources LLC. During the initial cooperative phase, GWERD, with input from the participating agencies, drafted a Quality Assurance Project Plan (QAPP) that set out most of the objectives, tasks, sampling and analytical procedures, and documentation of results. Later in 2012, the proposed cooperative agreement between the aforementioned partners and the associated landowners for a monitoring program at the drill site was not executed. The prospective case study of the RR–MCC site was therefore terminated; no groundwater monitoring wells were installed and no nearby soil, stream-sediment, or surface-water samples were collected. Prior to the completion of the QAPP and the termination of the prospective case study, work on the geologic framework proceeded rapidly and was nearly completed. This was done for three principal reasons. First, there was an immediate need to know the distribution of the relatively undisturbed surface to near-surface bedrock geology and unconsolidated materials for the collection of baseline surface data prior to drill site development (drill pad access road, drill pad leveling) and later during monitoring associated with well drilling, well development, and well production. Second, it was necessary to know the bedrock geology to support the siting of (1) multiple shallow groundwater monitoring wells (possibly as many as four) surrounding and located immediately adjacent to the drill pad, and (2) deep groundwater monitoring wells (possibly two) located at a distance from the drill pad, with one possibly sited along one of the deviated production drill legs. Lastly, the framework geology would provide the lateral extent, thickness, lithology, and expected discontinuities of geologic units (to be parsed or grouped as hydrostratigraphic units) and regional structural trends as inputs into the groundwater model. This report provides the methodology of geologic data accumulation and aggregation, and its integration into a geographic information system (GIS) based program. The GIS program allows multiple data sets to be exported in various formats (shapefiles [.shp], database files [.dbf], and Keyhole Markup Language files [.KML]) for use in surface and subsurface geologic site characterization, in sampling strategies, and as inputs for groundwater modeling.

  7. How do emotion and motivation direct executive control?

    PubMed Central

    Pessoa, Luiz

    2009-01-01

    Emotion and motivation have crucial roles in determining human behavior. Yet, how they interact with cognitive control functions is less understood. Here, the basic elements of a conceptual framework for understanding how they interact are introduced. More broadly, the `dual competition' framework proposes that emotion and motivation affect both perceptual and executive competition. In particular, the anterior cingulate cortex is hypothesized to be engaged in attentional/effortful control mechanisms and to interact with several other brain structures, including the amygdala and nucleus accumbens, in integrating affectively significant signals with control signals in prefrontal cortex. An implication of the proposal is that emotion and motivation can either enhance or impair behavioral performance depending on how they interact with control functions. PMID:19285913

  8. Framework for Service Composition in G-Lite

    NASA Astrophysics Data System (ADS)

    Goranova, R.

    2011-11-01

    G-Lite is a Grid middleware and currently the main middleware installed on all clusters in Bulgaria. It is used by scientists to solve problems that require large amounts of storage and computational resources. At the same time, scientists work with complex processes in which job execution on the Grid is just one step. It is therefore strategically important for g-Lite to provide a mechanism for service composition and business process management; no such mechanism has been specified yet. In this article we propose a framework for service composition in g-Lite and discuss business process modeling, deployment, and execution in this Grid environment. The examples used to demonstrate the concept are based on several IBM products.
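
    To illustrate the kind of composition the record describes, here is a small sketch, not the g-Lite, WMS, or BPEL API: ordinary processing steps and a simulated grid-job step are chained into one business process, with `submit_grid_job` standing in for whatever job-submission service the middleware actually exposes.

```python
# Minimal sketch of composing services into a business process, where one
# step is a (simulated) Grid job submission. All names are illustrative only.
from typing import Callable, Dict, List

Service = Callable[[Dict], Dict]  # a service takes and returns a context dict

def prepare_input(ctx: Dict) -> Dict:
    ctx["job_input"] = f"staged:{ctx['dataset']}"
    return ctx

def submit_grid_job(ctx: Dict) -> Dict:
    # Placeholder for a real middleware call (e.g., a job submission service).
    ctx["job_result"] = f"output-of({ctx['job_input']})"
    return ctx

def publish_result(ctx: Dict) -> Dict:
    print("publishing", ctx["job_result"])
    return ctx

def run_process(steps: List[Service], ctx: Dict) -> Dict:
    """Execute the composed process step by step, passing the context along."""
    for step in steps:
        ctx = step(ctx)
    return ctx

if __name__ == "__main__":
    run_process([prepare_input, submit_grid_job, publish_result],
                {"dataset": "experiment-42"})
```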

  9. Relative Contributions of Goal Representation and Kinematic Information to Self-Monitoring by Chimpanzees and Humans

    ERIC Educational Resources Information Center

    Kaneko, Takaaki; Tomonaga, Masaki

    2012-01-01

    It is important to monitor feedback related to the intended result of an action while executing that action. This monitoring process occurs hierarchically; that is, sensorimotor processing occurs at a lower level, and conceptual representation of action goals occurs at a higher level. Although the hierarchical nature of self-monitoring may derive…

  10. Unstructured medical image query using big data - An epilepsy case study.

    PubMed

    Istephan, Sarmad; Siadat, Mohammad-Reza

    2016-02-01

    Big data technologies are critical to the medical field, which requires new frameworks to leverage them. Such frameworks would help medical experts test hypotheses by querying huge volumes of unstructured medical data to provide better patient care. The objective of this work is to implement and examine the feasibility of a framework that provides efficient querying of unstructured data in unlimited ways. The feasibility study was conducted specifically in the epilepsy field. The proposed framework evaluates a query in two phases. In phase 1, structured data is used to filter the clinical data warehouse. In phase 2, feature extraction modules are executed on the unstructured data in a distributed manner via Hadoop to complete the query. Three modules have been created: volume comparer, surface-to-volume conversion, and average intensity. The framework allows user-defined modules to be imported, providing unlimited ways to process the unstructured data and hence potentially extending the application of this framework beyond the epilepsy field. Two types of criteria were used to validate the feasibility of the proposed framework: the ability/accuracy of fulfilling an advanced medical query and the efficiency that Hadoop provides. For the first criterion, the framework executed an advanced medical query that spanned both structured and unstructured data with accurate results. For the second criterion, different architectures were explored to evaluate the performance of various Hadoop configurations and were compared to a traditional Single Server Architecture (SSA). The surface-to-volume conversion module performed up to 40 times faster than the SSA (using a 20-node Hadoop cluster) and the average intensity module performed up to 85 times faster than the SSA (using a 40-node Hadoop cluster). Furthermore, the 40-node Hadoop cluster executed the average intensity module on 10,000 models in 3 hours, which was not even practical for the SSA. The current study is limited to the epilepsy field, and further research and more feature extraction modules are required to show its applicability in other medical domains. The proposed framework advances data-driven medicine by unleashing the content of unstructured medical data in an efficient and unlimited way to be harnessed by medical experts. Copyright © 2015 Elsevier Inc. All rights reserved.
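
    As a rough illustration of the two-phase evaluation described above (a sketch, not the authors' implementation), the code below filters a small structured table first and then applies a pluggable feature-extraction module to the matching unstructured items; in the paper this second phase is distributed via Hadoop, which is simulated here by a plain loop, and all record fields and module names are hypothetical.

```python
# Sketch of a two-phase query: phase 1 filters structured records,
# phase 2 runs a user-defined feature-extraction module on unstructured data.
from typing import Callable, Dict, List

# Toy "clinical data warehouse": structured fields plus unstructured scan data.
RECORDS = [
    {"patient": "p1", "age": 34, "diagnosis": "epilepsy", "scan": [1.0, 2.0, 3.0]},
    {"patient": "p2", "age": 61, "diagnosis": "epilepsy", "scan": [4.0, 5.0]},
    {"patient": "p3", "age": 29, "diagnosis": "other",    "scan": [0.5]},
]

def average_intensity(scan: List[float]) -> float:
    """Example feature-extraction module (stands in for a real image module)."""
    return sum(scan) / len(scan)

def query(structured_filter: Callable[[Dict], bool],
          module: Callable[[List[float]], float]) -> Dict[str, float]:
    # Phase 1: structured filtering of the warehouse.
    selected = [r for r in RECORDS if structured_filter(r)]
    # Phase 2: run the module on each record's unstructured data
    # (in the described framework this step is farmed out to Hadoop workers).
    return {r["patient"]: module(r["scan"]) for r in selected}

if __name__ == "__main__":
    print(query(lambda r: r["diagnosis"] == "epilepsy" and r["age"] > 30,
                average_intensity))
```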

  11. Intelligent control and adaptive systems; Proceedings of the Meeting, Philadelphia, PA, Nov. 7, 8, 1989

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Editor)

    1990-01-01

    Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.

  12. A Framework for Integrating Knowledge Management with Risk Management for Information Technology Projects (RiskManiT)

    ERIC Educational Resources Information Center

    Karadsheh, Louay A.

    2010-01-01

    This research focused on the challenges experienced when executing risk management activities for information technology projects. The lack of adequate knowledge management support of risk management activities has caused many project failures in the past. The research objective was to propose a conceptual framework of the Knowledge-Based Risk…

  13. Safe driving and executive functions in healthy middle-aged drivers.

    PubMed

    León-Domínguez, Umberto; Solís-Marcos, Ignacio; Barrio-Álvarez, Elena; Barroso Y Martín, Juan Manuel; León-Carrión, José

    2017-01-01

    The introduction of the point-system driver's license in several European countries could offer a valid framework for evaluating driving skills. This is the first study to use this framework to assess the functional integrity of executive functions in middle-aged drivers with full points, partial points, or no points on their driver's license (N = 270). The purpose of this study is to find differences in executive functions that could be determinants of safe driving. Cognitive tests were used to assess attention processes, processing speed, planning, cognitive flexibility, and inhibitory control. Analyses of covariance (ANCOVAs) were used for group comparisons while adjusting for education level. The Bonferroni method was used to correct for multiple comparisons. Overall, drivers with full points on their license showed better scores than the other two groups. In particular, significant differences were found in reaction times on Simple and Conditioned Attention tasks (both p-values < 0.001) and in the number of type-III errors on the Tower of Hanoi task (p = 0.026). Differences in reaction time on attention tasks could serve as neuropsychological markers for safe driving. Further analysis should be conducted to determine the behavioral impact of impaired executive functioning on driving ability.

  14. Monitored execution of robot plans produced by STRIPS.

    NASA Technical Reports Server (NTRS)

    Fikes, R. E.

    1972-01-01

    We describe PLANEX1, a plan executor for the Stanford Research Institute robot system. The problem-solving program STRIPS creates a plan consisting of a sequence of actions, and the PLANEX1 program carries out the plan by executing those actions. PLANEX1 is designed so that it executes only the portion of the plan necessary for completing the task, re-executes any portion of the plan that has failed to achieve the desired results, and initiates replanning in situations where the plan can no longer be effective in completing the task. The scenario for an example plan execution is given.
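
    The sketch below is a generic monitored-execution loop in the spirit of what the abstract describes, not PLANEX1 itself: steps whose intended effects already hold are skipped, a failed step is retried, and replanning is requested when retries are exhausted. The world model and step checks are hypothetical.

```python
# Generic monitored plan execution: execute only what is still needed,
# re-execute failed steps, and fall back to replanning when stuck.
from typing import Callable, List, Tuple

# A plan step pairs an action with a check that its intended effect holds.
Step = Tuple[Callable[[], None], Callable[[], bool]]

def execute_plan(plan: List[Step], max_retries: int = 2) -> bool:
    for act, achieved in plan:
        if achieved():                 # effect already true: skip the action
            continue
        for _ in range(max_retries + 1):
            act()
            if achieved():             # monitoring: did the action work?
                break
        else:
            return False               # signal the caller to replan
    return True

if __name__ == "__main__":
    world = {"at_box": False, "holding": False}
    plan = [
        (lambda: world.update(at_box=True),  lambda: world["at_box"]),
        (lambda: world.update(holding=True), lambda: world["holding"]),
    ]
    print("plan succeeded:", execute_plan(plan))
```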

  15. Age-related differences in listening effort during degraded speech recognition

    PubMed Central

    Ward, Kristina M.; Shen, Jing; Souza, Pamela E.; Grieco-Calub, Tina M.

    2016-01-01

    Objectives The purpose of the current study was to quantify age-related differences in executive control as it relates to dual-task performance, which is thought to represent listening effort, during degraded speech recognition. Design Twenty-five younger adults (18–24 years) and twenty-one older adults (56–82 years) completed a dual-task paradigm that consisted of a primary speech recognition task and a secondary visual monitoring task. Sentence material in the primary task was either unprocessed or spectrally degraded into 8, 6, or 4 spectral channels using noise-band vocoding. Performance on the visual monitoring task was assessed by the accuracy and reaction time of participants’ responses. Performance on the primary and secondary task was quantified in isolation (i.e., single task) and during the dual-task paradigm. Participants also completed a standardized psychometric measure of executive control, including attention and inhibition. Statistical analyses were implemented to evaluate changes in listeners’ performance on the primary and secondary tasks (1) per condition (unprocessed vs. vocoded conditions); (2) per task (baseline vs. dual task); and (3) per group (younger vs. older adults). Results Speech recognition declined with increasing spectral degradation for both younger and older adults when they performed the task in isolation or concurrently with the visual monitoring task. Older adults were slower and less accurate than younger adults on the visual monitoring task when performed in isolation, which paralleled age-related differences in standardized scores of executive control. When compared to single-task performance, older adults experienced greater declines in secondary-task accuracy, but not reaction time, than younger adults. Furthermore, results revealed that age-related differences in executive control significantly contributed to age-related differences on the visual monitoring task during the dual-task paradigm. Conclusions Older adults experienced significantly greater declines in secondary-task accuracy during degraded speech recognition than younger adults. These findings are interpreted as suggesting that older listeners expended greater listening effort than younger listeners, and may be partially attributed to age-related differences in executive control. PMID:27556526

  16. Review on the Implementation of the Islamic Republic of Iran about Tobacco Control, Based on MPOWER, in the Framework Convention on Tobacco Control by the World Health Organization.

    PubMed

    Alimohammadi, Mahmood; Jafari-Mansoorian, Hossein; Hashemi, Seyed Yaser; Momenabadi, Victoria; Ghasemi, Seyed Mehdi; Karimyan, Kamaladdin

    2017-07-01

    Smoking is the largest preventable cause of death in the world, killing nearly 6 million people annually. This article examines the laws implemented in Iran in order to study the proposed strategy for controlling and reducing tobacco use based on the monitor, protect, offer, warn, enforce, and raise (MPOWER) policy. All laws approved by the Parliament, along with the instructions on tobacco control prepared by the Ministry of Health and Medical Education and the Ministry of Industry, Mine and Trade, were collected and studied. Moreover, the practical steps of the Ministry of Health and other organizations were examined in this regard. After adopting the Framework Convention on Tobacco Control (FCTC), the Iranian Parliament acted to create a comprehensive and systematic program of tobacco control legislation as a first step toward comprehensive national tobacco control. Under this law and its implementing guidelines, and based on the MPOWER strategy, specific measures are taken to monitor tobacco use and prevention policies, protect people from tobacco smoke, offer help to quit tobacco use, warn about the dangers of tobacco, enforce bans on tobacco advertising, promotion, and sponsorship, and raise taxes on tobacco. However, the full objectives of the legislation have not yet been achieved. Given Iran's membership in the FCTC and the executive enactment of tobacco control laws and regulations, the necessary infrastructure is in place for a serious fight against tobacco use. In Iran, in comparison with developed countries, there is a huge gap between ratifying laws and enforcing them.

  17. The role of ecological monitoring in managing wilderness

    Treesearch

    Peter B. Landres

    1995-01-01

    Good management requires good information. Monitoring provides this information when it is structured into the process of management, well designed and executed. As federal and state agencies strive to implement a management paradigm based on sustaining ecosystems, ecological information becomes a vital part of managing natural resources. Inventory and monitoring...

  18. Long term monitoring of moisture under pavements : executive summary report.

    DOT National Transportation Integrated Search

    2010-01-01

    The research program consisted of three distinct activities. The first activity was a continuation of the monitoring of environmental instrumentation under select pavement sections constructed by the Ohio Department of Transportation (ODO...

  19. An evidence-based structure for transformative nurse executive practice: the model of the interrelationship of leadership, environments, and outcomes for nurse executives (MILE ONE).

    PubMed

    Adams, Jeffrey M; Erickson, Jeanette Ives; Jones, Dorothy A; Paulo, Lisa

    2009-01-01

    Identifying and measuring success within the chief nurse executive (CNE) population have proven complex and challenging for nurse executive educators, policy makers, practitioners, researchers, theory developers, and their constituents. The model of the interrelationship of leadership, environments, and outcomes for nurse executives (MILE ONE) was developed using the concept of consilience (jumping together of ideas) to limit the ambiguity surrounding CNE success. The MILE ONE is unique in that it links existing evidence and identifies the continuous and dependent interrelationship among 3 content areas: (1) CNEs; (2) nurses' professional practice and work environments; and (3) patient and organizational outcomes. The MILE ONE was developed to operationalize nurse executive influence, define the measurement of CNE success, and provide a framework for articulating patient, workforce, and organizational outcome improvement efforts. This article describes the MILE ONE and highlights the evidence-based structure used in its development.

  20. Executive turnover: the influence of dispersion and other pay system characteristics.

    PubMed

    Messersmith, Jake G; Guthrie, James P; Ji, Yong-Yeon; Lee, Jeong-Yeon

    2011-05-01

    Using tournament theory as a guiding theoretical framework, in this study, we assess the organizational implications of pay dispersion and other pay system characteristics on the likelihood of turnover among individual executives in organizational teams. Specifically, we estimate the effect of these pay system characteristics on executive turnover decisions. We use a multi-industry, multilevel data set composed of executives in publicly held firms to assess the effects of pay dispersion at the individual level. Consistent with previous findings, we find that pay dispersion is associated with an increased likelihood of executive turnover. In addition, we find that other pay characteristics also affect turnover, both directly and through a moderating effect on pay dispersion. Turnover is more likely when executives receive lower portions of overall top management team compensation and when they have more pay at risk. These conditions also moderate the relationship between pay dispersion and individual turnover decisions, as does receiving lower compensation relative to the market.

  1. Threat interferes with response inhibition.

    PubMed

    Hartikainen, Kaisa M; Siiskonen, Anna R; Ogawa, Keith H

    2012-05-09

    A potential threat, such as a spider, captures attention and engages executive functions to adjust ongoing behavior and avoid danger. We and many others have reported slowed responses to neutral targets in the context of emotional distractors. This behavioral slowing has been explained in the framework of attentional competition for limited resources, with emotional stimuli prioritized. Alternatively, slowed performance could reflect the activation of avoidance/freezing-type motor behaviors associated with threat. Although the interaction of attention and emotion has been widely studied, little is known about the interaction between emotion and executive functions. We studied how threat-related stimuli (spiders) interact with executive performance and whether the interaction profile fits a resource competition model or avoidance/freezing-type motor behaviors. Twenty-one young healthy individuals performed a Go-NoGo visual discrimination reaction time (RT) task engaging several executive functions, with threat-related and emotionally neutral distractors. The threat-related distractors had no effect on the RT or the error rate in the Go trials. The NoGo error rate, reflecting failure in response inhibition, increased significantly with threat-related distractors in contrast to neutral distractors, p < 0.05. Thus, threat-related distractors temporarily impaired response inhibition. The fact that threat-related distractors were associated with increased commission errors but had no effect on RT does not suggest engagement of avoidance/freezing-type motor behaviors. The results fit the framework of the resource competition model. A potential threat calls for evaluation of affective significance as well as inhibition of undue emotional reactivity. We suggest that these functions tax executive resources and may render other executive functions, such as response inhibition, temporarily compromised when the demands for resources exceed availability.

  2. Stereotype threat and executive resource depletion: examining the influence of emotion regulation.

    PubMed

    Johns, Michael; Inzlicht, Michael; Schmader, Toni

    2008-11-01

    Research shows that stereotype threat reduces performance by diminishing executive resources, but less is known about the psychological processes responsible for these impairments. The authors tested the idea that targets of stereotype threat try to regulate their emotions and that this regulation depletes executive resources, resulting in underperformance. Across 4 experiments, they provide converging evidence that targets of stereotype threat spontaneously attempt to control their expression of anxiety and that such emotion regulation depletes executive resources needed to perform well on tests of cognitive ability. They also demonstrate that providing threatened individuals with a means to effectively cope with negative emotions--by reappraising the situation or the meaning of their anxiety--can restore executive resources and improve test performance. They discuss these results within the framework of an integrated process model of stereotype threat, in which affective and cognitive processes interact to undermine performance.

  3. Model-based Executive Control through Reactive Planning for Autonomous Rovers

    NASA Technical Reports Server (NTRS)

    Finzi, Alberto; Ingrand, Felix; Muscettola, Nicola

    2004-01-01

    This paper reports on the design and implementation of a real-time executive for a mobile rover that uses a model-based, declarative approach. The control system is based on the Intelligent Distributed Execution Architecture (IDEA), an approach to planning and execution that provides a unified representational and computational framework for an autonomous agent. The basic hypothesis of IDEA is that a large control system can be structured as a collection of interacting agents, each with the same fundamental structure. We show that planning and real-time response are compatible if the executive minimizes the size of the planning problem. We detail the implementation of this approach on an exploration rover (Gromit an RWI ATRV Junior at NASA Ames) presenting different IDEA controllers of the same domain and comparing them with more classical approaches. We demonstrate that the approach is scalable to complex coordination of functional modules needed for autonomous navigation and exploration.

  4. Stereotype Threat and Executive Resource Depletion: Examining the Influence of Emotion Regulation

    PubMed Central

    Johns, Michael; Inzlicht, Michael; Schmader, Toni

    2010-01-01

    Research shows that stereotype threat reduces performance by diminishing executive resources, but less is known about the psychological processes responsible for these impairments. The authors tested the idea that targets of stereotype threat try to regulate their emotions and that this regulation depletes executive resources, resulting in underperformance. Across 4 experiments, they provide converging evidence that targets of stereotype threat spontaneously attempt to control their expression of anxiety and that such emotion regulation depletes executive resources needed to perform well on tests of cognitive ability. They also demonstrate that providing threatened individuals with a means to effectively cope with negative emotions—by reappraising the situation or the meaning of their anxiety—can restore executive resources and improve test performance. They discuss these results within the framework of an integrated process model of stereotype threat, in which affective and cognitive processes interact to undermine performance. PMID:18999361

  5. Monitoring Java Programs with Java PathExplorer

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)

    2001-01-01

    We present recent work on the development of Java PathExplorer (JPAX), a tool for monitoring the execution of Java programs. JPAX can be used during program testing to gain increased information about program executions, and can potentially also be applied during operation to survey safety-critical systems. The tool facilitates automated instrumentation of a program's byte code, which will then emit events to an observer during its execution. The observer checks the events against user-provided high-level requirement specifications, for example temporal logic formulae, and against lower-level error detection procedures, for example concurrency-related algorithms such as deadlock and data race detection. High-level requirement specifications, together with their underlying logics, are defined in the Maude rewriting logic, and can then either be checked directly using the Maude rewriting engine, or first be translated to efficient data structures and then checked in Java.
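
    To make the monitoring idea concrete, here is a small, language-agnostic sketch (not JPAX or Maude): an observer checks an emitted event stream against one temporal requirement, "every acquire(x) is eventually followed by release(x)", and reports violations at the end of the finite trace. The event encoding is an assumption for illustration.

```python
# Sketch of a runtime observer: events emitted by an instrumented program are
# checked against a simple temporal requirement over the finite trace.
from typing import Iterable, List, Tuple

Event = Tuple[str, str]  # (kind, resource), e.g. ("acquire", "lock1")

def check_acquire_release(trace: Iterable[Event]) -> List[str]:
    """Report resources acquired but never released by the end of the trace."""
    held = set()
    for kind, res in trace:
        if kind == "acquire":
            held.add(res)
        elif kind == "release":
            held.discard(res)
    return sorted(held)  # a non-empty result means the requirement was violated

if __name__ == "__main__":
    trace = [("acquire", "lock1"), ("acquire", "lock2"), ("release", "lock1")]
    print("unreleased resources:", check_acquire_release(trace))
```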

  6. Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    We present views and analysis of the execution of several PVM (Parallel Virtual Machine) codes for Computational Fluid Dynamics on a network of Sparcstations, including: (1) NAS Parallel Benchmarks CG and MG; (2) a multi-partitioning algorithm for NAS Parallel Benchmark SP; and (3) an overset grid flowsolver. These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We describe the architecture, operation, and application of AIMS. The AIMS toolkit contains: (1) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (2) Monitor, a library of runtime trace-collection routines; (3) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (4) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran 77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (1) the impact of long message latencies; (2) the impact of multiprogramming overheads and associated load imbalance; (3) cache and virtual-memory effects; and (4) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. Some of the features planned for the near future include: (1) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (2) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
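
    A toy version of the constant-skew compensation mentioned above (an assumption-laden sketch, not the AIMS algorithm): each host's clock offset relative to the parent is assumed to be known from a calibration exchange, timestamps are shifted by that offset, and any remaining physically impossible message (received before it was sent) is nudged forward. Offsets and message records are illustrative.

```python
# Sketch of constant clock-skew compensation between a parent and its children.
# Offsets and message records here are made up, not real AIMS trace data.
from typing import Dict, List

def compensate(messages: List[Dict], offsets: Dict[str, float]) -> List[Dict]:
    """Shift each host's timestamps by its estimated offset to the parent clock,
    then enforce that no message is received before it was sent."""
    fixed = []
    for m in messages:
        send = m["send_time"] - offsets.get(m["src"], 0.0)
        recv = m["recv_time"] - offsets.get(m["dst"], 0.0)
        if recv < send:            # physically impossible after correction
            recv = send            # nudge the receive time forward
        fixed.append({**m, "send_time": send, "recv_time": recv})
    return fixed

if __name__ == "__main__":
    msgs = [{"src": "child1", "dst": "parent", "send_time": 10.0, "recv_time": 9.2}]
    print(compensate(msgs, {"child1": 1.5, "parent": 0.0}))
```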

  7. Validation of the SWMF Magnetosphere: Fields and Particles

    NASA Astrophysics Data System (ADS)

    Welling, D. T.; Ridley, A. J.

    2009-05-01

    The Space Weather Modeling Framework has been developed at the University of Michigan to allow many independent space environment numerical models to be executed simultaneously and coupled together to create a more accurate, all-encompassing system. This work explores the capabilities of the framework when using the BATS-R-US MHD code, Rice Convection Model (RCM), the Ridley Ionosphere Model (RIM), and the Polar Wind Outflow Model (PWOM). Ten space weather events, ranging from quiet to extremely stormy periods, are modeled by the framework. All simulations are executed in a manner that mimics an operational environment where fewer resources are available and predictions are required in a timely manner. The results are compared against in-situ measurements of magnetic fields from GOES, Polar, Geotail, and Cluster satellites as well as MPA particle measurements from the LANL geosynchronous spacecraft. Various metrics are calculated to quantify performance. Results when using only two to all four components are compared to evaluate the increase in performance as new physics are included in the system.

  8. Instrumentation, performance visualization, and debugging tools for multiprocessors

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Fineman, Charles E.; Hontalas, Philip J.

    1991-01-01

    The need for computing power has forced a migration from serial computation on a single processor to parallel processing on multiprocessor architectures. However, without effective means to monitor (and visualize) program execution, debugging and tuning parallel programs become intractably difficult as program complexity increases with the number of processors. Research on performance evaluation tools for multiprocessors is being carried out at ARC. Besides investigating new techniques for instrumenting, monitoring, and presenting the state of parallel program execution in a coherent and user-friendly manner, prototypes of software tools are being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Our current tool set, the Ames Instrumentation Systems (AIMS), incorporates features from various software systems developed in academia and industry. The execution of FORTRAN programs on the Intel iPSC/860 can be automatically instrumented and monitored. Performance data collected in this manner can be displayed graphically on workstations supporting X-Windows. We have successfully compared various parallel algorithms for computational fluid dynamics (CFD) applications in collaboration with scientists from the Numerical Aerodynamic Simulation Systems Division. By performing these comparisons, we show that performance monitors and debuggers such as AIMS are practical and can illuminate the complex dynamics that occur within parallel programs.

  9. Threat facilitates subsequent executive control during anxious mood.

    PubMed

    Birk, Jeffrey L; Dennis, Tracy A; Shin, Lisa M; Urry, Heather L

    2011-12-01

    Dual competition framework (DCF) posits that low-level threat may facilitate behavioral performance by influencing executive control functions. Anxiety is thought to strengthen this effect by enhancing threat's affective significance. To test these ideas directly, we examined the effects of low-level threat and experimentally induced anxiety on one executive control function, the efficiency of response inhibition. In Study 1, briefly presented stimuli that were mildly threatening (i.e., fearful faces) relative to nonthreatening (i.e., neutral faces) led to facilitated executive control efficiency during experimentally induced anxiety. No such effect was observed during an equally arousing, experimentally induced happy mood state. In Study 2, we assessed the effects of low-level threat, experimentally induced anxiety, and individual differences in trait anxiety on executive control efficiency. Consistent with Study 1, fearful relative to neutral faces led to facilitated executive control efficiency during experimentally induced anxiety. No such effect was observed during an experimentally induced neutral mood state. Moreover, individual differences in trait anxiety did not moderate the effects of threat and anxiety on executive control efficiency. The findings are partially consistent with the predictions of DCF in that low-level threat improved executive control, at least during a state of anxiety. (c) 2011 APA, all rights reserved.

  10. Monitoring and modeling of pavement response and performance : executive summary report.

    DOT National Transportation Integrated Search

    2010-06-01

    Objective: Over the years, the Ohio Department of Transportation has constructed several pavements with a range of designs and materials to study and improve overall statewide performance. These pavements require constant monitoring to dete...

  11. Monitoring with Data Automata

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus

    2014-01-01

    We present a form of automaton, referred to as data automata, suited for monitoring sequences of data-carrying events, for example those emitted by an executing software system. This form of automata allows states to be parameterized with data, forming named records, which are stored in an efficiently indexed data structure, a form of database. This very explicit approach differs from other automaton-based monitoring approaches. Data automata are also characterized by allowing transition conditions to refer to other parameterized states, and by allowing transition sequences. The presented automaton concept is inspired by rule-based systems, especially the Rete algorithm, which is one of the well-established algorithms for executing rule-based systems. We present an optimized external DSL for data automata, as well as a comparable unoptimized internal DSL (API) in the Scala programming language, in order to compare the two solutions. An evaluation compares these two solutions to several other monitoring systems.
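
    The following is a small Python sketch in the spirit of data-parameterized monitoring (not the Scala DSL from the paper): monitor state is keyed by the data carried in the events and kept in an indexed map, here checking the illustrative property that every dispatch(cmd) is eventually followed by complete(cmd) for the same command identifier.

```python
# Sketch of a data-parameterized monitor: states are keyed by the data value
# carried in the events, stored in an indexed dictionary for fast lookup.
from typing import Iterable, List, Tuple

Event = Tuple[str, str]  # (name, command_id)

class DispatchCompleteMonitor:
    def __init__(self) -> None:
        self.pending = {}          # command_id -> event index of its dispatch

    def step(self, index: int, event: Event) -> None:
        name, cmd = event
        if name == "dispatch":
            self.pending[cmd] = index
        elif name == "complete":
            self.pending.pop(cmd, None)

    def end(self) -> List[str]:
        """At the end of the trace, any still-pending dispatch is a violation."""
        return sorted(self.pending)

def run(trace: Iterable[Event]) -> List[str]:
    monitor = DispatchCompleteMonitor()
    for i, event in enumerate(trace):
        monitor.step(i, event)
    return monitor.end()

if __name__ == "__main__":
    print(run([("dispatch", "A"), ("dispatch", "B"), ("complete", "A")]))
```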

  12. Planning for execution monitoring on a planetary rover

    NASA Technical Reports Server (NTRS)

    Gat, Erann; Firby, R. James; Miller, David P.

    1990-01-01

    A planetary rover will be traversing largely unknown and often unknowable terrain. In addition to geometric obstacles such as cliffs, rocks, and holes, it may have to deal with non-geometric hazards such as soft soil and surface breakthroughs, which often cannot be detected until the rover is in imminent danger. Therefore, the rover must monitor its progress throughout a traverse, making sure to stay on course and to detect and act on any previously unseen hazards. Its onboard planning system must decide what sensors to monitor, what landmarks to take position readings from, and what actions to take if something should go wrong. The planning systems being developed for the Pathfinder Planetary Rover to perform these execution monitoring tasks are discussed. This system includes a network of planners to perform path planning, expectation generation, path analysis, sensor and reaction selection, and resource allocation.

  13. Constructing Flexible, Configurable, ETL Pipelines for the Analysis of "Big Data" with Apache OODT

    NASA Astrophysics Data System (ADS)

    Hart, A. F.; Mattmann, C. A.; Ramirez, P.; Verma, R.; Zimdars, P. A.; Park, S.; Estrada, A.; Sumarlidason, A.; Gil, Y.; Ratnakar, V.; Krum, D.; Phan, T.; Meena, A.

    2013-12-01

    A plethora of open source technologies for manipulating, transforming, querying, and visualizing 'big data' have blossomed and matured in the last few years, driven in large part by recognition of the tremendous value that can be derived by leveraging data mining and visualization techniques on large data sets. One facet of many of these tools is that input data must often be prepared into a particular format (e.g.: JSON, CSV), or loaded into a particular storage technology (e.g.: HDFS) before analysis can take place. This process, commonly known as Extract-Transform-Load, or ETL, often involves multiple well-defined steps that must be executed in a particular order, and the approach taken for a particular data set is generally sensitive to the quantity and quality of the input data, as well as the structure and complexity of the desired output. When working with very large, heterogeneous, unstructured or semi-structured data sets, automating the ETL process and monitoring its progress becomes increasingly important. Apache Object Oriented Data Technology (OODT) provides a suite of complementary data management components called the Process Control System (PCS) that can be connected together to form flexible ETL pipelines as well as browser-based user interfaces for monitoring and control of ongoing operations. The lightweight, metadata driven middleware layer can be wrapped around custom ETL workflow steps, which themselves can be implemented in any language. Once configured, it facilitates communication between workflow steps and supports execution of ETL pipelines across a distributed cluster of compute resources. As participants in a DARPA-funded effort to develop open source tools for large-scale data analysis, we utilized Apache OODT to rapidly construct custom ETL pipelines for a variety of very large data sets to prepare them for analysis and visualization applications. We feel that OODT, which is free and open source software available through the Apache Software Foundation, is particularly well suited to developing and managing arbitrary large-scale ETL processes both for the simplicity and flexibility of its wrapper framework, as well as the detailed provenance information it exposes throughout the process. Our experience using OODT to manage processing of large-scale data sets in domains as diverse as radio astronomy, life sciences, and social network analysis demonstrates the flexibility of the framework, and the range of potential applications to a broad array of big data ETL challenges.
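
    As a schematic of the wrapper idea, and deliberately not using the actual OODT PCS APIs, the sketch below chains extract/transform/load steps, threads a metadata dictionary through them, and logs progress so an external monitor could display pipeline status. All step names and fields are hypothetical.

```python
# Sketch of a configurable ETL pipeline: each step is wrapped so that shared
# metadata and simple progress reporting flow through the whole chain.
import logging
from typing import Callable, Dict, List, Tuple

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
Step = Tuple[str, Callable[[Dict], Dict]]   # (step name, step function)

def extract(meta: Dict) -> Dict:
    meta["raw"] = ["3", "1", "2"]           # stand-in for reading a data source
    return meta

def transform(meta: Dict) -> Dict:
    meta["clean"] = sorted(int(x) for x in meta["raw"])
    return meta

def load(meta: Dict) -> Dict:
    meta["loaded"] = len(meta["clean"])     # stand-in for writing to storage
    return meta

def run_pipeline(steps: List[Step], meta: Dict) -> Dict:
    for i, (name, fn) in enumerate(steps, start=1):
        logging.info("step %d/%d: %s", i, len(steps), name)
        meta = fn(meta)                     # metadata is the contract between steps
    return meta

if __name__ == "__main__":
    print(run_pipeline([("extract", extract), ("transform", transform),
                        ("load", load)], {"job": "demo"}))
```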

  14. Path planning and execution monitoring for a planetary rover

    NASA Technical Reports Server (NTRS)

    Gat, Erann; Slack, Marc G.; Miller, David P.; Firby, R. James

    1990-01-01

    A path planner and an execution monitoring planner that will enable the rover to navigate to its various destinations safely and correctly while detecting and avoiding hazards are described. An overview of the complete architecture is given, and the implementation and testbeds are described. The robot can detect unforeseen obstacles and take appropriate action, including having the rover back away from the hazard and mark the area as untraversable in the rover's internal map. The experiments have consisted of paths roughly 20 m in length. The architecture works with a large variety of rover configurations with different kinematic constraints.
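
    A minimal sketch of the "back away and mark the map" behavior described above (illustrative only; the real system involves a network of planners and continuous sensing): the rover follows a planned path over an internal grid map, and on detecting a hazard it records the cell as untraversable and stops at the previous cell. The grid representation is an assumption.

```python
# Sketch of hazard handling during a traverse: mark the hazardous cell as
# untraversable in the internal map and back away to the previous cell.
from typing import List, Set, Tuple

Cell = Tuple[int, int]

def traverse(path: List[Cell], hazards: Set[Cell]) -> Tuple[Cell, Set[Cell]]:
    blocked: Set[Cell] = set()
    pos = path[0]
    for nxt in path[1:]:
        if nxt in hazards:          # sensed hazard (e.g., soft soil) ahead
            blocked.add(nxt)        # record it in the internal map
            break                   # back away: stay at the previous cell
        pos = nxt
    return pos, blocked

if __name__ == "__main__":
    final_pos, untraversable = traverse([(0, 0), (0, 1), (0, 2)], {(0, 2)})
    print("stopped at", final_pos, "untraversable cells:", untraversable)
```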

  15. Knowledge assistant: A sensor fusion framework for robotic environmental characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feddema, J.T.; Rivera, J.J.; Tucker, S.D.

    1996-12-01

    A prototype sensor fusion framework called the "Knowledge Assistant" has been developed and tested on a gantry robot at Sandia National Laboratories. This Knowledge Assistant guides the robot operator during the planning, execution, and post-analysis stages of the characterization process. During the planning stage, the Knowledge Assistant suggests robot paths and speeds based on knowledge of the sensors available and their physical characteristics. During execution, the Knowledge Assistant coordinates the collection of data through a data acquisition "specialist." During execution and post analysis, the Knowledge Assistant sends raw data to other "specialists," which include statistical pattern recognition software, a neural network, and model-based search software. After the specialists return their results, the Knowledge Assistant consolidates the information and returns a report to the robot control system, where the sensed objects and their attributes (e.g., estimated dimensions, weight, material composition, etc.) are displayed in the world model. This paper highlights the major components of this system.
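
    To show the "specialists" pattern in code form (a hypothetical sketch, not Sandia's software), raw sensor data is dispatched to several analysis specialists and their results are consolidated into a single report for the control system. The specialist functions and attribute names are invented for illustration.

```python
# Sketch of a sensor-fusion coordinator that forwards raw data to "specialist"
# analyzers and consolidates their results into one report.
from typing import Callable, Dict, List

Specialist = Callable[[List[float]], Dict]

def pattern_recognizer(data: List[float]) -> Dict:
    return {"object_detected": max(data) > 0.8}

def size_estimator(data: List[float]) -> Dict:
    return {"estimated_size_m": round(sum(data) / len(data), 2)}

def characterize(data: List[float], specialists: List[Specialist]) -> Dict:
    report: Dict = {}
    for specialist in specialists:      # each specialist sees the same raw data
        report.update(specialist(data))
    return report                       # consolidated report for the controller

if __name__ == "__main__":
    print(characterize([0.2, 0.9, 0.4], [pattern_recognizer, size_estimator]))
```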

  16. Work stealing for GPU-accelerated parallel programs in a global address space framework: WORK STEALING ON GPU-ACCELERATED SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram

    Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared- and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.
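
    A very small, single-threaded simulation of the work-stealing policy (illustrative only; the paper targets distributed-memory CPU-GPU systems): each worker owns a deque, takes tasks from one end of its own deque, and when idle steals from the other end of a non-empty victim. The task lists are invented.

```python
# Minimal work-stealing sketch, simulated sequentially: per-worker deques,
# the owner pops from one end, an idle worker steals from the other end.
import collections
import random
from typing import Callable, List

def run_with_stealing(task_lists: List[List[Callable[[], int]]]) -> int:
    deques = [collections.deque(tasks) for tasks in task_lists]
    total = 0
    active = True
    while active:
        active = False
        for dq in deques:
            if dq:
                total += dq.pop()()                 # owner takes from its own end
                active = True
            else:
                victims = [d for d in deques if d]  # idle: look for a victim
                if victims:
                    total += random.choice(victims).popleft()()  # steal
                    active = True
    return total

if __name__ == "__main__":
    tasks = [[(lambda v=v: v) for v in range(5)], [],
             [(lambda v=v: v * 10) for v in range(3)]]
    print("sum of task results:", run_with_stealing(tasks))
```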

  17. Work stealing for GPU-accelerated parallel programs in a global address space framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram

    Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared- and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.

  18. Benefit of the doubt: a new view of the role of the prefrontal cortex in executive functioning and decision making

    PubMed Central

    Asp, Erik; Manzel, Kenneth; Koestner, Bryan; Denburg, Natalie L.; Tranel, Daniel

    2013-01-01

    The False Tagging Theory (FTT) is a neuroanatomical model of belief and doubt processes that proposes a single, unique function for the prefrontal cortex. Here, we review evidence pertaining to the FTT, the implications of the FTT regarding fractionation of the prefrontal cortex, and the potential benefits of the FTT for new neuroanatomical conceptualizations of executive functions. The FTT provides a parsimonious account that may help overcome theoretical problems with prefrontal cortex mediated executive control such as the homunculus critique. Control in the FTT is examined via the “heuristics and biases” psychological framework for human judgment. The evidence indicates that prefrontal cortex mediated doubting is at the core of executive functioning and may explain some biases of intuitive judgments. PMID:23745103

  19. 5 CFR 630.301 - Annual leave accrual and accumulation-Senior Executive Service, Senior-Level, and Scientific and...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... legal authority, for planning, monitoring, developing, evaluating, and rewarding employee performance...-Senior Executive Service, Senior-Level, and Scientific and Professional Employees. 630.301 Section 630...-Level, and Scientific and Professional Employees. (a) Annual leave accrues at the rate of 1 day (8 hours...

  20. 5 CFR 630.301 - Annual leave accrual and accumulation-Senior Executive Service, Senior-Level, and Scientific and...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... legal authority, for planning, monitoring, developing, evaluating, and rewarding employee performance...-Senior Executive Service, Senior-Level, and Scientific and Professional Employees. 630.301 Section 630...-Level, and Scientific and Professional Employees. (a) Annual leave accrues at the rate of 1 day (8 hours...

  1. 5 CFR 630.301 - Annual leave accrual and accumulation-Senior Executive Service, Senior-Level, and Scientific and...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... legal authority, for planning, monitoring, developing, evaluating, and rewarding employee performance...-Senior Executive Service, Senior-Level, and Scientific and Professional Employees. 630.301 Section 630...-Level, and Scientific and Professional Employees. (a) Annual leave accrues at the rate of 1 day (8 hours...

  2. 5 CFR 630.301 - Annual leave accrual and accumulation-Senior Executive Service, Senior-Level, and Scientific and...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... legal authority, for planning, monitoring, developing, evaluating, and rewarding employee performance...-Senior Executive Service, Senior-Level, and Scientific and Professional Employees. 630.301 Section 630...-Level, and Scientific and Professional Employees. (a) Annual leave accrues at the rate of 1 day (8 hours...

  3. Storage Capacity Explains Fluid Intelligence but Executive Control Does Not

    ERIC Educational Resources Information Center

    Chuderski, Adam; Taraday, Maciej; Necka, Edward; Smolen, Tomasz

    2012-01-01

    We examined whether fluid intelligence (Gf) is better predicted by the storage capacity of active memory or by the effectiveness of executive control. In two psychometric studies, we measured storage capacity with three kinds of task which required the maintenance of a visual array, the monitoring of simple relations among perceptually available…

  4. Method for resource control in parallel environments using program organization and run-time support

    NASA Technical Reports Server (NTRS)

    Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)

    2001-01-01

    A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
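
    The sketch below illustrates the "interaction only at specific points" idea in a generic way (not the patented system): a long-running computation polls a reconfiguration request queue only at iteration boundaries, where repartitioning the data is cheap and safe. The partitioning scheme and request format are assumptions.

```python
# Sketch of cooperative resource control: the application checks for
# reconfiguration requests only at well-defined points (iteration boundaries).
import queue
from typing import List

def partition(data: List[int], workers: int) -> List[List[int]]:
    return [data[i::workers] for i in range(workers)]

def run(data: List[int], requests: "queue.Queue[int]", iterations: int) -> int:
    workers = 2
    parts = partition(data, workers)
    for _ in range(iterations):
        # Safe point: apply any pending change to the processor partition here.
        try:
            workers = requests.get_nowait()
            parts = partition(data, workers)        # redistribute the data
        except queue.Empty:
            pass
        # "Parallel" phase, simulated sequentially over the current partitions.
        data = [x + 1 for part in parts for x in part]
        parts = partition(data, workers)
    return workers

if __name__ == "__main__":
    q: "queue.Queue[int]" = queue.Queue()
    q.put(4)                       # an external scheduler asks for 4 workers
    print("final worker count:", run(list(range(8)), q, iterations=3))
```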

  5. Method for resource control in parallel environments using program organization and run-time support

    NASA Technical Reports Server (NTRS)

    Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)

    1999-01-01

    A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.

  6. Business Case Analysis for the Versatile Depot Automated Test Station Used in the USAF Warner Robins Air Logistics Center Maintenance Depot

    DTIC Science & Technology

    2008-06-01

    executes the avionics test) can run on the new ATS, thus creating the common ATS framework. The system will also enable numerous new functional...Enterprise-level architecture that reflects corporate DoD priorities and requirements for business systems, and provides a common framework to ensure that...entire Business Mission Area (BMA) of the DoD. The BEA also contains a set of integrated Department of Defense Architecture Framework (DoDAF

  7. Establishing a Strong Foundation: District and School-Level Supports for Classroom Implementation of the LDC and MDC Frameworks. Executive Summary

    ERIC Educational Resources Information Center

    Reumann-Moore, Rebecca; Lawrence, Nancy; Sanders, Felicia; Christman, Jolley Bruce; Duffy, Mark

    2011-01-01

    The Bill and Melinda Gates Foundation has invested in the development and dissemination of high-quality instructional and formative assessment tools to support teachers' incorporation of the Core Common State Standards (CCSS) into their classroom instruction. Literacy experts have developed a framework and a set of templates that teachers can use…

  8. 75 FR 38779 - Nomination of Existing Marine Protected Areas to the National System of Marine Protected Areas

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-06

    ... managing agencies to fill key conservation gaps in important ocean areas. DATES: Comments on the... conservation objectives of the Framework. Executive Order 13158 defines an MPA as: ``any area of the marine... term MPA as defined in the Framework refers only to the marine portion of a site (below the mean high...

  9. A hierarchical competing systems model of the emergence and early development of executive function

    PubMed Central

    Marcovitch, Stuart; Zelazo, Philip David

    2010-01-01

    The hierarchical competing systems model (HCSM) provides a framework for understanding the emergence and early development of executive function – the cognitive processes underlying the conscious control of behavior – in the context of search for hidden objects. According to this model, behavior is determined by the joint influence of a developmentally invariant habit system and a conscious representational system that becomes increasingly influential as children develop. This article describes a computational formalization of the HCSM, reviews behavioral and computational research consistent with the model, and suggests directions for future research on the development of executive function. PMID:19120405

  10. Whole systems shared governance: a model for the integrated health system.

    PubMed

    Evan, K; Aubry, K; Hawkins, M; Curley, T A; Porter-O'Grady, T

    1995-05-01

    The healthcare system is under renovation and renewal. In the process, roles and structures are shifting to support a subscriber-based continuum of care. Alliances and partnerships are emerging as the models of integration for the future. But how do we structure to support these emerging integrated partnerships? As the nurse executive expands the role and assumes increasing responsibility for creating new frameworks for care, a structure that sustains the point-of-care innovations and interdisciplinary relationships must be built. Whole systems models of organization, such as shared governance, are expanding as demand grows for a sustainable structure for horizontal and partnered systems of healthcare delivery. The executive will have to apply these newer frameworks to the delivery of care to provide adequate support for the clinically integrated environment.

  11. A Characterization of Individual Differences in Prospective Memory Monitoring Using the Complex Ongoing Serial Task

    ERIC Educational Resources Information Center

    Savine, Adam C.; McDaniel, Mark A.; Shelton, Jill Talley; Scullin, Michael K.

    2012-01-01

    Prospective memory--remembering to retrieve and execute future goals--is essential to daily life. Prospective remembering is often achieved through effortful monitoring; however, potential individual differences in monitoring patterns have not been characterized. We propose 3 candidate models to characterize the individual differences present in…

  12. Coordinating the Cognitive Processes of Writing: The Role of the Monitor

    ERIC Educational Resources Information Center

    Quinlan, Thomas; Loncke, Maaike; Leijten, Marielle; Van Waes, Luuk

    2012-01-01

    Moment to moment, a writer faces a host of potential problems. How does the writer's mind coordinate this problem solving? In the original Hayes and Flower model, the authors posited a distinct process to manage this coordinating--that is, the "monitor." The monitor became responsible for executive function in writing. In two…

  13. Executive Processes, Memory Accuracy, and Memory Monitoring: An Aging and Individual Difference Analysis

    ERIC Educational Resources Information Center

    Rhodes, M.G.; Kelley, C.M.

    2005-01-01

    The current study examined the neuropsychological correlates of memory accuracy in older and younger adults. Participants were tested in a memory monitoring paradigm developed by Koriat and Goldsmith (1996), which permits separate assessments of the accuracy of responses generated during retrieval and the accuracy of monitoring those responses.…

  14. Second language proficiency modulates conflict-monitoring in an oculomotor Stroop task: evidence from Hindi-English bilinguals

    PubMed Central

    Singh, Niharika; Mishra, Ramesh K.

    2013-01-01

    Many studies have confirmed the presence of a bilingual advantage which is manifested as enhanced cognitive and attention control. However, very few studies have investigated the role of second language proficiency on the modulation of conflict-monitoring in bilinguals. We investigated this by comparing high and low proficient Hindi-English bilinguals on a modified saccadic arrow Stroop task under different monitoring conditions, and tested the predictions of the bilingual executive control advantage proposal. The task of the participants was to make an eye movement toward the color patch in the same color as the central arrow, ignoring the patch to which the arrow was pointing. High-proficient bilinguals had overall faster saccade latency on all types of trials as compared to the low proficient bilinguals. The overall saccadic latency for high-proficient bilinguals was similarly affected by the different types of monitoring conditions, whereas a conflict resolution advantage was found only in the condition with high monitoring demands. The results support a conflict-monitoring account in a novel oculomotor task and also suggest that language proficiency could modulate executive control in bilinguals. PMID:23781210

  15. A criteria and indicators monitoring framework for food forestry embedded in the principles of ecological restoration.

    PubMed

    Park, Hyeone; Higgs, Eric

    2018-02-02

    Food forestry is a burgeoning practice in North America, representing a strong multifunctional approach that combines agriculture, forestry, and ecological restoration. The Galiano Conservancy Association (GCA), a community conservation, restoration, and educational organization on Galiano Island, British Columbia in Canada, has recently created two food forests on their protected forested lands: one with primarily non-native species and the other comprising native species. These projects, aimed at food production, education, and promotion of local food security and sustainability, are also intended to contribute to the overall ecological integrity of the landscape. Monitoring is essential for assessing how effectively a project is meeting its goal and thus informing its adaptive management. Yet there are presently no comprehensive monitoring frameworks available for food forestry. To fill this need, this study developed a generic Criteria and Indicators (C&I) monitoring framework for food forestry, embedded in ecological restoration principles, by employing qualitative content analysis of 61 literature resources and semi-structured interviews with 16 experts in the fields of food forestry and ecological restoration. The generic C&I framework comprises 14 criteria, 39 indicators, and 109 measures and is intended to guide a comprehensive and systematic assessment for food forest projects. The GCA adapted the generic C&I framework to develop a customized monitoring framework. The Galiano C&I monitoring framework has a comprehensive suite of monitoring parameters, which collectively address multiple values and goals.

  16. An Overview of the Runtime Verification Tool Java PathExplorer

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We present an overview of the Java PathExplorer runtime verification tool, in short referred to as JPAX. JPAX can monitor the execution of a Java program and check that it conforms with a set of user provided properties formulated in temporal logic. JPAX can in addition analyze the program for concurrency errors such as deadlocks and data races. The concurrency analysis requires no user provided specification. The tool facilitates automated instrumentation of a program's bytecode, which when executed will emit an event stream, the execution trace, to an observer. The observer dispatches the incoming event stream to a set of observer processes, each performing a specialized analysis, such as the temporal logic verification, the deadlock analysis and the data race analysis. Temporal logic specifications can be formulated by the user in the Maude rewriting logic, where Maude is a high-speed rewriting system for equational logic, but here extended with executable temporal logic. The Maude rewriting engine is then activated as an event driven monitoring process. Alternatively, temporal specifications can be translated into efficient automata, which check the event stream. JPAX can be used during program testing to gain increased information about program executions, and can potentially furthermore be applied during operation to survey safety critical systems.
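
    As a rough illustration of the observer architecture this record describes, the sketch below replays an execution trace to a set of monitors that each check one property state by state. The event names, monitor class, and dispatch function are invented for illustration; they are not the JPAX or Maude interfaces.

    ```python
    # Hypothetical sketch only: an execution trace is dispatched event by event to a
    # set of observers, each checking one property state by state (names invented).

    class ResponseMonitor:
        """Temporal property: every 'request' must eventually be followed by an 'ack'."""

        def __init__(self):
            self.pending = 0

        def observe(self, event):
            if event == "request":
                self.pending += 1
            elif event == "ack" and self.pending > 0:
                self.pending -= 1

        def verdict(self):
            return "satisfied" if self.pending == 0 else f"violated ({self.pending} pending)"


    def dispatch(trace, monitors):
        """Feed each event of the trace to every registered observer, then collect verdicts."""
        for event in trace:
            for monitor in monitors:
                monitor.observe(event)
        return {type(m).__name__: m.verdict() for m in monitors}


    if __name__ == "__main__":
        trace = ["request", "compute", "ack", "request", "compute"]
        print(dispatch(trace, [ResponseMonitor()]))  # {'ResponseMonitor': 'violated (1 pending)'}
    ```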

  17. 78 FR 21715 - Sexual Assault Prevention and Response (SAPR) Program Procedures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-11

    ... high-risk team to monitor cases where the sexual assault victim's life and safety may be in jeopardy... in Military Rule of Evidence 514. (9) Requires the execution of a high-risk team to monitor cases...

  18. Real-time long term measurement using integrated framework for ubiquitous smart monitoring

    NASA Astrophysics Data System (ADS)

    Heo, Gwanghee; Lee, Giu; Lee, Woosang; Jeon, Joonryong; Kim, Pil-Joong

    2007-04-01

    Ubiquitous monitoring, which combines internet technologies and wireless communication, is one of the most promising technologies for infrastructure health monitoring against natural or man-made hazards. In this paper, an integrated framework for ubiquitous monitoring is developed for real-time, long-term measurement in an internet environment. The framework employs a wireless sensor system based on Bluetooth technology and sends measured acceleration data to the host computer over the TCP/IP protocol. It is also designed to respond to web users' requests on a real-time basis. In order to verify this system, real-time monitoring tests are carried out on a prototype self-anchored suspension bridge. The wireless measurement system is also analyzed to estimate its sensing capacity and evaluate its performance for monitoring purposes. Based on the evaluation, this paper proposes effective strategies for the integrated framework in order to detect structural deficiencies and to design an early warning system.
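
    The sketch below illustrates, in simplified form, the data path such a framework implies: a sensor node packs acceleration samples and streams them to a host over TCP/IP, where they are unpacked as they arrive. The port, record layout, and field names are assumptions made for illustration, not the protocol used in the paper.

    ```python
    # Hypothetical sketch only: a sensor node streams packed acceleration samples to a
    # host over TCP/IP; the host unpacks them as they arrive.

    import socket
    import struct
    import threading
    import time

    RECORD = "!dfff"                    # timestamp (s) + x, y, z acceleration
    HOST, PORT = "127.0.0.1", 9750      # illustrative address


    def sensor_node(samples):
        """Pack each (t, ax, ay, az) sample and send it to the host."""
        with socket.create_connection((HOST, PORT)) as sock:
            for sample in samples:
                sock.sendall(struct.pack(RECORD, *sample))


    def recv_exact(conn, size):
        buf = b""
        while len(buf) < size:
            chunk = conn.recv(size - len(buf))
            if not chunk:
                raise ConnectionError("sender closed early")
            buf += chunk
        return buf


    def host_server(n_expected):
        """Accept one sensor connection and print records as they arrive."""
        size = struct.calcsize(RECORD)
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                for _ in range(n_expected):
                    t, ax, ay, az = struct.unpack(RECORD, recv_exact(conn, size))
                    print(f"t={t:.2f}s accel=({ax:+.3f}, {ay:+.3f}, {az:+.3f}) g")


    if __name__ == "__main__":
        data = [(i * 0.01, 0.001 * i, 0.0, -1.0) for i in range(5)]
        threading.Thread(target=host_server, args=(len(data),), daemon=True).start()
        time.sleep(0.2)                 # give the server a moment to start listening
        sensor_node(data)
        time.sleep(0.2)
    ```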

  19. Application of the Multicontextual Approach in Promoting Learning and Transfer of Strategy Use in an Individual with TBI and Executive Dysfunction.

    PubMed

    Toglia, Joan; Goverover, Yael; Johnston, Mark V; Dain, Barry

    2011-01-01

    The multicontext approach addresses strategy use and self-monitoring skills within activities and contexts that are systematically varied to facilitate transfer of learning. This article illustrates the application of the multicontext approach by presenting a case study of an adult who is 5 years post-traumatic brain injury with executive dysfunction and limited awareness. A single case study design with repeated pre-post measures was used. Methods to monitor strategy generation and specific awareness within intervention are described. Findings suggest improved functional performance and generalization of use of an external strategy despite absence of changes in general self-awareness of deficits. This case describes the multicontext intervention process and provides clinical suggestions for working with individuals with serious deficits in awareness and executive dysfunction following traumatic brain injury. Copyright 2011, SLACK Incorporated.

  20. The pipeline system for Octave and Matlab (PSOM): a lightweight scripting framework and execution engine for scientific workflows.

    PubMed

    Bellec, Pierre; Lavoie-Courchesne, Sébastien; Dickinson, Phil; Lerch, Jason P; Zijdenbos, Alex P; Evans, Alan C

    2012-01-01

    The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources.
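
    A minimal, language-agnostic sketch of the job description and dependency-driven execution ascribed to PSOM appears below. PSOM itself uses Matlab/Octave data structures; the Python field names here are illustrative, not PSOM's actual schema.

    ```python
    # Hypothetical sketch only: jobs are plain data records (command, inputs, outputs),
    # and the engine runs a job when its inputs exist and its outputs are missing or stale.

    import os


    def _write(path, text):
        with open(path, "w") as f:
            f.write(text)


    pipeline = {
        "smooth":  {"command": lambda: _write("smoothed.txt", "smoothed data"),
                    "files_in": [], "files_out": ["smoothed.txt"]},
        "analyze": {"command": lambda: _write("stats.txt", "summary statistics"),
                    "files_in": ["smoothed.txt"], "files_out": ["stats.txt"]},
    }


    def needs_processing(job):
        """A job is (re)run if any output is missing or older than its newest input."""
        if not all(os.path.exists(f) for f in job["files_out"]):
            return True
        newest_input = max((os.path.getmtime(f) for f in job["files_in"]), default=0.0)
        return any(os.path.getmtime(f) < newest_input for f in job["files_out"])


    def run(pipeline):
        done = set()
        while len(done) < len(pipeline):
            for name, job in pipeline.items():
                ready = name not in done and all(os.path.exists(f) for f in job["files_in"])
                if ready:
                    if needs_processing(job):
                        job["command"]()   # independent ready jobs could run in parallel here
                    done.add(name)


    if __name__ == "__main__":
        run(pipeline)           # a second invocation would skip up-to-date jobs
        print(sorted(f for f in os.listdir(".") if f.endswith(".txt")))
    ```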

  1. The pipeline system for Octave and Matlab (PSOM): a lightweight scripting framework and execution engine for scientific workflows

    PubMed Central

    Bellec, Pierre; Lavoie-Courchesne, Sébastien; Dickinson, Phil; Lerch, Jason P.; Zijdenbos, Alex P.; Evans, Alan C.

    2012-01-01

    The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources. PMID:22493575

  2. Building oceanographic and atmospheric observation networks by composition: unmanned vehicles, communication networks, and planning and execution control frameworks

    NASA Astrophysics Data System (ADS)

    Sousa, J. T.; Pinto, J.; Martins, R.; Costa, M.; Ferreira, F.; Gomes, R.

    2014-12-01

    The problem of developing mobile oceanographic and atmospheric observation networks (MOAO) with coordinated air and ocean vehicles is discussed in the framework of the communications and control software tool chain developed at Underwater Systems and Technologies Laboratory (LSTS) from Porto University. This is done with reference to field experiments to illustrate key capabilities and to assess future MOAO operations. First, the motivation for building MOAO by "composition" of air and ocean vehicles, communication networks, and planning and execution control frameworks is discussed - in networked vehicle systems information and commands are exchanged among multiple vehicles and operators, and the roles, relative positions, and dependencies of these vehicles and operators change during operations. Second, the planning and execution control framework developed at LSTS for multi-vehicle systems is discussed with reference to key concepts such as autonomy, mixed-initiative interactions, and layered organization. Third, the LSTS software tool chain is presented to show how to develop MOAO by composition. The tool chain comprises the Neptus command and control framework for mixed initiative interactions, the underlying IMC messaging protocol, and the DUNE on-board software. Fourth, selected LSTS operational deployments illustrate MOAO capability building. In 2012 we demonstrated the use of UAS to "ferry" data from UUVs located beyond line of sight (BLOS). In 2013 we demonstrated coordinated observations of coastal fronts with small UAS and UUVs, "bent" BLOS through the use of UAS as communication relays, and UAS tracking of juvenile hammerhead sharks. In 2014 we demonstrated UUV adaptive sampling with the closed loop controller of the UUV residing on a UAS; this was done with the help of a Wave Glider ASV with a communications gateway. The results from these experiments provide a background for assessing potential future UAS operations in a compositional MOAO.

  3. Exploring Two Approaches for an End-to-End Scientific Analysis Workflow

    DOE PAGES

    Dodelson, Scott; Kent, Steve; Kowalkowski, Jim; ...

    2015-12-23

    The advance of the scientific discovery process is accomplished by the integration of independently-developed programs run on disparate computing facilities into coherent workflows usable by scientists who are not experts in computing. For such advancement, we need a system which scientists can use to formulate analysis workflows, to integrate new components to these workflows, and to execute different components on resources that are best suited to run those components. In addition, we need to monitor the status of the workflow as components get scheduled and executed, and to access the intermediate and final output for visual exploration and analysis. Finally, it is important for scientists to be able to share their workflows with collaborators. We have explored two approaches for such an analysis framework for the Large Synoptic Survey Telescope (LSST) Dark Energy Science Collaboration (DESC): the first is based on the use and extension of Galaxy, a web-based portal for biomedical research, and the second is based on a programming language, Python. In our paper, we present a brief description of the two approaches, describe the kinds of extensions to the Galaxy system we have found necessary in order to support the wide variety of scientific analysis in the cosmology community, and discuss how similar efforts might be of benefit to the HEP community.
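
    The sketch below gives a hypothetical flavour of the second, programming-language approach: workflow components wrapped as callables, dispatched to whichever executor ("resource") suits them, with status reported as components are scheduled and finish. The component names and the resource split are invented for illustration.

    ```python
    # Hypothetical sketch only: submit analysis stages to different executors and
    # report their status as they complete.

    from concurrent.futures import ThreadPoolExecutor, as_completed
    import time

    RESOURCES = {
        "login-node": ThreadPoolExecutor(max_workers=1),   # light bookkeeping steps
        "batch-farm": ThreadPoolExecutor(max_workers=4),   # heavier analysis steps
    }


    def select_catalog():
        time.sleep(0.1)
        return "galaxy_catalog"


    def compute_correlation(catalog):
        time.sleep(0.2)
        return f"two-point function of {catalog}"


    def submit(stage, resource, fn):
        future = RESOURCES[resource].submit(fn)
        future.stage = stage                       # tag the future for status reporting
        print(f"[scheduled] {stage} on {resource}")
        return future


    if __name__ == "__main__":
        cat = submit("select_catalog", "login-node", select_catalog)
        corr = submit("compute_correlation", "batch-farm",
                      lambda: compute_correlation(cat.result()))
        for done in as_completed([cat, corr]):
            print(f"[finished]  {done.stage}: {done.result()}")
    ```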

  4. An Acquisition Guide for Executives

    EPA Pesticide Factsheets

    This guide covers the following subjects; What is Acquisition?, Purpose and Primary Functions of the Agency’s Acquisition System, Key Organizations in Acquisitions, Legal Framework, Key Players in Acquisitions, Acquisition Process, Acquisition Thresholds

  5. Coordination of load response instrumentation of SHRP pavements, Ohio University : executive summary, May 1999.

    DOT National Transportation Integrated Search

    1999-05-01

    Sensors were installed in 18 test sections to continuously monitor temperature, moisture, and frost within the pavement structure, and 33 test sections were instrumented to monitor strain, deflection and pressure generated by environmental cycling an...

  6. Fourteen Points: A Framework for the Analysis of Counterinsurgency,

    DTIC Science & Technology

    1984-07-31

    EXECUTIVE SUMMARY: The purpose of this study is to provide... leadership are faulty pay and supply systems, a brutal political regime, and mistreatment of soldiers by their officers. A well-executed... their leaders and in all the other qualities necessary for success. The analyst must therefore develop a clear picture of all important aspects of the

  7. Utah Educational Quality Indicators. The Sixth in the Report Series: "How Good Are Utah Public Schools." Executive Summary.

    ERIC Educational Resources Information Center

    Nelson, David E.

    For nearly 20 years, Utah's Office of Education has been systematically monitoring the academic performance and other characteristics of Utah's students. This executive summary, an overview of the sixth major report since 1967, examines several measures describing educational quality in Utah schools. The first section covers students' achievement…

  8. Teachers' Understanding of the Role of Executive Functions in Mathematics Learning

    ERIC Educational Resources Information Center

    Gilmore, Camilla; Cragg, Lucy

    2014-01-01

    Cognitive psychology research has suggested an important role for executive functions, the set of skills that monitor and control thought and action, in learning mathematics. However, there is currently little evidence about whether teachers are aware of the importance of these skills and, if so, how they come by this information. We conducted an…

  9. Specification and Preliminary Validation of IAT (Integrated Analysis Techniques) Methods: Executive Summary.

    DTIC Science & Technology

    1985-03-01

    conceptual framework, and preliminary validation of IAT concepts. Planned work for FY85, including more extensive validation, is also described. ... Developments: Required Capabilities ... IAT Conceptual Framework - FY85 (FEO) ... Recursive Nature of Decomposition ... approach: 1) Identify needs & requirements for IAT. 2) Develop IAT conceptual framework. 3) Validate IAT methods. 4) Develop applications materials.

  10. 78 FR 40243 - Self-Regulatory Organizations; ICE Clear Europe Limited; Notice of Filing of Amendment No. 3 and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-03

    ... framework. \\17\\ Id. Operational Resources. ICE Clear Europe believes it will have the operational and... governance framework. \\33\\ 15 U.S.C. 78q-1(b)(3)(F). IV. Conclusion On the basis of the foregoing, the... having a U.S. residence, based upon the location of its executive office or principal place of business...

  11. Leveraging Data Analysis for Domain Experts: An Embeddable Framework for Basic Data Science Tasks

    ERIC Educational Resources Information Center

    Lohrer, Johannes-Y.; Kaltenthaler, Daniel; Kröger, Peer

    2016-01-01

    In this paper, we describe a framework for data analysis that can be embedded into a base application. Since it is important to analyze the data directly inside the application where the data is entered, a tool that allows the scientists to easily work with their data, supports and motivates the execution of further analysis of their data, which…

  12. Tool Integration Framework for Bio-Informatics

    DTIC Science & Technology

    2007-04-01

    Java NetBeans [11] based Integrated Development Environment (IDE) for developing modules and packaging computational tools. The framework is extremely...integrate an Eclipse front-end for Desktop Integration. Eclipse was chosen over Netbeans owing to a higher acceptance, better infrastructure...5.0. This version of Dashboard ran with NetBeans IDE 3.6 requiring Java Runtime 1.4 on a machine with Windows XP. The toolchain is executed by

  13. Developing Software For Monitoring And Diagnosis

    NASA Technical Reports Server (NTRS)

    Edwards, S. J.; Caglayan, A. K.

    1993-01-01

    Expert-system software shell produces executable code. Report discusses beginning phase of research directed toward development of artificial intelligence for real-time monitoring of, and diagnosis of faults in, complicated systems of equipment. Motivated by need for onboard monitoring and diagnosis of electronic sensing and controlling systems of advanced aircraft. Also applicable to such equipment systems as refineries, factories, and powerplants.

  14. Episodic feeling-of-knowing accuracy and cued recall in the elderly: evidence for double dissociation involving executive functioning and processing speed.

    PubMed

    Perrotin, Audrey; Isingrini, Michel; Souchay, Céline; Clarys, David; Taconnat, Laurence

    2006-05-01

    This research investigated adult age differences in a metamemory monitoring task-episodic feeling-of-knowing (FOK) and in an episodic memory task-cued recall. Executive functioning and processing speed were examined as mediators of these age differences. Young and elderly adults were administered an episodic FOK task, a cued recall task, executive tests and speed tests. Age-related decline was observed on all the measures. Correlation analyses revealed a pattern of double dissociation which indicates a specific relationship between executive score and FOK accuracy, and between speed score and cued recall. When executive functioning and processing speed were evaluated concurrently on FOK and cued recall variables, hierarchical regression analyses showed that executive score was a better mediator of age-related variance in FOK, and that speed score was the better mediator of age-related variance in cued recall.

  15. An integrative architecture for general intelligence and executive function revealed by lesion mapping

    PubMed Central

    Colom, Roberto; Solomon, Jeffrey; Krueger, Frank; Forbes, Chad; Grafman, Jordan

    2012-01-01

    Although cognitive neuroscience has made remarkable progress in understanding the involvement of the prefrontal cortex in executive control, the broader functional networks that support high-level cognition and give rise to general intelligence remain to be well characterized. Here, we investigated the neural substrates of the general factor of intelligence (g) and executive function in 182 patients with focal brain damage using voxel-based lesion–symptom mapping. The Wechsler Adult Intelligence Scale and Delis–Kaplan Executive Function System were used to derive measures of g and executive function, respectively. Impaired performance on these measures was associated with damage to a distributed network of left lateralized brain areas, including regions of frontal and parietal cortex and white matter association tracts, which bind these areas into a coordinated system. The observed findings support an integrative framework for understanding the architecture of general intelligence and executive function, supporting their reliance upon a shared fronto-parietal network for the integration and control of cognitive representations and making specific recommendations for the application of the Wechsler Adult Intelligence Scale and Delis–Kaplan Executive Function System to the study of high-level cognition in health and disease. PMID:22396393

  16. Metadata and network API aspects of a framework for storing and retrieving civil infrastructure monitoring data

    NASA Astrophysics Data System (ADS)

    Wong, John-Michael; Stojadinovic, Bozidar

    2005-05-01

    A framework has been defined for storing and retrieving civil infrastructure monitoring data over a network. The framework consists of two primary components: metadata and network communications. The metadata component provides the descriptions and data definitions necessary for cataloging and searching monitoring data. The communications component provides Java classes for remotely accessing the data. Packages of Enterprise JavaBeans and data handling utility classes are written to use the underlying metadata information to build real-time monitoring applications. The utility of the framework was evaluated using wireless accelerometers on a shaking table earthquake simulation test of a reinforced concrete bridge column. The NEESgrid data and metadata repository services were used as a backend storage implementation. A web interface was created to demonstrate the utility of the data model and provides an example health monitoring application.
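
    The sketch below illustrates the two roles the framework separates: metadata that catalogs monitoring channels, and a small query layer a client could call over the network. It is written in Python with invented field names; the paper's actual implementation uses Java classes and Enterprise JavaBeans.

    ```python
    # Hypothetical sketch only: channel metadata used to catalog and search
    # civil-infrastructure monitoring data (field names are assumptions).

    from dataclasses import dataclass


    @dataclass
    class ChannelMetadata:
        channel_id: str
        structure: str        # e.g. "bridge-column-1"
        quantity: str         # e.g. "acceleration"
        units: str            # e.g. "m/s^2"
        sample_rate_hz: float


    CATALOG = [
        ChannelMetadata("A1", "bridge-column-1", "acceleration", "m/s^2", 200.0),
        ChannelMetadata("S3", "bridge-column-1", "strain", "microstrain", 50.0),
    ]


    def find_channels(quantity=None, structure=None):
        """Search the catalog the way a remote client would, by data definitions."""
        return [c for c in CATALOG
                if (quantity is None or c.quantity == quantity)
                and (structure is None or c.structure == structure)]


    if __name__ == "__main__":
        for channel in find_channels(quantity="acceleration"):
            print(channel)
    ```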

  17. Work-related stress risk assessment in Italy: a methodological proposal adapted to regulatory guidelines.

    PubMed

    Persechino, Benedetta; Valenti, Antonio; Ronchetti, Matteo; Rondinone, Bruna Maria; Di Tecco, Cristina; Vitali, Sara; Iavicoli, Sergio

    2013-06-01

    Work-related stress is one of the major causes of occupational ill health. In line with the regulatory framework on occupational health and safety (OSH), adequate models for assessing and managing risk need to be identified so as to minimize the impact of this stress not only on workers' health, but also on productivity. After close analysis of the Italian and European reference regulatory framework and work-related stress assessment and management models used in some European countries, we adopted the UK Health and Safety Executive's (HSE) Management Standards (MS) approach, adapting it to the Italian context in order to provide a suitable methodological proposal for Italy. We have developed a work-related stress risk assessment strategy, meeting regulatory requirements, now available on a specific web platform that includes software, tutorials, and other tools to assist companies in their assessments. This methodological proposal is new on the Italian work-related stress risk assessment scene. Besides providing an evaluation approach using scientifically validated instruments, it ensures the active participation of occupational health professionals in each company. The assessment tools provided enable companies not only to comply with the law, but also to contribute to a database for monitoring and assessment and give access to a reserved area for data analysis and comparisons.

  18. Work-Related Stress Risk Assessment in Italy: A Methodological Proposal Adapted to Regulatory Guidelines

    PubMed Central

    Persechino, Benedetta; Valenti, Antonio; Ronchetti, Matteo; Rondinone, Bruna Maria; Di Tecco, Cristina; Vitali, Sara; Iavicoli, Sergio

    2013-01-01

    Background Work-related stress is one of the major causes of occupational ill health. In line with the regulatory framework on occupational health and safety (OSH), adequate models for assessing and managing risk need to be identified so as to minimize the impact of this stress not only on workers' health, but also on productivity. Methods After close analysis of the Italian and European reference regulatory framework and work-related stress assessment and management models used in some European countries, we adopted the UK Health and Safety Executive's (HSE) Management Standards (MS) approach, adapting it to the Italian context in order to provide a suitable methodological proposal for Italy. Results We have developed a work-related stress risk assessment strategy, meeting regulatory requirements, now available on a specific web platform that includes software, tutorials, and other tools to assist companies in their assessments. Conclusion This methodological proposal is new on the Italian work-related stress risk assessment scene. Besides providing an evaluation approach using scientifically validated instruments, it ensures the active participation of occupational health professionals in each company. The assessment tools provided enable companies not only to comply with the law, but also to contribute to a database for monitoring and assessment and give access to a reserved area for data analysis and comparisons. PMID:23961332

  19. Bivalent rLP2086 (Trumenba®): Development of a well-characterized vaccine through commercialization.

    PubMed

    Sunasara, Khurram; Cundy, John; Srinivasan, Sriram; Evans, Brad; Sun, Weiqiang; Cook, Scott; Bortell, Eric; Farley, John; Griffin, Daniel; Bailey Piatchek, Michele; Arch-Douglas, Katherine

    2018-05-24

    The phrase "Process is the Product" is often applied to biologics, including multicomponent vaccines composed of complex components that evade complete characterization. Vaccine production processes must be defined and locked early in the development cycle to ensure consistent quality of the vaccine throughout scale-up, clinical studies, and commercialization. This approach of front-loading the development work helped facilitate the accelerated approval of the Biologic License Application for the well-characterized vaccine bivalent rLP2086 (Trumenba®, Pfizer Inc) in 2014 under Breakthrough Therapy Designation. Bivalent rLP2086 contains two rLP2086 antigens and is licensed for the prevention of meningococcal meningitis disease caused by Neisseria meningitidis serogroup B in individuals 10-25 years of age in the United States. This paper discusses the development of the manufacturing process of the two antigens for the purpose of making it amenable to any manufacturing facility. For the journey to commercialization, the operating model used to manage this highly accelerated program led to a framework that ensured "right the first time" execution, robust process characterization, and proactive process monitoring. This framework enabled quick problem identification and proactive resolutions, resulting in a robust control strategy for the commercial process. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Design and implementation of an air monitoring program in support of a brownfields redevelopment program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maisel, B.E.; Hunt, G.T.; Devaney, R.J. Jr.

    EPA's Brownfields Economic Redevelopment Initiative has sparked renewal of industrial and commercial parcels otherwise idled or under-utilized because of real or perceived environmental contamination. In certain cases, restoring such parcels to productive economic use requires a redevelopment effort protective of human health and welfare through minimizing offsite migration of environmental contaminants during cleanup, demolition and remediation activities. To support these objectives, an air monitoring program is often required as an integral element of a comprehensive brownfields redevelopment effort. This paper presents a strategic framework for design and execution of an ambient air monitoring program in support of a brownfields remediation effort ongoing in Lawrence, MA. Based on site characterization, the program included sample collection and laboratory analysis of ambient air samples for polychlorinated biphenyls (PCBs), polychlorinated dibenzodioxins and polychlorinated dibenzofurans (PCDDs/PCDFs), total suspended particulate (TSP), inhalable particulate (PM10), and lead. The program included four monitoring phases, identified as background, wintertime, demolition/remediation and post-demolition. Air sampling occurred over a 16 month period during 1996-97, during which time nine sampling locations were utilized to produce approximately 1,500 ambient air samples. Following strict data review and validation procedures, ambient air data interpretation focused on the following: evaluation of upwind/downwind sample pairs; comparison of ambient levels to existing regulatory standards; relation of ambient levels to data reported in the open literature; determination of normal seasonal variations in existing background burden; and comparison of ambient levels measured during site activity to background levels.

  1. Sex-specific associations of testosterone with prefrontal-hippocampal development and executive function.

    PubMed

    Nguyen, Tuong-Vi; Lew, Jimin; Albaugh, Matthew D; Botteron, Kelly N; Hudziak, James J; Fonov, Vladimir S; Collins, D Louis; Ducharme, Simon; McCracken, James T

    2017-02-01

    Testosterone is thought to play a crucial role in mediating sexual differentiation of brain structures. Examinations of the cognitive effects of testosterone have also shown beneficial and potentially sex-specific effects on executive function and mnemonic processes. Yet these findings remain limited by an incomplete understanding of the critical timing and brain regions most affected by testosterone, the lack of documented links between testosterone-related structural brain changes and cognition, and the difficulty in distinguishing the effects of testosterone from those of related sex steroids such as estradiol and dehydroepiandrosterone (DHEA). Here we examined associations between testosterone, cortico-hippocampal structural covariance, executive function (Behavior Rating Inventory of Executive Function) and verbal memory (California Verbal Learning Test-Children's Version), in a longitudinal sample of typically developing children and adolescents 6-22 years old, controlling for the effects of estradiol, DHEA, pubertal stage, collection time, age, handedness, and total brain volume. We found prefrontal-hippocampal covariance to vary as a function of testosterone levels, but only in boys. Boys also showed a specific association between positive prefrontal-hippocampal covariance (as seen at higher testosterone levels) and lower performance on specific components of executive function (monitoring the action process and flexibly shifting between actions). We also found the association between testosterone and a specific aspect of executive function (monitoring) to be significantly mediated by prefrontal-hippocampal structural covariance. There were no significant associations between testosterone-related cortico-hippocampal covariance and verbal memory. Taken together, these findings highlight the developmental importance of testosterone in supporting sexual differentiation of the brain and sex-specific executive function. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Service and Methods Demonstration Program Annual Report - Executive Summary

    DOT National Transportation Integrated Search

    1978-08-01

    The Urban Mass Transportation Administration (UMTA Service and methods Demonstration (SMD) Program was established in 1974 to provide a consistent and comprehensive framework within which innovative transportation management techniques and transit se...

  3. The Nature and Organization of Individual Differences in Executive Functions: Four General Conclusions

    PubMed Central

    Miyake, Akira; Friedman, Naomi P.

    2012-01-01

    Executive functions (EFs)—a set of general-purpose control processes that regulate one’s thoughts and behaviors—have become a popular research topic lately and have been studied in many subdisciplines of psychological science. This article summarizes the EF research that our group has conducted to understand the nature of individual differences in EFs and their cognitive and biological underpinnings. In the context of a new theoretical framework that we have been developing (the unity/diversity framework), we describe four general conclusions that have emerged from our research. Specifically, we argue that individual differences in EFs, as measured with simple laboratory tasks, (1) show both unity and diversity (different EFs are correlated yet separable); (2) reflect substantial genetic contributions; (3) are related to various clinically and societally important phenomena; and (4) show some developmental stability. PMID:22773897

  4. Parent ratings of executive functioning in children with shunted hydrocephalus.

    PubMed

    Lacy, Maureen; Baldassarre, Megan; Nader, Todd; Frim, David

    2012-01-01

    The present study examined the executive functioning of a group of children with a history of communicating hydrocephalus and how their level of functioning was correlated with parent ratings of executive functioning. The study examined the executive functioning of 39 shunted children with a history of hydrocephalus and 20 healthy peers. Additionally, parents of both groups of children completed the Behavior Rating Inventory of Executive Function (BRIEF) to assess the parents' perceptions of their children's executive functioning. Finally, the study investigated the relationship between the shunted hydrocephalus children's executive functioning and the parent ratings of their executive functioning. Overall, the children with a history of shunted hydrocephalus displayed more executive dysfunction than their healthy peers. These children were rated by their parents as having more executive dysfunction than their healthy peers and displaying working memory, initiation, mental flexibility and self-monitoring difficulties, which appear to increase with age among the shunted hydrocephalus group. While parent ratings as measured by the BRIEF indices did not correlate with all executive tasks within the shunted hydrocephalus group, the cognitive tests assessing mental flexibility may be sensitive to the problems noted by parents at home. The children with a history of shunted hydrocephalus displayed executive functioning deficits on formal examination. The parents of children with a history of shunted hydrocephalus report ongoing executive difficulties which may increase with age. Copyright © 2012 S. Karger AG, Basel.

  5. Monitoring the capacity of working memory: Executive control and effects of listening effort

    PubMed Central

    Amichetti, Nicole M.; Stanley, Raymond S.; White, Alison G.

    2013-01-01

    In two experiments, we used an interruption-and-recall (IAR) task to explore listeners’ ability to monitor the capacity of working memory as new information arrived in real time. In this task, listeners heard recorded word lists with instructions to interrupt the input at the maximum point that would still allow for perfect recall. Experiment 1 demonstrated that the most commonly selected segment size closely matched participants’ memory span, as measured in a baseline span test. Experiment 2 showed that reducing the sound level of presented word lists to a suprathreshold but effortful listening level disrupted the accuracy of matching selected segment sizes with participants’ memory spans. The results are discussed in terms of whether online capacity monitoring may be subsumed under other, already enumerated working memory executive functions (inhibition, set shifting, and memory updating). PMID:23400826

  6. Review on the Implementation of the Islamic Republic of Iran about Tobacco Control, Based on MPOWER, in the Framework Convention on Tobacco Control by the World Health Organization

    PubMed Central

    Alimohammadi, Mahmood; Jafari-Mansoorian, Hossein; Hashemi, Seyed Yaser; Momenabadi, Victoria; Ghasemi, Seyed Mehdi; Karimyan, Kamaladdin

    2017-01-01

    Background Smoking is the largest preventable cause of death in the world, killing nearly 6 million people annually. This article investigates the laws implemented in Iran to control and reduce tobacco use, based on the monitor, protect, offer, warn, enforce and raise (MPOWER) policy. Methods All laws approved by the Parliament, along with the instructions on tobacco control prepared by the Ministry of Health and Medical Education and the Ministry of Industry, Mine and Trade, were collected and studied. Moreover, practical steps taken by the Ministry of Health and other organizations were examined in this regard. Findings After the adoption of the Framework Convention on Tobacco Control (FCTC), the Iranian Parliament acted to create a comprehensive and systematic tobacco control law as a first step towards comprehensive national tobacco control. Under this law and its implementing guidelines, and based on the MPOWER strategy, specific measures are taken to monitor tobacco use and prevention policies, protect people from tobacco smoke, offer help to quit tobacco use, warn about the dangers of tobacco, enforce bans on tobacco advertising, promotion and sponsorship, and raise taxes on tobacco. However, the full objectives of the legislation have not yet been achieved. Conclusion Given Iran's membership in the FCTC and its adoption of executive tobacco control laws and regulations, the necessary infrastructure is in place for a serious fight against tobacco use. In Iran, however, in comparison with developed countries, there is a huge gap between ratified laws and their enforcement. PMID:29657699

  7. Nemesis Autonomous Test System

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin J.; Lee, Cin-Young; Horvath, Gregory A.; Clement, Bradley J.

    2012-01-01

    A generalized framework has been developed for systems validation that can be applied to both traditional and autonomous systems. The framework consists of an automated test case generation and execution system called Nemesis that rapidly and thoroughly identifies flaws or vulnerabilities within a system. By applying genetic optimization and goal-seeking algorithms on the test equipment side, a "war game" is conducted between a system and its complementary nemesis. The end result of the war games is a collection of scenarios that reveals any undesirable behaviors of the system under test. The software provides a reusable framework to evolve test scenarios using genetic algorithms using an operation model of the system under test. It can automatically generate and execute test cases that reveal flaws in behaviorally complex systems. Genetic algorithms focus the exploration of tests on the set of test cases that most effectively reveals the flaws and vulnerabilities of the system under test. It leverages advances in state- and model-based engineering, which are essential in defining the behavior of autonomous systems. It also uses goal networks to describe test scenarios.
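
    The toy sketch below conveys the "war game" idea: a genetic algorithm evolves test scenarios against a simple model of a system under test, scoring each scenario by how hard it stresses the model. The model, scenario encoding, and fitness function are invented for illustration and are not the Nemesis implementation.

    ```python
    # Hypothetical sketch only: evolve command sequences that push a toy system model
    # toward a flaw (queue overflow), the way a nemesis searches for revealing tests.

    import random


    def system_model(commands):
        """Toy system under test: it would overflow if its queue ever held > 8 items."""
        queue = 0
        worst = 0
        for c in commands:
            queue = queue + 1 if c == "enqueue" else max(queue - 1, 0)
            worst = max(worst, queue)
        return worst                      # higher = closer to the flaw


    def fitness(scenario):
        return system_model(scenario)


    def mutate(scenario):
        s = scenario[:]
        s[random.randrange(len(s))] = random.choice(["enqueue", "dequeue"])
        return s


    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]


    def evolve(pop_size=30, length=12, generations=40):
        pop = [[random.choice(["enqueue", "dequeue"]) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return max(pop, key=fitness)


    if __name__ == "__main__":
        best = evolve()
        print("most revealing scenario:", best, "peak queue depth:", fitness(best))
    ```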

  8. Centralized Command, Distributed Control, and Decentralized Execution - a Command and Control Solution to US Air Force A2/AD Challenges

    DTIC Science & Technology

    2017-04-28

    Regional Air Component Commander (the Leader) ... CC-DC-DE Solution to A2/AD – Distributed Theater Air Control System (the System) ... "Control, Decentralized Execution" to a new framework of "Centralized Command, Distributed Control, and Decentralized Execution" (CC-DC-DE). This... USAF C2 challenges in A2/AD environments describes a three-part Centralized Command, Distributed Control, and Decentralized Execution (CC-DC-DE

  9. Development of a National Digital Geospatial Data Framework

    USGS Publications Warehouse

    ,

    1995-01-01

    This proposal of a data framework to organize and enhance the activities of the geospatial data community to meet needs for basic themes of data was developed in response to a request in Executive Order 12906, Coordinating Geographic Data Acquisition and Access: The National Spatial Data Infrastructure (U.S. Executive Office of the President, 1994). The request stated: in consultation with State, local, and tribal governments and within 9 months of the date of this order, the FGDC shall submit a plan and schedule to OMB [U.S. Office of Management and Budget] for completing the initial implementation of a national digital geospatial data framework ("framework") by January 2000 and for establishing a process of ongoing data maintenance. The framework shall include geospatial data that are significant, in the determination of the FGDC, to a broad variety of users within any geographic area or nationwide. At a minimum, the plan shall address how the initial transportation, hydrology, and boundary elements of the framework might be completed by January 1998 in order to support the decennial census of 2000. The proposal was developed by representatives of local, regional, State, and Federal agencies under the auspices of the Federal Geographic Data Committee (FGDC). The individuals are listed in the appendix of this report. This Framework Working Group identified the purpose and goals for the framework; identified incentives for participation; defined the information content; developed preliminary technical, operational, and business contexts; specified the institutional roles needed; and developed a strategy for a phased implementation of the framework. Members of the working group presented the concepts of the framework for discussion at several national and regional public meetings. The draft of the report also was provided for public, written review. These discussions and reviews were the source of many improvements to the report. The FGDC approved the report for submission to the Office of Management and Budget on March 31, 1995.

  10. Reasoning, learning, and creativity: frontal lobe function and human decision-making.

    PubMed

    Collins, Anne; Koechlin, Etienne

    2012-01-01

    The frontal lobes subserve decision-making and executive control--that is, the selection and coordination of goal-directed behaviors. Current models of frontal executive function, however, do not explain human decision-making in everyday environments featuring uncertain, changing, and especially open-ended situations. Here, we propose a computational model of human executive function that clarifies this issue. Using behavioral experiments, we show that unlike others, the proposed model predicts human decisions and their variations across individuals in naturalistic situations. The model reveals that for driving action, the human frontal function monitors up to three/four concurrent behavioral strategies and infers online their ability to predict action outcomes: whenever one appears more reliable than unreliable, this strategy is chosen to guide the selection and learning of actions that maximize rewards. Otherwise, a new behavioral strategy is tentatively formed, partly from those stored in long-term memory, then probed, and if competitive confirmed to subsequently drive action. Thus, the human executive function has a monitoring capacity limited to three or four behavioral strategies. This limitation is compensated by the binary structure of executive control that in ambiguous and unknown situations promotes the exploration and creation of new behavioral strategies. The results support a model of human frontal function that integrates reasoning, learning, and creative abilities in the service of decision-making and adaptive behavior.

  11. Reasoning, Learning, and Creativity: Frontal Lobe Function and Human Decision-Making

    PubMed Central

    Collins, Anne; Koechlin, Etienne

    2012-01-01

    The frontal lobes subserve decision-making and executive control—that is, the selection and coordination of goal-directed behaviors. Current models of frontal executive function, however, do not explain human decision-making in everyday environments featuring uncertain, changing, and especially open-ended situations. Here, we propose a computational model of human executive function that clarifies this issue. Using behavioral experiments, we show that unlike others, the proposed model predicts human decisions and their variations across individuals in naturalistic situations. The model reveals that for driving action, the human frontal function monitors up to three/four concurrent behavioral strategies and infers online their ability to predict action outcomes: whenever one appears more reliable than unreliable, this strategy is chosen to guide the selection and learning of actions that maximize rewards. Otherwise, a new behavioral strategy is tentatively formed, partly from those stored in long-term memory, then probed, and if competitive confirmed to subsequently drive action. Thus, the human executive function has a monitoring capacity limited to three or four behavioral strategies. This limitation is compensated by the binary structure of executive control that in ambiguous and unknown situations promotes the exploration and creation of new behavioral strategies. The results support a model of human frontal function that integrates reasoning, learning, and creative abilities in the service of decision-making and adaptive behavior. PMID:22479152

  12. Relationships between Executive Functioning and Homework Difficulties in Students with and without Autism Spectrum Disorder: An Analysis of Student- and Parent-Reports

    ERIC Educational Resources Information Center

    Endedijk, Hinke; Denessen, Eddie; Hendriks, Angelique W.

    2011-01-01

    Despite the fact that homework forms an important cornerstone of student development, many students fail to capitalize on the long-term benefits of doing homework. Several executive skills, including cognitive flexibility, monitoring and planning are suggested as prerequisites for the completion of homework. It follows that homework difficulties…

  13. Language Ability and Verbal and Nonverbal Executive Functioning in Deaf Students Communicating in Spoken English

    ERIC Educational Resources Information Center

    Remine, Maria D.; Care, Esther; Brown, P. Margaret

    2008-01-01

    The internal use of language during problem solving is considered to play a key role in executive functioning. This role provides a means for self-reflection and self-questioning during the formation of rules and plans and a capacity to control and monitor behavior during problem-solving activity. Given that increasingly sophisticated language is…

  14. Hot Executive Function Following Moderate-to-Late Preterm Birth: Altered Delay Discounting at 4 Years of Age

    ERIC Educational Resources Information Center

    Hodel, Amanda S.; Brumbaugh, Jane E.; Morris, Alyssa R.; Thomas, Kathleen M.

    2016-01-01

    Interest in monitoring long-term neurodevelopmental outcomes of children born moderate-to-late preterm (32-36 weeks gestation) is increasing. Moderate-to-late preterm birth has a negative impact on academic achievement, which may relate to differential development of executive function (EF). Prior studies reporting deficits in EF in preterm…

  15. Recovering from execution errors in SIPE

    NASA Technical Reports Server (NTRS)

    Wilkins, D. E.

    1987-01-01

    In real-world domains (e.g., a mobile robot environment), things do not always proceed as planned, so it is important to develop better execution-monitoring techniques and replanning capabilities. These capabilities in the SIPE planning system are described. The motivation behind SIPE is to place enough limitations on the representation so that planning can be done efficiently, while retaining sufficient power to still be useful. This work assumes that new information given to the execution monitor is in the form of predicates, thus avoiding the difficult problem of how to generate these predicates from information provided by sensors. The replanning module presented here takes advantage of the rich structure of SIPE plans and is intimately connected with the planner, which can be called as a subroutine. This allows the use of SIPE's capabilities to determine efficiently how unexpected events affect the plan being executed and, in many cases, to retain most of the original plan by making changes in it to avoid problems caused by these unexpected events. SIPE is also capable of shortening the original plan when serendipitous events occur. A general set of replanning actions is presented along with a general replanning capability that has been implemented by using these actions.
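
    The sketch below illustrates, in a very simplified form, how predicate updates from an execution monitor can localize the effect of an unexpected event: each plan step lists the predicates it needs and adds, and only the steps whose preconditions can no longer be satisfied are flagged for repair while the rest of the plan is retained. The domain and data structures are invented; this is not SIPE's plan representation.

    ```python
    # Hypothetical sketch only: simulate the remaining plan against updated world
    # predicates and report which steps an unexpected event has invalidated.

    world = {"door_open", "at_room_a", "battery_ok"}

    plan = [
        {"action": "pick_up_sample", "needs": {"at_room_a"}, "adds": {"holding_sample"}},
        {"action": "go_to_room_b", "needs": {"door_open"}, "adds": {"at_room_b"}},
        {"action": "drop_sample", "needs": {"at_room_b", "holding_sample"}, "adds": set()},
    ]


    def affected_steps(plan, world):
        """Return the actions whose preconditions fail when the plan is simulated."""
        state, broken = set(world), []
        for step in plan:
            if step["needs"] <= state:
                state |= step["adds"]
            else:
                broken.append(step["action"])
        return broken


    if __name__ == "__main__":
        print("before:", affected_steps(plan, world))   # []
        world.discard("door_open")                      # unexpected event reported by the monitor
        print("after :", affected_steps(plan, world))   # ['go_to_room_b', 'drop_sample']
        # A replanner would now splice in an 'open_door' action rather than replan
        # from scratch, retaining the unaffected steps.
    ```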

  16. Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Tucker, Deanne (Technical Monitor)

    1994-01-01

    We present views and analysis of the execution of several PVM codes for Computational Fluid Dynamics on a network of Sparcstations, including (a) NAS Parallel benchmarks CG and MG (White, Alund and Sunderam 1993); (b) a multi-partitioning algorithm for NAS Parallel Benchmark SP (Wijngaart 1993); and (c) an overset grid flowsolver (Smith 1993). These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We will describe the architecture, operation and application of AIMS. The AIMS toolkit contains (a) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (b) Monitor, a library of run-time trace-collection routines; (c) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (d) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (a) the impact of long message latencies; (b) the impact of multiprogramming overheads and associated load imbalance; (c) cache and virtual-memory effects; and (d) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. Some of the features planned for the near future include: (a) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (b) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
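
    The sketch below captures the constant-skew (zero-drift) compensation idea mentioned in this record: if a child's clock is offset from its parent's by a constant amount, shift the child's timestamps by just enough that no recorded message is received before it was sent. The trace format is invented; this is not the AIMS algorithm itself.

    ```python
    # Hypothetical sketch only: compute and apply a constant clock-skew correction so
    # that every recorded receive happens at or after its corresponding send.

    def skew_offset(messages):
        """messages: (t_send_parent, t_recv_child) pairs; returns the shift for the child clock."""
        worst = min(t_recv - t_send for t_send, t_recv in messages)
        return -worst if worst < 0 else 0.0


    def compensate(child_events, offset):
        """Shift every (name, timestamp) event on the child's clock by the offset."""
        return [(name, t + offset) for name, t in child_events]


    if __name__ == "__main__":
        msgs = [(10.00, 9.97), (12.50, 12.46), (15.20, 15.30)]   # two arrive "before" sending
        offset = skew_offset(msgs)                               # 0.04 s
        print("offset applied to child clock:", offset)
        print(compensate([("recv_msg_1", 9.97), ("local_work", 11.00)], offset))
    ```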

  17. Teacher Evaluations of Executive Functioning in Schoolchildren Aged 9-12 and the Influence of Age, Sex, Level of Parental Education.

    PubMed

    van Tetering, Marleen A J; Jolles, Jelle

    2017-01-01

    Executive functions (EFs) develop over the period of early childhood and adolescence up until young adulthood. Individual children differ substantially in the pace at which their EFs develop, and characteristics such as sex and the level of parental education (LPE) are thought to contribute to these differences. In the present study, we assessed age-related changes in EFs as perceived and evaluated by teachers and parents as well as the influence of sex and LPE on their evaluations. We used a newly developed observer-report questionnaire, the Amsterdam Executive Function Inventory (AEFI). The AEFI assesses three important components of the executive aspects of daily life behavior in 13 questions: Attention; Self-control and Self-monitoring; and Planning and Initiative taking. Teachers and parents evaluated these aspects of executive functioning in 186 schoolchildren in grades 3-6 (age: 9-12 years). Age effects within grades and differences in socioeconomic status between the four participating schools were controlled. Results showed a significant increase in teacher-perceived EFs from third to fourth grades and from fifth to sixth grades. This development was influenced both by the sex of the child and by the LPE. As perceived by teachers, the component self-control and self-monitoring was higher for girls than for boys, and planning abilities were higher for children from families with a higher LPE. Additional analyses showed that there is a systematic and statistically significant difference between the evaluations of the teachers and those of the parents. Parents reported higher scores for planning, whereas teachers reported higher scores for self-control and self-monitoring. Evaluations by parents and teachers were different for girls, but not for boys. These findings are important because they imply that the development of EFs as perceived by parents and teachers is influenced by child-related factors. Second, there are clear differences in evaluations between teachers and parents. The AEFI appears to be a tool that is easily used by parents and teachers and shows potential for monitoring the development of EFs as perceived by significant others during young adolescence.

  18. 24 CFR 954.501 - Grantee responsibilities; written agreements; monitoring.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Grantee responsibilities; written... responsibilities; written agreements; monitoring. (a) Responsibilities. The grantee is responsible for ensuring... contractors does not relieve the grantee of this responsibility. (b) Executing a written agreement. Before...

  19. Evaluation of Response to Intervention Practices for Elementary School Reading. Executive Summary. NCEE 2016-4000

    ERIC Educational Resources Information Center

    Balu, Rekha; Zhu, Pei; Doolittle, Fred; Schiller, Ellen; Jenkins, Joseph; Gersten, Russell

    2015-01-01

    Response to Intervention (RtI) is a framework for collecting and using data to match students to interventions of varying intensity. This study examines the implementation of RtI in Grade 1-3 reading in 13 states during the 2011-12 school year, focusing on 146 schools that were experienced with RtI. Full implementation of the RtI framework in…

  20. Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks

    NASA Technical Reports Server (NTRS)

    Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia

    2017-01-01

    Teleoperation is the dominant mode of performing dexterous robotic tasks in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication, as posed in the DARPA Robotics Challenge, or robot operations on spacecraft far from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control in order to accomplish high level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for optimal development of task execution, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. This framework is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. This was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage to do any number of high level tasks using a similar framework, allowing the robot to accomplish tasks with minimal to no human interaction.
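
    A minimal sketch of the supervisory pattern described above: a command executes autonomously, verifies its own result, attempts a corrective action, and only then asks the operator. The class and callback names are hypothetical and do not reflect the actual TaskForce API.

    ```python
    # Hypothetical sketch of supervisory execution with verification and an
    # autonomous corrective action before falling back to the human operator.
    class SupervisedCommand:
        def __init__(self, execute, verify, correct, ask_operator):
            self.execute, self.verify = execute, verify
            self.correct, self.ask_operator = correct, ask_operator

        def run(self):
            self.execute()
            if self.verify():
                return "success"
            self.correct()                 # try an autonomous corrective action first
            if self.verify():
                return "recovered"
            return self.ask_operator()     # off-nominal: request operator assistance

    # Usage (all four callables are hypothetical robot behaviors):
    # grasp_bag = SupervisedCommand(move_to_grasp, grasp_ok, regrasp, prompt_operator)
    # result = grasp_bag.run()
    ```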

  1. Efficient particle-in-cell simulation of auroral plasma phenomena using a CUDA enabled graphics processing unit

    NASA Astrophysics Data System (ADS)

    Sewell, Stephen

    This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphics Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general-purpose graphics processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphics processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
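
    As an illustration of the particle-sorting idea mentioned above (not the thesis code), the sketch below sorts particles by cell index before charge deposition so that nearby particles touch nearby grid cells; NumPy stands in for the CUDA kernels.

    ```python
    import numpy as np

    # Illustrative sketch: sort particles by cell index before the grid
    # interpolation (charge deposition) phase to improve memory locality.
    def deposit_charge(x, q, dx, n_cells):
        cell = np.minimum((x / dx).astype(np.int64), n_cells - 1)
        order = np.argsort(cell)                  # particle sort by cell index
        cell, x, q = cell[order], x[order], q[order]
        w = x / dx - cell                         # linear (cloud-in-cell) weights
        rho = np.zeros(n_cells + 1)
        np.add.at(rho, cell, q * (1.0 - w))       # contribution to the left grid node
        np.add.at(rho, cell + 1, q * w)           # contribution to the right grid node
        return rho / dx
    ```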

  2. The family medicine curriculum resource project structural framework.

    PubMed

    Stearns, Jeffrey A; Stearns, Marjorie A; Davis, Ardis K; Chessman, Alexander W

    2007-01-01

    In the original contract for the Family Medicine Curricular Resource Project (FMCRP), the Health Resources and Services Administration (HRSA), Division of Medicine and Dentistry, charged the FMCRP executive committee with reviewing recent medical education reform proposals and relevant recent curricula to develop an analytical framework for the project. The FMCRP executive and advisory committees engaged in a review and analysis of a variety of curricular reform proposals generated during the last decade of the 20th century. At the same time, in a separate and parallel process, representative individuals from all the family medicine organizations, all levels of learners, internal medicine and pediatric faculty, and the national associations of medical and osteopathic colleges (Association of American Medical Colleges and the American Association of Colleges of Osteopathic Medicine) were involved in group discussions to identify educational needs for physicians practicing in the 21st century. After deliberation, a theoretical framework was chosen for this undergraduate medical education resource that mirrors the Accreditation Council for Graduate Medical Education (ACGME) competencies, a conceptual design originated for graduate medical education. In addition to reflecting the current environment calling for change and greater accountability in medical education, use of the ACGME competencies as the theoretical framework for the FMCR provides a continuum of focus between the two major segments of physician education: medical school and residency.

  3. Monitoring supports performance in a dual-task paradigm involving a risky decision-making task and a working memory task

    PubMed Central

    Gathmann, Bettina; Schiebener, Johannes; Wolf, Oliver T.; Brand, Matthias

    2015-01-01

    Performing two cognitively demanding tasks at the same time is known to decrease performance. The current study investigates the underlying executive functions of a dual-tasking situation involving the simultaneous performance of decision making under explicit risk and a working memory task. It is suggested that making a decision and performing a working memory task at the same time should particularly require monitoring—an executive control process supervising behavior and the state of processing on two tasks. To test the role of a supervisory/monitoring function in such a dual-tasking situation we investigated 122 participants with the Game of Dice Task plus 2-back task (GDT plus 2-back task). This dual task requires participants to make decisions under risk and to perform a 2-back working memory task at the same time. Furthermore, a task measuring a set of several executive functions gathered in the term concept formation (Modified Card Sorting Test, MCST) and the newly developed Balanced Switching Task (BST), measuring monitoring in particular, were used. The results demonstrate that concept formation and monitoring are involved in the simultaneous performance of decision making under risk and a working memory task. In particular, the mediation analysis revealed that BST performance partially mediates the influence of MCST performance on the GDT plus 2-back task. These findings suggest that monitoring is one important subfunction for superior performance in a dual-tasking situation including decision making under risk and a working memory task. PMID:25741308

  4. Orchid: a novel management, annotation and machine learning framework for analyzing cancer mutations.

    PubMed

    Cario, Clinton L; Witte, John S

    2018-03-15

    As whole-genome tumor sequence and biological annotation datasets grow in size, number and content, there is an increasing basic science and clinical need for efficient and accurate data management and analysis software. With the emergence of increasingly sophisticated data stores, execution environments and machine learning algorithms, there is also a need for the integration of functionality across frameworks. We present orchid, a Python-based software package for the management, annotation and machine learning of cancer mutations. Building on technologies of parallel workflow execution, in-memory database storage and machine learning analytics, orchid efficiently handles millions of mutations and hundreds of features in an easy-to-use manner. We describe the implementation of orchid and demonstrate its ability to distinguish tissue of origin in 12 tumor types based on 339 features using a random forest classifier. Orchid and our annotated tumor mutation database are freely available at https://github.com/wittelab/orchid. Software is implemented in Python 2.7, and makes use of MySQL or MemSQL databases. Groovy 2.4.5 is optionally required for parallel workflow execution. JWitte@ucsf.edu. Supplementary data are available at Bioinformatics online.
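
    The classification task described above (tissue of origin from mutation features) can be sketched with scikit-learn; this illustrates the style of analysis rather than orchid's actual pipeline, and the file and column names are hypothetical.

    ```python
    # Hypothetical sketch of the tissue-of-origin classification described above,
    # using scikit-learn rather than orchid's own pipeline.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    mutations = pd.read_csv("annotated_mutations.csv")   # hypothetical feature table
    X = mutations.drop(columns=["tissue_of_origin"])     # hundreds of annotation features
    y = mutations["tissue_of_origin"]                    # one label per tumor type

    clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print("mean cross-validated accuracy: %.3f" % scores.mean())
    ```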

  5. ITS strategic deployment plan : executive summary

    DOT National Transportation Integrated Search

    1997-02-27

    The Salt Lake Valley ITS Early Deployment Planning Study-Phase II prepared the framework to deploy candidate Intelligent Transportation System (ITS) projects that address Salt Lake Valleys transportation needs. A planning process was used that inc...

  6. COMT val158met and 5-HTTLPR Genetic Polymorphisms Moderate Executive Control in Cannabis Users

    PubMed Central

    Verdejo-García, Antonio; Beatriz Fagundo, Ana; Cuenca, Aida; Rodriguez, Joan; Cuyás, Elisabet; Langohr, Klaus; de Sola Llopis, Susana; Civit, Ester; Farré, Magí; Peña-Casanova, Jordi; de la Torre, Rafael

    2013-01-01

    The adverse effects of cannabis use on executive functions are still controversial, fostering the need for novel biomarkers able to unveil individual differences in the cognitive impact of cannabis consumption. Two common genetic polymorphisms have been linked to the neuroadaptive impact of Δ9-tetrahydrocannabinol (THC) exposure and to executive functions in animals: the catechol-O-methyltransferase (COMT) gene val158met polymorphism and the SLC6A4 gene 5-HTTLPR polymorphism. We aimed to test if these polymorphisms moderate the harmful effects of cannabis use on executive function in young cannabis users. We recruited 144 participants: 86 cannabis users and 58 non-drug user controls. Both groups were genotyped and matched for genetic makeup, sex, age, education, and IQ. We used a computerized neuropsychological battery to assess different aspects of executive functions: sustained attention (CANTAB Rapid Visual Information Processing Test, RVIP), working memory (N-back), monitoring/shifting (CANTAB ID/ED set shifting), planning (CANTAB Stockings of Cambridge, SOC), and decision-making (Iowa Gambling Task, IGT). We used general linear model-based analyses to test performance differences between cannabis users and controls as a function of genotypes. We found that: (i) daily cannabis use is not associated with executive function deficits; and (ii) COMT val158met and 5-HTTLPR polymorphisms moderate the link between cannabis use and executive performance. Cannabis users carrying the COMT val/val genotype exhibited lower accuracy of sustained attention, associated with a more strict response bias, than val/val non-users. Cannabis users carrying the COMT val allele also committed more monitoring/shifting errors than cannabis users carrying the met/met genotype. Finally, cannabis users carrying the 5-HTTLPR s/s genotype had worse IGT performance than s/s non-users. COMT and SLC6A4 genes moderate the impact of cannabis use on executive functions. PMID:23449176

  7. User-level framework for performance monitoring of HPC applications

    NASA Astrophysics Data System (ADS)

    Hristova, R.; Goranov, G.

    2013-10-01

    HP-SEE is an infrastructure that links the existing HPC facilities in South East Europe into a common infrastructure. Analysis of the performance monitoring data for High-Performance Computing (HPC) applications in the infrastructure can be useful to the end user as a diagnostic for the overall performance of his applications. The existing monitoring tools for HP-SEE provide the end user with only aggregated information for all applications. Usually, the user does not have permission to select only the information relevant to him and his applications. In this article we present a framework for performance monitoring of the HPC applications in the HP-SEE infrastructure. The framework provides standardized performance metrics, which every user can use to monitor his applications. Furthermore, as part of the framework, a program interface was developed. The interface allows the user to publish metrics data from his application and to read and analyze the gathered information. Publishing and reading through the framework is possible only with a grid certificate valid for the infrastructure. Therefore, the user is authorized to access only the data for his own applications.
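
    A minimal sketch of the kind of publish/read program interface described above. The function names and the in-memory metrics store are assumptions, and the grid-certificate authorization is reduced to a placeholder check.

    ```python
    # Hypothetical sketch of a per-user metrics interface: a user publishes
    # standardized performance metrics for a job and later reads them back.
    import json, time

    METRICS_STORE = {}        # stands in for the infrastructure's metrics database

    def authorized(user, cert):
        return cert is not None          # placeholder for grid-certificate validation

    def publish_metric(user, cert, job_id, name, value):
        if not authorized(user, cert):
            raise PermissionError("valid grid certificate required")
        METRICS_STORE.setdefault((user, job_id), []).append(
            {"metric": name, "value": value, "time": time.time()})

    def read_metrics(user, cert, job_id):
        if not authorized(user, cert):
            raise PermissionError("valid grid certificate required")
        return json.dumps(METRICS_STORE.get((user, job_id), []))
    ```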

  8. Post-game analysis: An initial experiment for heuristic-based resource management in concurrent systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.

    1987-01-01

    In concurrent systems, a major responsibility of the resource management system is to decide how the application program is to be mapped onto the multi-processor. Instead of using abstract program and machine models, a generate-and-test framework known as 'post-game analysis' that is based on data gathered during program execution is proposed. Each iteration consists of (1) (a simulation of) an execution of the program; (2) analysis of the data gathered; and (3) the proposal of a new mapping that would have a smaller execution time. These heuristics are applied to predict execution time changes in response to small perturbations applied to the current mapping. An initial experiment was carried out using simple strategies on 'pipeline-like' applications. The results obtained from four simple strategies demonstrated that for this kind of application, even simple strategies can produce acceptable speed-up with a small number of iterations.
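
    The generate-and-test iteration described above can be written as a simple loop. This is a schematic reconstruction; simulate, analyze, and propose_mapping stand in for the paper's components.

    ```python
    # Schematic sketch of post-game analysis: simulate an execution, analyze the
    # gathered data, propose a slightly perturbed mapping with a smaller predicted
    # execution time, and repeat until no further improvement is found.
    def post_game_analysis(program, initial_mapping, simulate, analyze,
                           propose_mapping, max_iterations=10):
        mapping = initial_mapping
        best_time, trace = simulate(program, mapping)
        for _ in range(max_iterations):
            stats = analyze(trace)                         # e.g. per-node busy/idle times
            candidate = propose_mapping(mapping, stats)    # small-perturbation heuristic
            cand_time, cand_trace = simulate(program, candidate)
            if cand_time >= best_time:
                break                                      # no predicted improvement
            mapping, best_time, trace = candidate, cand_time, cand_trace
        return mapping, best_time
    ```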

  9. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 10) Background: Titan Model-based Executive; 11) Model-based Execution Architecture; 12) Compatibility Analysis of MDS and Titan Architectures; 13) Integrating Model-based Programming and Execution into the Architecture; 14) State Analysis and Modeling; 15) IMU Subsystem State Effects Diagram; 16) Titan Subsystem Model: IMU Health; 17) Integrating Model-based Programming and Execution into the Software IMU; 18) Testing Program; 19) Computationally Tractable State Estimation & Fault Diagnosis; 20) Diagnostic Algorithm Performance; 21) Integration and Test Issues; 22) Demonstrated Benefits; and 23) Next Steps

  10. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1990-01-01

    Run time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run time, wave fronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce inspector procedures, which perform execution-time preprocessing, and executors, which are transformed versions of the source-code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.
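
    A small sketch of the inspector/executor idea in the abstract: the inspector preprocesses the loop's dependences into wavefronts of mutually independent iterations, and the executor then runs each wavefront (conceptually in parallel). This is an illustration, not the authors' transformation rules.

    ```python
    # Illustrative inspector/executor sketch. deps[i] lists the earlier
    # iterations that iteration i depends on.
    def inspector(n_iterations, deps):
        """Group loop iterations into wavefronts of mutually independent work."""
        level = [0] * n_iterations
        for i in range(n_iterations):
            for j in deps.get(i, ()):
                level[i] = max(level[i], level[j] + 1)
        wavefronts = {}
        for i, l in enumerate(level):
            wavefronts.setdefault(l, []).append(i)
        return [wavefronts[l] for l in sorted(wavefronts)]

    def executor(wavefronts, body):
        for front in wavefronts:      # iterations within a front could run in parallel
            for i in front:
                body(i)

    # Example: iteration i depends on iteration i - 2.
    schedule = inspector(6, {i: [i - 2] for i in range(2, 6)})
    executor(schedule, lambda i: print("iteration", i))
    ```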

  11. Autonomous Mission Manager for Rendezvous, Inspection and Mating

    NASA Technical Reports Server (NTRS)

    Zimpfer, Douglas J.

    2003-01-01

    To meet cost and safety objectives, space missions that involve proximity operations between two vehicles require a high level of autonomy to successfully complete their missions. The need for autonomy is primarily driven by the need to conduct complex operations outside of communication windows, and the communication time delays inherent in space missions. Autonomy also supports the goals of both NASA and the DOD to make space operations more routine, and lower operational costs by reducing the requirement for ground personnel. NASA and the DoD have several programs underway that require a much higher level of autonomy for space vehicles. NASA's Space Launch Initiative (SLI) program has ambitious goals of reducing costs by a factor of 10 and improving safety by a factor of 100. DARPA has recently begun its Orbital Express program to demonstrate key technologies to make satellite servicing routine. The Air Force's XSS-11 program is developing a protoflight demonstration of an autonomous satellite inspector. A common element in space operations for many NASA and DOD missions is the ability to rendezvous, inspect and/or dock with another spacecraft. For DARPA, this is required to service or refuel military satellites. For the Air Force, this is required to inspect un-cooperative resident space objects. For NASA, this is needed to meet the primary SLI design reference mission of International Space Station re-supply. A common aspect for each of these programs is an Autonomous Mission Manager that provides highly autonomous planning, execution and monitoring of the rendezvous, inspection and docking operations. This paper provides an overview of the Autonomous Mission Manager (AMM) design being incorporated into many of these technology programs. This AMM provides a highly scalable level of autonomous operations, ranging from automatic execution of ground-derived plans to highly autonomous onboard planning to meet ground developed mission goals. The AMM provides the capability to automatically execute the plans and monitor the system performance. In the event of system dispersions or failures the AMM can modify plans or abort to assure overall system safety. This paper describes the design and functionality of Draper's AMM framework, presents the concept of operations associated with the use of the AMM, and outlines the relevant features of the flight demonstrations.

  12. Traffic monitoring using satellite and ground data : preparation for feasibility tests and an operational system, executive summary.

    DOT National Transportation Integrated Search

    2000-04-01

    Satellite imagery could conceivably be added to data traditionally collected in traffic monitoring programs to allow wide spatial coverage unobtainable from ground-based sensors in a safe, off-the-road environment. Previously, we estimated that 1-m r...

  13. It's All About the Data: Workflow Systems and Weather

    NASA Astrophysics Data System (ADS)

    Plale, B.

    2009-05-01

    Digital data is fueling new advances in the computational sciences, particularly geospatial research as environmental sensing grows more practical through reduced technology costs, broader network coverage, and better instruments. e-Science research (i.e., cyberinfrastructure research) has responded to data intensive computing with tools, systems, and frameworks that support computationally oriented activities such as modeling, analysis, and data mining. Workflow systems support execution of sequences of tasks on behalf of a scientist. These systems, such as Taverna, Apache ODE, and Kepler, when built as part of a larger cyberinfrastructure framework, give the scientist tools to construct task graphs of execution sequences, often through a visual interface for connecting task boxes together with arcs representing control flow or data flow. Unlike business processing workflows, scientific workflows expose a high degree of detail and control during configuration and execution. Data-driven science imposes unique needs on workflow frameworks. Our research is focused on two issues. The first is the support for workflow-driven analysis over all kinds of data sets, including real time streaming data and locally owned and hosted data. The second is the essential role metadata/provenance collection plays in data driven science, for discovery, determining quality, for science reproducibility, and for long-term preservation. The research has been conducted over the last 6 years in the context of cyberinfrastructure for mesoscale weather research carried out as part of the Linked Environments for Atmospheric Discovery (LEAD) project. LEAD has pioneered new approaches for integrating complex weather data, assimilation, modeling, mining, and cyberinfrastructure systems. Workflow systems have the potential to generate huge volumes of data. Without some form of automated metadata capture, either metadata description becomes largely a manual task that is difficult if not impossible under high-volume conditions, or the searchability and manageability of the resulting data products is disappointingly low. The provenance of a data product is a record of its lineage, or trace of the execution history that resulted in the product. The provenance of a forecast model result, e.g., captures information about the executable version of the model, configuration parameters, input data products, execution environment, and owner. Provenance enables data to be properly attributed and captures critical parameters about the model run so the quality of the result can be ascertained. Proper provenance is essential to providing reproducible scientific computing results. Workflow languages used in science discovery are complete programming languages, and in theory can support any logic expressible by a programming language. The execution environments supporting the workflow engines, on the other hand, are subject to constraints on physical resources, and hence in practice the workflow task graphs used in science utilize relatively few of the cataloged workflow patterns. It is important to note that these workflows are executed on demand, and are executed once. Into this context is introduced the need for science discovery that is responsive to real time information. 
If we can use simple programming models and abstractions to make scientific discovery involving real-time data accessible to specialists who share and utilize data across scientific domains, we bring science one step closer to solving the largest of human problems.
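
    A minimal sketch of the automated provenance capture argued for above: each workflow task execution records its inputs, configuration, code version, and environment alongside its outputs. The record fields are illustrative, not the LEAD schema.

    ```python
    # Illustrative provenance capture for a workflow task: the wrapper records
    # lineage information automatically so the output product stays searchable
    # and reproducible.
    import getpass, json, platform, time, uuid

    def run_with_provenance(task, inputs, parameters, code_version):
        started = time.time()
        outputs = task(inputs, parameters)
        record = {
            "product_id": str(uuid.uuid4()),
            "task": task.__name__,
            "code_version": code_version,
            "inputs": inputs,                      # identifiers of input products
            "parameters": parameters,              # model configuration
            "environment": platform.platform(),
            "owner": getpass.getuser(),
            "started": started,
            "finished": time.time(),
        }
        with open(record["product_id"] + ".provenance.json", "w") as f:
            json.dump(record, f, indent=2)
        return outputs, record
    ```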

  14. Lessons from the management of acute myocardial infarction.

    PubMed

    Pearson, M

    2005-05-01

    The National Service Framework for coronary heart disease set a number of challenging targets for the care of patients following an acute myocardial infarction. The Myocardial Infarction National Audit Project (MINAP) was devised to monitor progress and has been notably successful in winning professional support and participation and helping trusts to meet these targets. The new challenge is in translating this success to other areas of medicine. Heart failure is one such area, although it poses a number of difficulties relating primarily to disease definition and the definition of a successful outcome. MINAP was overseen by a multidisciplinary group of stakeholders, including patient organisations, and was project managed by a professionally led team at the Royal College of Physicians. Successful projects must retain confidence of all stakeholders and in part this depends on ensuring that timelines are met. Central monitoring of returns and anticipation of problems has been an important component of data completeness and quality. Next day updates to those collecting the data and more detailed quarterly reports for clinicians and chief executives within days of quarter end have been vital. Change depends on clinicians and managers working together. But most importantly, the attention to detail outlined above means the data have been believed and the resulting change for patients has been remarkable.

  15. DEVELOPING A FRAMEWORK FOR PERFORMANCE MONITORING TO ASSESS THE USE OF MONITORED NATURAL ATTENUATION FOR REMEDIATION OF INORGANIC CONTAMINANTS IN GROUND WATER

    EPA Science Inventory

    The USEPA is leading an effort to develop technical documentation that provides the policy, scientific and technical framework for assessing the viability of MNA for inorganic contaminants in ground water (hereafter referred to as the Framework Document). Initial guidance on the...

  16. Diagnosis and Threat Detection Capabilities of the SERENITY Monitoring Framework

    NASA Astrophysics Data System (ADS)

    Tsigkritis, Theocharis; Spanoudakis, George; Kloukinas, Christos; Lorenzoli, Davide

    The SERENITY monitoring framework offers mechanisms for diagnosing the causes of violations of security and dependability (S&D) properties and detecting potential violations of such properties, called "threats". Diagnostic information and threat detection are often necessary for deciding what an appropriate reaction to a violation is and taking pre-emptive actions against predicted violations, respectively. In this chapter, we describe the mechanisms of the SERENITY monitoring framework which generate diagnostic information for violations of S&D properties and detect threats.

  17. Watchdog activity monitor (WAM) for use with high coverage processor self-test

    NASA Technical Reports Server (NTRS)

    Tulpule, Bhalchandra R. (Inventor); Crosset, III, Richard W. (Inventor); Versailles, Richard E. (Inventor)

    1988-01-01

    A high fault coverage, instruction modeled self-test for a signal processor in a user environment is disclosed. The self-test executes a sequence of sub-tests and issues a state transition signal upon the execution of each sub-test. The self-test may be combined with a watchdog activity monitor (WAM) which provides a test-failure signal in the presence of a counted number of state transitions not agreeing with an expected number. An independent measure of time may be provided in the WAM to increase fault coverage by checking the processor's clock. Additionally, redundant processor systems are protected from inadvertent unsevering of a severed processor using a unique unsever arming technique and apparatus.
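
    The state-transition counting scheme in the abstract can be sketched as below; the monitor is simplified to counting transitions per test pass and comparing against the expected number, and all names are illustrative rather than taken from the patent.

    ```python
    # Illustrative sketch of a watchdog activity monitor: the self-test signals a
    # state transition after each sub-test, and the monitor flags a failure when
    # the observed count disagrees with the expected count for the pass.
    class WatchdogActivityMonitor:
        def __init__(self, expected_transitions):
            self.expected = expected_transitions
            self.count = 0

        def state_transition(self):
            self.count += 1

        def end_of_pass(self):
            failed = self.count != self.expected
            self.count = 0
            return "test-failure" if failed else "ok"

    def run_self_test(subtests, wam):
        for subtest in subtests:
            subtest()                  # execute one sub-test
            wam.state_transition()     # signal the transition to the monitor
        return wam.end_of_pass()
    ```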

  18. Investigator initiated IBD trials in the US: Facts, obstacles and answers

    PubMed Central

    Herfarth, Hans H; Jackson, Susan; Schliebe, Barbara G.; Martin, Christopher; Ivanova, Anastasia; Anton, Kristen; Sandler, Robert S; Long, Millie D; Isaacs, Kim L; Osterman, Mark T; Sands, Bruce E; Higgins, Peter D; Lewis, James D

    2016-01-01

    Investigator initiated randomized clinical trials (IITs) are the backbone of academic clinical research. IITs complement the large clinical studies sponsored by industry and address questions, which are usually not the main focus of a commercially directed research but have the purpose to confirm, improve or refute clinically important questions with regard to diagnostic and therapeutic approaches in patient care. The aim of this review is to illustrate the necessary steps to start and complete an IIT in the field of inflammatory bowel diseases (IBD) in the US. The initial milestones for an investigator include structuring a protocol, planning and building of the trial infrastructure, accurately estimating the costs of the trial and gauging the time span for recruitment. Once the trial has begun it is important to keep patient recruitment on target, monitor of the data quality, and document treatment emergent adverse events. This article provides a framework for the different phases of an IIT and outlines potential hurdles, which could hinder a successful execution. PMID:27598744

  19. The ALICE analysis train system

    NASA Astrophysics Data System (ADS)

    Zimmermann, Markus; ALICE Collaboration

    2015-05-01

    In the ALICE experiment hundreds of users are analyzing big datasets on a Grid system. High throughput and short turn-around times are achieved by a centralized system called the LEGO trains. This system combines analysis from different users in so-called analysis trains which are then executed within the same Grid jobs thereby reducing the number of times the data needs to be read from the storage systems. The centralized trains improve the performance, the usability for users and the bookkeeping in comparison to single user analysis. The train system builds upon the already existing ALICE tools, i.e. the analysis framework as well as the Grid submission and monitoring infrastructure. The entry point to the train system is a web interface which is used to configure the analysis and the desired datasets as well as to test and submit the train. Several measures have been implemented to reduce the time a train needs to finish and to increase the CPU efficiency.

  20. Automated procedure execution for space vehicle autonomous control

    NASA Technical Reports Server (NTRS)

    Broten, Thomas A.; Brown, David A.

    1990-01-01

    Increased operational autonomy and reduced operating costs have become critical design objectives in next-generation NASA and DoD space programs. The objective is to develop a semi-automated system for intelligent spacecraft operations support. The Spacecraft Operations and Anomaly Resolution System (SOARS) is presented as a standardized, model-based architecture for performing High-Level Tasking, Status Monitoring and automated Procedure Execution Control for a variety of spacecraft. The particular focus is on the Procedure Execution Control module. A hierarchical procedure network is proposed as the fundamental means for specifying and representing arbitrary operational procedures. A separate procedure interpreter controls automatic execution of the procedure, taking into account the current status of the spacecraft as maintained in an object-oriented spacecraft model.
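
    A toy sketch of the hierarchical procedure network and interpreter described above: a procedure is either a primitive step or an ordered list of sub-procedures, and the interpreter executes it while consulting the current spacecraft state. The names are illustrative, not the SOARS design.

    ```python
    # Toy sketch of a hierarchical procedure network: a procedure is either a
    # primitive action or an ordered list of sub-procedures; the interpreter
    # walks the hierarchy, checking spacecraft state before each step.
    class Procedure:
        def __init__(self, name, action=None, children=(), precondition=None):
            self.name, self.action = name, action
            self.children, self.precondition = list(children), precondition

    def execute(procedure, spacecraft_state):
        if procedure.precondition and not procedure.precondition(spacecraft_state):
            raise RuntimeError("precondition failed for " + procedure.name)
        if procedure.action:                 # primitive step
            procedure.action(spacecraft_state)
        for child in procedure.children:     # composite step: run sub-procedures in order
            execute(child, spacecraft_state)
    ```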

  1. On the Information Content of Program Traces

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Program traces are used for analysis of program performance, memory utilization, and communications as well as for program debugging. The trace contains records of execution events generated by monitoring units inserted into the program. The trace size limits the resolution of execution events and restricts the user's ability to analyze the program execution. We present a study of the information content of program traces and develop a coding scheme which reduces the trace size to the limit given by the trace entropy. We apply the coding to the traces of AIMS-instrumented programs executed on the IBM SP2 and the SGI Power Challenge and compare it with other coding methods. Our technique shows that the size of the trace can be reduced by more than a factor of 5.
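
    The entropy limit mentioned above can be made concrete with a short calculation: the empirical entropy of the trace's event symbols bounds the average number of bits per record that any lossless code treating events independently can achieve. This is a sketch of the bound, not the authors' coding scheme.

    ```python
    # Sketch: empirical (zeroth-order) entropy of a trace's event symbols,
    # which lower-bounds the average bits per event of any lossless code
    # that encodes events independently.
    import math
    from collections import Counter

    def trace_entropy(events):
        counts = Counter(events)
        n = len(events)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    trace = ["send", "recv", "compute", "compute", "send", "recv", "compute", "compute"]
    print("%.2f bits/event" % trace_entropy(trace))   # vs. e.g. 8 bits for a fixed-size code
    ```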

  2. Development and testing of operational incident detection algorithms : executive summary

    DOT National Transportation Integrated Search

    1997-09-01

    This report describes the development of operational surveillance data processing algorithms and software for application to urban freeway systems, conforming to a framework in which data processing is performed in stages: sensor malfunction detectio...

  3. Florida spaceports : an analysis of the regulatory framework : summary.

    DOT National Transportation Integrated Search

    2010-12-01

    Until recently, government control : has restricted space flight to a few : highly trained persons executing : missions in the public interest : using a very limited number : of facilities and vehicles. This : environment is changing. Imaging : and c...

  4. A Grid job monitoring system

    NASA Astrophysics Data System (ADS)

    Dumitrescu, Catalin; Nowack, Andreas; Padhi, Sanjay; Sarkar, Subir

    2010-04-01

    This paper presents a web-based Job Monitoring framework for individual Grid sites that allows users to follow in detail their jobs in quasi-real time. The framework consists of several independent components : (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web services framework and (c) an Ajax powered web interface having a look-and-feel and control similar to a desktop application. The monitoring framework supports LSF, Condor and PBS-like batch systems. This is one of the first monitoring systems where an X.509 authenticated web interface can be seamlessly accessed by both end-users and site administrators. While a site administrator has access to all the possible information, a user can only view the jobs for the Virtual Organizations (VO) he/she is a part of. The monitoring framework design supports several possible deployment scenarios. For a site running a supported batch system, the system may be deployed as a whole, or existing site sensors can be adapted and reused with the web services components. A site may even prefer to build the web server independently and choose to use only the Ajax powered web interface. Finally, the system is being used to monitor a glideinWMS instance. This broadens the scope significantly, allowing it to monitor jobs over multiple sites.

  5. Indicators for Universal Health Coverage: can Kenya comply with the proposed post-2015 monitoring recommendations?

    PubMed

    Obare, Valerie; Brolan, Claire E; Hill, Peter S

    2014-12-20

    Universal Health Coverage (UHC), referring to access to healthcare without financial burden, has received renewed attention in global health spheres. UHC is a potential goal in the post-2015 development agenda. Monitoring of progress towards achieving UHC is thus critical at both country and global level, and a monitoring framework for UHC was proposed by a joint WHO/World Bank discussion paper in December 2013. The aim of this study was to determine the feasibility, in Kenya, of the global UHC monitoring framework proposed by the WHO/World Bank. The study utilised three documents--the joint WHO/World Bank UHC monitoring framework and its update, and the Bellagio meeting report sponsored by WHO and the Rockefeller Foundation--to conduct the research. These documents informed the list of potential indicators that were used to determine the feasibility of the framework. A purposive literature search was undertaken to identify key government policy documents and relevant scholarly articles. A desk review of the literature was undertaken to answer the research objectives of this study. Kenya has yet to establish an official policy on UHC that provides a clear mandate on the goals, targets and monitoring and evaluation of performance. However, a significant majority of Kenyans continue to have limited access to health services as well as limited financial risk protection. The country has the capacity to reasonably report on five out of the seven proposed UHC indicators. However, there was very limited capacity to report on the two service coverage indicators for the chronic condition and injuries (CCIs) interventions. Out of the potential tracer indicators (n = 27) for aggregate CCI-related measures, four tracer indicators were available. Moreover, the country experiences some wider challenges that may impact on the implementation and feasibility of the WHO/World Bank framework. The proposed global framework for monitoring UHC will only be feasible in Kenya if systemic challenges are addressed. While the infrastructure for reporting the MDG-related indicators is in place, Kenya will require continued international investment to extend its capacity to meet the data requirements of the proposed UHC monitoring framework, particularly for the CCI-related indicators.

  6. Nature of motor control: perspectives and issues.

    PubMed

    Turvey, Michael T; Fonseca, Sergio

    2009-01-01

    Four perspectives on motor control provide the framework for developing a comprehensive theory of motor control in biological systems. The four perspectives, of decreasing orthodoxy, are distinguished by their sources of inspiration: neuroanatomy, robotics, self-organization, and ecological realities. Twelve major issues that commonly constrain (either explicitly or implicitly) the understanding of the control and coordination of movement are identified and evaluated within the framework of the four perspectives. The issues are as follows: (1) Is control strictly neural? (2) Is there a divide between planning and execution? (3) Does control entail a frequently involved knowledgeable executive? (4) Do analytical internal models mediate control? (5) Is anticipation necessarily model dependent? (6) Are movements preassembled? (7) Are the participating components context independent? (8) Is force transmission strictly myotendinous? (9) Is afference a matter of local linear signaling? (10) Is neural noise an impediment? (11) Do standard variables (of mechanics and physiology) suffice? (12) Is the organization of control hierarchical?

  7. Nature of Motor Control: Perspectives and Issues

    PubMed Central

    Turvey, M. T.; Fonseca, Sergio

    2013-01-01

    Four perspectives on motor control provide the framework for developing a comprehensive theory of motor control in biological systems. The four perspectives, of decreasing orthodoxy, are distinguished by their sources of inspiration: neuroanatomy, robotics, self-organization, and ecological realities. Twelve major issues that commonly constrain (either explicitly or implicitly) the understanding of the control and coordination of movement are identified and evaluated within the framework of the four perspectives. The issues are as follows: (1) Is control strictly neural? (2) Is there a divide between planning and execution? (3) Does control entail a frequently involved knowledgeable executive? (4) Do analytical internal models mediate control? (5) Is anticipation necessarily model dependent? (6) Are movements preassembled? (7) Are the participating components context independent? (8) Is force transmission strictly myotendinous? (9) Is afference a matter of local linear signaling? (10) Is neural noise an impediment? (11) Do standard variables (of mechanics and physiology) suffice? (12) Is the organization of control hierarchical? PMID:19227497

  8. Master of Puppets: Cooperative Multitasking for In Situ Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-01-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. Here, we present a novel design for running multiple codes in situ: using coroutines and position-independent executables we enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. We present Henson, an implementation of our design, and illustrate its versatility by tackling analysis tasks with different computational requirements. This design differs significantly from the existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The techniques we present can also be integrated into other in situ frameworks.
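
    Python generators can illustrate the cooperative-multitasking idea behind the design described above, with the simulation yielding control to the analysis after each time step. This is a toy analogue in Python, not Henson itself, which uses coroutines and position-independent executables in compiled codes.

    ```python
    # Toy analogue of cooperative multitasking between a simulation and an
    # in situ analysis: each coroutine yields control after a slice of work.
    def simulation(n_steps, shared):
        for step in range(n_steps):
            shared["field"] = [step * x for x in range(4)]   # pretend time step
            yield                                            # hand control to the analysis

    def analysis(shared):
        while True:
            yield                                            # wait for the next time step
            print("in situ mean:", sum(shared["field"]) / len(shared["field"]))

    shared = {}
    sim, ana = simulation(3, shared), analysis(shared)
    next(ana)                       # prime the analysis coroutine
    for _ in sim:                   # run them cooperatively, step by step
        next(ana)
    ```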

  9. Henson v1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-04-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. The developers present a novel design for running multiple codes in situ: using coroutines and position-independent executables they enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. They present Henson, an implementation of their design, and illustrate its versatility by tackling analysis tasks with different computational requirements. Their design differs significantly from the existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The presented techniques can also be integrated into other in situ frameworks.

  10. NEAMS-IPL MOOSE Framework Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaughter, Andrew Edward; Permann, Cody James; Kong, Fande

    The Multiapp Picard iteration milestone's purpose was to support a framework-level "tight-coupling" method within the hierarchical Multiapp's execution scheme. This new solution scheme gives developers new choices for running multiphysics applications, particularly those with very strong nonlinear effects or those requiring coupling across disparate time or spatial scales. Figure 1 shows a typical Multiapp setup in MOOSE. Each node represents a separate simulation containing a separate equation system. MOOSE solves the equation system on each node in turn, in a user-controlled manner. Information can be aggregated or split and transferred from parent to child or child to parent as needed between solves. Performing a tightly coupled execution scheme using this method wasn't possible in the original implementation. This was due to the inability to back up to a previous state once a converged solution was accepted at a particular Multiapp level.
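
    The tight-coupling scheme described above can be sketched as a Picard loop over a parent and a child solve, with the previous state saved so a non-converged step can be backed out and retried. This is a conceptual sketch under those assumptions, not MOOSE code.

    ```python
    # Conceptual sketch of Picard (fixed-point) iteration between a parent and a
    # child solve within one time step, with the ability to restore the previous
    # state so a non-converged step can be retried.
    import copy

    def picard_step(state, parent_solve, child_solve, tol=1e-8, max_picard=25):
        saved = copy.deepcopy(state)               # state to restore if we must back up
        for _ in range(max_picard):
            previous = copy.deepcopy(state)
            parent_solve(state)                    # solve the parent equation system
            child_solve(state)                     # transfer data and solve the child system
            change = max(abs(state[k] - previous[k]) for k in state)
            if change < tol:
                return state                       # converged: accept the coupled solution
        state.clear()
        state.update(saved)                        # not converged: back up to the old state
        raise RuntimeError("Picard iteration did not converge; step must be retried")
    ```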

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoette, Trisha Marie

    Throughout history, as new chemical threats arose, strategies for the defense against chemical attacks have also evolved. As a part of an Early Career Laboratory Directed Research and Development project, a systems analysis of past, present, and future chemical terrorism scenarios was performed to understand how the chemical threats and attack strategies change over time. For the analysis, the difficulty in executing a chemical attack was evaluated within a framework of three major scenario elements. First, historical examples of chemical terrorism were examined to determine how the use of chemical threats, versus other weapons, contributed to the successful execution of the attack. Using the same framework, the future of chemical terrorism was assessed with respect to the impact of globalization and new technologies. Finally, the efficacy of the current defenses against contemporary chemical terrorism was considered briefly. The results of this analysis justify the need for continued diligence in chemical defense.

  12. Model-independent partial wave analysis using a massively-parallel fitting framework

    NASA Astrophysics Data System (ADS)

    Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.

    2017-10-01

    The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D⁺ → h⁺h⁺h⁻. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h⁺h⁻) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
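
    The spline-interpolated S-wave parameterization mentioned above can be illustrated with SciPy: magnitudes and phases anchored at a few control points in m²(h⁺h⁻) and interpolated by cubic splines in between. The knot positions and values below are invented for illustration; this is not GooFit code.

    ```python
    # Illustrative sketch: a model-independent amplitude whose magnitude and
    # phase are anchored at control points in m2 = m^2(h+ h-) and interpolated
    # with cubic splines in between.
    import numpy as np
    from scipy.interpolate import CubicSpline

    m2_knots = np.array([0.3, 0.8, 1.3, 1.8, 2.3])       # control points (GeV^2), hypothetical
    mag_knots = np.array([1.0, 1.4, 0.9, 0.5, 0.2])      # fitted magnitudes at the knots
    phase_knots = np.array([0.0, 0.7, 1.5, 2.1, 2.4])    # fitted phases (radians) at the knots

    magnitude = CubicSpline(m2_knots, mag_knots)
    phase = CubicSpline(m2_knots, phase_knots)

    def s_wave_amplitude(m2):
        return magnitude(m2) * np.exp(1j * phase(m2))

    print(s_wave_amplitude(1.0))
    ```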

  13. Asterism: an integrated, complete, and open-source approach for running seismologist continuous data-intensive analysis on heterogeneous systems

    NASA Astrophysics Data System (ADS)

    Ferreira da Silva, R.; Filgueira, R.; Deelman, E.; Atkinson, M.

    2016-12-01

    We present Asterism, an open source data-intensive framework, which combines the Pegasus and dispel4py workflow systems. Asterism aims to simplify the effort required to develop data-intensive applications that run across multiple heterogeneous resources, without users having to: re-formulate their methods according to different enactment systems; manage the data distribution across systems; parallelize their methods; co-place and schedule their methods with computing resources; and store and transfer large/small volumes of data. Asterism's key element is to leverage the strengths of each workflow system: dispel4py allows developing scientific applications locally and then automatically parallelize and scale them on a wide range of HPC infrastructures with no changes to the application's code; Pegasus orchestrates the distributed execution of applications while providing portability, automated data management, recovery, debugging, and monitoring, without users needing to worry about the particulars of the target execution systems. Asterism leverages the level of abstractions provided by each workflow system to describe hybrid workflows where no information about the underlying infrastructure is required beforehand. The feasibility of Asterism has been evaluated using the seismic ambient noise cross-correlation application, a common data-intensive analysis pattern used by many seismologists. The application preprocesses (Phase1) and cross-correlates (Phase2) traces from several seismic stations. The Asterism workflow is implemented as a Pegasus workflow composed of two tasks (Phase1 and Phase2), where each phase represents a dispel4py workflow. Pegasus tasks describe the in/output data at a logical level, the data dependency between tasks, and the e-Infrastructures and the execution engine to run each dispel4py workflow. We have instantiated the workflow using data from 1000 stations from the IRIS services, and run it across two heterogeneous resources described as Docker containers: MPI (Container2) and Storm (Container3) clusters (Figure 1). Each dispel4py workflow is mapped to a particular execution engine, and data transfers between resources are automatically handled by Pegasus. Asterism is freely available online at http://github.com/dispel4py/pegasus_dispel4py.

  14. The dorsal anterior cingulate cortex is selective for pain: Results from large-scale reverse inference

    PubMed Central

    Lieberman, Matthew D.; Eisenberger, Naomi I.

    2015-01-01

    Dorsal anterior cingulate cortex (dACC) activation is commonly observed in studies of pain, executive control, conflict monitoring, and salience processing, making it difficult to interpret the dACC’s specific psychological function. Using Neurosynth, an automated brainmapping database [of over 10,000 functional MRI (fMRI) studies], we performed quantitative reverse inference analyses to explore the best general psychological account of the dACC function P(Ψ process|dACC activity). Results clearly indicated that the best psychological description of dACC function was related to pain processing—not executive, conflict, or salience processing. We conclude by considering that physical pain may be an instance of a broader class of survival-relevant goals monitored by the dACC, in contrast to more arbitrary temporary goals, which may be monitored by the supplementary motor area. PMID:26582792

  15. Grand challenges for integrated USGS science—A workshop report

    USGS Publications Warehouse

    Jenni, Karen E.; Goldhaber, Martin B.; Betancourt, Julio L.; Baron, Jill S.; Bristol, R. Sky; Cantrill, Mary; Exter, Paul E.; Focazio, Michael J.; Haines, John W.; Hay, Lauren E.; Hsu, Leslie; Labson, Victor F.; Lafferty, Kevin D.; Ludwig, Kristin A.; Milly, Paul C. D.; Morelli, Toni L.; Morman, Suzette A.; Nassar, Nedal T.; Newman, Timothy R.; Ostroff, Andrea C.; Read, Jordan S.; Reed, Sasha C.; Shapiro, Carl D.; Smith, Richard A.; Sanford, Ward E.; Sohl, Terry L.; Stets, Edward G.; Terando, Adam J.; Tillitt, Donald E.; Tischler, Michael A.; Toccalino, Patricia L.; Wald, David J.; Waldrop, Mark P.; Wein, Anne; Weltzin, Jake F.; Zimmerman, Christian E.

    2017-06-30

    Executive Summary: The U.S. Geological Survey (USGS) has a long history of advancing the traditional Earth science disciplines and identifying opportunities to integrate USGS science across disciplines to address complex societal problems. The USGS science strategy for 2007–2017 laid out key challenges in disciplinary and interdisciplinary arenas, culminating in a call for increased focus on a number of crosscutting science directions. Ten years on, to further the goal of integrated science and at the request of the Executive Leadership Team (ELT), a workshop with three dozen invited scientists spanning different disciplines and career stages in the Bureau convened on February 7–10, 2017, at the USGS John Wesley Powell Center for Analysis and Synthesis in Fort Collins, Colorado. The workshop focused on identifying "grand challenges" for integrated USGS science. Individual participants identified nearly 70 potential grand challenges before the workshop and through workshop discussions. After discussion, four overarching grand challenges emerged: natural resource security; societal risk from existing and emerging threats; smart infrastructure development; and anticipatory science for changing landscapes. Participants also identified a "comprehensive science challenge" that highlights the development of integrative science, data, models, and tools—all interacting in a modular framework—that can be used to address these and other future grand challenges: Earth Monitoring, Analyses, and Projections (EarthMAP). EarthMAP is our long-term vision for an integrated scientific framework that spans traditional scientific boundaries and disciplines, and integrates the full portfolio of USGS science: research, monitoring, assessment, analysis, and information delivery. The Department of Interior, and the Nation in general, have a vast array of information needs. The USGS meets these needs by having a broadly trained and agile scientific workforce. Encouraging and supporting cross-discipline engagement would position the USGS to tackle complex and multifaceted scientific and societal challenges in the 21st Century.

  16. Reality Monitoring and Metamemory in Adults with Autism Spectrum Conditions

    ERIC Educational Resources Information Center

    Cooper, Rose A.; Plaisted-Grant, Kate C.; Baron-Cohen, Simon; Simons, Jon S.

    2016-01-01

    Studies of reality monitoring (RM) often implicate medial prefrontal cortex (mPFC) in distinguishing internal and external information, a region linked to autism-related deficits in social and self-referential information processing, executive function, and memory. This study used two RM conditions (self-other; perceived-imagined) to investigate…

  17. Review of traffic monitoring factor groupings and the determination of seasonal adjustment factors for cars and trucks : executive summary report.

    DOT National Transportation Integrated Search

    2009-11-01

    One objective of statewide traffic monitoring : programs is to accurately estimate the Annual : Average Daily Traffic (AADT) for many roadway : segments within the state. The majority of the : departments of transportation (DOT) in the United : State...

  18. Attentional orienting and executive control are affected by different types of meditation practice.

    PubMed

    Tsai, Min-Hui; Chou, Wei-Lun

    2016-11-01

    Several studies have demonstrated the beneficial effects of meditation on attention. The present study investigated the relationship between focused attention (FA) and open monitoring (OM) meditation skills and the various functions of attention. In Experiment 1, we executed the attention network test and compared the performance of experts on dandao meditation with that of ordinary people on this test. The results indicated that the experts specializing in OM meditation demonstrated greater attentional orienting ability compared with those specializing in FA meditation and the control group. In addition, both expert groups registered improvements in their executive control abilities compared with the control group. In Experiment 2, we trained beginners in FA meditation for 3 months. The results showed that the experimental group exhibited significantly enhanced executive control ability. We infer that FA meditation skills promote executive control function and OM meditation skills promote both executive control and attentional orienting functions. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Vertical and Horizontal Forces: A Framework for Understanding Airpower Command and Control

    DTIC Science & Technology

    2014-05-22

    The Air Force has long maintained the tenet of "centralized control, decentralized execution." Changes in the contextual environment and ... help commanders understand how command and control (C2) systems work best today. The proposed cognitive framework moves beyond centralization ...

  20. Converting a Manned LCU into an Unmanned Surface Vehicle (USV): An Open Systems Architecture (OSA) Case Study

    DTIC Science & Technology

    2014-09-01

    ... every year moving forward (Musk 2014)? These questions build the framework for executing OSA throughout an SE program. The OSA framework includes a ... systems must be well maintained to the current legal environment. Maintaining this doctrine requires a continuous feedback loop from unmanned systems ...

  1. New data model with better functionality for VLab

    NASA Astrophysics Data System (ADS)

    da Silveira, P. R.; Wentzcovitch, R. M.; Karki, B. B.

    2009-12-01

    The VLab infrastructure and architecture was further developed to allow for several new features. First, workflows for first principles calculations of thermodynamic properties and static elasticity programmed in Java as Web Services can now be executed by multiple users. Second, jobs generated by these workflows can now be executed in batch on multiple servers. A simple internal scheduler was implemented to handle hundreds of execution packages generated by multiple users and to avoid overloading the servers. Third, a new data model was implemented to guarantee integrity of a project (workflow execution) in case of failure. The latter can happen in an execution package or in a workflow phase. By recording all executed steps of a project, its execution can be resumed after dynamic alteration of parameters through the VLab Portal. Fourth, batch jobs can also be monitored through the portal. Now, better and faster interaction with servers is achieved using Ajax technology. Finally, plots are now created on the VLab server using Gnuplot 4.2.2. Research supported by NSF grant ATM 0428774 (VLab). VLab is hosted by the Minnesota Supercomputing Institute.
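
    The internal scheduler mentioned above can be sketched as a queue that releases at most a fixed number of execution packages per server; this is a toy illustration under those assumptions, not VLab's implementation, and start_on is a hypothetical launch call.

    ```python
    # Toy sketch of a simple internal scheduler: execution packages from many
    # users are queued and released to servers only while free slots remain,
    # avoiding server overload.
    from collections import deque

    def start_on(server, package):
        print("running", package, "on", server)   # stand-in for the real launch call

    class SimpleScheduler:
        def __init__(self, slots_per_server):
            self.free = dict(slots_per_server)    # e.g. {"server1": 4, "server2": 8}
            self.queue = deque()

        def submit(self, package):
            self.queue.append(package)
            self.dispatch()

        def dispatch(self):
            while self.queue:
                server = max(self.free, key=self.free.get)
                if self.free[server] == 0:
                    return                        # all servers busy; keep packages queued
                self.free[server] -= 1
                start_on(server, self.queue.popleft())

        def job_finished(self, server):
            self.free[server] += 1
            self.dispatch()
    ```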

  2. Arden Syntax Clinical Foundation Framework for Event Monitoring in Intensive Care Units: Report on a Pilot Study.

    PubMed

    de Bruin, Jeroen S; Zeckl, Julia; Adlassnig, Katharina; Blacky, Alexander; Koller, Walter; Rappelsberger, Andrea; Adlassnig, Klaus-Peter

    2017-01-01

    The creation of clinical decision support systems has received strong impetus in recent years, but their integration into clinical routine has lagged behind, partly owing to a lack of interoperability and of trust by physicians. We report on the implementation of a clinical foundation framework in Arden Syntax, comprising knowledge units for (a) preprocessing raw clinical data, (b) the determination of single clinical concepts, and (c) more complex medical knowledge, which can be modeled through the composition and configuration of knowledge units in this framework. Thus, it can be tailored to clinical institutions or patients' caregivers. In the present version, we integrated knowledge units for several infection-related clinical concepts into the framework and developed a clinical event monitoring system on top of the framework that employs three different scenarios for monitoring clinical signs of bloodstream infection. The clinical event monitoring system was tested using data from intensive care units at Vienna General Hospital, Austria.
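
    To make the layering of knowledge units concrete, the following minimal sketch mimics the three levels described above (raw-data preprocessing, single clinical concepts, composed medical knowledge) in plain Python rather than Arden Syntax. The field names and thresholds are invented for illustration and are not clinical guidance.

        # Illustrative sketch only (plain Python, not Arden Syntax): three layers of
        # "knowledge units" composed into a simple bloodstream-infection monitor.
        # All thresholds and field names are invented for this example.

        def preprocess(raw):                       # layer (a): clean raw clinical data
            return {k: float(v) for k, v in raw.items() if v not in (None, "")}

        def fever(obs):                            # layer (b): a single clinical concept
            return obs.get("temperature_c", 0.0) >= 38.3

        def leukocytosis(obs):                     # layer (b): another single concept
            return obs.get("wbc_g_per_l", 0.0) > 12.0

        def possible_bloodstream_infection(obs):   # layer (c): composed medical knowledge
            return fever(obs) and leukocytosis(obs)

        if __name__ == "__main__":
            raw = {"temperature_c": "38.9", "wbc_g_per_l": "14.2", "lactate": None}
            obs = preprocess(raw)
            if possible_bloodstream_infection(obs):
                print("alert: clinical signs compatible with bloodstream infection")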

  3. Guidance for the application of a population modeling framework in coordination with field based monitoring studies for multiple species and sites

    EPA Science Inventory

    A modeling framework was developed that can be applied in conjunction with field based monitoring efforts (e.g., through effects-based monitoring programs) to link chemically-induced alterations in molecular and biochemical endpoints to adverse outcomes in whole organisms and pop...

  4. A distributed cloud-based cyberinfrastructure framework for integrated bridge monitoring

    NASA Astrophysics Data System (ADS)

    Jeong, Seongwoon; Hou, Rui; Lynch, Jerome P.; Sohn, Hoon; Law, Kincho H.

    2017-04-01

    This paper describes a cloud-based cyberinfrastructure framework for the management of the diverse data involved in bridge monitoring. Bridge monitoring involves various hardware systems, software tools, and laborious activities, including, for example, structural health monitoring (SHM) sensor networks, engineering analysis programs, and visual inspection. Very often, these monitoring systems, tools, and activities are not coordinated, and the collected information is not shared. A well-designed integrated data management framework can support the effective use of the data and, thereby, enhance bridge management and maintenance operations. The cloud-based cyberinfrastructure framework presented herein is designed to manage not only sensor measurement data acquired from the SHM system, but also other relevant information, such as bridge engineering models and traffic videos, in an integrated manner. For scalability and flexibility, cloud computing services and distributed database systems are employed. The stored information can be accessed through standard web interfaces. For demonstration, the cyberinfrastructure system is implemented for the monitoring of the bridges located along the I-275 corridor in the state of Michigan.
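
    As an illustration of access "through standard web interfaces", the sketch below shows how a client might query such a service for sensor measurements and bridge metadata over HTTP/JSON. The endpoint, parameters, and field names are hypothetical and are not the actual API of the system described in the abstract.

        # Illustrative sketch only: pulling sensor measurements and bridge metadata
        # from a hypothetical cloud data service through a standard HTTP/JSON
        # interface. The endpoint and field names are invented for this example.
        import json
        from urllib.request import urlopen
        from urllib.parse import urlencode

        BASE = "https://bridge-data.example.org/api"        # hypothetical service

        def get_json(path, **params):
            url = f"{BASE}/{path}?{urlencode(params)}"
            with urlopen(url) as resp:                       # standard-library HTTP client
                return json.load(resp)

        if __name__ == "__main__":
            accel = get_json("measurements", bridge="I275-007", sensor="accel-12",
                             start="2017-04-01T00:00:00Z", end="2017-04-02T00:00:00Z")
            meta = get_json("bridges/I275-007")              # engineering model, inspections, ...
            print(len(accel.get("samples", [])), meta.get("name"))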

  5. Measurement in Service Businesses: Challenges and Future Directions

    NASA Astrophysics Data System (ADS)

    Tyagi, Rajesh Kumar

    This chapter presents challenges faced by service businesses while implementing a measurement system. A review of existing frameworks is presented and a new framework, the Service Scorecard, is introduced. The Service Scorecard is an adaptation of the Six Sigma Business Scorecard for the service sector. The framework has also been influenced by existing frameworks such as the Malcolm Baldrige award criteria, the Balanced Scorecard, the European Quality Award, and the Service Profit Chain model. The seven elements of the Service Scorecard are Growth, Leadership, Acceleration, Collaboration, Innovation, Execution, and Retention. Measurement systems are illustrated with concrete real-world case examples. Final thoughts and the challenges faced are also presented.

  6. A Bidirectional Relationship between Executive Function and Health Behavior: Evidence, Implications, and Future Directions

    PubMed Central

    Allan, Julia L.; McMinn, David; Daly, Michael

    2016-01-01

    Physically active lifestyles and other health-enhancing behaviors play an important role in preserving executive function into old age. Conversely, emerging research suggests that executive functions facilitate participation in a broad range of healthy behaviors including physical activity and reduced fatty food, tobacco, and alcohol consumption. They do this by supporting the volition, planning, performance monitoring, and inhibition necessary to enact intentions and override urges to engage in health damaging behavior. Here, we focus firstly on evidence suggesting that health-enhancing behaviors can induce improvements in executive function. We then switch our focus to findings linking executive function to the consistent performance of health-promoting behaviors and the avoidance of health risk behaviors. We suggest that executive function, health behavior, and disease processes are interdependent. In particular, we argue that a positive feedback loop may exist whereby health behavior-induced changes in executive function foster subsequent health-enhancing behaviors, which in turn help sustain efficient executive functions and good health. We conclude by outlining the implications of this reciprocal relationship for intervention strategies, the design of research studies, and the study of healthy aging. PMID:27601977

  7. Teachers' Understanding of the Role of Executive Functions in Mathematics Learning

    PubMed Central

    Gilmore, Camilla; Cragg, Lucy

    2014-01-01

    Cognitive psychology research has suggested an important role for executive functions, the set of skills that monitor and control thought and action, in learning mathematics. However, there is currently little evidence about whether teachers are aware of the importance of these skills and, if so, how they come by this information. We conducted an online survey of teachers' views on the importance of a range of skills for mathematics learning. Teachers rated executive function skills, and in particular inhibition and shifting, to be important for mathematics. The value placed on executive function skills increased with increasing teaching experience. Most teachers reported that they were aware of these skills, although few knew the term “executive functions.” This awareness had come about through their teaching experience rather than from formal instruction. Researchers and teacher educators could do more to highlight the importance of these skills to trainee or new teachers. PMID:25674156

  8. Teachers' Understanding of the Role of Executive Functions in Mathematics Learning.

    PubMed

    Gilmore, Camilla; Cragg, Lucy

    2014-09-01

    Cognitive psychology research has suggested an important role for executive functions, the set of skills that monitor and control thought and action, in learning mathematics. However, there is currently little evidence about whether teachers are aware of the importance of these skills and, if so, how they come by this information. We conducted an online survey of teachers' views on the importance of a range of skills for mathematics learning. Teachers rated executive function skills, and in particular inhibition and shifting, to be important for mathematics. The value placed on executive function skills increased with increasing teaching experience. Most teachers reported that they were aware of these skills, although few knew the term "executive functions." This awareness had come about through their teaching experience rather than from formal instruction. Researchers and teacher educators could do more to highlight the importance of these skills to trainee or new teachers.

  9. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth; Geveci, Berk

    2014-11-01

    The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends indicate that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It follows that efficient concurrency on exascale machines requires a massive number of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based on what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today's distributed-memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme-scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive number of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.
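
    The contrast between pipeline filters and worklets can be illustrated with a toy example: a stateless per-element worklet invoked by a generic parallel map dispatcher. This is only a schematic Python sketch of the idea, not the framework's actual interface; the function and dispatcher names are invented.

        # Illustrative sketch only: a "worklet" as a stateless per-element function
        # invoked by a generic map dispatcher, instead of a pipeline filter that
        # runs only when a whole-pipeline request arrives. Names are invented.
        from concurrent.futures import ProcessPoolExecutor

        def gradient_magnitude_worklet(triple):
            """Stateless, per-element operation: central difference on one sample."""
            left, center, right = triple
            return abs(right - left) / 2.0

        def dispatch_map(worklet, values, workers=4):
            """Generic fine-grained dispatcher; parallelism lives here, not in the worklet."""
            triples = [(values[i - 1], values[i], values[i + 1])
                       for i in range(1, len(values) - 1)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(worklet, triples, chunksize=256))

        if __name__ == "__main__":
            data = [x * x * 0.001 for x in range(10_000)]
            result = dispatch_map(gradient_magnitude_worklet, data)
            print(len(result), result[:3])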

  10. Framework for Testing the Effectiveness of Bat and Eagle Impact-Reduction Strategies at Wind Energy Projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinclair, Karin; DeGeorge, Elise

    2016-04-13

    The objectives of this framework are to facilitate the design and execution of studies that test the effectiveness of bat and eagle impact-reduction strategies at wind energy sites. Through scientific field research, the wind industry and its partners can help determine whether certain strategies are ready for operational deployment or require further development. This framework should be considered a living document to be improved upon as fatality-reduction technologies advance from initial concepts to proven readiness (through project- and technology-specific testing) and as scientific field methods improve.

  11. Review of asset hierarchy criticality assessment and risk analysis practices.

    DOT National Transportation Integrated Search

    2014-01-01

    The MTA NYC Transit (NYCT) has begun an enterprise-wide Asset Management Improvement Program (AMIP). In 2012, NYCT developed an executive-level concept of operations that defined a new asset management framework following a systems engineering ap...

  12. 3 CFR 13541 - Executive Order 13541 of May 7, 2010. Temporary Organization To Facilitate a Strategic...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... economic, diplomatic, cultural, and security fields based on the Strategic Framework Agreement; (b) assist... appropriations, and consistent with Presidential guidance. (b) Nothing in this order shall be construed to impair...

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth; Sewell, Christopher; Usher, William

    Here, one of the most critical challenges for high-performance computing (HPC) scientific visualization is execution on massively threaded processors. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Our current production scientific visualization software is not designed for these new types of architectures. To address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architecture.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth; Sewell, Christopher; Usher, William

    Execution on massively threaded processors is one of the most critical challenges for high-performance computing (HPC) scientific visualization. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. However, our current production scientific visualization software is not designed for these new types of architectures. To address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architectures.

  15. Effectiveness Through Control: Centralized Execution in Air Mobility Operations

    DTIC Science & Technology

    2013-03-01

    decentralized execution, as "the delegation of authority to designated lower-level commanders and other tactical-level decision makers to achieve effective span ... asset "into a flying hospital of sorts with cardiac monitors, defibrillators, intubation devices, litters and various supplies to sustain many types ... designated, versus dedicated airlift. A 2006 Air Force Magazine article described the flexibility of this concept, highlighting "When an injured service

  16. [Ecological executive function characteristics and effects of executive function on social adaptive function in school-aged children with epilepsy].

    PubMed

    Xu, X J; Wang, L L; Zhou, N

    2016-02-23

    This study explored the characteristics of ecological executive function in school-aged children with idiopathic or probably symptomatic epilepsy and examined the effects of executive function on social adaptive function. A total of 51 school-aged children with idiopathic or probably symptomatic epilepsy aged 5-12 years at our hospital and 37 normal children matched for gender, age, and educational level were included. Differences in ecological executive function and social adaptive function between the two groups were compared with the Behavior Rating Inventory of Executive Function (BRIEF) and the Child Adaptive Behavior Scale; Pearson's correlation tests and multiple stepwise linear regression were used to explore the impact of executive function on social adaptive function. The scores of the epilepsy group on the global executive composite (GEC), behavioral regulation index (BRI), and metacognition index (MI) of the BRIEF (62±12, 58±13, and 63±12, respectively) were significantly higher than those of the control group (47±7, 44±6, and 48±8, respectively) (P<0.01). The scores of the epilepsy group on adaptive behavior quotient (ADQ), independence, cognition, and self-control (86±22, 32±17, 49±14, and 41±16, respectively) were significantly lower than those of the control group (120±12, 59±14, 59±7, and 68±10, respectively) (P<0.01). Pearson's correlation tests showed that the BRIEF scores for GEC, BRI, MI, inhibition, emotional control, monitoring, initiation, and working memory had significantly negative correlations with the scores for ADQ, independence, and self-control (r = -0.313 to -0.741, P<0.05). In addition, GEC, inhibition, MI, initiation, working memory, plan, organization, and monitoring had significantly negative correlations with the score for cognition (r = -0.335 to -0.437, P<0.05). Multiple stepwise linear regression analysis showed that BRI, inhibition, and working memory were closely related to the social adaptive function of school-aged children with idiopathic or probably symptomatic epilepsy. School-aged children with idiopathic or probably symptomatic epilepsy may show significant ecological executive function impairment and reduced social adaptive function. The BRI, inhibition, and working memory aspects of ecological executive function are significantly related to social adaptive function in school-aged children with epilepsy.

  17. A System for Monitoring Employee Health in a Navy Occupational Setting.

    DTIC Science & Technology

    1980-12-30

    Naval Health Research Center, San Diego, CA; December 1980. A System for Monitoring Employee Health in a Navy Occupational Setting (Larry Hermansen) ... and Health Act of 1970 and Executive Order 11807 have engendered a need for a comprehensive system that can monitor and document employee health and

  18. A high-resolution bioclimate map of the world: a unifying framework for global biodiversity research and monitoring

    USGS Publications Warehouse

    Metzger, Marc J.; Bunce, Robert G.H.; Jongman, Rob H.G.; Sayre, Roger G.; Trabucco, Antonio; Zomer, Robert

    2013-01-01

    Main conclusions: The GEnS provides a robust spatial analytical framework for the aggregation of local observations, identification of gaps in current monitoring efforts and systematic design of complementary and new monitoring and research. The dataset is available for non-commercial use through the GEO portal (http://www.geoportal.org).

  19. Framework for Structural Online Health Monitoring of Aging and Degradation of Secondary Systems due to some Aspects of Erosion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gribok, Andrei; Patnaik, Sobhan; Williams, Christian

    This report describes the current state of research related to critical aspects of erosion and selected aspects of degradation of secondary components in nuclear power plants. The report also proposes a framework for online health monitoring of aging and degradation of secondary components. The framework consists of an integrated multi-sensor modality system which can be used to monitor different piping configurations under different degradation conditions. The report analyses the currently known degradation mechanisms and available predictive models. Based on this analysis, the structural health monitoring framework is proposed. The Light Water Reactor Sustainability Program began to evaluate technologies that could be used to perform online monitoring of piping and other secondary system structural components in commercial NPPs. These online monitoring systems have the potential to identify when a more detailed inspection is needed using real-time measurements, rather than at a pre-determined inspection interval. This transition to condition-based, risk-informed automated maintenance will contribute to a significant reduction of operations and maintenance costs that account for the majority of nuclear power generation costs. There is unanimous agreement between industry experts and academic researchers that identifying and prioritizing inspection locations in secondary piping systems (for example, in raw water piping or diesel piping) would eliminate many excessive in-service inspections. The proposed structural health monitoring framework takes aim at answering this challenge by combining long-range guided wave technologies with other monitoring techniques, which can significantly increase the inspection length and pinpoint the locations that degraded the most. More widely, the report suggests research efforts aimed at developing, validating, and deploying online corrosion monitoring techniques for complex geometries, which are pervasive in NPPs.

  20. The role of executive functions in social impairment in Autism Spectrum Disorder.

    PubMed

    Leung, Rachel C; Vogan, Vanessa M; Powell, Tamara L; Anagnostou, Evdokia; Taylor, Margot J

    2016-01-01

    Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder characterized by socio-communicative impairments. Executive dysfunction may explain some key characteristics of ASD, both social and nonsocial hallmarks. Limited research exists exploring the relations between executive function and social impairment in ASD and few studies have used a comparison control group. Thus, the objective of the present study was to investigate the relations between executive functioning using the Behavioral Rating Inventory of Executive Functioning (BRIEF), social impairment as measured by the Social Responsiveness Scale (SRS), and overall autistic symptomology as measured by the Autism Diagnostic Observation Schedule (ADOS) in children and adolescents with and without ASD. Seventy children and adolescents diagnosed with ASD and 71 typically developing controls were included in this study. Findings showed that behavioral regulation executive processes (i.e., inhibition, shifting, and emotional control) predicted social function in all children. However, metacognitive executive processes (i.e., initiation, working memory, planning, organization, and monitoring) predicted social function only in children with ASD and not in typically developing children. Our findings suggest a distinct metacognitive executive function-social symptom link in ASD that is not present in the typical population. Understanding components of executive functioning that contribute to the autistic symptomology, particularly in the socio-communicative domain, is crucial for developing effective interventions that target key executive processes as well as underlying behavioral symptoms.

  1. Control of Cattle Ticks and Tick-Borne Diseases by Acaricide in Southern Province of Zambia: A Retrospective Evaluation of Animal Health Measures According to Current One Health Concepts.

    PubMed

    Laing, Gabrielle; Aragrande, Maurizio; Canali, Massimo; Savic, Sara; De Meneghi, Daniele

    2018-01-01

    One Health thinking for health interventions is increasingly being used to capture previously unseen stakeholders and impacts across people, animals, and the environment. The Network for One Health Evaluation (NEOH) proposes a systems-based framework to quantitatively assess integration and highlight the added value (theory of change) that this approach will bring to a project. This case study retrospectively evaluates the pioneering use of a One Health (OH) approach during an international collaboration (a satellite project to tackle production losses due to tick-borne disease in cattle in Southern Zambia in the late 1980s). The objective of the evaluation is twofold: retrospective evaluation of the OH-ness of the satellite project and identification of costs and benefits. Data for the evaluation were recovered from publications, project documents, and witness interviews. A mixed qualitative and quantitative evaluation was undertaken. In this case study, a transdisciplinary approach allowed for the identification of a serious public health risk arising from the unexpected reuse of chemical containers by the local public against advice. Had this pioneering project not been completed, it is assumed that this behavior could have had a large impact on public wellbeing and ultimately reduced regional productivity and compromised welfare. From the economic evaluation, the costs of implementing this OH approach, helping to avoid harm, were small in comparison to overall project costs. The overall OH Index was 0.34. The satellite project demonstrated good OH operations by managing to incorporate input across multiple dimensions but was slightly weaker on OH infrastructures (OH Ratio = 1.20). These quantitative results can be used in the initial validation and benchmarking of this novel framework. Limitations of the evaluation were mainly a lack of data due to the length of time since project completion and a lack of formal monitoring of program impact. In future health strategy development and execution, routine monitoring and evaluation from an OH perspective (by utilizing the framework proposed by NEOH) could prove valuable, or the framework could be used as a tool for retrospective evaluation of existing policies.

  2. The CARMEN software as a service infrastructure.

    PubMed

    Weeks, Michael; Jessop, Mark; Fletcher, Martyn; Hodge, Victoria; Jackson, Tom; Austin, Jim

    2013-01-28

    The CARMEN platform allows neuroscientists to share data, metadata, services, and workflows, and to execute these services and workflows remotely via a Web portal. This paper describes how we implemented a service-based infrastructure in the CARMEN Virtual Laboratory. A Software as a Service framework was developed to allow generic new and legacy code to be deployed as services on a heterogeneous execution framework. Users can submit analysis code, typically written in Matlab, Python, C/C++, or R, as non-interactive standalone command-line applications and wrap them as services in a form suitable for deployment on the platform. The CARMEN Service Builder tool enables neuroscientists to quickly wrap their analysis software for deployment to the CARMEN platform as a service, without knowledge of the service framework or the CARMEN system. A metadata schema describes each service in terms of both system and user requirements. The search functionality allows services to be quickly discovered from the many services available. Within the platform, services may be combined into more complicated analyses using the workflow tool. CARMEN and the service infrastructure are targeted towards the neuroscience community; however, it is a generic platform and can be targeted towards any discipline.
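
    The wrapping pattern described above can be sketched as follows: a small metadata record describing an analysis tool, plus a wrapper that runs the tool as a non-interactive command-line process. This is a generic, hypothetical Python sketch; it is not the CARMEN Service Builder or its metadata schema, and the tool name and fields are invented.

        # Illustrative sketch only: wrapping a non-interactive command-line analysis
        # program as a callable "service" with a small metadata record. The tool
        # name, arguments, and metadata fields are invented for this example.
        import subprocess, tempfile, os, json

        SERVICE_METADATA = {
            "name": "spike-rate",                 # hypothetical analysis service
            "runtime": "python3",
            "inputs": [{"name": "recording", "type": "csv"}],
            "outputs": [{"name": "rates", "type": "json"}],
        }

        def run_service(input_path):
            """Run the wrapped tool on one input file and return its JSON output."""
            fd, out_path = tempfile.mkstemp(suffix=".json")
            os.close(fd)
            # Stand-in for the real executable; any command-line tool could go here.
            cmd = ["python3", "spike_rate.py", "--in", input_path, "--out", out_path]
            subprocess.run(cmd, check=True)       # non-interactive, batch-style execution
            with open(out_path) as f:
                result = json.load(f)
            os.remove(out_path)
            return result

        if __name__ == "__main__":
            # Only print the service description here; run_service would need the
            # hypothetical spike_rate.py tool to exist.
            print(json.dumps(SERVICE_METADATA, indent=2))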

  3. Multi Sensor Fusion Framework for Indoor-Outdoor Localization of Limited Resource Mobile Robots

    PubMed Central

    Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro

    2013-01-01

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event-based Kalman Filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event-based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to the traditional methods. The event is defined to reflect the necessity of the global information: it is triggered when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorms NXT and consist of a differential-wheel mobile robot navigating indoors with a zenithal camera as the global sensor, and an Ackermann-steering mobile robot navigating outdoors with an SBG Systems GPS accessed through an IGEP board that also serves as a datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass, and two accelerometers from HiTechnic, placed according to a particle-based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. Robustness and stability are observed during a long walk test in both indoor and outdoor environments. PMID:24152933
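
    A minimal sketch of the event-based idea, assuming a one-dimensional state for clarity: the IMU drives the prediction step every cycle, and the global-sensor correction is triggered only when the estimation error covariance exceeds a predefined limit. Noise values, the limit, and the simulated motion are invented for the example.

        # Illustrative sketch only: a one-dimensional, event-based Kalman filter.
        # The IMU drives prediction every cycle; the (expensive) global-sensor
        # correction runs only when the error covariance exceeds a limit.
        import random

        def simulate(steps=200, dt=0.05, p_limit=0.05):
            x_true, x_est, p = 0.0, 0.0, 1.0
            q, r = 0.01, 0.04          # process and global-sensor noise variances (assumed)
            corrections = 0
            for _ in range(steps):
                v = 1.0 + random.gauss(0, 0.1)           # velocity as measured by the IMU
                x_true += 1.0 * dt
                # Predict: dead reckoning from the inertial measurement.
                x_est += v * dt
                p += q
                # Event: request the global sensor only when uncertainty is too large.
                if p > p_limit:
                    z = x_true + random.gauss(0, r ** 0.5)   # global position measurement
                    k = p / (p + r)                          # Kalman gain
                    x_est += k * (z - x_est)
                    p *= (1 - k)
                    corrections += 1
            return x_est, x_true, corrections

        if __name__ == "__main__":
            est, true_x, n = simulate()
            print(f"estimate={est:.2f} true={true_x:.2f} global corrections={n}")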

  4. Multi sensor fusion framework for indoor-outdoor localization of limited resource mobile robots.

    PubMed

    Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro

    2013-10-21

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event-based Kalman Filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event-based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to the traditional methods. The event is defined to reflect the necessity of the global information: it is triggered when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorms NXT and consist of a differential-wheel mobile robot navigating indoors with a zenithal camera as the global sensor, and an Ackermann-steering mobile robot navigating outdoors with an SBG Systems GPS accessed through an IGEP board that also serves as a datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass, and two accelerometers from HiTechnic, placed according to a particle-based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. Robustness and stability are observed during a long walk test in both indoor and outdoor environments.

  5. Data for development in health: a case study and monitoring framework from Kazakhstan

    PubMed Central

    Obermann, Konrad; Chanturidze, Tata; Richardson, Erica; Tanirbergenov, Serik; Shoranov, Marat; Nurgozhaev, Ali

    2016-01-01

    Healthcare reforms are often not coupled with a relevant and appropriate monitoring framework, leaving policymakers and the public without evidence about the implications of such reforms. Kazakhstan has embarked on a large-scale reform of its healthcare system in order to achieve Universal Health Coverage. The health-related 2020 Strategic Development Goals reflect this political ambition. In a case-study approach and on the basis of published and unpublished evidence as well as personal involvement and experience (A) the indicators in the 2020 Strategic Development Goals were assessed and (B) a ‘data-mapping’ exercise was conducted, where the WHO health system framework was used to describe the data available at present in Kazakhstan and comment on the different indicators regarding their usefulness for monitoring the current health-related 2020 Strategic Development Goals in Kazakhstan. It was concluded that the country’s current monitoring framework needs further development to track the progress and outcomes of policy implementation. The application of a modified WHO/World Bank/Global Fund health system monitoring framework was suggested to examine the implications of recent health sector reforms. Lessons drawn from the Kazakhstan experience on tailoring the suggested framework, collecting the data, and using the generated intelligence in policy development and decision-making can serve as a useful example for other middle-income countries, potentially enabling them to fast-track developments in the health sector. PMID:28588905

  6. Photo-z-SQL: Photometric redshift estimation framework

    NASA Astrophysics Data System (ADS)

    Beck, Róbert; Dobos, László; Budavári, Tamás; Szalay, Alexander S.; Csabai, István

    2017-04-01

    Photo-z-SQL is a flexible template-based photometric redshift estimation framework that can be seamlessly integrated into a SQL database (or DB) server and executed on demand in SQL. The DB integration eliminates the need to move large photometric datasets outside a database for redshift estimation, and uses the computational capabilities of DB hardware. Photo-z-SQL performs both maximum likelihood and Bayesian estimation and handles inputs of variable photometric filter sets and corresponding broad-band magnitudes.
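
    The core of template-based maximum-likelihood estimation can be sketched outside the database as a chi-square fit of redshifted template fluxes to observed photometry. The sketch below is plain Python rather than the package's SQL implementation, and the toy templates and measurements are invented.

        # Illustrative sketch only: chi-square fitting of template fluxes on a
        # redshift grid, with the template amplitude marginalized analytically.
        # Templates and photometry are made up for the example.
        def chi2(obs_flux, obs_err, model_flux):
            a = sum(o * m / e**2 for o, m, e in zip(obs_flux, model_flux, obs_err)) / \
                sum(m * m / e**2 for m, e in zip(model_flux, obs_err))
            return sum(((o - a * m) / e) ** 2 for o, m, e in zip(obs_flux, model_flux, obs_err))

        def best_photo_z(obs_flux, obs_err, templates, z_grid):
            best = (float("inf"), None, None)
            for z in z_grid:
                for name, template in templates.items():
                    model = template(z)                    # model fluxes at redshift z
                    best = min(best, (chi2(obs_flux, obs_err, model), z, name))
            return best

        if __name__ == "__main__":
            # Toy "templates": flux in three bands as a crude function of redshift.
            templates = {
                "blue": lambda z: [1.0, 0.8 + 0.5 * z, 0.6 + 0.9 * z],
                "red":  lambda z: [0.4, 0.7 + 0.3 * z, 1.0 + 1.2 * z],
            }
            obs, err = [0.41, 0.86, 1.62], [0.05, 0.05, 0.08]
            c2, z, name = best_photo_z(obs, err, templates, [i * 0.05 for i in range(40)])
            print(f"best template={name} z={z:.2f} chi2={c2:.2f}")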

  7. Modeling Criterion Shifts and Target Checking in Prospective Memory Monitoring

    ERIC Educational Resources Information Center

    Horn, Sebastian S.; Bayen, Ute J.

    2015-01-01

    Event-based prospective memory (PM) involves remembering to perform intended actions after a delay. An important theoretical issue is whether and how people monitor the environment to execute an intended action when a target event occurs. Performing a PM task often increases the latencies in ongoing tasks. However, little is known about the…

  8. MCCCD 2016 Monitoring Report: Governing Board Outcomes and Metrics. November 2016

    ERIC Educational Resources Information Center

    Maricopa Community Colleges, 2016

    2016-01-01

    This is the fifth annual Governing Board Monitoring Report that utilizes the Board outcome metrics adopted in 2010 to gauge institutional effectiveness. The Executive Summary focuses primarily on the 11 "Key Metrics." Some general highlights of the report include the following: (1) More students are successfully completing college-level…

  9. Automated water monitor system field demonstration test report. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Brooks, R. L.; Jeffers, E. L.; Perreira, J.; Poel, J. D.; Nibley, D.; Nuss, R. H.

    1981-01-01

    A system that performs water quality monitoring on-line and in real time, much as it would be done in a spacecraft, was developed and demonstrated. The system has the capability to determine conformance to high effluent quality standards and to increase the potential for reclamation and reuse of water.

  10. MANAGING TROUBLED WATERS: THE EVOLUTION OF THE EMAP COASTAL MONITORING PROGRAM 2001 EMAP SYMPOSIUM, APRIL 24-27, PENSACOLA BEACH, FL

    EPA Science Inventory

    In 1990, Managing Troubled Waters concluded by stating three primary conclusions and then developing specific recommendations regarding their execution. Using the decade of the 90s, we examine the evolution of the U.S. EPA's Environmental Monitoring and Assessment Program's Coast...

  11. Traffic-Light-Preemption Vehicle-Transponder Software Module

    NASA Technical Reports Server (NTRS)

    Bachelder, Aaron; Foster, Conrad

    2005-01-01

    A prototype wireless data-communication and control system automatically modifies the switching of traffic lights to give priority to emergency vehicles. The system, which was reported in several NASA Tech Briefs articles at earlier stages of development, includes a transponder on each emergency vehicle, a monitoring and control unit (an intersection controller) at each intersection equipped with traffic lights, and a central monitoring subsystem. An essential component of the system is a software module executed by a microcontroller in each transponder. This module integrates and broadcasts data on the position, velocity, acceleration, and emergency status of the vehicle. The position, velocity, and acceleration data are derived partly from the Global Positioning System, partly from deductive reckoning, and partly from a diagnostic computer aboard the vehicle. The software module also monitors similar broadcasts from other vehicles and from intersection controllers, informs the driver of which intersections it controls, and generates visible and audible alerts to inform the driver of any other emergency vehicles that are close enough to create a potential hazard. The execution of the software module can be monitored remotely, and the module can be upgraded remotely and, hence, automatically.
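
    As a rough illustration of the broadcast-and-alert behavior described above, the sketch below encodes a hypothetical status message and checks whether another emergency vehicle is within an assumed alert radius. The message fields, distance approximation, and 300 m radius are invented and are not taken from the actual module.

        # Illustrative sketch only: a hypothetical transponder status message and a
        # proximity check behind a driver alert. Fields and thresholds are invented.
        import json, math

        def make_broadcast(vehicle_id, lat, lon, speed_mps, accel_mps2, emergency):
            return json.dumps({"id": vehicle_id, "lat": lat, "lon": lon,
                               "speed": speed_mps, "accel": accel_mps2,
                               "emergency": emergency})

        def distance_m(lat1, lon1, lat2, lon2):
            """Equirectangular approximation, adequate for sub-kilometer separations."""
            k = 111_320.0                                  # meters per degree latitude
            dx = (lon2 - lon1) * k * math.cos(math.radians((lat1 + lat2) / 2))
            dy = (lat2 - lat1) * k
            return math.hypot(dx, dy)

        def hazard_alert(own, other, radius_m=300.0):
            return (other["emergency"] and other["id"] != own["id"]
                    and distance_m(own["lat"], own["lon"], other["lat"], other["lon"]) < radius_m)

        if __name__ == "__main__":
            me = json.loads(make_broadcast("medic-1", 34.2005, -118.3520, 22.0, 0.5, True))
            them = json.loads(make_broadcast("engine-7", 34.2010, -118.3508, 18.0, 0.0, True))
            print("alert" if hazard_alert(me, them) else "clear")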

  12. Atomoxetine effects on executive function as measured by the BRIEF-A in young adults with ADHD: a randomized, double-blind, placebo-controlled study.

    PubMed

    Adler, Lenard A; Clemow, David B; Williams, David W; Durell, Todd M

    2014-01-01

    To evaluate the effect of atomoxetine treatment on executive functions in young adults with attention-deficit/hyperactivity disorder (ADHD). In this Phase 4, multi-center, double-blind, placebo-controlled trial, young adults (18-30 years) with ADHD were randomized to receive atomoxetine (20-50 mg BID, N = 220) or placebo (N = 225) for 12 weeks. The Behavior Rating Inventory of Executive Function-Adult (BRIEF-A) consists of 75 self-report items within 9 nonoverlapping clinical scales measuring various aspects of executive functioning. Mean changes from baseline to 12-week endpoint on the BRIEF-A were analyzed using an ANCOVA model (terms: baseline score, treatment, and investigator). At baseline, there were no significant treatment group differences in the percentage of patients with BRIEF-A composite or index T-scores ≥60 (p>.5), with over 92% of patients having composite scores ≥60 (≥60 deemed clinically meaningful for these analyses). At endpoint, statistically significantly greater mean reductions were seen in the atomoxetine versus placebo group for the BRIEF-A Global Executive Composite (GEC), Behavioral Regulation Index (BRI), and Metacognitive Index (MI) scores, as well as the Inhibit, Self-Monitor, Working Memory, Plan/Organize and Task Monitor subscale scores (p<.05), with decreases in scores signifying improvements in executive functioning. Changes in the BRIEF-A Initiate (p = .051), Organization of Materials (p = .051), Shift (p = .090), and Emotional Control (p = .219) subscale scores were not statistically significant. In addition, the validity scales: Inconsistency (p = .644), Infrequency (p = .097), and Negativity (p = .456) were not statistically significant, showing scale validity. Statistically significantly greater improvement in executive function was observed in young adults with ADHD in the atomoxetine versus placebo group as measured by changes in the BRIEF-A scales. ClinicalTrials.gov NCT00510276.
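
    For readers who want to see the reported model structure spelled out, the sketch below fits an ANCOVA of the same form (change score on baseline score, treatment, and investigator) to small synthetic data using statsmodels. It reproduces only the model specification, not the trial's data or results.

        # Illustrative sketch only: an ANCOVA with baseline score, treatment, and
        # investigator as terms, run on synthetic data (not the trial's data).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 80
        df = pd.DataFrame({
            "baseline": rng.normal(70, 8, n),
            "treatment": rng.choice(["atomoxetine", "placebo"], n),
            "investigator": rng.choice(["site_a", "site_b", "site_c"], n),
        })
        # Synthetic endpoint change: larger reductions under active treatment.
        df["change"] = (-5.0 * (df["treatment"] == "atomoxetine")
                        - 0.2 * (df["baseline"] - 70) + rng.normal(0, 6, n))

        model = smf.ols("change ~ baseline + C(treatment) + C(investigator)", data=df).fit()
        print(model.summary().tables[1])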

  13. Atomoxetine Effects on Executive Function as Measured by the BRIEF-A in Young Adults with ADHD: A Randomized, Double-Blind, Placebo-Controlled Study

    PubMed Central

    Adler, Lenard A.; Clemow, David B.; Williams, David W.; Durell, Todd M.

    2014-01-01

    Objective To evaluate the effect of atomoxetine treatment on executive functions in young adults with attention-deficit/hyperactivity disorder (ADHD). Methods In this Phase 4, multi-center, double-blind, placebo-controlled trial, young adults (18–30 years) with ADHD were randomized to receive atomoxetine (20–50 mg BID, N = 220) or placebo (N = 225) for 12 weeks. The Behavior Rating Inventory of Executive Function-Adult (BRIEF-A) consists of 75 self-report items within 9 nonoverlapping clinical scales measuring various aspects of executive functioning. Mean changes from baseline to 12-week endpoint on the BRIEF-A were analyzed using an ANCOVA model (terms: baseline score, treatment, and investigator). Results At baseline, there were no significant treatment group differences in the percentage of patients with BRIEF-A composite or index T-scores ≥60 (p>.5), with over 92% of patients having composite scores ≥60 (≥60 deemed clinically meaningful for these analyses). At endpoint, statistically significantly greater mean reductions were seen in the atomoxetine versus placebo group for the BRIEF-A Global Executive Composite (GEC), Behavioral Regulation Index (BRI), and Metacognitive Index (MI) scores, as well as the Inhibit, Self-Monitor, Working Memory, Plan/Organize and Task Monitor subscale scores (p<.05), with decreases in scores signifying improvements in executive functioning. Changes in the BRIEF-A Initiate (p = .051), Organization of Materials (p = .051), Shift (p = .090), and Emotional Control (p = .219) subscale scores were not statistically significant. In addition, the validity scales: Inconsistency (p = .644), Infrequency (p = .097), and Negativity (p = .456) were not statistically significant, showing scale validity. Conclusion Statistically significantly greater improvement in executive function was observed in young adults with ADHD in the atomoxetine versus placebo group as measured by changes in the BRIEF-A scales. Trial Registration ClinicalTrials.gov NCT00510276 PMID:25148243

  14. Advanced Air Traffic Management System Study : Executive Summary

    DOT National Transportation Integrated Search

    1975-01-01

    This report summarizes the U.S. Department of Transportation study and development plans for the air traffic management system of the late 1980's and beyond. The plans are presented in the framework of an evolutionary system concept of traffic manage...

  15. Final analysis of cost, value and risk : executive summary.

    DOT National Transportation Integrated Search

    2009-03-05

    The U.S. Department of Transportation (USDOT) has taken a leadership position in assessing : Next Generation 9-1-1 (NG9-1-1) technologies and the development of a framework for national : deployment. USDOT understands that access to emergency service...

  16. 5 CFR 250.202 - Office of Personnel Management responsibilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Governmentwide leadership and direction in the strategic management of the Federal workforce. (b) To execute this critical leadership responsibility, OPM adopts the Human Capital Assessment and Accountability Framework... agency's mission-critical occupations; ensuring leadership continuity through the implementation of...

  17. 5 CFR 430.303 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., including customers and employees. Critical element means a key component of an executive's work that.... Performance management system means the framework of policies and practices that an agency establishes under..., developing, evaluating, and rewarding both individual and organizational performance and for using resulting...

  18. 5 CFR 430.303 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., including customers and employees. Critical element means a key component of an executive's work that.... Performance management system means the framework of policies and practices that an agency establishes under..., developing, evaluating, and rewarding both individual and organizational performance and for using resulting...

  19. 5 CFR 430.303 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., including customers and employees. Critical element means a key component of an executive's work that.... Performance management system means the framework of policies and practices that an agency establishes under..., developing, evaluating, and rewarding both individual and organizational performance and for using resulting...

  20. 5 CFR 250.202 - Office of Personnel Management responsibilities.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Governmentwide leadership and direction in the strategic management of the Federal workforce. (b) To execute this critical leadership responsibility, OPM adopts the Human Capital Assessment and Accountability Framework... agency's mission-critical occupations; ensuring leadership continuity through the implementation of...

  1. 5 CFR 250.202 - Office of Personnel Management responsibilities.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Governmentwide leadership and direction in the strategic management of the Federal workforce. (b) To execute this critical leadership responsibility, OPM adopts the Human Capital Assessment and Accountability Framework... agency's mission-critical occupations; ensuring leadership continuity through the implementation of...

  2. 5 CFR 250.202 - Office of Personnel Management responsibilities.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Governmentwide leadership and direction in the strategic management of the Federal workforce. (b) To execute this critical leadership responsibility, OPM adopts the Human Capital Assessment and Accountability Framework... agency's mission-critical occupations; ensuring leadership continuity through the implementation of...

  3. 5 CFR 250.202 - Office of Personnel Management responsibilities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Governmentwide leadership and direction in the strategic management of the Federal workforce. (b) To execute this critical leadership responsibility, OPM adopts the Human Capital Assessment and Accountability Framework... agency's mission-critical occupations; ensuring leadership continuity through the implementation of...

  4. Automatic Human Movement Assessment With Switching Linear Dynamic System: Motion Segmentation and Motor Performance.

    PubMed

    de Souza Baptista, Roberto; Bo, Antonio P L; Hayashibe, Mitsuhiro

    2017-06-01

    Performance assessment of human movement is critical in diagnosis and motor-control rehabilitation. Recent developments in portable sensor technology enable clinicians to measure spatiotemporal aspects of movement to aid in neurological assessment. However, the extraction of quantitative information from such measurements is usually done manually through visual inspection. This paper presents a novel framework for automatic human movement assessment that performs segmentation and motor performance parameter extraction in time series of measurements from a sequence of human movements. We use the elements of a Switching Linear Dynamic System model as building blocks to translate formal definitions and procedures from human movement analysis. Our approach provides a method for users with no expertise in signal processing to create models for movements using a labeled dataset and later use them for automatic assessment. We validated our framework in preliminary tests involving six healthy adult subjects, who executed common movements from functional tests and rehabilitation exercise sessions, such as sit-to-stand and lateral elevation of the arms, and five elderly subjects, two of them with limited mobility, who executed the sit-to-stand movement. The proposed method worked on random motion sequences for the dual purpose of movement segmentation (accuracy of 72%-100%) and motor performance assessment (mean error of 0%-12%).
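
    A heavily simplified sketch of the switching idea: each sample of a velocity trace is assigned to a rest or movement regime by comparing Gaussian likelihoods, and contiguous movement samples become segments from which a crude performance parameter (duration) is read off. A real Switching Linear Dynamic System infers the switching sequence jointly with the linear dynamics; the regimes and numbers below are invented.

        # Illustrative sketch only: a two-regime "switching" segmenter for a
        # velocity trace. All means, variances, and data are invented.
        import math

        def gaussian_loglik(x, mean, var):
            return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

        def segment(velocity, rest=(0.0, 0.01), move=(0.6, 0.08)):
            labels = ["move" if gaussian_loglik(v, *move) > gaussian_loglik(v, *rest) else "rest"
                      for v in velocity]
            segments, start = [], None
            for i, lab in enumerate(labels + ["rest"]):       # sentinel closes a trailing segment
                if lab == "move" and start is None:
                    start = i
                elif lab != "move" and start is not None:
                    segments.append((start, i - 1))
                    start = None
            return segments

        if __name__ == "__main__":
            trace = [0.02, 0.01, 0.45, 0.7, 0.8, 0.5, 0.03, 0.02, 0.6, 0.75, 0.05]
            for s, e in segment(trace):
                duration = e - s + 1                          # crude motor-performance parameter
                print(f"movement from sample {s} to {e} (duration {duration} samples)")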

  5. Ubiquitous UAVs: a cloud based framework for storing, accessing and processing huge amount of video footage in an efficient way

    NASA Astrophysics Data System (ADS)

    Efstathiou, Nectarios; Skitsas, Michael; Psaroudakis, Chrysostomos; Koutras, Nikolaos

    2017-09-01

    Nowadays, video surveillance cameras are used for the protection and monitoring of a huge number of facilities worldwide. An important element in such surveillance systems is the use of aerial video streams originating from onboard sensors located on Unmanned Aerial Vehicles (UAVs). Video surveillance using UAVs produces a vast amount of video to be transmitted, stored, analyzed, and visualized in real time. As a result, the introduction and development of systems able to handle huge amounts of data become a necessity. In this paper, a new approach for the collection, transmission, and storage of aerial videos and metadata is introduced. The objective of this work is twofold: first, the integration of the appropriate equipment in order to capture and transmit real-time video, including metadata (i.e. position coordinates, target), from the UAV to the ground and, second, the utilization of the ADITESS Versatile Media Content Management System (VMCMS-GE) for storing the video stream and the appropriate metadata. Beyond storage, VMCMS-GE provides other efficient management capabilities, such as searching and processing of videos, along with video transcoding. For the evaluation and demonstration of the proposed framework, we execute a use case in which surveillance of critical infrastructure and detection of suspicious activities are performed. Transcoding of the collected video is also a subject of this evaluation.

  6. Augmenting team cognition in human-automation teams performing in complex operational environments.

    PubMed

    Cuevas, Haydee M; Fiore, Stephen M; Caldwell, Barrett S; Strater, Laura

    2007-05-01

    There is a growing reliance on automation (e.g., intelligent agents, semi-autonomous robotic systems) to effectively execute increasingly cognitively complex tasks. Successful team performance for such tasks has become even more dependent on team cognition, addressing both human-human and human-automation teams. Team cognition can be viewed as the binding mechanism that produces coordinated behavior within experienced teams, emerging from the interplay between each team member's individual cognition and team process behaviors (e.g., coordination, communication). In order to better understand team cognition in human-automation teams, team performance models need to address issues surrounding the effect of human-agent and human-robot interaction on critical team processes such as coordination and communication. Toward this end, we present a preliminary theoretical framework illustrating how the design and implementation of automation technology may influence team cognition and team coordination in complex operational environments. Integrating constructs from organizational and cognitive science, our proposed framework outlines how information exchange and updating between humans and automation technology may affect lower-level (e.g., working memory) and higher-level (e.g., sense making) cognitive processes as well as teams' higher-order "metacognitive" processes (e.g., performance monitoring). Issues surrounding human-automation interaction are discussed and implications are presented within the context of designing automation technology to improve task performance in human-automation teams.

  7. Analysis of data characterizing tide and current fluxes in coastal basins

    NASA Astrophysics Data System (ADS)

    Armenio, Elvira; De Serio, Francesca; Mossa, Michele

    2017-07-01

    Many coastal monitoring programmes have been carried out to investigate in situ hydrodynamic patterns and correlated physical processes, such as sediment transport or the spreading of pollutants. The key point is the need to transform the growing amount of data provided by marine sensors into information for users. The present paper aims to show that it is possible to recognize the recurring and typical hydrodynamic processes of a coastal basin by conveniently processing selected marine field data. The illustrated framework is made up of two steps. First, a sequence of analyses with classic, computationally inexpensive methods was executed in both the time and frequency domains on detailed field measurements of waves, tides, and currents. After this, some indicators of the hydrodynamic state of the basin were identified and evaluated. Namely, the assessment of the net flow through a connecting channel, the time delay of current peaks between the upper and bottom layers, the ratio of peak ebb to peak flood currents, and the tidal asymmetry factor exemplify results on the vertical structure of the flow, on the correlation between currents and tide, and on flood/ebb dominance. To demonstrate how this simple and generic framework can be applied, a case study is presented, referring to Mar Piccolo, a shallow-water basin located in the inner part of the Ionian Sea (southern Italy).
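
    Toy versions of the four indicators named above can be computed from an along-channel velocity record as follows (positive values taken as flood, negative as ebb). The synthetic signal, sampling interval, cross-section area, and the flood/ebb duration ratio used as the asymmetry measure are all assumptions for the example, not the paper's definitions.

        # Illustrative sketch only: simplified hydrodynamic indicators from a
        # synthetic along-channel velocity record. All values are invented.
        import math

        dt_s = 600.0                                   # 10-minute samples (assumed)
        n = 6 * 24                                     # one day of samples
        surface = [0.8 * math.sin(2 * math.pi * i * dt_s / 44712.0) for i in range(n)]   # M2-like
        bottom  = [0.5 * math.sin(2 * math.pi * (i * dt_s - 1800.0) / 44712.0) for i in range(n)]

        area_m2 = 400.0                                # assumed channel cross-section
        net_flow = sum(surface) / len(surface) * area_m2               # residual transport, m^3/s

        lag_samples = surface.index(max(surface)) - bottom.index(max(bottom))
        peak_delay_min = lag_samples * dt_s / 60.0                     # surface vs bottom peak lag

        peak_flood = max(surface)
        peak_ebb = -min(surface)
        peak_ratio = peak_flood / peak_ebb

        flood_time = sum(1 for v in surface if v > 0) * dt_s
        ebb_time = sum(1 for v in surface if v < 0) * dt_s
        asymmetry = flood_time / ebb_time                              # 1.0 means a symmetric tide

        print(f"net flow {net_flow:.1f} m^3/s, peak delay {peak_delay_min:.0f} min, "
              f"flood/ebb peak ratio {peak_ratio:.2f}, asymmetry {asymmetry:.2f}")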

  8. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1991-01-01

    Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run-time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing, and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indexes can have a significant impact on performance.
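
    The inspector/executor idea can be shown in miniature for a loop of the form a[i] = f(a[src[i]]), where the index array src is known only at run time. The inspector below derives a wavefront level for each iteration from the flow dependences; the executor then runs one wavefront at a time, and all iterations within a wavefront are mutually independent and could be dispatched to parallel threads. The data and function are invented for the example.

        # Illustrative sketch only: inspector/executor wavefront scheduling for a
        # loop with run-time indirection, a[i] = f(a[src[i]]).
        def inspector(src):
            """Assign each iteration a wavefront level based on flow dependences."""
            level = [0] * len(src)
            for i, s in enumerate(src):
                if s < i:                       # iteration i reads a value written by iteration s
                    level[i] = level[s] + 1
            waves = {}
            for i, lvl in enumerate(level):
                waves.setdefault(lvl, []).append(i)
            return [waves[l] for l in sorted(waves)]

        def executor(a, src, waves, f):
            """Run wavefronts in order; within one wavefront, gather reads then write."""
            for wave in waves:
                reads = {i: a[src[i]] for i in wave}     # independent: safe to do in parallel
                for i in wave:
                    a[i] = f(reads[i])
            return a

        if __name__ == "__main__":
            src = [0, 0, 1, 5, 2, 4]                     # run-time indirection array
            a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
            waves = inspector(src)
            print("wavefronts:", waves)
            print("result:", executor(a, src, waves, lambda x: x + 10.0))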

  9. Model Checking Abstract PLEXIL Programs with SMART

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.

    2007-01-01

    We describe a method to automatically generate discrete-state models of abstract Plan Execution Interchange Language (PLEXIL) programs that can be analyzed using model checking tools. Starting from a high-level description of a PLEXIL program or a family of programs with common characteristics, the generator lays the framework that models the principles of program execution. The concrete parts of the program are not automatically generated, but require the modeler to introduce them by hand. As a case study, we generate models to verify properties of the PLEXIL macro constructs that are introduced as shorthand notation. After an exhaustive analysis, we conclude that the macro definitions obey the intended semantics and behave as expected, but contingently on a few specific requirements on the timing semantics of micro-steps in the concrete executive implementation.

  10. National protocol framework for the inventory and monitoring of bees

    USGS Publications Warehouse

    Droege, Sam; Engler, Joseph D.; Sellers, Elizabeth A.; Lee O'Brien,

    2016-01-01

    This national protocol framework is a standardized tool for the inventory and monitoring of the approximately 4,200 species of native and non-native bee species that may be found within the National Wildlife Refuge System (NWRS) administered by the U.S. Fish and Wildlife Service (USFWS). However, this protocol framework may also be used by other organizations and individuals to monitor bees in any given habitat or location. Our goal is to provide USFWS stations within the NWRS (NWRS stations are land units managed by the USFWS such as national wildlife refuges, national fish hatcheries, wetland management districts, conservation areas, leased lands, etc.) with techniques for developing an initial baseline inventory of what bee species are present on their lands and to provide an inexpensive, simple technique for monitoring bees continuously and for monitoring and evaluating long-term population trends and management impacts. The latter long-term monitoring technique requires a minimal time burden for the individual station, yet can provide a good statistical sample of changing populations that can be investigated at the station, regional, and national levels within the USFWS’ jurisdiction, and compared to other sites within the United States and Canada. This protocol framework was developed in cooperation with the United States Geological Survey (USGS), the USFWS, and a worldwide network of bee researchers who have investigated the techniques and methods for capturing bees and tracking population changes. The protocol framework evolved from field and lab-based investigations at the USGS Bee Inventory and Monitoring Laboratory at the Patuxent Wildlife Research Center in Beltsville, Maryland starting in 2002 and was refined by a large number of USFWS, academic, and state groups. It includes a Protocol Introduction and a set of 8 Standard Operating Procedures or SOPs and adheres to national standards of protocol content and organization. The Protocol Narrative describes the history and need for the protocol framework and summarizes the basic elements of objectives, sampling design, field methods, training, data management, analysis, and reporting. The SOPs provide more detail and specific instructions for implementing the protocol framework. A central database, for managing all the resulting data is under development. We welcome use of this protocol framework by our partners, as appropriate for their bee inventory and monitoring objectives.

  11. Imitation and observational learning of hand actions: prefrontal involvement and connectivity.

    PubMed

    Higuchi, S; Holle, H; Roberts, N; Eickhoff, S B; Vogt, S

    2012-01-16

    The first aim of this event-related fMRI study was to identify the neural circuits involved in imitation learning. We used a rapid imitation task where participants directly imitated pictures of guitar chords. The results provide clear evidence for the involvement of dorsolateral prefrontal cortex, as well as the fronto-parietal mirror circuit (FPMC) during action imitation when the requirements for working memory are low. Connectivity analyses further indicated a robust connectivity between left prefrontal cortex and the components of the FPMC bilaterally. We conclude that a mechanism of automatic perception-action matching alone is insufficient to account for imitation learning. Rather, the motor representation of an observed, complex action, as provided by the FPMC, only serves as the 'raw material' for higher-order supervisory and monitoring operations associated with the prefrontal cortex. The second aim of this study was to assess whether these neural circuits are also recruited during observational practice (OP, without motor execution), or only during physical practice (PP). Whereas prefrontal cortex was not consistently activated in action observation across all participants, prefrontal activation intensities did predict the behavioural practice effects, thus indicating a crucial role of prefrontal cortex also in OP. In addition, whilst OP and PP produced similar activation intensities in the FPMC when assessed during action observation, during imitative execution, the practice-related activation decreases were significantly more pronounced for PP than for OP. This dissociation indicates a lack of execution-related resources in observationally practised actions. More specifically, we found neural efficiency effects in the right motor cingulate-basal ganglia circuit and the FPMC that were only observed after PP but not after OP. Finally, we confirmed that practice generally induced activation decreases in the FPMC during both action observation and imitation sessions and outline a framework explaining the discrepant findings in the literature. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Controller's role in monitoring prospective payment system.

    PubMed

    Margrif, F D

    1986-05-01

    The challenge for hospital controllers in overseeing the prospective payment system (PPS) lies not in acquiring technical expertise but in working with the chief executive officer to coordinate organizational change. Specifically, the controller should assist in creating a prospective payment committee (PPC)--an interdisciplinary group of executives, middle managers, and medical staff. The PPC's duties include, among others, educating staff about the PPS, developing a productivity reporting system, and reviewing the responsibility accounting structure.

  13. The Monitoring, Detection, Isolation and Assessment of Information Warfare Attacks Through Multi-Level, Multi-Scale System Modeling and Model Based Technology

    DTIC Science & Technology

    2004-01-01

    login identity to the one under which the system call is executed, the parameters of the system call execution - file names including full path...Anomaly detection COAST-EIMDT Distributed on target hosts EMERALD Distributed on target hosts and security servers Signature recognition Anomaly...uses a centralized architecture, and employs an anomaly detection technique for intrusion detection. The EMERALD project [80] proposes a

  14. A Socio-Economic and Environmental Information Needs Knowledge Base (SEE-IN KB) in Support of SDG Implementation and Monitoring

    NASA Astrophysics Data System (ADS)

    Plag, H. P.; Jules-Plag, S.

    2016-12-01

    The UN Agenda 2030 has seventeen Sustainable Development Goals (SDGs) to be reached by 2030, which are detailed in 170 Targets. A monitoring framework of 240 SDG Indicators provides the metrics to measure progress towards these targets. The SDG Indicators are report cards for the progress towards the targets and a measure to assess potential impacts of policies and other means in support of SDG implementation. The Socio-Economic and Environmental Information Needs Knowledge Base (SEE-IN KB) collects information on objects such as user types, applications, observational requirements, a number of needs, societal goals and targets, indicators and indices, models, services, and datasets, as well as the interconnections between these objects, including links to Essential Variables (EVs). This enables gap analyses, prioritizations of Earth observations, and discovery of products and services meeting the information needs. "What if?" questions support knowledge creation that informs the development of policies and activities to make progress towards the SDGs. Increasingly, user types, applications, and requirements are linked to actual persons, models, and datasets, respectively, which allows both the social networking of providers and users and the execution of business processes. A core function of the SEE-IN KB is to facilitate the linkage of societal goals, targets, and indicators to EVs that need to be monitored in order to measure progress towards the targets. Applying the goal-based approach used to identify EVs to the SDG Indicators revealed that some SDG Indicators require traditional Earth observations for quantification, while many of the EVs are related to the built environment. For many of the SDG Indicators, integration of socio-economic statistical data with environmental data, including in situ observations, is important. The goal-based approach was also applied to the SDG Targets, and this analysis showed that many of the Targets would benefit from additional indicators that are directly related to the environment. Many of the more environmentally focused indicators would require in situ data for quantification. A revision of the monitoring framework could take these findings into account and address the linkage of the socio-economic and environmental aspects reflected in the SDGs.

  15. Risk intelligence: making profit from uncertainty in data processing system.

    PubMed

    Zheng, Si; Liao, Xiangke; Liu, Xiaodong

    2014-01-01

    In extreme-scale data processing systems, fault tolerance is an essential and indispensable part. Proactive fault tolerance schemes (such as speculative execution in the MapReduce framework) are introduced to dramatically improve the response time of job executions when failure becomes the norm rather than the exception. Efficient proactive fault tolerance schemes require precise knowledge of task executions, which has been an open challenge for decades. To address this issue, in this paper we design and implement RiskI, a profile-based prediction algorithm in conjunction with a risk-aware task assignment algorithm, to accelerate task executions while taking the uncertain nature of tasks into account. Our design demonstrates that this inherent uncertainty brings not only great challenges but also new opportunities: with a careful design, we can benefit from such uncertainties. We implement the idea in Hadoop 0.21.0, and the experimental results show that, compared with the traditional LATE algorithm, the response time can be improved by 46% with the same system throughput.
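    The abstract combines profile-based runtime prediction with risk-aware speculative task assignment. The sketch below is a hypothetical illustration of that idea, not the paper's RiskI code: the task names, cost parameter, and profile-based runtime estimate are all assumptions, and a backup copy is launched only when its expected saving outweighs the cost of the duplicate.

    ```python
    # Hypothetical sketch of risk-aware speculative execution (not the paper's RiskI code).
    # A backup copy of a straggling task is launched only when the expected time saved
    # on another node outweighs the cost of running the duplicate.
    from dataclasses import dataclass

    @dataclass
    class Task:
        task_id: str
        elapsed: float          # seconds since the task started
        predicted_total: float  # profile-based prediction of total runtime (assumed given)

    def expected_remaining(task: Task) -> float:
        """Remaining time if the task keeps running where it is."""
        return max(task.predicted_total - task.elapsed, 0.0)

    def backup_worthwhile(task: Task, backup_runtime: float, duplicate_cost: float) -> bool:
        """Speculate only if the expected saving beats the cost of the duplicate."""
        return (expected_remaining(task) - backup_runtime) > duplicate_cost

    def pick_speculative_tasks(tasks, backup_runtime_of, duplicate_cost=5.0):
        """Return the tasks worth duplicating, slowest-looking first."""
        chosen = [t for t in tasks
                  if backup_worthwhile(t, backup_runtime_of(t), duplicate_cost)]
        return sorted(chosen, key=expected_remaining, reverse=True)

    if __name__ == "__main__":
        tasks = [Task("map_001", 120.0, 600.0), Task("map_002", 110.0, 120.0)]
        print([t.task_id for t in pick_speculative_tasks(tasks, lambda t: 90.0)])
    ```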

  16. Automatic-heuristic and executive-analytic processing during reasoning: Chronometric and dual-task considerations.

    PubMed

    De Neys, Wim

    2006-06-01

    Human reasoning has been shown to overly rely on intuitive, heuristic processing instead of a more demanding analytic inference process. Four experiments tested the central claim of current dual-process theories that analytic operations involve time-consuming executive processing whereas the heuristic system would operate automatically. Participants solved conjunction fallacy problems and indicative and deontic selection tasks. Experiment 1 established that making correct analytic inferences demanded more processing time than did making heuristic inferences. Experiment 2 showed that burdening the executive resources with an attention-demanding secondary task decreased correct, analytic responding and boosted the rate of conjunction fallacies and indicative matching card selections. Results were replicated in Experiments 3 and 4 with a different secondary-task procedure. Involvement of executive resources for the deontic selection task was less clear. Findings validate basic processing assumptions of the dual-process framework and complete the correlational research programme of K. E. Stanovich and R. F. West (2000).

  17. Risk Intelligence: Making Profit from Uncertainty in Data Processing System

    PubMed Central

    Liao, Xiangke; Liu, Xiaodong

    2014-01-01

    In extreme-scale data processing systems, fault tolerance is an essential and indispensable part. Proactive fault tolerance schemes (such as speculative execution in the MapReduce framework) are introduced to dramatically improve the response time of job executions when failure becomes the norm rather than the exception. Efficient proactive fault tolerance schemes require precise knowledge of task executions, which has been an open challenge for decades. To address this issue, in this paper we design and implement RiskI, a profile-based prediction algorithm in conjunction with a risk-aware task assignment algorithm, to accelerate task executions while taking the uncertain nature of tasks into account. Our design demonstrates that this inherent uncertainty brings not only great challenges but also new opportunities: with a careful design, we can benefit from such uncertainties. We implement the idea in Hadoop 0.21.0, and the experimental results show that, compared with the traditional LATE algorithm, the response time can be improved by 46% with the same system throughput. PMID:24883392

  18. Self-unawareness of levodopa induced dyskinesias in patients with Parkinson's disease.

    PubMed

    Amanzio, Martina; Palermo, Sara; Zibetti, Maurizio; Leotta, Daniela; Rosato, Rosalba; Geminiani, Giuliano; Lopiano, Leonardo

    2014-10-01

    The study analyzes the presence of dyskinesias-reduced-self-awareness in forty-eight patients suffering from Parkinson's disease (PD). As the association with executive dysfunction is a matter of debate and we hypothesize it plays an important role in dyskinesias self-unawareness, we analyzed the role of dopaminergic treatment on the medial-prefrontal-ventral-striatal circuitry using a neurocognitive approach. Special attention was given to metacognitive abilities related to action-monitoring that represent a novel explanation of the phenomenon. PD patients were assessed using different rating scales that we devised to measure movement awareness disorders. In order to ascertain whether each variable measured at a cognitive-clinical level contributes to predicting the scores of the movement-disorder-awareness-scales, we conducted multiple logistic regression models using the latter as binary dependent variables. We used the Wisconsin Card Sorting Test-metacognitive-version to assess the executive functions of the prefrontal-ventral-striatal circuitry. Data showed that a reduction of self-awareness using the Dyskinesia rating scale was associated with global monitoring (p=.04), monitoring resolution (p=.04) and control sensitivity (p=.04). Patients failed to perceive their performance, distinguish between correct and incorrect sorts, be confident in their choice and consequently decide to gamble during the task. We did not find any association with executive functions using the hypo-bradykinesia rating scale. Our findings indicate that when the comparator mechanism for monitoring attentive performance is compromised at a prefrontal striatal level, patients lose the ability to recognize their motor disturbances that do not achieve conscious awareness. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Longitudinal Associations Between Parental Bonding, Parenting Stress, and Executive Functioning in Toddlerhood.

    PubMed

    de Cock, Evi S A; Henrichs, Jens; Klimstra, Theo A; Janneke B M Maas, A; Vreeswijk, Charlotte M J M; Meeus, Wim H J; van Bakel, Hedwig J A

    2017-01-01

    Early executive functioning is an important predictor for future development of children's cognitive skills and behavioral outcomes. Parenting behavior has proven to be a key environmental determinant of child executive functioning. However, the association of parental affect and cognitions directed to the child with child executive functioning has been understudied. Therefore, in the present study we examine the associations between parental bonding (i.e., the affective tie from parent to child), parenting stress, and child executive functioning. At 26 weeks of pregnancy, and at 6 months and 24 months postpartum the quality of the maternal (N = 335) and paternal (N = 261) bond with the infant was assessed. At 24 months, postnatal parenting stress and child executive functioning were measured by means of parent-report questionnaires. Results indicated that for both mothers and fathers feelings of bonding negatively predicted experienced parenting stress over time. In addition, for both parents a negative indirect effect of bonding on child executive functioning problems was found via experienced parenting stress. These findings indicate the importance of monitoring parents who experience a low level and quality of early parent-child bonding, as this makes them vulnerable to parenting stress, consequently putting their children at risk for developing executive functioning problems.

  20. An Integrating Framework for Interdisciplinary Military Analyses

    DTIC Science & Technology

    2017-04-01

    Effectiveness, System Performance, Task Prosecution, War Gaming ... and space for every play of the game. Called plays can be compared to collective tasks with each player responsible for executing one or more

  1. Fenix, A Fault Tolerant Programming Framework for MPI Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamel, Marc; Teranihi, Keita; Valenzuela, Eric

    2016-10-05

    Fenix provides APIs to allow the users to add fault tolerance capability to MPI-based parallel programs in a transparent manner. Fenix-enabled programs can run through process failures during program execution using a pool of spare processes accommodated by Fenix.

  2. A Framework for Global Electronic Commerce: An Executive Summary.

    ERIC Educational Resources Information Center

    Office of the Press Secretary of the White House

    1997-01-01

    An abbreviated version of a longer policy document on electronic commerce released by the Clinton Administration, this article examines principles and recommendations on tariffs, taxes, electronic payment systems, uniform commercial code for electronic commerce, intellectual property protection, privacy, security, telecommunications infrastructure…

  3. Online SVT Commissioning and Monitoring using a Service-Oriented Architecture Framework

    NASA Astrophysics Data System (ADS)

    Ruger, Justin; Gotra, Yuri; Weygand, Dennis; Ziegler, Veronique; Heddle, David; Gore, David

    2014-03-01

    Silicon Vertex Tracker detectors are devices used in high energy experiments for precision measurement of charged tracks close to the collision point. Early detection of faulty hardware is essential, and so is the development of monitoring and commissioning software. The computing framework for the CLAS12 experiment at Jefferson Lab is a service-oriented architecture that allows efficient data-flow from one service to another through loose coupling. I will present the strategy and development of services for the CLAS12 Silicon Tracker data monitoring and commissioning within this framework, as well as preliminary results using test data.

  4. [Prematurity: longitudinal analysis of executive functions].

    PubMed

    Sastre-Riba, S

    2009-02-27

    Understanding cognitive development requires an interdisciplinary, neuropsychological approach. Executive functions facilitate cognitive activity and are related to progressive cerebral configuration during pregnancy and infancy. One of the aims of current neuropsychology is the ontogeny of executive functions and their capacity to explain differential and normative developmental trends, especially because of their consequences for mental flexibility, monitoring, planning, and cognitive control; they are also essential for good performance at school. Developmental risk factors such as prematurity could affect long-term executive functioning, expressed in learning difficulties or problems with behavioral control. We studied, comparatively and longitudinally, the individual activity on objects displayed by typically developing babies (n = 25) and preterm babies (n = 10) from 1.5 to 2 years old. Applying systematic observational methodology, the babies' spontaneous activity was recorded. Intra- and inter-group analyses compared the data from the resolution of a non-verbal task through a multifaceted design. The results show a differential pattern of early executive functioning between the groups studied, as well as the growth of executive functioning across the ages studied for each group.

  5. Model Based Analysis and Test Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines Math- Works and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  6. ControlShell - A real-time software framework

    NASA Technical Reports Server (NTRS)

    Schneider, Stanley A.; Ullman, Marc A.; Chen, Vincent W.

    1991-01-01

    ControlShell is designed to enable modular design and implementation of real-time software. It is an object-oriented tool-set for real-time software system programming. It provides a series of execution and data interchange mechanisms that form a framework for building real-time applications. These mechanisms allow a component-based approach to real-time software generation and management. By defining a set of interface specifications for intermodule interaction, ControlShell provides a common platform that is the basis for real-time code development and exchange.

  7. A National Level Engagement Strategy: A Framework for Action

    DTIC Science & Technology

    2012-05-15

    engagement framework that integrates all instruments of national power to focus increasingly limited resources to meet the most significant national...and responsibilities to implement and execute the national engagement

  8. A Security Architecture Based on Trust Management for Pervasive Computing Systems

    DTIC Science & Technology

    2005-01-01

    SmartSpace framework, we extended the C2 [16] architecture, which in turn is based on the Centaurus [10] model. In Centaurus a Client can access...the services provided by the nearest Centaurus Service Manager (SM) via some short-range communication. The SM acts as an active proxy by executing...In the Centaurus project [10], the main design goal is the development of a framework for building portals to services using various types of

  9. A reusable rocket engine intelligent control

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Lorenzo, Carl F.

    1988-01-01

    An intelligent control system for reusable space propulsion systems for future launch vehicles is described. The system description includes a framework for the design. The framework consists of an execution level with high-speed control and diagnostics, and a coordination level which marries expert system concepts with traditional control. A comparison is made between air breathing and rocket engine control concepts to assess the relative levels of development and to determine the applicability of air breathing control concepts to future reusable rocket engine systems.

  10. A reusable rocket engine intelligent control

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Lorenzo, Carl F.

    1988-01-01

    An intelligent control system for reusable space propulsion systems for future launch vehicles is described. The system description includes a framework for the design. The framework consists of an execution level with high-speed control and diagnostics, and a coordination level which marries expert system concepts with traditional control. A comparison is made between air breathing and rocket engine control concepts to assess the relative levels of development and to determine the applicability of air breathing control concepts to future reusable rocket engine systems.

  11. Engineering Analysis Using a Web-based Protocol

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.; Claus, Russell W.

    2002-01-01

    This paper reviews the development of a web-based framework for engineering analysis. A one-dimensional, high-speed analysis code called LAPIN was used in this study, but the approach can be generalized to any engineering analysis tool. The web-based framework enables users to store, retrieve, and execute an engineering analysis from a standard web-browser. We review the encapsulation of the engineering data into the eXtensible Markup Language (XML) and various design considerations in the storage and retrieval of application data.
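    Since the record centers on encapsulating engineering data in XML for transport between a browser and the analysis code, a minimal sketch may help. The element and attribute names below are invented for illustration and are not LAPIN's actual schema.

    ```python
    # Hypothetical sketch of wrapping analysis inputs in XML for web transport.
    # Element/attribute names are illustrative, not LAPIN's actual schema.
    import xml.etree.ElementTree as ET

    def build_case_xml(case_name: str, inputs: dict) -> str:
        """Serialize {name: (value, units)} into a small XML document."""
        root = ET.Element("analysisCase", name=case_name)
        inputs_el = ET.SubElement(root, "inputs")
        for name, (value, units) in inputs.items():
            ET.SubElement(inputs_el, "parameter", name=name, units=units).text = str(value)
        return ET.tostring(root, encoding="unicode")

    def parse_case_xml(xml_text: str) -> dict:
        """Recover {name: value} from the XML produced above."""
        root = ET.fromstring(xml_text)
        return {p.get("name"): float(p.text) for p in root.find("inputs")}

    if __name__ == "__main__":
        doc = build_case_xml("inlet_study_01", {"mach": (2.5, "nd"), "altitude": (18000.0, "m")})
        print(doc)
        print(parse_case_xml(doc))
    ```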

  12. Cortico-basal ganglia networks subserving goal-directed behavior mediated by conditional visuo-goal association

    PubMed Central

    Hoshi, Eiji

    2013-01-01

    Action is often executed according to information provided by a visual signal. As this type of behavior integrates two distinct neural representations, perception and action, it has been thought that identification of the neural mechanisms underlying this process will yield deeper insights into the principles underpinning goal-directed behavior. Based on a framework derived from conditional visuomotor association, prior studies have identified neural mechanisms in the dorsal premotor cortex (PMd), dorsolateral prefrontal cortex (dlPFC), ventrolateral prefrontal cortex (vlPFC), and basal ganglia (BG). However, applications resting solely on this conceptualization encounter problems related to generalization and flexibility, essential processes in executive function, because the association mode involves a direct one-to-one mapping of each visual signal onto a particular action. To overcome this problem, we extend this conceptualization and postulate a more general framework, conditional visuo-goal association. According to this new framework, the visual signal identifies an abstract behavioral goal, and an action is subsequently selected and executed to meet this goal. Neuronal activity recorded from the four key areas of the brains of monkeys performing a task involving conditional visuo-goal association revealed three major mechanisms underlying this process. First, visual-object signals are represented primarily in the vlPFC and BG. Second, all four areas are involved in initially determining the goals based on the visual signals, with the PMd and dlPFC playing major roles in maintaining the salience of the goals. Third, the cortical areas play major roles in specifying action, whereas the role of the BG in this process is restrictive. These new lines of evidence reveal that the four areas involved in conditional visuomotor association contribute to goal-directed behavior mediated by conditional visuo-goal association in an area-dependent manner. PMID:24155692

  13. Reconfigurable Model Execution in the OpenMDAO Framework

    NASA Technical Reports Server (NTRS)

    Hwang, John T.

    2017-01-01

    NASA's OpenMDAO framework facilitates constructing complex models and computing their derivatives for multidisciplinary design optimization. Decomposing a model into components that follow a prescribed interface enables OpenMDAO to assemble multidisciplinary derivatives from the component derivatives using what amounts to the adjoint method, direct method, chain rule, global sensitivity equations, or any combination thereof, using the MAUD architecture. OpenMDAO also handles the distribution of processors among the disciplines by hierarchically grouping the components, and it automates the data transfer between components that are on different processors. These features have made OpenMDAO useful for applications in aircraft design, satellite design, wind turbine design, and aircraft engine design, among others. This paper presents new algorithms for OpenMDAO that enable reconfigurable model execution. This concept refers to dynamically changing, during execution, one or more of: the variable sizes, solution algorithm, parallel load balancing, or set of variables, i.e., adding and removing components, perhaps to switch to a higher-fidelity sub-model. Any component can reconfigure at any point, even when running in parallel with other components, and the reconfiguration algorithm presented here performs the synchronized updates to all other components that are affected. A reconfigurable software framework for multidisciplinary design optimization enables new adaptive solvers, adaptive parallelization, and new applications such as gradient-based optimization with overset flow solvers and adaptive mesh refinement. Benchmarking results demonstrate the time savings for reconfiguration compared to setting up the model again from scratch, which can be significant in large-scale problems. Additionally, the new reconfigurability feature is applied to a mission profile optimization problem for commercial aircraft where both the parametrization of the mission profile and the time discretization are adaptively refined, resulting in computational savings of roughly 10% and the elimination of oscillations in the optimized altitude profile.
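    As a point of reference for the reconfiguration idea, the minimal sketch below builds a small OpenMDAO model, runs it, and then rebuilds it from scratch after swapping the component expression; the reconfigurable execution described in the abstract is intended to avoid exactly this full re-setup. It assumes the openmdao package is installed, and the ExecComp expressions are placeholders.

    ```python
    # Minimal OpenMDAO sketch (assumes the openmdao package is installed).
    # Swapping the component here forces a full setup() from scratch, which is
    # the baseline that reconfigurable model execution aims to improve on.
    import openmdao.api as om

    def run_once(expr: str, x_val: float) -> float:
        """Build, set up, and run a one-component model defined by `expr`."""
        prob = om.Problem()
        prob.model.add_subsystem("comp", om.ExecComp(expr), promotes=["*"])
        prob.setup()
        prob.set_val("x", x_val)
        prob.run_model()
        return float(prob.get_val("y")[0])

    if __name__ == "__main__":
        print(run_once("y = 2.0*x", 3.0))         # low-fidelity placeholder component
        print(run_once("y = 2.0*x + x**2", 3.0))  # "reconfigured" model, rebuilt from scratch
    ```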

  14. Source Monitoring and False Memories in Children: Relation to Certainty and Executive Functioning.

    ERIC Educational Resources Information Center

    Ruffman, Ted; Rustin, Charlotte; Garnham, Wendy; Parkin, Alan J.

    2001-01-01

    Examined source monitoring and false memories in 6-, 8-, and 10-year-olds related to their memory of information presented by videotape and/or audiotape. Found that certainty rating revealed deficits in children's understanding of when they had erred on source questions and when they had made false alarms. Inhibitory ability accounted for unique…

  15. Metamemory monitoring in mild cognitive impairment: Evidence of a less accurate episodic feeling-of-knowing.

    PubMed

    Perrotin, Audrey; Belleville, Sylvie; Isingrini, Michel

    2007-09-20

    This study aimed at exploring metamemory and specifically the accuracy of memory monitoring in mild cognitive impairment (MCI) using an episodic memory feeling-of-knowing (FOK) procedure. To this end, 20 people with MCI and 20 matched control participants were compared on the episodic FOK task. Results showed that the MCI group made less accurate FOK predictions than the control group by overestimating their memory performance on a recognition task. The MCI overestimation behavior was found to be critically related to the severity of their cognitive decline. In the light of recent neuroanatomical models showing the involvement of a temporal-frontal network underlying accurate FOK predictions, the role of memory and executive processes was evaluated. Thus, participants were also administered memory and executive neuropsychological tests. Correlation analysis revealed a between-group differential pattern indicating that FOK accuracy was primarily related to memory abilities in people with MCI, whereas it was specifically related to executive functioning in control participants. The lesser ability of people with MCI to assess their memory status accurately on an episodic FOK task is discussed in relation to both their subjective memory complaints and to their actual memory deficits which might be mediated by the brain vulnerability of their hippocampus and medial temporal system. It is suggested that their memory weakness may lead people with MCI to use other less reliable forms of memory monitoring.

  16. Reflections on a vision for integrated research and monitoring after 15 years

    USGS Publications Warehouse

    Murdoch, Peter S.; McHale, Michael; Baron, Jill S.

    2014-01-01

    In May of 1998, Owen Bricker and his co-author Michael Ruggiero introduced a conceptual design for integrating the Nation’s environmental research and monitoring programs. The Framework for Integrated Monitoring and Related Research was an organizing strategy for relating data collected by various programs, at multiple spatial and temporal scales, and by multiple science disciplines to solve complex ecological issues that individual research or monitoring programs were not designed to address. The concept nested existing intensive monitoring and research stations within national and regional surveys, remotely sensed data, and inventories to produce a collaborative program for multi-scale, multi-network integrated environmental monitoring and research. Analyses of gaps in data needed for specific issues would drive decisions on network improvements or enhancements. Data contributions to the Framework from existing networks would help indicate critical research and monitoring programs to protect during budget reductions. Significant progress has been made since 1998 on refining the Framework strategy. Methods and models for projecting scientific information across spatial and temporal scales have been improved, and a few regional pilots of multi-scale data-integration concepts have been attempted. The links between science and decision-making are also slowly improving and being incorporated into science practice. Experiments with the Framework strategy since 1998 have revealed the foundational elements essential to its successful implementation, such as defining core measurements, establishing standards of data collection and management, integrating research and long-term monitoring, and describing baseline ecological conditions. They have also shown us the remaining challenges to establishing the Framework concept: protecting and enhancing critical long-term monitoring, filling gaps in measurement methods, improving science for decision support, and integrating the disparate integrated science efforts now underway. In the 15 years since the Bricker and Ruggiero (Ecol Appl 8(2):326–329, 1998) paper challenged us with a new paradigm for bringing sound and comprehensive science to environmental decisions, the scientific community can take pride in the progress that has been made, while also taking stock of the challenges ahead for completing the Framework vision.

  17. System control of an autonomous planetary mobile spacecraft

    NASA Technical Reports Server (NTRS)

    Dias, William C.; Zimmerman, Barbara A.

    1990-01-01

    The goal is to suggest the scheduling and control functions necessary for accomplishing mission objectives of a fairly autonomous interplanetary mobile spacecraft, while maximizing reliability. Goals are to provide an extensible, reliable system conservative in its use of on-board resources, while getting full value from subsystem autonomy, and avoiding the lure of ground micromanagement. A functional layout consisting of four basic elements is proposed: GROUND and SYSTEM EXECUTIVE system functions and RESOURCE CONTROL and ACTIVITY MANAGER subsystem functions. The system executive includes six subfunctions: SYSTEM MANAGER, SYSTEM FAULT PROTECTION, PLANNER, SCHEDULE ADAPTER, EVENT MONITOR and RESOURCE MONITOR. The full configuration is needed for autonomous operation on Moon or Mars, whereas a reduced version without the planning, schedule adaption and event monitoring functions could be appropriate for lower-autonomy use on the Moon. An implementation concept is suggested which is conservative in use of system resources and consists of modules combined with a network communications fabric. A language concept termed a scheduling calculus for rapidly performing essential on-board schedule adaption functions is introduced.

  18. A wirelessly programmable actuation and sensing system for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Long, James; Büyüköztürk, Oral

    2016-04-01

    Wireless sensor networks promise to deliver low-cost, low-power, and massively distributed systems for structural health monitoring. A key component of these systems, particularly when sampling rates are high, is the capability to process data within the network. Although progress has been made towards this vision, it remains a difficult task to develop and program 'smart' wireless sensing applications. In this paper we present a system which allows data acquisition and computational tasks to be specified in Python, a high level programming language, and executed within the sensor network. Key features of this system include the ability to execute custom application code without firmware updates, to run multiple users' requests concurrently and to conserve power through adjustable sleep settings. Specific examples of sensor node tasks are given to demonstrate the features of this system in the context of structural health monitoring. The system comprises individual firmware for nodes in the wireless sensor network, and a gateway server and web application through which users can remotely submit their requests.
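    To give a flavor of the kind of node-side task the abstract describes (acquire data, reduce it in the network, sleep to save power), here is an illustrative sketch. The node object and its methods are hypothetical stand-ins, not the actual API of the system in the paper; a fake node is included so the example runs without hardware.

    ```python
    # Illustrative node-side task: sample an accelerometer, keep only the RMS value,
    # then sleep. The node API is a hypothetical stand-in, not the paper's firmware API.
    import math

    def rms_acceleration_task(node, fs_hz=200, n_samples=2000):
        """Acquire a burst of samples and return only a summary, so raw data never leaves the node."""
        samples = node.sample_accelerometer(rate_hz=fs_hz, count=n_samples)
        mean = sum(samples) / len(samples)
        rms = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
        node.sleep_until_next_window()  # conserve power between bursts
        return {"node_id": node.id, "rms_g": rms}

    class FakeNode:
        """Stand-in for a wireless sensor node so the sketch runs without hardware."""
        id = "node-07"
        def sample_accelerometer(self, rate_hz, count):
            return [0.01 * ((i % 10) - 5) for i in range(count)]
        def sleep_until_next_window(self):
            pass

    if __name__ == "__main__":
        print(rms_acceleration_task(FakeNode()))
    ```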

  19. Gender differences in multitasking reflect spatial ability.

    PubMed

    Mäntylä, Timo

    2013-04-01

    Demands involving the scheduling and interleaving of multiple activities have become increasingly prevalent, especially for women in both their paid and unpaid work hours. Despite the ubiquity of everyday requirements to multitask, individual and gender-related differences in multitasking have gained minimal attention in past research. In two experiments, participants completed a multitasking session with four gender-fair monitoring tasks and separate tasks measuring executive functioning (working memory updating) and spatial ability (mental rotation). In both experiments, males outperformed females in monitoring accuracy. Individual differences in executive functioning and spatial ability were independent predictors of monitoring accuracy, but only spatial ability mediated gender differences in multitasking. Menstrual changes accentuated these effects, such that gender differences in multitasking (and spatial ability) were eliminated between males and females who were in the menstrual phase of the menstrual cycle but not between males and females who were in the luteal phase. These findings suggest that multitasking involves spatiotemporal task coordination and that gender differences in multiple-task performance reflect differences in spatial ability.

  20. Modular Autonomous Systems Technology Framework: A Distributed Solution for System Monitoring and Control

    NASA Technical Reports Server (NTRS)

    Badger, Julia M.; Claunch, Charles; Mathis, Frank

    2017-01-01

    The Modular Autonomous Systems Technology (MAST) framework is a tool for building distributed, hierarchical autonomous systems. Originally intended for the autonomous monitoring and control of spacecraft, this framework concept provides support for variable autonomy, assume-guarantee contracts, and efficient communication between subsystems and a centralized systems manager. MAST was developed at NASA's Johnson Space Center (JSC) and has been applied to an integrated spacecraft example scenario.

  1. Event-Related Potentials in a Cued Go-NoGo Task Associated with Executive Functions in Adolescents with Autism Spectrum Disorder; A Case-Control Study.

    PubMed

    Høyland, Anne L; Øgrim, Geir; Lydersen, Stian; Hope, Sigrun; Engstrøm, Morten; Torske, Tonje; Nærland, Terje; Andreassen, Ole A

    2017-01-01

    Executive functions are often affected in autism spectrum disorders (ASD). The underlying biology is however not well known. In the DSM-5, ASD is characterized by difficulties in two domains: Social Interaction and Repetitive and Restricted Behavior, RRB. Insistence of Sameness is part of RRB and has been reported related to executive functions. We aimed to identify differences between ASD and typically developing (TD) adolescents in Event Related Potentials (ERPs) associated with response preparation, conflict monitoring and response inhibition using a cued Go-NoGo paradigm. We also studied the effect of age and emotional content of paradigm related to these ERPs. We investigated 49 individuals with ASD and 49 TD aged 12-21 years, split into two groups below (young) and above (old) 16 years of age. ASD characteristics were quantified by the Social Communication Questionnaire (SCQ) and executive functions were assessed with the Behavior Rating Inventory of Executive Function (BRIEF), both parent-rated. Behavioral performance and ERPs were recorded during a cued visual Go-NoGo task which included neutral pictures (VCPT) and pictures of emotional faces (ECPT). The amplitudes of ERPs associated with response preparation, conflict monitoring, and response inhibition were analyzed. The ASD group showed markedly higher scores than TD in both SCQ and BRIEF. Behavioral data showed no case-control differences in either the VCPT or ECPT in the whole group. While there were no significant case-control differences in ERPs from the combined VCPT and ECPT in the whole sample, the Contingent Negative Variation (CNV) was significantly enhanced in the old ASD group ( p = 0.017). When excluding ASD with comorbid ADHD we found a significantly increased N2 NoGo ( p = 0.016) and N2-effect ( p = 0.023) for the whole group. We found no case-control differences in the P3-components. Our findings suggest increased response preparation in adolescents with ASD older than 16 years and enhanced conflict monitoring in ASD without comorbid ADHD during a Go-NoGo task. The current findings may be related to Insistence of Sameness in ASD. The pathophysiological underpinnings of executive dysfunction should be further investigated to learn more about how this phenomenon is related to core characteristics of ASD.

  2. VTK-m: Accelerating the Visualization Toolkit for Massively Threaded Architectures

    DOE PAGES

    Moreland, Kenneth; Sewell, Christopher; Usher, William; ...

    2016-05-09

    Here, one of the most critical challenges for high-performance computing (HPC) scientific visualization is execution on massively threaded processors. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Our current production scientific visualization software is not designed for these new types of architectures. To address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architecture.

  3. Boutiques: a flexible framework to integrate command-line applications in computing platforms.

    PubMed

    Glatard, Tristan; Kiar, Gregory; Aumentado-Armstrong, Tristan; Beck, Natacha; Bellec, Pierre; Bernard, Rémi; Bonnet, Axel; Brown, Shawn T; Camarasu-Pop, Sorina; Cervenansky, Frédéric; Das, Samir; Ferreira da Silva, Rafael; Flandin, Guillaume; Girard, Pascal; Gorgolewski, Krzysztof J; Guttmann, Charles R G; Hayot-Sasson, Valérie; Quirion, Pierre-Olivier; Rioux, Pierre; Rousseau, Marc-Étienne; Evans, Alan C

    2018-05-01

    We present Boutiques, a system to automatically publish, integrate, and execute command-line applications across computational platforms. Boutiques applications are installed through software containers described in a rich and flexible JSON language. A set of core tools facilitates the construction, validation, import, execution, and publishing of applications. Boutiques is currently supported by several distinct virtual research platforms, and it has been used to describe dozens of applications in the neuroinformatics domain. We expect Boutiques to improve the quality of application integration in computational platforms, to reduce redundancy of effort, to contribute to computational reproducibility, and to foster Open Science.
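    Boutiques applications are described in a JSON document that maps command-line placeholders to typed inputs and outputs. The sketch below builds such a descriptor-like document in Python for a placeholder tool; the field names follow a common reading of the Boutiques schema but should be checked against the official schema before use.

    ```python
    # Sketch of a Boutiques-style descriptor for a placeholder tool, dumped as JSON.
    # Field names are assumptions based on the Boutiques schema; verify before use.
    import json

    descriptor = {
        "name": "example-smoother",
        "description": "Placeholder tool that smooths an input image.",
        "tool-version": "1.0.0",
        "schema-version": "0.5",
        "command-line": "smooth [INPUT_FILE] [FWHM] [OUTPUT_FILE]",
        "inputs": [
            {"id": "input_file", "name": "Input file", "type": "File",
             "value-key": "[INPUT_FILE]"},
            {"id": "fwhm", "name": "Kernel FWHM (mm)", "type": "Number",
             "value-key": "[FWHM]", "optional": True},
            {"id": "output_name", "name": "Output file name", "type": "String",
             "value-key": "[OUTPUT_FILE]"},
        ],
        "output-files": [
            {"id": "smoothed", "name": "Smoothed image", "path-template": "[OUTPUT_FILE]"},
        ],
    }

    if __name__ == "__main__":
        print(json.dumps(descriptor, indent=2))
    ```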

  4. VTK-m: Accelerating the Visualization Toolkit for Massively Threaded Architectures

    DOE PAGES

    Moreland, Kenneth; Sewell, Christopher; Usher, William; ...

    2016-05-09

    Execution on massively threaded processors is one of the most critical challenges for high-performance computing (HPC) scientific visualization. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Moreover, our current production scientific visualization software is not designed for these new types of architectures. In order to address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architecture.

  5. Effectively executing a comprehensive marketing communication strategy.

    PubMed

    Gombeski, William R; Taylor, Jan; Piccirilli, Ami; Cundiff, Lee; Britt, Jason

    2007-01-01

    Marketers are under increasing scrutiny from their management to demonstrate accountability for the resources they receive. Three models are presented to help marketers execute their customer communication activities more effectively. Benefits of using the "Identification of Strategic Communication Elements," "Business Communication," and "Communications Management Process" models include (1) more effective upfront strategic and tactical planning, (2) ensuring that key communication principles are addressed, (3) easier program communication, (4) a framework for program evaluation and market research, and (5) greater creative thinking when marketers address major marketing challenges. The ultimate benefit is a greater likelihood of positive marketing results.

  6. SERENITY Aware Development of Security and Dependability Solutions

    NASA Astrophysics Data System (ADS)

    Serrano, Daniel; Maña, Antonio; Llarena, Rafael; Crespo, Beatriz Gallego-Nicasio; Li, Keqin

    This chapter presents an infrastructure supporting the implementation of Executable Components (ECs). ECs represent S&D solutions at the implementation level, that is, as pieces of executable code. ECs are instantiated by the Serenity Runtime Framework (SRF) as a result of requests coming from applications. The development of ECs requires programmers to have specific technical knowledge about SERENITY, since they need to implement certain interfaces of the ECs according to SERENITY standards. Every EC has to implement the interface between the SRF and the EC itself, and the interface that the EC offers to applications.
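    The two interfaces mentioned above (one toward the SRF, one toward applications) can be pictured with a small sketch. The class and method names below are invented for illustration in Python; they are not the actual SERENITY interfaces, which are defined by the project's own standards.

    ```python
    # Illustrative sketch of the two interfaces an Executable Component must expose.
    # Names are invented; they do not reproduce the actual SERENITY interfaces.
    from abc import ABC, abstractmethod

    class ExecutableComponent(ABC):
        # SRF-facing interface: lifecycle control by the runtime framework
        @abstractmethod
        def start(self, config: dict) -> None: ...
        @abstractmethod
        def stop(self) -> None: ...
        @abstractmethod
        def report_status(self) -> str: ...

        # Application-facing interface: the S&D service offered to callers
        @abstractmethod
        def invoke(self, request: dict) -> dict: ...

    class DummyObfuscationEC(ExecutableComponent):
        """Toy EC used only to show the shape of an implementation."""
        def start(self, config): self._shift = config.get("shift", 3)
        def stop(self): self._shift = None
        def report_status(self): return "running" if self._shift is not None else "stopped"
        def invoke(self, request):
            text = request["plaintext"]
            return {"obfuscated": "".join(chr((ord(c) + self._shift) % 256) for c in text)}

    if __name__ == "__main__":
        ec = DummyObfuscationEC()
        ec.start({"shift": 3})
        print(ec.report_status(), ec.invoke({"plaintext": "abc"}))
    ```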

  7. Improving everyday prospective memory performance in older adults: comparing cognitive process and strategy training.

    PubMed

    Brom, Sarah Susanne; Kliegel, Matthias

    2014-09-01

    Considering the importance of prospective memory for independence in old age, research has recently started to examine interventions to reduce prospective memory errors. Two general approaches can be proposed: (a) process training of the executive control associated with prospective memory functioning, and/or (b) strategy training to reduce executive task demands. The present study was the first to combine and compare both training methods in a sample of 62 community-dwelling older adults (60-86 years) and to explore their effects on an ecologically valid everyday-life prospective memory task (here: regular blood pressure monitoring). Even though the training of executive control was successful in enhancing the trained ability, clear transfer effects on prospective memory performance could only be found for the strategy training. However, participants with low executive abilities benefited particularly from the implementation intention strategy. Conceptually, this supports models suggesting interactions between task demands and individual differences in executive control in explaining individual differences in prospective memory performance. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  8. Protection of Mobile Agents Execution Using a Modified Self-Validating Branch-Based Software Watermarking with External Sentinel

    NASA Astrophysics Data System (ADS)

    Tomàs-Buliart, Joan; Fernández, Marcel; Soriano, Miguel

    Critical infrastructures are usually controlled by software entities. To monitor the correct functioning of these entities, a solution based on the use of mobile agents is proposed. Some proposals to detect modifications of mobile agents, such as digital signatures of code, exist, but they are oriented toward protecting software against modification or verifying that an agent has been executed correctly. The aim of our proposal is to guarantee that the software is being executed correctly by a non-trusted host. The way proposed to achieve this objective is by improving the Self-Validating Branch-Based Software Watermarking scheme of Myles et al. The proposed modification is the incorporation of an external element, called a sentinel, which controls branch targets. Applied to mobile agents, this technique can guarantee the correct operation of an agent or, at least, detect suspicious behaviours of a malicious host during the execution of the agent rather than only after the execution of the agent has finished.

  9. A Social Learning Model of Adolescent Contraceptive Behavior.

    ERIC Educational Resources Information Center

    Balassone, Mary Lou

    1991-01-01

    Research findings and theories regarding adolescent contraceptive use are reviewed to propose an alternative framework relying on social learning theory. Environmental context, cognitive influences, and behavior execution constraints are suggested as the foundation for contraceptive behaviors. The behavioral skills teenagers need to use birth…

  10. The Role of Coaching in Leadership Development.

    PubMed

    Yarborough, J Preston

    2018-06-01

    Leadership coaching can be productive in maximizing a leader's development. But to make leadership coaching work effectively for students, as opposed to executives, this chapter offers guidance on key concepts and practices from the Center for Creative Leadership's Coaching Framework. © 2018 Wiley Periodicals, Inc.

  11. ODIN-object-oriented development interface for NMR.

    PubMed

    Jochimsen, Thies H; von Mengershausen, Michael

    2004-09-01

    A cross-platform development environment for nuclear magnetic resonance (NMR) experiments is presented. It allows rapid prototyping of new pulse sequences and provides a common programming interface for different system types. With this object-oriented interface implemented in C++, the programmer is capable of writing applications to control an experiment that can be executed on different measurement devices, even from different manufacturers, without the need to modify the source code. Due to the clear design of the software, new pulse sequences can be created, tested, and executed within a short time. To post-process the acquired data, an interface to well-known numerical libraries is part of the framework. This allows a transparent integration of the data processing instructions into the measurement module. The software focuses mainly on NMR imaging, but can also be used with limitations for spectroscopic experiments. To demonstrate the capabilities of the framework, results of the same experiment, carried out on two NMR imaging systems from different manufacturers are shown and compared with the results of a simulation.

  12. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
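    The batch pattern in the abstract (many independent stochastic realizations farmed out to workers) is easy to sketch. The example below is an analogous illustration in Python rather than the paper's Java Parallel Processing Framework setup, and the realization function is a stand-in for generating a stochastic model and running MODFLOW.

    ```python
    # Analogous sketch of batch-processing independent Monte Carlo realizations.
    # run_realization is a placeholder; the paper's system generates a stochastic
    # model and runs MODFLOW-related programs for each realization instead.
    from concurrent.futures import ProcessPoolExecutor
    import random

    def run_realization(seed: int) -> float:
        """Stand-in for one stochastic model run; returns a fake summary statistic."""
        rng = random.Random(seed)
        return sum(rng.gauss(1.0, 0.2) for _ in range(1000))

    def run_batch(n_realizations: int, workers: int = 8):
        """Distribute the realizations over a pool of worker processes."""
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(run_realization, range(n_realizations)))

    if __name__ == "__main__":
        results = run_batch(50, workers=4)
        print(f"{len(results)} realizations, mean = {sum(results) / len(results):.1f}")
    ```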

  13. Achieving competitive advantage through strategic human resource management.

    PubMed

    Fottler, M D; Phillips, R L; Blair, J D; Duran, C A

    1990-01-01

    The framework presented here challenges health care executives to manage human resources strategically as an integral part of the strategic planning process. Health care executives should consciously formulate human resource strategies and practices that are linked to and reinforce the broader strategic posture of the organization. This article provides a framework for (1) determining and focusing on desired strategic outcomes, (2) identifying and implementing essential human resource management actions, and (3) maintaining or enhancing competitive advantage. The strategic approach to human resource management includes assessing the organization's environment and mission; formulating the organization's business strategy; assessing the human resources requirements based on the intended strategy; comparing the current inventory of human resources in terms of numbers, characteristics, and human resource management practices with respect to the strategic requirements of the organization and its services or product lines; formulating the human resource strategy based on the differences between the assessed requirements and the current inventory; and implementing the appropriate human resource practices to reinforce the strategy and attain competitive advantage.

  14. Surveillance of industrial processes with correlated parameters

    DOEpatents

    White, Andrew M.; Gross, Kenny C.; Kubic, William L.; Wigeland, Roald A.

    1996-01-01

    A system and method for surveillance of an industrial process. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions.
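    The abstract names two ingredients: a Mahalanobis distance computed against a training baseline and a probability ratio test on the resulting statistic. The sketch below is a simplified illustration of that combination (a Gaussian-mean sequential probability ratio test on the distance stream); the serial-correlation removal step and the patent's specific formulation are omitted, and all thresholds are placeholders.

    ```python
    # Simplified sketch: Mahalanobis distance against a baseline, monitored with a
    # sequential probability ratio test (SPRT). Thresholds and distributions are
    # placeholders; the serial-correlation removal step is omitted.
    import numpy as np

    def mahalanobis(x, mean, cov_inv):
        d = x - mean
        return float(np.sqrt(d @ cov_inv @ d))

    def sprt_alarm(distances, mu0=1.0, mu1=3.0, sigma=1.0, alpha=0.01, beta=0.01):
        """Wald SPRT on a stream of distances: 'alarm', 'normal', or 'undecided'."""
        lower, upper = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
        llr = 0.0
        for d in distances:
            llr += ((d - mu0) ** 2 - (d - mu1) ** 2) / (2 * sigma ** 2)
            if llr >= upper:
                return "alarm"
            if llr <= lower:
                return "normal"
        return "undecided"

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        baseline = rng.normal(size=(500, 3))                       # training data
        mean, cov_inv = baseline.mean(axis=0), np.linalg.inv(np.cov(baseline.T))
        stream = rng.normal(loc=2.0, size=(50, 3))                 # shifted process data
        print(sprt_alarm([mahalanobis(x, mean, cov_inv) for x in stream]))
    ```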

  15. Software attribute visualization for high integrity software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pollock, G.M.

    1998-03-01

    This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.
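    The core idea of monitoring an executing program against prespecified requirements constraints can be shown generically. The sketch below uses plain Python predicates over a stream of program states; it is not the SAGE requirements language or the SAVAnT tool itself.

    ```python
    # Generic sketch of checking prespecified constraints against a stream of
    # program states. Constraints are plain predicates, not SAGE requirements.
    constraints = {
        "pressure_within_limits": lambda s: 0.0 <= s["pressure"] <= 150.0,
        "valve_closed_when_idle": lambda s: s["mode"] != "idle" or not s["valve_open"],
    }

    def monitor(states):
        """Yield (step, constraint_name) for every violation observed."""
        for step, state in enumerate(states):
            for name, check in constraints.items():
                if not check(state):
                    yield step, name

    if __name__ == "__main__":
        trace = [
            {"pressure": 80.0, "mode": "run", "valve_open": True},
            {"pressure": 151.0, "mode": "idle", "valve_open": True},
        ]
        for step, name in monitor(trace):
            print(f"step {step}: violated {name}")
    ```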

  16. Executive Development Programs in the U.S. Air Force: Does Diversity Matter?

    DTIC Science & Technology

    2000-04-01

    Air Command and Staff College, Maxwell Air Force Base, AL 36112 ... leadership positions to the organization's members who have diverse backgrounds (ethnic, religious, or gender), engendering a stronger work effort

  17. Banishing the Control Homunculi in Studies of Action Control and Behavior Change

    PubMed Central

    Verbruggen, Frederick; McLaren, Ian P. L.; Chambers, Christopher D.

    2014-01-01

    For centuries, human self-control has fascinated scientists and nonscientists alike. Current theories often attribute it to an executive control system. But even though executive control receives a great deal of attention across disciplines, most aspects of it are still poorly understood. Many theories rely on an ill-defined set of “homunculi” doing jobs like “response inhibition” or “updating” without explaining how they do so. Furthermore, it is not always appreciated that control takes place across different timescales. These two issues hamper major advances. Here we focus on the mechanistic basis for the executive control of actions. We propose that at the most basic level, action control depends on three cognitive processes: signal detection, action selection, and action execution. These processes are modulated via error-correction or outcome-evaluation mechanisms, preparation, and task rules maintained in working and long-term memory. We also consider how executive control of actions becomes automatized with practice and how people develop a control network. Finally, we discuss how the application of this unified framework in clinical domains can increase our understanding of control deficits and provide a theoretical basis for the development of novel behavioral change interventions. PMID:25419227

  18. Logical Experimental Design and Execution in the Biomedical Sciences.

    PubMed

    Holder, Daniel J; Marino, Michael J

    2017-03-17

    Lack of reproducibility has been highlighted as a significant problem in biomedical research. The present unit is devoted to describing ways to help ensure that research findings can be replicated by others, with a focus on the design and execution of laboratory experiments. Essential components for this include clearly defining the question being asked, using available information or information from pilot studies to aid in the design of the experiment, and choosing manipulations under a logical framework based on Mill's "methods of knowing" to build confidence in putative causal links. Final experimental design requires systematic attention to detail, including the choice of controls, sample selection, blinding to avoid bias, and the use of power analysis to determine the sample size. Execution of the experiment is done with care to ensure that the independent variables are controlled and the measurements of the dependent variables are accurate. While there are always differences among laboratories with respect to technical expertise, equipment, and suppliers, execution of the steps itemized in this unit will ensure well-designed and well-executed experiments to answer any question in biomedical research. © 2017 by John Wiley & Sons, Inc.
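    Since the unit highlights power analysis as part of experimental design, a quick example may be useful. The sketch below uses statsmodels (assumed installed) to solve for the per-group sample size of a two-sample t-test; the effect size, alpha, and power values are placeholders to be replaced with pilot-study estimates.

    ```python
    # Power-analysis sketch for a two-sample t-test using statsmodels (assumed installed).
    # Effect size, alpha, and power are placeholders for pilot-study estimates.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.5,            # Cohen's d, e.g. estimated from pilot data
        alpha=0.05,                 # two-sided significance level
        power=0.8,                  # desired statistical power
        alternative="two-sided",
    )
    print(f"Required sample size per group: {n_per_group:.1f}")
    ```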

  19. Developing a monitoring and evaluation framework to integrate and formalize the informal waste and recycling sector: the case of the Philippine National Framework Plan.

    PubMed

    Serrona, Kevin Roy B; Yu, Jeongsoo; Aguinaldo, Emelita; Florece, Leonardo M

    2014-09-01

    The Philippines has been making inroads in solid waste management with the enactment and implementation of the Republic Act 9003 or the Ecological Waste Management Act of 2000. Said legislation has had tremendous influence in terms of how the national and local government units confront the challenges of waste management in urban and rural areas using the reduce, reuse, recycle and recovery framework or 4Rs. One of the sectors needing assistance is the informal waste sector whose aspiration is legal recognition of their rank and integration of their waste recovery activities in mainstream waste management. To realize this, the Philippine National Solid Waste Management Commission initiated the formulation of the National Framework Plan for the Informal Waste Sector, which stipulates approaches, strategies and methodologies to concretely involve the said sector in different spheres of local waste management, such as collection, recycling and disposal. What needs to be fleshed out is the monitoring and evaluation component in order to gauge qualitative and quantitative achievements vis-a-vis the Framework Plan. In the process of providing an enabling environment for the informal waste sector, progress has to be monitored and verified qualitatively and quantitatively and measured against activities, outputs, objectives and goals. Using the Framework Plan as the reference, this article developed monitoring and evaluation indicators using the logical framework approach in project management. The primary objective is to institutionalize monitoring and evaluation, not just in informal waste sector plans, but in any waste management initiatives to ensure that envisaged goals are achieved. © The Author(s) 2014.

  20. Developing an organizing framework to guide nursing research in the Children’s Oncology Group (COG)

    PubMed Central

    Kelly, Katherine Patterson; Hooke, Mary C.; Ruccione, Kathleen; Landier, Wendy; Haase, Joan

    2014-01-01

    Objectives: To describe the development and application of an organizing research framework to guide COG Nursing research. Data Sources: Research articles, reports, and meeting minutes. Conclusion: An organizing research framework helps to outline research focus and articulate the scientific knowledge being produced by nurses in the pediatric cooperative group. Implications for Nursing Practice: The use of an organizing framework for COG nursing research can facilitate clinical nurses’ understanding of how children and families sustain or regain optimal health when faced with a pediatric cancer diagnosis through interventions designed to promote individual and family resilience. The Children’s Oncology Group (COG) is the sole National Cancer Institute (NCI)-supported cooperative pediatric oncology clinical trials group and the largest organization in the world devoted exclusively to pediatric cancer research. It was founded in 2000 following the merger of the four legacy NCI-supported pediatric clinical trials groups (Children’s Cancer Group [CCG], Pediatric Oncology Group [POG], National Wilms Tumor Study Group, and Intergroup Rhabdomyosarcoma Study Group). The COG currently has over 200 member institutions across North America, Australia, New Zealand, and Europe and a multidisciplinary membership of over 8,000 pediatric, radiation, and surgical oncologists, nurses, clinical research associates, pharmacists, behavioral scientists, pathologists, laboratory scientists, patient/parent advocates, and other pediatric cancer specialists. The COG Nursing Discipline was formed from the merger of the legacy CCG and POG Nursing Committees, and current membership exceeds 2,000 registered nurses. The discipline has a well-developed infrastructure that promotes nursing involvement throughout all levels of the organization, including representation on disease, protocol, scientific, executive, and other administrative committees (e.g., nominating committee, data safety monitoring boards). COG nurses facilitate delivery of protocol-based treatments for children enrolled on COG protocols, and Nursing Discipline initiatives support nursing research, professional and patient/family education, evidence-based practice, and a patient-reported outcomes resource center. The research agenda of the Nursing Discipline is enacted through a well-established nursing scholar program. PMID:24559776
