Sample records for dataflow execution environment

  1. Decaf: Decoupled Dataflows for In Situ High-Performance Workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dreher, M.; Peterka, T.

    Decaf is a dataflow system for the parallel communication of coupled tasks in an HPC workflow. The dataflow can perform arbitrary data transformations ranging from simply forwarding data to complex data redistribution. Decaf does this by allowing the user to allocate resources and execute custom code in the dataflow. All communication through the dataflow is efficient parallel message passing over MPI. The runtime for calling tasks is entirely message-driven; Decaf executes a task when all messages for the task have been received. Such a message-driven runtime allows cyclic task dependencies in the workflow graph, for example, to enact computational steering based on the result of downstream tasks. Decaf includes a simple Python API for describing the workflow graph. This allows Decaf to stand alone as a complete workflow system, but Decaf can also be used as the dataflow layer by one or more other workflow systems to form a heterogeneous task-based computing environment. In one experiment, we couple a molecular dynamics code with a visualization tool using the FlowVR and Damaris workflow systems and Decaf for the dataflow. In another experiment, we test the coupling of a cosmology code with Voronoi tessellation and density estimation codes using MPI for the simulation, the DIY programming model for the two analysis codes, and Decaf for the dataflow. Such workflows consisting of heterogeneous software infrastructures exist because components are developed separately with different programming models and runtimes, and this is the first time that such heterogeneous coupling of diverse components has been demonstrated in situ on HPC systems.
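
    The Python workflow description mentioned in this record can be pictured with a small sketch. The class and method names below (Workflow, add_node, add_link) are hypothetical illustrations of declaring tasks, their MPI resource allocations, and the dataflow links between them; they are not the actual Decaf API.

      # Hypothetical Decaf-style workflow description; illustrative only, not the real Decaf Python API.

      class Workflow:
          def __init__(self):
              self.nodes = {}      # task name -> (first MPI rank, number of ranks, entry point)
              self.links = []      # (producer, consumer, ranks reserved for the dataflow itself)

          def add_node(self, name, start_rank, nranks, func):
              self.nodes[name] = (start_rank, nranks, func)

          def add_link(self, producer, consumer, nranks):
              # the dataflow gets its own ranks so it can transform or redistribute data in transit
              self.links.append((producer, consumer, nranks))

      wf = Workflow()
      wf.add_node("simulation", start_rank=0,  nranks=64, func="md_step")
      wf.add_node("analysis",   start_rank=68, nranks=8,  func="density_estimate")
      wf.add_link("simulation", "analysis", nranks=4)   # ranks 64-67 forward/redistribute particles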

  2. Simulator for heterogeneous dataflow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    1993-01-01

    A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable steady-state time-optimized performance. This simulator extends the ATAMM simulation capability from a heterogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case with only one processor type. The simulator forms one tool in an ATAMM Integrated Environment which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.
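
    The message-driven firing rule underlying such simulators (a node executes once a token is present on every input edge) can be sketched generically. The code below illustrates that rule only and is not the ATAMM simulator itself.

      # Minimal sketch of token-driven firing: a node executes once every input edge holds a token.
      from collections import defaultdict, deque

      def simulate(edges, sources, apply_node):
          """edges: node -> list of successor nodes; sources: nodes enabled at the start."""
          preds = defaultdict(set)
          for u, succs in edges.items():
              for v in succs:
                  preds[v].add(u)
          tokens = defaultdict(set)               # node -> input edges currently holding a token
          ready = deque(sources)
          while ready:
              node = ready.popleft()
              apply_node(node)                    # "execute" the node
              for succ in edges.get(node, []):
                  tokens[succ].add(node)
                  if tokens[succ] == preds[succ]: # all input tokens present -> fire
                      tokens[succ].clear()
                      ready.append(succ)

      # C fires only after both A and B have delivered their tokens.
      simulate({"A": ["C"], "B": ["C"], "C": []}, ["A", "B"], print)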

  3. MAX - An advanced parallel computer for space applications

    NASA Technical Reports Server (NTRS)

    Lewis, Blair F.; Bunker, Robert L.

    1991-01-01

    MAX is a fault-tolerant multicomputer hardware and software architecture designed to meet the needs of NASA spacecraft systems. It consists of conventional computing modules (computers) connected via a dual network topology. One network is used to transfer data among the computers and between computers and I/O devices. This network's topology is arbitrary. The second network operates as a broadcast medium for operating system synchronization messages and supports the operating system's Byzantine resilience. A fully distributed operating system supports multitasking in an asynchronous event- and data-driven environment. A large-grain dataflow paradigm is used to coordinate the multitasking and provide easy control of concurrency. It is the basis of the system's fault tolerance and allows both static and dynamic location of tasks. Redundant execution of tasks with software voting of results may be specified for critical tasks. The dataflow paradigm also supports simplified software design, test, and maintenance. A unique feature is a method for reliably patching code in an executing dataflow application.

  4. Common spaceborne multicomputer operating system and development environment

    NASA Technical Reports Server (NTRS)

    Craymer, L. G.; Lewis, B. F.; Hayes, P. J.; Jones, R. L.

    1994-01-01

    A preliminary technical specification for a multicomputer operating system is developed. The operating system is targeted for spaceborne flight missions and provides a broad range of real-time functionality, dynamic remote code-patching capability, and system fault tolerance and long-term survivability features. Dataflow concepts are used for representing application algorithms. Functional features are included to ensure real-time predictability for a class of algorithms which require data-driven execution on an iterative steady state basis. The development environment supports the development of algorithm code, design of control parameters, performance analysis, simulation of real-time dataflow applications, and compiling and downloading of the resulting application.

  5. Building Software Agents for Planning, Monitoring, and Optimizing Travel

    DTIC Science & Technology

    2004-01-01

    defined as plans in the Theseus Agent Execution language (Barish et al. 2002). In the Web environment, sources can be quite slow and the latencies of...executor is based on a dataflow paradigm, actions are executed as soon as the data becomes available. Second, Theseus performs the actions in a...while Theseus provides an expressive language for defining information gathering and monitoring plans. The Theseus language supports capabilities

  6. A software tool for dataflow graph scheduling

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1994-01-01

    A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on multiple processors. The dataflow paradigm is very useful in exposing the parallelism inherent in algorithms. It provides a graphical and mathematical model which describes a partial ordering of algorithm tasks based on data precedence.
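
    As a small illustration of the partial ordering that data precedence imposes, the sketch below groups tasks of a dataflow graph into levels: tasks in the same level have no mutual data dependence and may execute concurrently. This is a generic illustration, not the tool described in this record.

      # Sketch: the partial order implied by data precedence, expressed as levels.
      # Generic illustration only, not the NASA scheduling tool described above.

      def precedence_levels(deps):
          """deps: task -> set of prerequisite tasks. Returns task -> level (0 = no inputs)."""
          level = {}

          def visit(t):
              if t not in level:
                  level[t] = 0 if not deps[t] else 1 + max(visit(d) for d in deps[t])
              return level[t]

          for t in deps:
              visit(t)
          return level

      # Two independent filters feed a combiner: the filters share level 1 and can run in parallel.
      print(precedence_levels({"in": set(), "f1": {"in"}, "f2": {"in"}, "sum": {"f1", "f2"}}))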

  7. Dataflow Design Tool: User's Manual

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1996-01-01

    The Dataflow Design Tool is a software tool for selecting a multiprocessor scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. The software tool implements graph-search algorithms and analysis techniques based on the dataflow paradigm. Dataflow analyses provided by the software are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool provides performance optimization through the inclusion of artificial precedence constraints among the schedulable tasks. The user interface and tool capabilities are described. Examples are provided to demonstrate the analysis, scheduling, and optimization functions facilitated by the tool.
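
    Two generic lower bounds of the kind such an analysis produces are sketched below: the time-weighted critical-path bound and the total-work-over-processors bound; the larger of the two bounds the achievable schedule length. The exact bounds and terminology used by the Dataflow Design Tool may differ; this is an illustration under that assumption.

      # Sketch of two generic lower bounds for repetitive execution of a dataflow graph.
      import math

      def schedule_bounds(times, deps, processors):
          """times: task -> execution time; deps: task -> set of prerequisite tasks."""
          longest = {}

          def critical(t):  # longest time-weighted dependency chain ending at task t
              if t not in longest:
                  longest[t] = times[t] + max((critical(d) for d in deps[t]), default=0)
              return longest[t]

          critical_path = max(critical(t) for t in times)
          work_bound = math.ceil(sum(times.values()) / processors)
          return max(critical_path, work_bound)

      # Lower bound on schedule length for a 3-task graph on 2 processors.
      print(schedule_bounds({"a": 2, "b": 3, "c": 4}, {"a": set(), "b": {"a"}, "c": {"a"}}, 2))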

  8. Lazy evaluation of FP programs: A data-flow approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Y.H.; Gaudiot, J.L.

    1988-12-31

    This paper presents a lazy evaluation system for Backus' list-based functional language FP in a data-driven environment. A superset language of FP, called DFP (Demand-driven FP), is introduced. FP eager programs are transformed into DFP lazy programs which contain the notion of demands. The data-driven execution of DFP programs has the same effect as lazy evaluation. DFP lazy programs have the property of always evaluating a sufficient and necessary result. The infinite sequence generator is used to demonstrate the eager-lazy program transformation and the execution of the lazy programs.
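
    The demand-driven behaviour described here can be illustrated with Python generators: an "infinite" sequence is evaluated only as far as the downstream consumer demands. This sketch shows the lazy-evaluation effect only; it is not the FP-to-DFP transformation itself.

      # Lazy, demand-driven infinite sequence: values are produced only when demanded downstream.
      from itertools import count, islice

      def integers():
          yield from count(0)          # conceptually infinite; nothing is computed eagerly

      def squares(stream):
          for x in stream:
              yield x * x              # each demand propagates one value request upstream

      # Only the first five squares are ever computed.
      print(list(islice(squares(integers()), 5)))   # [0, 1, 4, 9, 16]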

  9. Applying Dataflow Architecture and Visualization Tools to In Vitro Pharmacology Data Automation.

    PubMed

    Pechter, David; Xu, Serena; Kurtz, Marc; Williams, Steven; Sonatore, Lisa; Villafania, Artjohn; Agrawal, Sony

    2016-12-01

    The pace and complexity of modern drug discovery places ever-increasing demands on scientists for data analysis and interpretation. Data flow programming and modern visualization tools address these demands directly. Three different requirements-one for allosteric modulator analysis, one for a specialized clotting analysis, and one for enzyme global progress curve analysis-are reviewed, and their execution in a combined data flow/visualization environment is outlined. © 2016 Society for Laboratory Automation and Screening.

  10. Task scheduling in dataflow computer architectures

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict the transient and steady state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady state performance. The effect of such transformations on the resource requirements or under resource constraints requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine if new scheduling heuristics can be developed that can exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady state execution of a dataflow graph. A set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools previously developed. Initial studies on the performance of these algorithms were done to examine the effects of application algorithm transformations as measured by such quantities as number of processors, time between outputs, time between input and output, communication time, and memory size.
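
    A simple list-scheduling heuristic of the general kind studied in this work is sketched below: ready tasks are assigned to the earliest-free processor while precedence constraints are respected. It is an illustrative baseline only, not the NASA modeling and scheduling tools described in the record.

      # Sketch of a list-scheduling heuristic: longest ready task first, earliest-free processor.
      import heapq

      def list_schedule(times, deps, nproc):
          remaining = {t: set(d) for t, d in deps.items()}
          finish = {}                                  # task -> finish time
          procs = [(0.0, p) for p in range(nproc)]     # (time processor becomes free, processor id)
          heapq.heapify(procs)
          while remaining:
              ready = [t for t, d in remaining.items() if all(x in finish for x in d)]
              for t in sorted(ready, key=lambda t: -times[t]):          # longest task first
                  free_at, p = heapq.heappop(procs)
                  start = max(free_at, max((finish[d] for d in remaining[t]), default=0.0))
                  finish[t] = start + times[t]
                  heapq.heappush(procs, (finish[t], p))
                  del remaining[t]
          return finish

      # Two processors, three tasks; b and c depend on a.
      print(list_schedule({"a": 2, "b": 3, "c": 1}, {"a": set(), "b": {"a"}, "c": {"a"}}, 2))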

  11. Master-slave mixed arrays for data-flow computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, T.L.; Fisher, P.D.

    1983-01-01

    Control cells (masters) and computation cells (slaves) are mixed in regular geometric patterns to form reconfigurable arrays known as master-slave mixed arrays (MSMAs). Interconnections of the corners and edges of the hexagonal control cells and the edges of the hexagonal computation cells are used to construct synchronous and asynchronous communication networks, which support local computation and local communication. Data-driven computations result in self-directed ring pipelines within the MSMA, and composite data-flow computations are executed in a pipelined fashion. By viewing an MSMA as a computing network of tightly-linked ring pipelines, data-flow programs can be uniformly distributed over these pipelines for efficient resource utilisation.

  12. Sequencing and fan-out mechanism for causing a set of at least two sequential instructions to be performed in a dataflow processing computer

    DOEpatents

    Grafe, Victor G.; Hoch, James E.

    1993-01-01

    A sequencing and data fanout mechanism is provided for a dataflow processor. The mechanism is activated by an input token, which causes a sequence of operations to occur by initiating a first instruction to act on data contained within the token and then executing a sequential thread of instructions identified either by a repeat count and an offset within the token, or by an offset within each preceding instruction.

  13. Solving Partial Differential Equations in a data-driven multiprocessor environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Lin, C.M.; Hosseiniyar, M.

    1988-12-31

    Partial differential equations can be found in a host of engineering and scientific problems. The emergence of new parallel architectures has spurred research in the definition of parallel PDE solvers. Concurrently, highly programmable systems such as data-flow architectures have been proposed for the exploitation of large-scale parallelism. The implementation of some partial differential equation solvers (such as the Jacobi method) on a tagged-token data-flow graph is demonstrated here. Asynchronous methods (chaotic relaxation) are studied and new scheduling approaches (the Token No-Labeling scheme) are introduced in order to support the implementation of the asynchronous methods in a data-driven environment. New high-level data-flow language program constructs are introduced in order to handle chaotic operations. Finally, the performance of the program graphs is demonstrated by a deterministic simulation of a message-passing data-flow multiprocessor. An analysis of the overhead in the data-flow graphs is undertaken to demonstrate the limits of parallel operations in dataflow PDE program graphs.
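
    The Jacobi method mentioned above is naturally expressed in a single-assignment, dataflow-friendly form: each sweep computes a new grid purely from the old one, with no side effects on its inputs. The NumPy sketch below illustrates that update; it is not the paper's tagged-token implementation.

      # One Jacobi relaxation sweep for Laplace's equation, written in single-assignment style.
      import numpy as np

      def jacobi_step(u):
          """Return a new grid whose interior points are the average of their four neighbors."""
          new = u.copy()
          new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
          return new                                   # the old grid is untouched: no side effects

      u = np.zeros((8, 8))
      u[0, :] = 1.0                                    # hot top boundary
      for _ in range(100):
          u = jacobi_step(u)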

  14. Design tool for multiprocessor scheduling and evaluation of iterative dataflow algorithms

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1995-01-01

    A graph-theoretic design process and software tool is defined for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. Graph-search algorithms and analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool applies the design process to a given problem and includes performance optimization through the inclusion of additional precedence constraints among the schedulable tasks.

  15. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    PubMed

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-09

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state dependencies of each constituent part, algorithms only need to be described on a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
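
    A minimal sketch of the dataflow idea described above, under stated assumptions: each simulation is an independent task, an analysis step depends on all of them, and anything whose inputs are ready runs concurrently. The function names and the pool-based execution are illustrative stand-ins and are not the Copernicus API.

      # Hypothetical stand-ins for one simulation task and a combining analysis task.
      from concurrent.futures import ProcessPoolExecutor

      def run_simulation(seed):
          # placeholder for launching one MD trajectory
          return {"seed": seed, "samples": [seed * 0.1]}

      def analyze(trajectories):
          # placeholder for, e.g., Markov state model estimation over all results
          return sum(len(t["samples"]) for t in trajectories)

      if __name__ == "__main__":
          with ProcessPoolExecutor() as pool:
              futures = [pool.submit(run_simulation, s) for s in range(8)]  # independent: run in parallel
              results = [f.result() for f in futures]                       # data dependency: wait for all
              print(analyze(results))                                       # fires only once inputs exist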

  16. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers

    PubMed Central

    Filipovic, Nenad D.

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. The breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of the already existing algorithm for region-of-interest based image segmentation for mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As a dataflow engine (DFE) of such HPRDC, Maxeler's acceleration card is used. The experiments for examining the acceleration of that algorithm on the Reconfigurable Dataflow Computers (RDCs) are performed with two types of mammogram images with different resolutions. There were also several DFE configurations, each of which gave a different acceleration of algorithm execution. Those acceleration values are presented, and the experimental results showed good acceleration. PMID:28611851

  17. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers.

    PubMed

    Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. The breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of the already existing algorithm for region-of-interest based image segmentation for mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As a dataflow engine (DFE) of such HPRDC, Maxeler's acceleration card is used. The experiments for examining the acceleration of that algorithm on the Reconfigurable Dataflow Computers (RDCs) are performed with two types of mammogram images with different resolutions. There were also several DFE configurations, each of which gave a different acceleration of algorithm execution. Those acceleration values are presented, and the experimental results showed good acceleration.

  18. SciFlo: Semantically-Enabled Grid Workflow for Collaborative Science

    NASA Astrophysics Data System (ADS)

    Yunck, T.; Wilson, B. D.; Raskin, R.; Manipon, G.

    2005-12-01

    SciFlo is a system for Scientific Knowledge Creation on the Grid using a Semantically-Enabled Dataflow Execution Environment. SciFlo leverages Simple Object Access Protocol (SOAP) Web Services and the Grid Computing standards (WS-* standards and the Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable SOAP Services, native executables, local command-line scripts, and python codes into a distributed computing flow (a graph of operators). SciFlo's XML dataflow documents can be a mixture of concrete operators (fully bound operations) and abstract template operators (late binding via semantic lookup). All data objects and operators can be both simply typed (simple and complex types in XML schema) and semantically typed using controlled vocabularies (linked to OWL ontologies such as SWEET). By exploiting ontology-enhanced search and inference, one can discover (and automatically invoke) Web Services and operators that have been semantically labeled as performing the desired transformation, and adapt a particular invocation to the proper interface (number, types, and meaning of inputs and outputs). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. A Visual Programming tool is also being developed, but it is not required. Once an analysis has been specified for a granule or day of data, it can be easily repeated with different control parameters and over months or years of data. SciFlo uses and preserves semantics, and also generates and infers new semantic annotations. Specifically, the SciFlo engine uses semantic metadata to understand (infer) what it is doing and potentially improve the data flow; preserves semantics by saving links to the semantics of (metadata describing) the input datasets, related datasets, and the data transformations (algorithms) used to generate downstream products; generates new metadata by allowing the user to add semantic annotations to the generated data products (or simply accept automatically generated provenance annotations); and infers new semantic metadata by understanding and applying logic to the semantics of the data and the transformations performed. Much ontology development still needs to be done but, nevertheless, SciFlo documents provide a substrate for using and preserving more semantics as ontologies develop. We will give a live demonstration of the growing SciFlo network using an example dataflow in which atmospheric temperature and water vapor profiles from three Earth Observing System (EOS) instruments are retrieved using SOAP (geo-location query & data access) services, co-registered, and visually & statistically compared on demand (see http://sciflo.jpl.nasa.gov for more information).

  19. Modeling and prototyping of biometric systems using dataflow programming

    NASA Astrophysics Data System (ADS)

    Minakova, N.; Petrov, I.

    2018-01-01

    The development of biometric systems is a labor-intensive process, so the creation and analysis of approaches and techniques that support it is an urgent task. This article presents a technique for modeling and prototyping biometric systems based on dataflow programming. The technique includes three main stages: the development of functional blocks, the creation of a dataflow graph, and the generation of a prototype. A specially developed software modeling environment that implements this technique is described. As an example of the use of this technique, the implementation of an iris localization subsystem is demonstrated. A variant of modification of dataflow programming is suggested to solve the problem related to the undefined order of block activation. The main advantage of the presented technique is the ability to visually display and design the model of the biometric system, the rapid creation of a working prototype, and the reuse of previously developed functional blocks.

  20. Software Epistemology

    DTIC Science & Technology

    2016-03-01

    in-vitro decision to incubate a startup, Lexumo [7], which is developing a commercial Software as a Service (SaaS) vulnerability assessment...LTS Label Transition System MUSE Mining and Understanding Software Enclaves RTEMS Real-Time Executive for Multi-processor Systems SaaS Software as a Service SSA Static Single Assignment SWE Software Epistemology UD/DU Def-Use/Use-Def Chains (Dataflow Graph)

  1. Highlights of X-Stack ExM Deliverable Swift/T

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wozniak, Justin M.

    Swift/T is a key success from the "ExM: System support for extreme-scale, many-task applications" X-Stack project, which proposed to use concurrent dataflow as an innovative programming model to exploit extreme parallelism in exascale computers. The Swift/T component of the project reimplemented the Swift language from scratch to allow applications that compose scientific modules together to be built and run on available petascale computers (Blue Gene, Cray). Swift/T does this via a new compiler and runtime that generates and executes the application as an MPI program. We assume that mission-critical emerging exascale applications will be composed as scalable applications using existing software components, connected by data dependencies. Developers wrap native code fragments using a higher-level language, then build composite applications to form a computational experiment. This exemplifies hierarchical concurrency: lower-level messaging libraries are used for fine-grained parallelism; high-level control is used for inter-task coordination. These patterns are best expressed with dataflow, but static DAGs (i.e., other workflow languages) limit the applications that can be built; they do not provide the expressiveness of Swift, such as conditional execution, iteration, and recursive functions.

  2. ATAMM enhancement and multiprocessing performance evaluation

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.

    1994-01-01

    The Algorithm to Architecture Mapping Model (ATAMM) is a Petri net based model which provides a strategy for periodic execution of a class of real-time algorithms on a multicomputer dataflow architecture. The execution of large-grained, decision-free algorithms on homogeneous processing elements is studied. ATAMM provides an analytical basis for calculating performance bounds on throughput characteristics. Extension of ATAMM as a strategy for cyclo-static scheduling provides for a truly distributed ATAMM multicomputer operating system. An ATAMM testbed consisting of a centralized graph manager and three processors is described, using embedded firmware on 68HC11 microcontrollers.

  3. Collaborative Science Using Web Services and the SciFlo Grid Dataflow Engine

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.; Yunck, T.

    2006-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. Once an analysis has been specified for a chunk or day of data, it can be easily repeated with different control parameters or over months of data. Recently, the Earth Science Information Partners (ESIP) Federation sponsored a collaborative activity in which several ESIP members advertised their respective WMS/WCS and SOAP services, developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. For several scenarios, the same collaborative workflow was executed in three ways: using hand-coded scripts, by executing a SciFlo document, and by executing a BPEL workflow document. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, and further collaborations that are being pursued.

  4. Macro-actor execution on multilevel data-driven architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Najjar, W.

    1988-12-31

    The data-flow model of computation brings high programmability to multiprocessors at the expense of increased overhead. Applying the model at a higher level leads to better performance but also introduces loss of parallelism. We demonstrate here syntax-directed program decomposition methods for the creation of large macro-actors in numerical algorithms. In order to alleviate some of the problems introduced by the lower-resolution interpretation, we describe a multi-level resolution approach and analyze the requirements for its actual hardware and software integration.

  5. Software Tool Integrating Data Flow Diagrams and Petri Nets

    NASA Technical Reports Server (NTRS)

    Thronesbery, Carroll; Tavana, Madjid

    2010-01-01

    Data Flow Diagram - Petri Net (DFPN) is a software tool for analyzing other software to be developed. The full name of this program reflects its design, which combines the benefit of data-flow diagrams (which are typically favored by software analysts) with the power and precision of Petri-net models, without requiring specialized Petri-net training. (A Petri net is a particular type of directed graph, a description of which would exceed the scope of this article.) DFPN assists a software analyst in drawing and specifying a data-flow diagram, then translates the diagram into a Petri net, then enables graphical tracing of execution paths through the Petri net for verification, by the end user, of the properties of the software to be developed. In comparison with prior means of verifying the properties of software to be developed, DFPN makes verification by the end user more nearly certain, thereby making it easier to identify and correct misconceptions earlier in the development process, when correction is less expensive. After the verification by the end user, DFPN generates a printable system specification in the form of descriptions of processes and data.
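
    A Petri-net execution step of the kind DFPN lets the end user trace can be sketched as follows: a transition fires when every one of its input places holds a token, consuming one token per input place and marking each output place. This is generic Petri-net semantics, not the DFPN tool itself.

      # Minimal Petri-net firing rule: fire a transition if all input places hold a token.

      def fire(marking, transition):
          """marking: place -> token count; transition: (input_places, output_places)."""
          inputs, outputs = transition
          if all(marking.get(p, 0) > 0 for p in inputs):
              for p in inputs:
                  marking[p] -= 1                       # consume one token per input place
              for p in outputs:
                  marking[p] = marking.get(p, 0) + 1    # mark each output place
              return True
          return False

      m = {"data_ready": 1, "buffer_free": 1}
      print(fire(m, (["data_ready", "buffer_free"], ["processed"])), m)
      # True {'data_ready': 0, 'buffer_free': 0, 'processed': 1}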

  6. An AES chip with DPA resistance using hardware-based random order execution

    NASA Astrophysics Data System (ADS)

    Bo, Yu; Xiangyu, Li; Cong, Chen; Yihe, Sun; Liji, Wu; Xiangmin, Zhang

    2012-06-01

    This paper presents an AES (advanced encryption standard) chip that combats differential power analysis (DPA) side-channel attacks through hardware-based random order execution. Both the decryption and encryption procedures of AES are implemented on the chip. A fine-grained dataflow architecture is proposed, which dynamically exploits intrinsic byte-level independence in the algorithm. A novel circuit called an HMF (Hold-Match-Fetch) unit is proposed for random control, which randomly sets execution orders for concurrent operations. The AES chip was manufactured in SMIC 0.18 μm technology. The average energy for encrypting one group of plain texts (with 128-bit secret keys) is 19 nJ. The core area is 0.43 mm². A sophisticated experimental setup was built to test the DPA resistance. Measurement-based experimental results show that one byte of a secret key cannot be disclosed from our chip under random mode after 64000 power traces were used in the DPA attack. Compared with the corresponding fixed order execution, hardware-based random order execution improves the DPA resistance by at least 21 times.

  7. Isomorphisms between Petri nets and dataflow graphs

    NASA Technical Reports Server (NTRS)

    Kavi, Krishna M.; Buckles, Billy P.; Bhat, U. Narayan

    1987-01-01

    Dataflow graphs are a generalized model of computation. Uninterpreted dataflow graphs with nondeterminism resolved via probabilities are shown to be isomorphic to a class of Petri nets known as free choice nets. Petri net analysis methods are readily available in the literature and this result makes those methods accessible to dataflow research. Nevertheless, combinatorial explosion can render Petri net analysis inoperative. Using a previously known technique for decomposing free choice nets into smaller components, it is demonstrated that, in principle, it is possible to determine aspects of the overall behavior from the particular behavior of components.

  8. Automating the Processing of Earth Observation Data

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Pang, Wan-Lin; Nemani, Ramakrishna; Votava, Petr

    2003-01-01

    NASA's vision for Earth science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we are developing a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products.

  9. Modeling heterogeneous processor scheduling for real time systems

    NASA Technical Reports Server (NTRS)

    Leathrum, J. F.; Mielke, R. R.; Stoughton, J. W.

    1994-01-01

    A new model is presented to describe dataflow algorithms implemented in a multiprocessing system. Called the resource/data flow graph (RDFG), the model explicitly represents cyclo-static processor schedules as circuits of processor arcs which reflect the order in which processors execute graph nodes. The model also allows hard real-time deadlines to be guaranteed. When unfolded, the model statically identifies the processor schedule. The model is therefore useful for determining the throughput and latency of systems with heterogeneous processors. The applicability of the model is demonstrated using a space surveillance algorithm.

  10. Parallel Processing with Digital Signal Processing Hardware and Software

    NASA Technical Reports Server (NTRS)

    Swenson, Cory V.

    1995-01-01

    The assembling and testing of a parallel processing system is described which will allow a user to move a Digital Signal Processing (DSP) application from the design stage to the execution/analysis stage through the use of several software tools and hardware devices. The system will be used to demonstrate the feasibility of the Algorithm To Architecture Mapping Model (ATAMM) dataflow paradigm for static multiprocessor solutions of DSP applications. The individual components comprising the system are described followed by the installation procedure, research topics, and initial program development.

  11. Compile-Time Schedulability Analysis of Communicating Concurrent Programs

    DTIC Science & Technology

    2006-06-28

    synchronize via the read and write operations on the FIFO channels. These operations have been implemented with the help of semaphores, which... 1.1.2 Synchronous Dataflow... 1.1.3 Boolean Dataflow... described by concurrent programs... A synchronous dataflow model, its topology matrix, and repetition vector... Select and

  12. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    PubMed Central

    2011-01-01

    Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples. PMID:21352538

  13. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines.

    PubMed

    Cieślik, Marcin; Mura, Cameron

    2011-02-25

    Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.
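
    The flow-based style described in these two records can be illustrated in plain Python: components are small functions, pipes are lazy iterators, and a process pool maps one stage over batches of inputs. This sketch shows the paradigm only; the component names are made up, and it is not the PaPy API (see http://muralab.org/PaPy for the real toolkit).

      # Toy flow-based pipeline: parse -> analyze, processed in batches over a worker pool.
      from multiprocessing import Pool

      def parse(record):
          return record.strip().upper()                              # toy "parser" component

      def gc_content(seq):
          return sum(c in "GC" for c in seq) / max(len(seq), 1)      # toy "analysis" component

      def pipeline(records, batch_size=4):
          with Pool() as pool:
              for i in range(0, len(records), batch_size):           # batched data flow
                  batch = records[i:i + batch_size]
                  yield from pool.map(gc_content, map(parse, batch)) # stages connected by data-pipes

      if __name__ == "__main__":
          print(list(pipeline(["acgt", "ggcc", "atat", "gcgc", "aaaa"])))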

  14. Multiverse data-flow control.

    PubMed

    Schindler, Benjamin; Waser, Jürgen; Ribičić, Hrvoje; Fuchs, Raphael; Peikert, Ronald

    2013-06-01

    In this paper, we present a data-flow system which supports comparative analysis of time-dependent data and interactive simulation steering. The system creates data on-the-fly to allow for the exploration of different parameters and the investigation of multiple scenarios. Existing data-flow architectures provide no generic approach to handle modules that perform complex temporal processing such as particle tracing or statistical analysis over time. Moreover, there is no solution to create and manage module data, which is associated with alternative scenarios. Our solution is based on generic data-flow algorithms to automate this process, enabling elaborate data-flow procedures, such as simulation, temporal integration or data aggregation over many time steps in many worlds. To hide the complexity from the user, we extend the World Lines interaction techniques to control the novel data-flow architecture. The concept of multiple, special-purpose cursors is introduced to let users intuitively navigate through time and alternative scenarios. Users specify only what they want to see, the decision which data are required is handled automatically. The concepts are explained by taking the example of the simulation and analysis of material transport in levee-breach scenarios. To strengthen the general applicability, we demonstrate the investigation of vortices in an offline-simulated dam-break data set.

  15. Nodes on ropes: a comprehensive data and control flow for steering ensemble simulations.

    PubMed

    Waser, Jürgen; Ribičić, Hrvoje; Fuchs, Raphael; Hirsch, Christian; Schindler, Benjamin; Blöschl, Günther; Gröller, M Eduard

    2011-12-01

    Flood disasters are the most common natural risk and tremendous efforts are spent to improve their simulation and management. However, simulation-based investigation of actions that can be taken in case of flood emergencies is rarely done. This is in part due to the lack of a comprehensive framework which integrates and facilitates these efforts. In this paper, we tackle several problems which are related to steering a flood simulation. One issue is related to uncertainty. We need to account for uncertain knowledge about the environment, such as levee-breach locations. Furthermore, the steering process has to reveal how these uncertainties in the boundary conditions affect the confidence in the simulation outcome. Another important problem is that the simulation setup is often hidden in a black-box. We expose system internals and show that simulation steering can be comprehensible at the same time. This is important because the domain expert needs to be able to modify the simulation setup in order to include local knowledge and experience. In the proposed solution, users steer parameter studies through the World Lines interface to account for input uncertainties. The transport of steering information to the underlying data-flow components is handled by a novel meta-flow. The meta-flow is an extension to a standard data-flow network, comprising additional nodes and ropes to abstract parameter control. The meta-flow has a visual representation to inform the user about which control operations happen. Finally, we present the idea to use the data-flow diagram itself for visualizing steering information and simulation results. We discuss a case-study in collaboration with a domain expert who proposes different actions to protect a virtual city from imminent flooding. The key to choosing the best response strategy is the ability to compare different regions of the parameter space while retaining an understanding of what is happening inside the data-flow system. © 2011 IEEE

  16. GENESIS SciFlo: Choreographing Interoperable Web Services on the Grid using a Semantically-Enabled Dataflow Execution Environment

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.

    2007-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. SciFlo also publishes its own SOAP services for space/time query and subsetting of Earth Science datasets, and automated access to large datasets via lists of (FTP, HTTP, or DAP) URLs which point to on-line HDF or netCDF files. Typical distributed workflows obtain datasets by calling standard WMS/WCS servers or discovering and fetching data granules from ftp sites; invoke remote analysis operators available as SOAP services (interface described by a WSDL document); and merge results into binary containers (netCDF or HDF files) for further analysis using local executable operators. Naming conventions (HDFEOS and CF-1.0 for netCDF) are exploited to automatically understand and read on-line datasets. More interoperable conventions, and broader adoption of existing conventions, are vital if we are to "scale up" automated choreography of Web Services beyond toy applications. Recently, the ESIP Federation sponsored a collaborative activity in which several ESIP members developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, the benefits of doing collaborative science analysis at the "touch of a button" once services are connected, and further collaborations that are being pursued.

  17. Parceling the Power.

    ERIC Educational Resources Information Center

    Hiatt, Blanchard; Gwynne, Peter

    1984-01-01

    To make computing power broadly available and truly friendly, both soft and hard meshing and synchronization problems will have to be solved. Possible solutions and research related to these problems are discussed. Topics considered include compilers, parallelism, networks, distributed sensors, dataflow, CEDAR system (using dataflow principles),…

  18. FX-87 performance measurements: data-flow implementation. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammel, R.T.; Gifford, D.K.

    1988-11-01

    This report documents a series of experiments performed to explore the thesis that the FX-87 effect system permits a compiler to schedule imperative programs (i.e., programs that may contain side-effects) for execution on a parallel computer. The authors analyze how much the FX-87 static effect system can improve the execution times of five benchmark programs on a parallel graph interpreter. Three of the benchmark programs do not use side-effects (factorial, fibonacci, and polynomial division) and thus did not have any effect-induced constraints. Their FX-87 performance was comparable to their performance in a purely functional language. Two of the benchmark programs use side effects (DNA sequence matching and Scheme interpretation), and the compiler was able to use effect information to reduce their execution times by factors of 1.7 to 5.4 when compared with sequential execution times. These results support the thesis that a static effect system is a powerful tool for compilation to multiprocessor computers. However, the graph interpreter used was based on unrealistic assumptions, and thus the results may not accurately reflect the performance of a practical FX-87 implementation. The results also suggest that conventional loop analysis would complement the FX-87 effect system.

  19. Applying a visual language for image processing as a graphical teaching tool in medical imaging

    NASA Astrophysics Data System (ADS)

    Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language using visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled 'Visualization of Vision Algorithms' (VIVA). Iconic representations of simple image processing steps are placed into a workbench screen and connected together into a dataflow path by the user. As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
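
    One of the operations mentioned above, unsharp masking, is itself a short chain of simple steps (blur, subtract, add the detail back), which is exactly the kind of dataflow path such a visual language lets a user assemble from icons. The NumPy/SciPy sketch below illustrates the operation itself; it is not the VIVA-based tool described in the record.

      # Unsharp masking as a three-step dataflow: low-pass, high-frequency residue, enhanced output.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def unsharp_mask(image, sigma=2.0, amount=1.0):
          blurred = gaussian_filter(image.astype(float), sigma)    # blur stage (low-pass)
          detail = image - blurred                                 # high-frequency residue
          return image + amount * detail                           # add detail back in

      img = np.random.rand(64, 64)                                 # stand-in for a mammogram slice
      sharpened = unsharp_mask(img, sigma=1.5, amount=0.7)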

  20. Performance analysis of a large-grain dataflow scheduling paradigm

    NASA Technical Reports Server (NTRS)

    Young, Steven D.; Wills, Robert W.

    1993-01-01

    A paradigm for scheduling computations on a network of multiprocessors using large-grain dataflow scheduling at run time is described and analyzed. The computations to be scheduled must follow a static flow graph, while the schedule itself is dynamic (i.e., determined at run time). Many applications characterized by static flow exist, including real-time control and digital signal processing. With the advent of computer-aided software engineering (CASE) tools for capturing software designs in dataflow-like structures, macro-dataflow scheduling becomes increasingly attractive, if not necessary. For parallel implementations, using the macro-dataflow method allows the scheduling to be insulated from the application designer and enables maximum utilization of available resources. Further, by allowing multitasking, processor utilizations can approach 100 percent while maintaining maximum speedup. Extensive simulation studies are performed on 4-, 8-, and 16-processor architectures that reflect the effects of communication delays, scheduling delays, algorithm class, and multitasking on performance and speedup gains.

  1. Dataflow Architectures.

    DTIC Science & Technology

    1986-02-12

    of Electrical Engineering and Computer Science, MIT, Cambridge, MA, June 1983. 33. Hiraki, K., K. Nishida and T. Shimada. "Evaluation of Associative...J. R. Gurd. "A Practical Dataflow Computer". Computer 15, 2 (February 1982), 51-57. 50. Yuba, T., T. Shimada, K. Hiraki, and H. Kashiwagi. Sigma-1: A

  2. Managing Parallelism and Resources in Scientific Dataflow Programs

    DTIC Science & Technology

    1990-03-01

    1983. [52] K. Hiraki, K. Nishida, S. Sekiguchi, and T. Shimada. Maintenance architecture and its LSI implementation of a dataflow computer with a... Hiraki, and K. Nishida. An architecture of a data flow machine and its evaluation. In Proceedings of CompCon 84, pages 486-490. IEEE, 1984. [84] N

  3. ATLAS DataFlow Infrastructure: Recent results from ATLAS cosmic and first-beam data-taking

    NASA Astrophysics Data System (ADS)

    Vandelli, Wainer; ATLAS TDAQ Collaboration

    2010-04-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to the mass storage. Several optimized and multi-threaded applications fulfill this purpose operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to efficiently transport event data with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. Routing and streaming capabilities as well as monitoring and data-accounting functionalities are also fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented test-bed for the evaluation of the performance of the ATLAS DataFlow, in terms of functionality, robustness and stability. In addition, operating the system far from its design specifications helped exercise its flexibility and contributed to understanding its limitations. Moreover, the integration with the detector and the interfacing with the off-line data processing and management were able to take advantage of this extended data-taking period as well. In this paper we report on the usage of the DataFlow infrastructure during the ATLAS data-taking. These results, backed up by complementary performance tests, validate the architecture of the ATLAS DataFlow and prove that the system is robust, flexible and scalable enough to cope with the final requirements of the ATLAS experiment.

  4. Dataflow models for fault-tolerant control systems

    NASA Technical Reports Server (NTRS)

    Papadopoulos, G. M.

    1984-01-01

    Dataflow concepts are used to generate a unified hardware/software model of redundant physical systems which are prone to faults. Basic results in input congruence and synchronization are shown to reduce to a simple model of data exchanges between processing sites. Procedures are given for the construction of congruence schemata, the distinguishing features of any correctly designed redundant system.

  5. Algorithm Optimally Orders Forward-Chaining Inference Rules

    NASA Technical Reports Server (NTRS)

    James, Mark

    2008-01-01

    People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in very inefficient execution of those rules, since they are frequently order sensitive. This is relevant to tasks like those of the Deep Space Network in that it allows a knowledge base to be developed incrementally and then automatically ordered for efficiency. Although data flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems, including knowledge-based systems. However, this approach for exhaustively computing data-flow information cannot be applied directly to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequent clause in a knowledge base, optimally ordering the rules to minimize inference cycles. The algorithm orders a knowledge base composed of forward-chaining inference rules such that independent inference cycle executions are minimized, thus resulting in significantly faster execution. This algorithm was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, and it resulted in a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a knowledge base that is completely unordered, the improvement is much greater.
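
    The producer/consumer ordering idea can be illustrated with a small sketch (not the SHINE algorithm itself): treat each rule's antecedents as consumed facts and its consequents as produced facts, and topologically sort the rules so producers precede consumers.

```python
# Hedged sketch of producer/consumer rule ordering (not the SHINE algorithm):
# a rule that produces a fact is placed before every rule that consumes it,
# via a topological sort of the rule dependency graph (Python 3.9+ graphlib).
from graphlib import TopologicalSorter

rules = {                       # rule -> (antecedent facts, consequent facts)
    "r1": ({"b"}, {"c"}),
    "r2": ({"a"}, {"b"}),
    "r3": ({"c"}, {"d"}),
}

def order_rules(rules):
    deps = {name: set() for name in rules}
    for consumer, (antecedents, _) in rules.items():
        for producer, (_, consequents) in rules.items():
            if producer != consumer and antecedents & consequents:
                deps[consumer].add(producer)    # producer must fire first
    return list(TopologicalSorter(deps).static_order())

print(order_rules(rules))       # ['r2', 'r1', 'r3']
```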

  6. PolyCheck: Dynamic Verification of Iteration Space Transformations on Affine Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Wenlei; Krishnamoorthy, Sriram; Pouchet, Louis-noel

    2016-01-11

    High-level compiler transformations, especially loop transformations, are widely recognized as critical optimizations to restructure programs to improve data locality and expose parallelism. Guaranteeing the correctness of program transformations is essential, and to date three main approaches have been developed: proof of equivalence of affine programs, matching the execution traces of programs, and checking bit-by-bit equivalence of the outputs of the programs. Each technique suffers from limitations in either the kind of transformations supported, space complexity, or the sensitivity to the testing dataset. In this paper, we take a novel approach addressing all three limitations to provide an automatic bug checker to verify any iteration reordering transformations on affine programs, including non-affine transformations, with space consumption proportional to the original program data, and robust to arbitrary datasets of a given size. We achieve this by exploiting the structure of affine program control- and data-flow to generate at compile-time lightweight checker code to be executed within the transformed program. Experimental results assess the correctness and effectiveness of our method, and its increased coverage over previous approaches.

  7. OpenROCS: a software tool to control robotic observatories

    NASA Astrophysics Data System (ADS)

    Colomé, Josep; Sanz, Josep; Vilardell, Francesc; Ribas, Ignasi; Gil, Pere

    2012-09-01

    We present the Open Robotic Observatory Control System (OpenROCS), an open source software platform developed for the robotic control of telescopes. It acts as a software infrastructure that executes all the necessary processes to implement responses to the system events that appear in the routine and non-routine operations associated with data-flow and housekeeping control. The OpenROCS software design and implementation provides high flexibility to adapt to different observatory configurations and event-action specifications. It is based on an abstract model that is independent of the specific hardware or software and is highly configurable. Interfaces to the system components are defined in a simple manner to achieve this goal. We give a detailed description of version 2.0 of this software, based on a modular architecture developed in PHP and XML configuration files, and using standard communication protocols to interface with applications for hardware monitoring and control, environment monitoring, scheduling of tasks, image processing and data quality control. We provide two examples of how it is used as the core element of the control system in two robotic observatories: the Joan Oró Telescope at the Montsec Astronomical Observatory (Catalonia, Spain) and the SuperWASP Qatar Telescope at the Roque de los Muchachos Observatory (Canary Islands, Spain).

  8. Another expert system rule inference based on DNA molecule logic gates

    NASA Astrophysics Data System (ADS)

    Wąsiewicz, Piotr

    2013-10-01

    With the help of the silicon industry, microfluidic processors were invented, utilizing nano membrane valves, pumps and microreactors. These so-called labs-on-a-chip, combined with molecular computing, create molecular systems-on-a-chip. This work presents a new approach to the implementation of molecular inference systems. It requires a unique representation of signals by DNA molecules. The main part of this work includes the concept of logic gates based on typical genetic engineering reactions. The presented method allows for constructing logic gates with many inputs and for executing them in the same number of elementary operations, regardless of the number of input signals. Every microreactor of the lab-on-a-chip performs one unique operation on input molecules and can be connected by dataflow output-input connections to other ones.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, W.

    Building something which could be called "virtual reality" (VR) is something of a challenge, particularly when nobody really seems to agree on a definition of VR. The author wanted to combine scientific visualization with VR, resulting in an environment useful for assisting scientific research. He demonstrates the combination of VR and scientific visualization in a prototype application. The VR application constructed consists of a dataflow based system for performing scientific visualization (AVS), extensions to the system to support VR input devices and a numerical simulation ported into the dataflow environment. The VR system includes two inexpensive, off-the-shelf VR devices and some custom code. A working system was assembled with about two man-months of effort. The system allows the user to specify parameters for a chemical flooding simulation and some viewing parameters using VR input devices, and to view the output using VR output devices. In chemical flooding, there is a subsurface region that contains chemicals which are to be removed. Secondary oil recovery and environmental remediation are typical applications of chemical flooding. The process assumes one or more injection wells, and one or more production wells. Chemicals or water are pumped into the ground, mobilizing and displacing hydrocarbons or contaminants. The placement of the production and injection wells, and other parameters of the wells, are the most important variables in the simulation.

  10. Region Templates: Data Representation and Management for High-Throughput Image Analysis

    PubMed Central

    Pan, Tony; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Klasky, Scott; Saltz, Joel

    2015-01-01

    We introduce a region template abstraction and framework for the efficient storage, management and processing of common data types in analysis of large datasets of high resolution images on clusters of hybrid computing nodes. The region template abstraction provides a generic container template for common data structures, such as points, arrays, regions, and object sets, within a spatial and temporal bounding box. It allows for different data management strategies and I/O implementations, while providing a homogeneous, unified interface to applications for data storage and retrieval. A region template application is represented as a hierarchical dataflow in which each computing stage may be represented as another dataflow of finer-grain tasks. The execution of the application is coordinated by a runtime system that implements optimizations for hybrid machines, including performance-aware scheduling for maximizing the utilization of computing devices and techniques to reduce the impact of data transfers between CPUs and GPUs. An experimental evaluation on a state-of-the-art hybrid cluster using a microscopy imaging application shows that the abstraction adds negligible overhead (about 3%) and achieves good scalability and high data transfer rates. Optimizations in a high speed disk based storage implementation of the abstraction to support asynchronous data transfers and computation result in an application performance gain of about 1.13×. Finally, a processing rate of 11,730 4K×4K tiles per minute was achieved for the microscopy imaging application on a cluster with 100 nodes (300 GPUs and 1,200 CPU cores). This computation rate enables studies with very large datasets. PMID:26139953
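
    A hedged sketch of the kind of container the abstract describes follows: a generic payload keyed by name and bound to a spatial/temporal bounding box. The class and method names are illustrative, not the framework's actual API.

```python
# Hedged sketch of a region-template-style container: a generic payload keyed
# by name, bound to a spatial/temporal bounding box, with the storage backend
# left pluggable. Class and method names are illustrative, not the actual API.
from dataclasses import dataclass, field
from typing import Any, Dict, Tuple

@dataclass
class RegionTemplate:
    bbox: Tuple[int, int, int, int]          # (x_min, y_min, x_max, y_max)
    t_range: Tuple[int, int] = (0, 0)        # temporal extent
    data: Dict[str, Any] = field(default_factory=dict)

    def insert(self, name: str, obj: Any) -> None:
        self.data[name] = obj                # e.g. a mask array or an object set

    def retrieve(self, name: str) -> Any:
        return self.data[name]

# A coarse-grain stage hands a region template to finer-grain tasks, which
# read and write named data objects through the same interface.
tile = RegionTemplate(bbox=(0, 0, 4096, 4096))
tile.insert("nuclei_mask", [[0, 1], [1, 0]])
mask = tile.retrieve("nuclei_mask")
```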

  11. Development of visual programming techniques to integrate theoretical modeling into the scientific planning and instrument operations environment of ISTP

    NASA Technical Reports Server (NTRS)

    Goodrich, Charles C.

    1993-01-01

    The goal of this project is to investigate the use of visualization software based on the visual programming and data-flow paradigms to meet the needs of the SPOF and through it the International Solar Terrestrial Physics (ISTP) science community. Specific needs we address include science planning, data interpretation, comparisons of data with simulation and model results, and data acquisition. Our accomplishments during the twelve month grant period are discussed below.

  12. A Fault Oblivious Extreme-Scale Execution Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKie, Jim

    The FOX project, funded under the ASCR X-stack I program, developed systems software and runtime libraries for a new approach to the data and work distribution for massively parallel, fault oblivious application execution. Our work was motivated by the premise that exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. To deliver the capability of exascale hardware, the systems software must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. Our OS research has prototyped new methods to provide efficient resource sharing, synchronization, and protection in a many-core compute node. We have experimented with alternative task/dataflow programming models and shown scalability in some cases to hundreds of thousands of cores. Much of our software is in active development through open source projects. Concepts from FOX are being pursued in next generation exascale operating systems. Our OS work focused on adaptive, application-tailored OS services optimized for multi- to many-core processors. We developed a new operating system, NIX, which supports role-based allocation of cores to processes and was released to open source. We contributed to the IBM FusedOS project, which promoted the concept of latency-optimized and throughput-optimized cores. We built a task queue library based on a distributed, fault-tolerant key-value store and identified scaling issues. A second fault-tolerant task parallel library was developed, based on the Linda tuple space model, that used low-level interconnect primitives for optimized communication. We designed fault tolerance mechanisms for task parallel computations employing work stealing for load balancing that scaled to the largest existing supercomputers. Finally, we implemented the Elastic Building Blocks runtime, a library to manage object-oriented distributed software components. To support the research, we won two INCITE awards for time on Intrepid (BG/P) and Mira (BG/Q). Much of our work has had impact in the OS and runtime community through the ASCR Exascale OS/R workshop and report, leading to the research agenda of the Exascale OS/R program. Our project was, however, also affected by attrition of multiple PIs. While the PIs continued to participate and offer guidance as time permitted, losing these key individuals was unfortunate both for the project and for the DOE HPC community.

  13. Exploiting loop level parallelism in nonprocedural dataflow programs

    NASA Technical Reports Server (NTRS)

    Gokhale, Maya B.

    1987-01-01

    This paper discusses how loop-level parallelism is detected in a nonprocedural dataflow program, and how a procedural program with concurrent loops is scheduled. Also discussed is a program restructuring technique which may be applied to recursive equations so that concurrent loops may be generated for a seemingly iterative computation. A compiler which generates C code for the language described below has been implemented. The scheduling component of the compiler and the restructuring transformation are described.

  14. Rapid Prototyping of High Performance Signal Processing Applications

    NASA Astrophysics Data System (ADS)

    Sane, Nimish

    Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this region of high performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We, thus, have a vast design space to explore based on performance trade-offs, and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration, and develop scalable and portable prototypes, model based design tools are increasingly used in design and implementation of embedded systems. These tools allow scalable high-level representations, model based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstractions and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows: 1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages. We have shown how an underlying design tool can systematically exploit a high-level application specification consisting of topological patterns in various aspects of the design flow. 2. We have formulated the core functional dataflow (CFDF) model of computation, which can be used to model a wide variety of deterministic dynamic dataflow behaviors. We have also presented key features of the CFDF model and tools based on these features. These tools provide support for heterogeneous dataflow behaviors, an intuitive and common framework for functional specification, support for functional simulation, portability from several existing dataflow models to CFDF, integrated emphasis on minimally-restricted specification of actor functionality, and support for efficient static, quasi-static, and dynamic scheduling techniques. 3. We have developed a generalized scheduling technique for CFDF graphs based on decomposition of a CFDF graph into static graphs that interact at run-time. Furthermore, we have refined this generalized scheduling technique using a new notion of "mode grouping," which better exposes the underlying static behavior. We have also developed a scheduling technique for a class of dynamic applications that generates parameterized looped schedules (PLSs), which can handle dynamic dataflow behavior without major limitations on compile-time predictability. 4. We have demonstrated the use of dataflow-based approaches for design and implementation of radio astronomy DSP systems using an application example of a tunable digital downconverter (TDD) for spectrometers. 
Design and implementation of this module has been an integral part of this thesis work. This thesis demonstrates a design flow that consists of a high-level software prototype, analysis, and simulation using the dataflow interchange format (DIF) tool, and integration of this design with the existing tool flow for the target implementation on an FPGA platform, called interconnect break-out board (IBOB). We have also explored the trade-off between low hardware cost for fixed configurations of digital downconverters and flexibility offered by TDD designs. 5. This thesis has contributed significantly to the development and release of the latest version of a graph package oriented toward models of computation (MoCGraph). Our enhancements to this package include support for tree data structures, and generalized schedule trees (GSTs), which provide a useful data structure for a wide variety of schedule representations. Our extensions to the MoCGraph package provided key support for the CFDF model, and functional simulation capabilities in the DIF package.
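
    The enable/invoke semantics of the core functional dataflow (CFDF) model mentioned above can be sketched as follows; this is an illustrative Python rendering, not the DIF package implementation.

```python
# Illustrative rendering of enable/invoke actor semantics in the style of core
# functional dataflow (CFDF): each mode has fixed token rates, enable() checks
# token availability for the current mode, and invoke() fires and selects the
# next mode. Not the DIF package implementation.
from collections import deque

class SwitchActor:
    """Routes each data token to one of two outputs based on a control token."""
    RATES = {"control": {"ctrl": 1, "data": 0},   # tokens consumed per port
             "route":   {"ctrl": 0, "data": 1}}

    def __init__(self):
        self.mode, self.branch = "control", None

    def enable(self, inputs):
        return all(len(inputs[p]) >= n for p, n in self.RATES[self.mode].items())

    def invoke(self, inputs, outputs):
        if self.mode == "control":
            self.branch = inputs["ctrl"].popleft()           # 0 or 1
            self.mode = "route"
        else:
            outputs[self.branch].append(inputs["data"].popleft())
            self.mode = "control"

inputs = {"ctrl": deque([0, 1]), "data": deque(["a", "b"])}
outputs = {0: [], 1: []}
actor = SwitchActor()
while actor.enable(inputs):
    actor.invoke(inputs, outputs)
print(outputs)    # {0: ['a'], 1: ['b']}
```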

  15. Automated Data Processing as an AI Planning Problem

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Pang, Wanlin; Nemani, Ramakrishna; Votava, Petr

    2003-01-01

    NASA's vision for Earth Science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we have developed a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products. Data processing domains are substantially different from other planning domains that have been explored, and this has led us to substantially different choices in terms of representation and algorithms. We discuss some of these differences and describe the approach we have adopted.

  16. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    NASA Astrophysics Data System (ADS)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

    Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It makes it possible to define, using BIT, the behaviour of several components assembled to process a flow of data. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
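
    A minimal sketch of the virtual-component idea follows: a test-only wrapper spanning a chain of data-flow components, so a built-in test can push a stimulus in at the head of the flow and assert on what emerges at the tail. The component and method names are illustrative, not the authors' framework.

```python
# Minimal sketch of the virtual-component idea: a test-only wrapper that spans
# a chain of data-flow components so a built-in test can push a stimulus into
# the head of the flow and assert on what emerges at the tail. The names here
# are illustrative, not the authors' framework.
class Component:
    def __init__(self, func):
        self.func, self.next, self.sink = func, None, None

    def push(self, data):
        out = self.func(data)
        if self.next is not None:
            self.next.push(out)
        else:
            self.sink = out          # last component keeps the flow's result

class VirtualComponent:
    def __init__(self, head, tail):
        self.head, self.tail = head, tail

    def run_test(self, stimulus, expected):
        self.head.push(stimulus)
        assert self.tail.sink == expected, (self.tail.sink, expected)

parse = Component(lambda s: int(s))
scale = Component(lambda x: 10 * x)
parse.next = scale
VirtualComponent(parse, scale).run_test("4", 40)   # built-in data-flow test passes
```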

  17. A Metadata Action Language

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Clancy, Dan (Technical Monitor)

    2001-01-01

    The data management problem comprises data processing and data tracking. Data processing is the creation of new data based on existing data sources. Data tracking consists of storing metadata descriptions of available data. This paper addresses the data management problem by casting it as an AI planning problem. Actions are data-processing commands, plans are dataflow programs and goals are metadata descriptions of desired data products. Data manipulation is simply plan generation and execution, and a key component of data tracking is inferring the effects of an observed plan. We introduce a new action language for data management domains, called ADILM. We discuss the connection between data processing and information integration and show how a language for the latter must be modified to support the former. The paper also discusses information gathering within a data-processing framework, and shows how ADILM metadata expressions are a generalization of Local Completeness.

  18. A logical model of cooperating rule-based systems

    NASA Technical Reports Server (NTRS)

    Bailin, Sidney C.; Moore, John M.; Hilberg, Robert H.; Murphy, Elizabeth D.; Bahder, Shari A.

    1989-01-01

    A model is developed to assist in the planning, specification, development, and verification of space information systems involving distributed rule-based systems. The model is based on an analysis of possible uses of rule-based systems in control centers. This analysis is summarized as a data-flow model for a hypothetical intelligent control center. From this data-flow model, the logical model of cooperating rule-based systems is extracted. This model consists of four layers of increasing capability: (1) communicating agents, (2) belief-sharing knowledge sources, (3) goal-sharing interest areas, and (4) task-sharing job roles.

  19. Prototyping scalable digital signal processing systems for radio astronomy using dataflow models

    NASA Astrophysics Data System (ADS)

    Sane, N.; Ford, J.; Harris, A. I.; Bhattacharyya, S. S.

    2012-05-01

    There is a growing trend toward using high-level tools for design and implementation of radio astronomy digital signal processing (DSP) systems. Such tools, for example, those from the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER), are usually platform-specific, and lack high-level, platform-independent, portable, scalable application specifications. This limits the designer's ability to experiment with designs at a high level of abstraction and early in the development cycle. We address some of these issues using a model-based design approach employing dataflow models. We demonstrate this approach by applying it to the design of a tunable digital downconverter (TDD) used for narrow-bandwidth spectroscopy. Our design is targeted toward an FPGA platform, called the Interconnect Break-out Board (IBOB), that is available from CASPER. We use the term TDD to refer to a digital downconverter for which the decimation factor and center frequency can be reconfigured without the need for regenerating the hardware code. Such a design is currently not available in the CASPER DSP library. The work presented in this paper focuses on two aspects. First, we introduce and demonstrate a dataflow-based design approach using the dataflow interchange format (DIF) tool for high-level application specification, and we integrate this approach with the CASPER tool flow. Second, we explore the trade-off between the flexibility of TDD designs and the low hardware cost of fixed-configuration digital downconverter (FDD) designs that use the available CASPER DSP library. We further explore this trade-off in the context of a two-stage downconversion scheme employing a combination of TDD or FDD designs.
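
    The signal-processing structure of such a TDD (mix against a tunable local oscillator, low-pass filter, decimate) can be sketched in a few lines of NumPy; the FPGA/IBOB design in the paper is of course different, and the sample rate and test tone below are invented for illustration.

```python
# Hedged NumPy sketch of what a tunable digital downconverter computes: mix
# against a tunable local oscillator, low-pass filter, then decimate. The
# IBOB/FPGA design in the paper differs; the sample rate and test tone below
# are invented for illustration.
import numpy as np
from scipy.signal import firwin, lfilter

def tdd(x, fs, f_center, decimation, ntaps=129):
    n = np.arange(len(x))
    mixed = x * np.exp(-2j * np.pi * f_center / fs * n)       # shift band to DC
    lp = firwin(ntaps, cutoff=0.5 * fs / decimation, fs=fs)   # anti-alias filter
    filtered = lfilter(lp, 1.0, mixed)
    return filtered[::decimation]                             # reduce sample rate

fs = 1024e6                                   # 1024 MHz sampling (illustrative)
t = np.arange(4096) / fs
x = np.cos(2 * np.pi * 300e6 * t)             # test tone at 300 MHz
baseband = tdd(x, fs, f_center=300e6, decimation=16)   # retune without new hardware
```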

  20. From Provenance Standards and Tools to Queries and Actionable Provenance

    NASA Astrophysics Data System (ADS)

    Ludaescher, B.

    2017-12-01

    The W3C PROV standard provides a minimal core for sharing retrospective provenance information for scientific workflows and scripts. PROV extensions such as DataONE's ProvONE model are necessary for linking runtime observables in retrospective provenance records with conceptual-level prospective provenance information, i.e., workflow (or dataflow) graphs. Runtime provenance recorders, such as DataONE's RunManager for R, or noWorkflow for Python capture retrospective provenance automatically. YesWorkflow (YW) is a toolkit that allows researchers to declare high-level prospective provenance models of scripts via simple inline comments (YW-annotations), revealing the computational modules and dataflow dependencies in the script. By combining and linking both forms of provenance, important queries and use cases can be supported that neither provenance model can afford on its own. We present existing and emerging provenance tools developed for the DataONE and SKOPE (Synthesizing Knowledge of Past Environments) projects. We show how the different tools can be used individually and in combination to model, capture, share, query, and visualize provenance information. We also present challenges and opportunities for making provenance information more immediately actionable for the researchers who create it in the first place. We argue that such a shift towards "provenance-for-self" is necessary to accelerate the creation, sharing, and use of provenance in support of transparent, reproducible computational and data science.
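
    For illustration, inline prospective-provenance annotations in the style of YesWorkflow look roughly like the sketch below; the exact keyword set should be checked against the YW documentation, and the script itself is a made-up example.

```python
# Sketch of inline prospective-provenance annotations in the style of
# YesWorkflow: block structure and dataflow are declared in ordinary comments,
# so a tool can recover the workflow graph without running the script. The
# exact keyword spelling should be checked against the YW documentation, and
# the script itself is a made-up example.

# @BEGIN clean_and_plot
# @IN raw_csv
# @OUT figure

# @BEGIN clean
# @IN raw_csv
# @OUT clean_table
def clean(raw_csv):
    return [row for row in open(raw_csv) if row.strip()]
# @END clean

# @BEGIN plot
# @IN clean_table
# @OUT figure
def plot(clean_table, figure="out/plot.png"):
    with open(figure, "w") as f:
        f.write(f"{len(clean_table)} rows\n")   # stand-in for real plotting
# @END plot

# @END clean_and_plot
```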

  1. Proceedings: Sisal `93

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feo, J.T.

    1993-10-01

    This report contains papers on: Programmability and performance issues; The case of an iterative partial differential equation solver; Implementing the kernel of the Australian Region Weather Prediction Model in Sisal; Even and quarter-even prime length symmetric FFTs and their Sisal implementations; Top-down thread generation for Sisal; Overlapping communications and computations on NUMA architectures; Compiling technique based on dataflow analysis for the functional programming language Valid; Copy elimination for true multidimensional arrays in Sisal 2.0; Increasing parallelism for an optimization that reduces copying in IF2 graphs; Caching in on Sisal; Cache performance of Sisal vs. FORTRAN; FFT algorithms on a shared-memory multiprocessor; A parallel implementation of nonnumeric search problems in Sisal; Computer vision algorithms in Sisal; Compilation of Sisal for a high-performance data driven vector processor; Sisal on distributed memory machines; A virtual shared addressing system for distributed memory Sisal; Developing a high-performance FFT algorithm in Sisal for a vector supercomputer; Implementation issues for IF2 on a static data-flow architecture; and Systematic control of parallelism in array-based data-flow computation. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  2. DI: An interactive debugging interpreter for applicative languages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skedzielewski, S.K.; Yates, R.K.; Oldehoeft, R.R.

    1987-03-12

    The DI interpreter is both a debugger and interpreter of SISAL programs. Its use as a program interpreter is only a small part of its role; it is designed to be a tool for studying compilation techniques for applicative languages. DI interprets dataflow graphs expressed in the IF1 and IF2 languages, and is heavily instrumented to report dynamic storage activity, reference counting, and copying and updating of structured data values. It also aids the SISAL language evaluation by providing an interim execution vehicle for SISAL programs. DI provides determinate, sequential interpretation of graph nodes for sequential and parallel operations in a canonical order. As a debugging aid, DI allows tracing, breakpointing, and interactive display of program data values. DI handles creation of SISAL and IF1 error values for each data type and propagates them according to a well-defined algebra. We have begun to implement IF1 optimizers and have measured the improvements with DI.

  3. A Web Based Collaborative Design Environment for Spacecraft

    NASA Technical Reports Server (NTRS)

    Dunphy, Julia

    1998-01-01

    In this era of shrinking federal budgets in the USA we need to dramatically improve our efficiency in the spacecraft engineering design process. We have come up with a method which captures much of the experts' expertise in a dataflow design graph: a seamlessly connectable set of local and remote design tools; seamlessly connectable web-based design tools; and a web browser interface to the developing spacecraft design. We have recently completed our first web browser interface and demonstrated its utility in the design of an aeroshell using design tools located at web sites at three NASA facilities. Multiple design engineers and managers are now able to interrogate the design engine simultaneously and find out what the design looks like at any point in the design cycle, what its parameters are, and how it reacts to adverse space environments.

  4. PLAStiCC: Predictive Look-Ahead Scheduling for Continuous dataflows on Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor K.

    2014-05-27

    Scalable stream processing and continuous dataflow systems are gaining traction with the rise of big data due to the need for processing high velocity data in near real time. Unlike batch processing systems such as MapReduce and workflows, static scheduling strategies fall short for continuous dataflows due to the variations in the input data rates and the need for sustained throughput. The elastic resource provisioning of cloud infrastructure is valuable to meet the changing resource needs of such continuous applications. However, multi-tenant cloud resources introduce yet another dimension of performance variability that impacts the application's throughput. In this paper we propose PLAStiCC, an adaptive scheduling algorithm that balances resource cost and application throughput using a prediction-based look-ahead approach. It addresses variations not only in the input data rates but also in the underlying cloud infrastructure. In addition, we also propose several simpler static scheduling heuristics that operate in the absence of an accurate performance prediction model. These static and adaptive heuristics are evaluated through extensive simulations using performance traces obtained from public and private IaaS clouds. Our results show an improvement of up to 20% in the overall profit as compared to the reactive adaptation algorithm.

  5. Compiler analysis for irregular problems in FORTRAN D

    NASA Technical Reports Server (NTRS)

    Vonhanxleden, Reinhard; Kennedy, Ken; Koelbel, Charles; Das, Raja; Saltz, Joel

    1992-01-01

    We developed a dataflow framework which provides a basis for rigorously defining strategies to make use of runtime preprocessing methods for distributed memory multiprocessors. In many programs, several loops access the same off-processor memory locations. Our runtime support gives us a mechanism for tracking and reusing copies of off-processor data. A key aspect of our compiler analysis strategy is to determine when it is safe to reuse copies of off-processor data. Another crucial function of the compiler analysis is to identify situations which allow runtime preprocessing overheads to be amortized. This dataflow analysis will make it possible to effectively use the results of interprocedural analysis in our efforts to reduce interprocessor communication and the need for runtime preprocessing.

  6. Thread safe astronomy

    NASA Astrophysics Data System (ADS)

    Seaman, R.

    2008-03-01

    Observational astronomy is the beneficiary of an ancient chain of apprenticeship. Kepler's laws required Tycho's data. As the pace of discoveries has increased over the centuries, so has the cadence of tutelage (literally, "watching over"). Naked eye astronomy is thousands of years old, the telescope hundreds, digital imaging a few decades, but today's undergraduates will use instrumentation yet unbuilt - and thus, unfamiliar to their professors - to complete their doctoral dissertations. Not only has the quickening cadence of astronomical data-taking overrun the apprehension of the science within, but the contingent pace of experimental design threatens our capacity to learn new techniques and apply them productively. Virtual technologies are necessary to accelerate our human processes of perception and comprehension to keep up with astronomical instrumentation and pipelined dataflows. Necessary, but not sufficient. Computers can confuse us as efficiently as they illuminate. Rather, as with neural pathways evolved to meet competitive ecological challenges, astronomical software and data must become organized into ever more coherent `threads' of execution. These are the same threaded constructs as understood by computer science. No datum is an island.

  7. System Definition Document

    DOT National Transportation Integrated Search

    1996-06-12

    The Gary-Chicago-Milwaukee (GCM) Corridor Transportation Information Center (C-TIC) System Definition Document describes the C-TIC concept and defines the high-level processes and dataflows. The Requirements Specification together with the Inte...

  8. Information-Systems Data-Flow Diagram

    NASA Technical Reports Server (NTRS)

    Blosiu, J. O.

    1983-01-01

    Single form presents clear picture of entire system. Form giving relational review of data flow well suited to information system planning, analysis, engineering, and management. Used to review data flow for developing system or one already in use.

  9. Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.

    PubMed

    Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin

    2018-01-01

    We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.

  10. Designing Class Methods from Dataflow Diagrams

    NASA Astrophysics Data System (ADS)

    Shoval, Peretz; Kabeli-Shani, Judith

    A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of methods design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support user tasks. The components and the process logic of each transaction are described in detail, using pseudocode. Then, each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods and main transaction (control) methods. Each method is attached to a proper class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.

  11. Study of Thread Level Parallelism in a Video Encoding Application for Chip Multiprocessor Design

    NASA Astrophysics Data System (ADS)

    Debes, Eric; Kaine, Greg

    2002-11-01

    In media applications there is a high level of available thread level parallelism (TLP). In this paper we study the intra TLP in a video encoder. We show that a well-distributed, highly optimized encoder running on a symmetric multiprocessor (SMP) system can run 3.2 times faster on a 4-way SMP machine than on a single processor. The multithreaded encoder running on an SMP system is then used to understand the requirements of a chip multiprocessor (CMP) architecture, which is one possible architectural direction to better exploit TLP. In the framework of this study, we use a software approach to evaluate the dataflow between processors for the video encoder running on an SMP system. An estimation of the dataflow is done with L2 cache miss event counters using the Intel® VTune™ performance analyzer. The experimental measurements are compared to theoretical results.

  12. Latency in Distributed Acquisition and Rendering for Telepresence Systems.

    PubMed

    Ohl, Stephan; Willert, Malte; Staadt, Oliver

    2015-12-01

    Telepresence systems use 3D techniques to create a more natural human-centered communication over long distances. This work concentrates on the analysis of latency in telepresence systems where acquisition and rendering are distributed. Keeping latency low is important to immerse users in the virtual environment. To better understand latency problems and to identify the source of such latency, we focus on the decomposition of system latency into sub-latencies. We contribute a model of latency and show how it can be used to estimate latencies in a complex telepresence dataflow network. To compare the estimates with real latencies in our prototype, we modify two common latency measurement methods. The presented methodology enables the developer to optimize the design, find implementation issues and gain deeper knowledge about specific sources of latency.
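
    As a trivial illustration of decomposing end-to-end latency into sub-latencies along an acquisition-to-display dataflow, consider the sketch below; the stage names and millisecond figures are invented and are not measurements from the paper.

```python
# Toy illustration of decomposing end-to-end latency into sub-latencies along
# an acquisition-to-display dataflow. The stage names and millisecond figures
# are invented and are not measurements from the paper.
stages_ms = {
    "capture": 16.7,        # camera exposure + readout
    "encode": 5.0,
    "network": 12.0,
    "decode": 4.0,
    "reconstruct": 20.0,    # 3D reconstruction
    "render": 8.0,
    "display": 8.3,         # display refresh
}

end_to_end = sum(stages_ms.values())
worst_stage = max(stages_ms, key=stages_ms.get)
print(f"estimated end-to-end latency: {end_to_end:.1f} ms (dominated by {worst_stage})")
```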

  13. MOOSE: A parallel computational framework for coupled systems of nonlinear equations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derek Gaston; Chris Newman; Glen Hansen

    Systems of coupled, nonlinear partial differential equations (PDEs) often arise in simulation of nuclear processes. MOOSE: Multiphysics Object Oriented Simulation Environment, a parallel computational framework targeted at the solution of such systems, is presented. As opposed to traditional data-flow oriented computational frameworks, MOOSE is instead founded on the mathematical principle of Jacobian-free Newton-Krylov (JFNK) solution methods. Utilizing the mathematical structure present in JFNK, physics expressions are modularized into "Kernels," allowing for rapid production of new simulation tools. In addition, systems are solved implicitly and fully coupled, employing physics-based preconditioning, which provides great flexibility even with large variance in time scales. A summary of the mathematics, an overview of the structure of MOOSE, and several representative solutions from applications built on the framework are presented.

  14. An Execution Service for Grid Computing

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Hu, Chaumin

    2004-01-01

    This paper describes the design and implementation of the IPG Execution Service that reliably executes complex jobs on a computational grid. Our Execution Service is part of the IPG service architecture whose goal is to support location-independent computing. In such an environment, once a user ports an application to one or more hardware/software platforms, the user can describe this environment to the grid; the grid can locate instances of this platform, configure the platform as required for the application, and then execute the application. Our Execution Service runs jobs that set up such environments for applications and executes them. These jobs consist of a set of tasks for executing applications and managing data. The tasks have user-defined starting conditions that allow users to specify complex dependencies, including tasks to execute when other tasks fail, a frequent occurrence in a large distributed system, or are cancelled. The execution task provided by our service also configures the application environment exactly as specified by the user and captures the exit code of the application, features that many grid execution services do not support due to difficulties interfacing to local scheduling systems.

  15. Vulnerability detection using data-flow graphs and SMT solvers

    DTIC Science & Technology

    2016-10-31

    concerns. The framework is modular and pipelined to allow scalable analysis on distributed systems. Our vulnerability detection framework employs machine... Design: We designed the framework to be modular to enable flexible reuse and extendibility. In its current form, our framework performs the following

  16. A High-Speed Design of Montgomery Multiplier

    NASA Astrophysics Data System (ADS)

    Fan, Yibo; Ikenaga, Takeshi; Goto, Satoshi

    With the increase of key length used in public cryptographic algorithms such as RSA and ECC, the speed of Montgomery multiplication becomes a bottleneck. This paper proposes a high-speed design of a Montgomery multiplier. Firstly, a modified scalable high-radix Montgomery algorithm is proposed to reduce the critical path. Secondly, a high-radix clock-saving dataflow is proposed to support high-radix operation and one clock cycle of delay in the dataflow. Finally, a hardware-reused architecture is proposed to reduce the hardware cost, and a parallel radix-16 design of the data path is proposed to accelerate the speed. Using the HHNEC 0.25 μm standard cell library, the implementation results show that the total cost of the Montgomery multiplier is 130 KGates, the clock frequency is 180 MHz, and the throughput of 1024-bit RSA encryption is 352 kbps. This design is suitable for use in high-speed RSA or ECC encryption/decryption. As a scalable design, it supports encryption/decryption of any key length up to the size of the on-chip memory.
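
    For reference, the arithmetic that such a datapath implements is ordinary Montgomery multiplication (REDC); the plain-Python sketch below shows the textbook algorithm, not the paper's high-radix, pipelined design.

```python
# Plain-Python sketch of Montgomery multiplication (REDC), the arithmetic such
# a datapath implements; this is the textbook algorithm, not the paper's
# high-radix, pipelined hardware design. The modulus below is a toy value.
def montgomery_setup(n, r_bits):
    R = 1 << r_bits
    n_inv = pow(n, -1, R)              # n^{-1} mod R (requires odd n, Python 3.8+)
    return (-n_inv) % R                # n' = -n^{-1} mod R

def redc(t, n, n_prime, r_bits):
    R_mask = (1 << r_bits) - 1
    m = ((t & R_mask) * n_prime) & R_mask
    u = (t + m * n) >> r_bits          # exact division by R
    return u - n if u >= n else u      # = t * R^{-1} mod n

def mont_mul(a, b, n, r_bits=1024):
    R = 1 << r_bits
    n_prime = montgomery_setup(n, r_bits)
    a_bar, b_bar = (a * R) % n, (b * R) % n       # convert into Montgomery form
    c_bar = redc(a_bar * b_bar, n, n_prime, r_bits)
    return redc(c_bar, n, n_prime, r_bits)        # convert back out

n = 0xD5E124A77F9B32C1                            # odd toy modulus
assert mont_mul(123456789, 987654321, n) == (123456789 * 987654321) % n
```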

  17. Malware detection and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Ken; Lloyd, Levi; Crussell, Jonathan

    Embodiments of the invention describe systems and methods for malicious software detection and analysis. A binary executable comprising obfuscated malware on a host device may be received, and incident data indicating a time when the binary executable was received and identifying processes operating on the host device may be recorded. The binary executable is analyzed via a scalable plurality of execution environments, including one or more non-virtual execution environments and one or more virtual execution environments, to generate runtime data and deobfuscation data attributable to the binary executable. At least some of the runtime data and deobfuscation data attributable to the binary executable is stored in a shared database, while at least some of the incident data is stored in a private, non-shared database.

  18. A Categorization of Dynamic Analyzers

    NASA Technical Reports Server (NTRS)

    Lujan, Michelle R.

    1997-01-01

    Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on the syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model and monitor programs provides a framework for understanding the program's dynamic behavior from different perspectives through analysis of the input/output data.

  19. A Nonlinear Model for Interactive Data Analysis and Visualization and an Implementation Using Progressive Computation for Massive Remote Climate Data Ensembles

    NASA Astrophysics Data System (ADS)

    Christensen, C.; Liu, S.; Scorzelli, G.; Lee, J. W.; Bremer, P. T.; Summa, B.; Pascucci, V.

    2017-12-01

    The creation, distribution, analysis, and visualization of large spatiotemporal datasets is a growing challenge for the study of climate and weather phenomena in which increasingly massive domains are utilized to resolve finer features, resulting in datasets that are simply too large to be effectively shared. Existing workflows typically consist of pipelines of independent processes that preclude many possible optimizations. As data sizes increase, these pipelines are difficult or impossible to execute interactively and instead simply run as large offline batch processes. Rather than limiting our conceptualization of such systems to pipelines (or dataflows), we propose a new model for interactive data analysis and visualization systems in which we comprehensively consider the processes involved from data inception through analysis and visualization in order to describe systems composed of these processes in a manner that facilitates interactive implementations of the entire system rather than of only a particular component. We demonstrate the application of this new model with the implementation of an interactive system that supports progressive execution of arbitrary user scripts for the analysis and visualization of massive, disparately located climate data ensembles. It is currently in operation as part of the Earth System Grid Federation server running at Lawrence Livermore National Lab, and accessible through both web-based and desktop clients. Our system facilitates interactive analysis and visualization of massive remote datasets up to petabytes in size, such as the 3.5 PB 7km NASA GEOS-5 Nature Run simulation, previously only possible offline or at reduced resolution. To support the community, we have enabled general distribution of our application using public frameworks including Docker and Anaconda.

  20. Dataflow Computation for the J-Machine

    DTIC Science & Technology

    1990-06-01

  1. The Many Ways Data Must Flow.

    ERIC Educational Resources Information Center

    La Brecque, Mort

    1984-01-01

    To break the bottleneck inherent in today's linear computer architectures, parallel schemes (which allow computers to perform multiple tasks at one time) are being devised. Several of these schemes are described. Dataflow devices, parallel number-crunchers, programing languages, and a device based on a neurological model are among the areas…

  2. Turtle Graphics Implementation Using a Graphical Dataflow Programming Approach

    DTIC Science & Technology

    1992-09-01

    this research. The intent of this section is not to teach how to program in LOGO, with the use of Turtle Graphics, but simply to provide an... how to program in Prograph, but only to provide a basic understanding of the Prograph language and its programming environment. Several examples are

  3. Information and Networking Technologies in Russian Libraries. UDT Occasional Paper #1.

    ERIC Educational Resources Information Center

    International Federation of Library Associations and Institutions, Ottawa (Ontario). International Office for Universal Dataflow & Telecommunications.

    The Universal Dataflow and Telecommunications (UDT) Occasional Papers distribute information on the use of networking, information technology and telecommunications by and of interest to the international library community. This occasional paper is comprised of three papers related to technologies in Russian libraries: (1) "The First Russian…

  4. Dataflow Integration and Simulation Techniques for DSP System Design Tools

    DTIC Science & Technology

    2007-01-01

    Lebak, M. Richards, and D. Campbell, "VSIPL: An object-based open standard API for vector, signal, and image processing," in Proceedings of the... Inc., document Version 0.98a. [56] P. Marwedel and G. Goossens, Eds., Code Generation for Embedded Processors. Kluwer Academic Publishers, 1995. [57

  5. AthenaMT: upgrading the ATLAS software framework for the many-core world with multi-threading

    NASA Astrophysics Data System (ADS)

    Leggett, Charles; Baines, John; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; van Gemmeren, Peter; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; ATLAS Collaboration

    2017-10-01

    ATLAS’s current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognized for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. After concluding a rigorous requirements phase, where many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread safe Algorithms. In this paper we report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying handling of features such as event and time dependent data, asynchronous callbacks, metadata, integration with the online High Level Trigger for partial processing in certain regions of interest, concurrent I/O, as well as ensuring thread safety of core services. We also report on upgrading the framework to handle Algorithms that are fully re-entrant.

  6. 3D medical volume reconstruction using web services.

    PubMed

    Kooper, Rob; Shirk, Andrew; Lee, Sang-Chul; Lin, Amy; Folberg, Robert; Bajcsy, Peter

    2008-04-01

    We address the problem of 3D medical volume reconstruction using web services. The use of the proposed web services is motivated by the fact that the problem of 3D medical volume reconstruction requires significant computer resources and human expertise in the medical and computer science areas. Web services are implemented as an additional layer to a dataflow framework called Data to Knowledge. In the collaboration between UIC and NCSA, pre-processed input images at NCSA are made accessible to medical collaborators for registration. Every time UIC medical collaborators inspect images and select corresponding features for registration, the web service at NCSA is contacted and the registration processing query is executed using the Image to Knowledge library of registration methods. Co-registered frames are returned for verification by medical collaborators in a new window. In this paper, we present the 3D volume reconstruction problem requirements and the architecture of the developed prototype system at http://isda.ncsa.uiuc.edu/MedVolume. We also explain the tradeoffs of our system design and provide experimental data to support our system implementation. The prototype system has been used for multiple 3D volume reconstructions of blood vessels and vasculogenic mimicry patterns in histological sections of uveal melanoma studied by fluorescent confocal laser scanning microscope.

  7. Family Environment and Parent-Child Relationships as Related to Executive Functioning in Children

    ERIC Educational Resources Information Center

    Schroeder, Valarie M.; Kelley, Michelle L.

    2010-01-01

    The present study examines the associations between family environment, parenting practices and executive functions in normally developing children. One hundred parents of children between the ages of 5 and 12 completed the Behavior Rating Inventory of Executive Functions, the Family Environment Scale, and the Parent-Child Relationship…

  8. A Programmer’s Assistant for a Special-Purpose Dataflow Language.

    DTIC Science & Technology

    1985-12-01

    valueclasscheck ’strict)) load-qda-kbs Loads the QDA knowledge bases (defun load-qda-kbs () (dolist (kb *qda-kbs*) (kbload (string-append ’host-dir...DeMarco, T., "Structured Analysis and System Specification," GUIDE 47 Proceedings, 1978. Reprinted in Classics in Software Engineering, edited by Edward

  9. Coupling Visualization, Simulation, and Deep Learning for Ensemble Steering of Complex Energy Models: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potter, Kristin C; Brunhart-Lupo, Nicholas J; Bush, Brian W

    We have developed a framework for the exploration, design, and planning of energy systems that combines interactive visualization with machine-learning-based approximations of simulations through a general-purpose dataflow API. Our system provides a visual interface allowing users to explore an ensemble of energy simulations representing a subset of the complex input parameter space, and spawn new simulations to 'fill in' input regions corresponding to new energy system scenarios. Unfortunately, many energy simulations are far too slow to provide interactive responses. To support interactive feedback, we are developing reduced-form models via machine learning techniques, which provide statistically sound estimates of the full simulations at a fraction of the computational cost and which are used as proxies for the full-form models. Fast computation and an agile dataflow enhance the engagement with energy simulations, and allow researchers to better allocate computational resources to capture informative relationships within the system and provide a low-cost method for validating and quality-checking large-scale modeling efforts.
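
    As a rough illustration of the reduced-form-model idea, the sketch below trains a scikit-learn regressor on an ensemble of completed runs and then answers interactive queries from the proxy instead of the slow simulator. The simulator, parameter space, and model choice are hypothetical, not taken from the paper.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def slow_energy_simulation(params):
        # Stand-in for an expensive full-form simulation run.
        x, y = params
        return np.sin(x) * np.cos(y) + 0.1 * x * y

    # Train a reduced-form proxy on an ensemble of completed runs.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 2))
    y = np.array([slow_energy_simulation(p) for p in X])
    proxy = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # The visual interface queries the fast proxy for interactive feedback and
    # spawns new full simulations only where the ensemble is sparse.
    query = np.array([[1.0, -2.0]])
    print("proxy estimate: ", proxy.predict(query)[0])
    print("full simulation:", slow_energy_simulation(query[0]))
    ```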

  10. 40 CFR 68.155 - Executive summary.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 15 2011-07-01 2011-07-01 false Executive summary. 68.155 Section 68.155 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CHEMICAL ACCIDENT PREVENTION PROVISIONS Risk Management Plan § 68.155 Executive summary. The owner or...

  11. 40 CFR 68.155 - Executive summary.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Executive summary. 68.155 Section 68.155 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CHEMICAL ACCIDENT PREVENTION PROVISIONS Risk Management Plan § 68.155 Executive summary. The owner or...

  12. Maternal Executive Function, Harsh Parenting, and Child Conduct Problems

    PubMed Central

    Deater-Deckard, Kirby; Wang, Zhe; Chen, Nan; Bell, Martha Ann

    2012-01-01

    Background Maternal executive function and household regulation both are critical aspects of optimal childrearing, but their interplay is not understood. We tested the hypotheses that 1) the link between challenging child conduct problems and harsh parenting would be strongest for mothers with poorer executive function and weakest among those with better executive function, and 2) this mechanism would be further moderated by the degree of household chaos. Methods The socioeconomically diverse sample included 147 mothers of 3-to-7 year old children. Mothers completed questionnaires and a laboratory assessment of executive function. Results Consistent with hypotheses, harsh parenting was linked with child conduct problems only among mothers with poorer executive function. This effect was particularly strong in calm, predictable environments, but was not evident in chaotic environments. Conclusion Maternal executive function is critical to minimizing harsh parenting in the context of challenging child behavior, but this self-regulation process may not operate well in chaotic environments. PMID:22764829

  13. The Impact of Programming Experience on Successfully Learning Systems Analysis and Design

    ERIC Educational Resources Information Center

    Wong, Wang-chan

    2015-01-01

    In this paper, the author reports the results of an empirical study on the relationship between a student's programming experience and their success in a traditional Systems Analysis and Design (SA&D) class where technical skills such as dataflow analysis and entity relationship data modeling are covered. While it is possible to teach these…

  14. Functional language and data flow architectures

    NASA Technical Reports Server (NTRS)

    Ercegovac, M. D.; Patel, D. R.; Lang, T.

    1983-01-01

    This is a tutorial article about language and architecture approaches for highly concurrent computer systems based on the functional style of programming. The discussion concentrates on the basic aspects of functional languages and on sequencing models such as data-flow, demand-driven, and reduction, which are essential at the machine organization level. Several examples of highly concurrent machines are described.
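
    The two sequencing models most relevant to the dataflow theme can be sketched in a few lines; the toy graph below is illustrative only. In the data-driven version a node fires as soon as all of its operands are available; in the demand-driven (lazy) version a node is evaluated only when some consumer requests its result.

    ```python
    def data_driven(tokens, graph):
        """Fire every node whose inputs are available until nothing new fires."""
        values = dict(tokens)
        fired = True
        while fired:
            fired = False
            for node, (fn, deps) in graph.items():
                if node not in values and all(d in values for d in deps):
                    values[node] = fn(*(values[d] for d in deps))
                    fired = True
        return values

    def demand(node, tokens, graph, cache=None):
        """Evaluate a node only when its value is demanded, memoizing results."""
        cache = {} if cache is None else cache
        if node in tokens:
            return tokens[node]
        if node not in cache:
            fn, deps = graph[node]
            cache[node] = fn(*(demand(d, tokens, graph, cache) for d in deps))
        return cache[node]

    graph = {"sum": (lambda a, b: a + b, ("x", "y")),
             "out": (lambda s: s * s, ("sum",))}
    print(data_driven({"x": 2, "y": 3}, graph)["out"])   # fires every ready node
    print(demand("out", {"x": 2, "y": 3}, graph))        # evaluates only what is needed
    ```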

  15. A Simple Example of an SADMT (SDI (Strategic Defense Initiative) Architecture Dataflow Modeling Technique) Architecture Specification. Version 1.5.

    DTIC Science & Technology

    1988-04-21

    Layton Senior Software Engineer Martin Marietta Denver Aerospace MS L0425 P.O. Box 179 Denver, CO 80201 Larry L. Lehman Integrated Systems Inc. 2500...Mission College Road Santa Clara, CA 95054 Eric Leighninger Dynamics Research 60 Frontage Road Andover, MA 01810 . Peter Lempp Software Products and

  16. Integrated Topside (InTop) Joint Navy - Industry Open Architecture Study

    DTIC Science & Technology

    2010-09-10

    Fig. 6.1-1 — Modified VRT dataflow key...Fig. 6.1-2 — Sample building block description using VRT nomenclature...converter (RF/IF) and the IF to RF converter (IF/RF) uses the VITA-49 format, also referred to as VRT (VITA Radio Transport), for real-time flow of signal

  17. Roadblocks to Change: Executive Behaviors Versus Executive Perceptions.

    ERIC Educational Resources Information Center

    Harris, Thomas E.

    A study analyzed the responses of chief executive officers (CEOs) and company presidents to a leadership test and an organizational environment test, to determine whether these individuals' managerial approaches coincided with their characterizations of their organizations' environments. Subjects, CEOs or presidents of 65 randomly selected…

  18. Resource Management for the Tagged Token Dataflow Architecture.

    DTIC Science & Technology

    1985-01-01

    completely rigorous, formulation of the U-interpreter. The graph schemata presented here differ slightly from those presented in the references...

  19. Development and exemplification of a model for Teacher Assessment in Primary Science

    NASA Astrophysics Data System (ADS)

    Davies, D. J.; Earle, S.; McMahon, K.; Howe, A.; Collier, C.

    2017-09-01

    The Teacher Assessment in Primary Science project is funded by the Primary Science Teaching Trust and based at Bath Spa University. The study aims to develop a whole-school model of valid, reliable and manageable teacher assessment to inform practice and make a positive impact on primary-aged children's learning in science. The model is based on a data-flow 'pyramid' (analogous to the flow of energy through an ecosystem), whereby the rich formative assessment evidence gathered in the classroom is summarised for monitoring, reporting and evaluation purposes [Nuffield Foundation. (2012). Developing policy, principles and practice in primary school science assessment. London: Nuffield Foundation]. Using a design-based research (DBR) methodology, the authors worked in collaboration with teachers from project schools and other expert groups to refine, elaborate, validate and operationalise the data-flow 'pyramid' model, resulting in the development of a whole-school self-evaluation tool. In this paper, we argue that a DBR approach to theory-building and school improvement drawing upon teacher expertise has led to the identification, adaptation and successful scaling up of a promising approach to school self-evaluation in relation to assessment in science.

  20. The Jet Propulsion Laboratory shared control architecture and implementation

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Hayati, Samad

    1990-01-01

    A hardware and software environment for shared control of telerobot task execution has been implemented. Modes of task execution range from fully teleoperated to fully autonomous, as well as shared, where hand controller inputs from the human operator are mixed with autonomous system inputs in real time. The objective of the shared control environment is to aid the telerobot operator during task execution by merging real-time operator control from hand controllers with autonomous control to simplify task execution for the operator. The operator is the principal command source and can assign as much autonomy for a task as desired. The shared control hardware environment consists of two PUMA 560 robots, two 6-axis force reflecting hand controllers, Universal Motor Controllers for each of the robots and hand controllers, a SUN4 computer, and a VME chassis containing 68020 processors and input/output boards. The operator interface for shared control, the User Macro Interface (UMI), is a menu-driven interface to design a task and assign the levels of teleoperated and autonomous control. The operator also sets up the system monitor, which checks safety limits during task execution. Cartesian-space degrees of freedom for teleoperated and/or autonomous control inputs are selected within UMI, as well as the weightings for the teleoperation and autonomous inputs. These are then used during task execution to determine the mix of teleoperation and autonomous inputs. Some of the autonomous control primitives available to the user are Joint-Guarded-Move, Cartesian-Guarded-Move, Move-To-Touch, Pin-Insertion/Removal, Door/Crank-Turn, Bolt-Turn, and Slide. The operator can execute a task using pure teleoperation or mix control execution from the autonomous primitives with teleoperated inputs. Presently, the shared control environment supports single-arm task execution. Work is underway to provide the shared control environment for dual-arm control. Teleoperation during shared control is limited to Cartesian-space control, and no force reflection is provided. Force-reflecting teleoperation and joint-space operator inputs are planned extensions to the environment.
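
    A minimal sketch of the command-mixing step described above, assuming a simple per-axis linear blend; the weights, the 6-vector layout, and the function name are illustrative rather than the JPL implementation.

    ```python
    import numpy as np

    def mix_commands(teleop_cmd, auto_cmd, weights):
        """Blend operator and autonomous Cartesian-space rate commands per axis.

        teleop_cmd, auto_cmd : 6-vectors (x, y, z, roll, pitch, yaw) rates
        weights              : 6-vector in [0, 1]; 1.0 means pure teleoperation
        """
        w = np.clip(np.asarray(weights, dtype=float), 0.0, 1.0)
        return w * np.asarray(teleop_cmd) + (1.0 - w) * np.asarray(auto_cmd)

    # Example: the operator owns translation while a guarded-move primitive owns rotation.
    teleop = [0.05, 0.00, -0.02, 0.0, 0.0, 0.0]
    auto   = [0.00, 0.00,  0.00, 0.0, 0.1, 0.0]
    print(mix_commands(teleop, auto, weights=[1, 1, 1, 0, 0, 0]))
    ```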

  1. Arcade: A Web-Java Based Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  2. 48 CFR 970.5223-1 - Integration of environment, safety, and health into work planning and execution.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Integration of environment, safety, and health into work planning and execution. As prescribed in 970.2303-3(b), insert the following clause: Integration of Environment, Safety, and Health Into Work Planning and... danger to the environment or health and safety of employees or the public, the Contracting Officer may...

  3. 48 CFR 970.5223-1 - Integration of environment, safety, and health into work planning and execution.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Integration of environment, safety, and health into work planning and execution. As prescribed in 970.2303-3(b), insert the following clause: Integration of Environment, Safety, and Health Into Work Planning and... danger to the environment or health and safety of employees or the public, the Contracting Officer may...

  4. 48 CFR 970.5223-1 - Integration of environment, safety, and health into work planning and execution.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Integration of environment, safety, and health into work planning and execution. As prescribed in 970.2303-3(b), insert the following clause: Integration of Environment, Safety, and Health Into Work Planning and... danger to the environment or health and safety of employees or the public, the Contracting Officer may...

  5. 48 CFR 970.5223-1 - Integration of environment, safety, and health into work planning and execution.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Integration of environment, safety, and health into work planning and execution. As prescribed in 970.2303-3(b), insert the following clause: Integration of Environment, Safety, and Health Into Work Planning and... danger to the environment or health and safety of employees or the public, the Contracting Officer may...

  6. 48 CFR 970.5223-1 - Integration of environment, safety, and health into work planning and execution.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Integration of environment... Integration of environment, safety, and health into work planning and execution. As prescribed in 970.2303-3(b), insert the following clause: Integration of Environment, Safety, and Health Into Work Planning and...

  7. An evidence-based structure for transformative nurse executive practice: the model of the interrelationship of leadership, environments, and outcomes for nurse executives (MILE ONE).

    PubMed

    Adams, Jeffrey M; Erickson, Jeanette Ives; Jones, Dorothy A; Paulo, Lisa

    2009-01-01

    Identifying and measuring success within the chief nurse executive (CNE) population have proven complex and challenging for nurse executive educators, policy makers, practitioners, researchers, theory developers, and their constituents. The model of the interrelationship of leadership, environments, and outcomes for nurse executives (MILE ONE) was developed using the concept of consilience (jumping together of ideas) toward limiting the ambiguity surrounding CNE success. The MILE ONE is unique in that it links existing evidence and identifies the continuous and dependent interrelationship among 3 content areas: (1) CNE; (2) nurses' professional practice and work environments; and (3) patient and organizational outcomes. The MILE ONE was developed to operationalize nurse executive influence, define measurement of CNE success, and provide a framework for articulating patient, workforce, and organizational outcome improvement efforts. This article describes the MILE ONE and highlights the evidence-based structure used in its development.

  8. Applying the Model of the Interrelationship of Leadership Environments and Outcomes for Nurse Executives: a community hospital's exemplar in developing staff nurse engagement through documentation improvement initiatives.

    PubMed

    Adams, Jeffrey M; Denham, Debra; Neumeister, Irene Ramirez

    2010-01-01

    The Model of the Interrelationship of Leadership, Environments & Outcomes for Nurse Executives (MILE ONE) was developed on the basis of existing literature related to identifying strategies for simultaneous improvement of leadership, professional practice/work environments (PPWE), and outcomes. Through existing evidence, the MILE ONE identifies the continuous and dependent interrelationship of 3 distinct concept areas: (1) nurse executives influence PPWE, (2) PPWE influence patient and organizational outcomes, and (3) patient and organizational outcomes influence nurse executives. This article highlights the application of the MILE ONE framework to a community district hospital's clinical documentation performance improvement projects. Results suggest that the MILE ONE is a valid and useful framework yielding both anticipated and unexpected enhancements to leaders, environments, and outcomes.

  9. Environment Modeling Using Runtime Values for JPF-Android

    NASA Technical Reports Server (NTRS)

    van der Merwe, Heila; Tkachuk, Oksana; Nel, Seal; van der Merwe, Brink; Visser, Willem

    2015-01-01

    Software applications are developed to be executed in a specific environment. This environment includes external native libraries to add functionality to the application and drivers to fire the application execution. For testing and verification, the environment of an application is simplified and abstracted using models or stubs. Empty stubs, returning default values, are simple to generate automatically, but they do not perform well when the application expects specific return values. Symbolic execution is used to find input parameters for drivers and return values for library stubs, but it struggles to detect the values of complex objects. In this work-in-progress paper, we explore an approach to generate drivers and stubs based on values collected during runtime instead of using default values. Entry-points and methods that need to be modeled are instrumented to log their parameters and return values. The instrumented applications are then executed using a driver and instrumented libraries. The values collected during runtime are used to generate driver and stub values on the fly that improve coverage during verification by enabling the execution of code that previously crashed or was missed. We are implementing this approach to improve the environment model of JPF-Android, our model checking and analysis tool for Android applications.
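
    The record-then-replay idea can be sketched outside the Java/JPF tooling as follows; the decorator, the logged library call, and the stub generator are hypothetical Python stand-ins for the instrumentation described above.

    ```python
    import functools

    RECORDED = {}   # method name -> list of return values observed at runtime

    def record(fn):
        """Instrumentation: log every return value of a library call."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            RECORDED.setdefault(fn.__name__, []).append(result)
            return result
        return wrapper

    @record
    def get_device_id():            # stand-in for a native library method
        return "emulator-5554"

    get_device_id()                 # a concrete run populates the log

    def make_stub(name, default=None):
        """Verification-time stub that replays recorded values instead of defaults."""
        values = iter(RECORDED.get(name, []))
        return lambda *args, **kwargs: next(values, default)

    stub = make_stub("get_device_id")
    print(stub())                   # returns the recorded value, not an empty default
    ```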

  10. High Temperature Tribometer. Phase 1

    DTIC Science & Technology

    1989-06-01

    Figure 2.3.2 Setpoint and Gain Windows in FW.EXE. Figure 2.4.1 Data-Flow Diagram for Data-Acquisition Module. ...mounted in a friction force measuring device. Optimally, material testing results should not be test-machine sensitive; but due to equipment variables...fixed. The friction force due to sliding should be continuously measured. This is optimally done in conjunction with the normal force measurement via

  11. An Advanced Commanding and Telemetry System

    NASA Astrophysics Data System (ADS)

    Hill, Maxwell G. G.

    The Loral Instrumentation System 500 configured as an Advanced Commanding and Telemetry System (ACTS) supports the acquisition of multiple telemetry downlink streams, and simultaneously supports multiple uplink command streams for today's satellite vehicles. By using industry and federal standards, the system is able to support, without relying on a host computer, a true distributed dataflow architecture that is complemented by state-of-the-art RISC-based workstations and file servers.

  12. Software for the EVLA

    NASA Astrophysics Data System (ADS)

    Butler, Bryan J.; van Moorsel, Gustaaf; Tody, Doug

    2004-09-01

    The Expanded Very Large Array (EVLA) project is the next generation instrument for high resolution long-millimeter to short-meter wavelength radio astronomy. It is currently funded by NSF, with completion scheduled for 2012. The EVLA will upgrade the VLA with new feeds, receivers, data transmission hardware, correlator, and a new software system to enable the instrument to achieve its full potential. This software includes both that required for controlling and monitoring the instrument and that involved with the scientific dataflow. We concentrate here on a portion of the dataflow software, including: proposal preparation, submission, and handling; observation preparation, scheduling, and remote monitoring; data archiving; and data post-processing, including both automated (pipeline) and manual processing. The primary goals of the software are: to maximize the scientific return of the EVLA; provide ease of use, for both novices and experts; exploit commonality amongst all NRAO telescopes where possible. This last point is both a bane and a blessing: we are not at liberty to do whatever we want in the software, but on the other hand we may borrow from other projects (notably ALMA and GBT) where appropriate. The software design methodology includes detailed initial use-cases and requirements from the scientists, intimate interaction between the scientists and the programmers during design and implementation, and a thorough testing and acceptance plan.

  13. eHive: an artificial intelligence workflow system for genomic analysis.

    PubMed

    Severin, Jessica; Beal, Kathryn; Vilella, Albert J; Fitzgerald, Stephen; Schuster, Michael; Gordon, Leo; Ureta-Vidal, Abel; Flicek, Paul; Herrero, Javier

    2010-05-11

    The Ensembl project produces updates to its comparative genomics resources with each of its several releases per year. During each release cycle approximately two weeks are allocated to generate all the genomic alignments and the protein homology predictions. The number of calculations required for this task grows approximately quadratically with the number of species. We currently support 50 species in Ensembl and we expect the number to continue to grow in the future. We present eHive, a new fault tolerant distributed processing system initially designed to support comparative genomic analysis, based on blackboard systems, network distributed autonomous agents, dataflow graphs and block-branch diagrams. In the eHive system a MySQL database serves as the central blackboard and the autonomous agent, a Perl script, queries the system and runs jobs as required. The system allows us to define dataflow and branching rules to suit all our production pipelines. We describe the implementation of three pipelines: (1) pairwise whole genome alignments, (2) multiple whole genome alignments and (3) gene trees with protein homology inference. Finally, we show the efficiency of the system in real case scenarios. eHive allows us to produce computationally demanding results in a reliable and efficient way with minimal supervision and high throughput. Further documentation is available at: http://www.ensembl.org/info/docs/eHive/.
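
    A minimal sketch of the blackboard pattern described above, using an in-memory SQLite table in place of the central MySQL database and a Python function in place of the Perl worker; the schema and the single dataflow rule are illustrative only.

    ```python
    import sqlite3

    def seed(db):
        db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, analysis TEXT, "
                   "input TEXT, status TEXT DEFAULT 'READY')")
        db.execute("INSERT INTO jobs (analysis, input) VALUES ('align', 'chr1')")

    def worker(db, rules):
        """Claim one READY job, run it, and flow its output into follow-on jobs."""
        row = db.execute("SELECT id, analysis, input FROM jobs "
                         "WHERE status = 'READY' LIMIT 1").fetchone()
        if row is None:
            return False
        job_id, analysis, inp = row
        db.execute("UPDATE jobs SET status = 'RUN' WHERE id = ?", (job_id,))
        output = f"{analysis}({inp})"                  # stand-in for the real analysis
        for next_analysis in rules.get(analysis, []):  # dataflow rule fires here
            db.execute("INSERT INTO jobs (analysis, input) VALUES (?, ?)",
                       (next_analysis, output))
        db.execute("UPDATE jobs SET status = 'DONE' WHERE id = ?", (job_id,))
        return True

    db = sqlite3.connect(":memory:")
    seed(db)
    rules = {"align": ["build_tree"]}   # the alignment output feeds the tree step
    while worker(db, rules):
        pass
    print(db.execute("SELECT analysis, input, status FROM jobs").fetchall())
    ```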

  14. Debugging expert systems using a dynamically created hypertext network

    NASA Technical Reports Server (NTRS)

    Boyle, Craig D. B.; Schuette, John F.

    1991-01-01

    The labor-intensive nature of expert system writing and debugging motivated this study. The hypothesis is that a hypertext-based debugging tool is easier and faster than one traditional tool, the graphical execution trace. HESDE (Hypertext Expert System Debugging Environment) uses Hypertext nodes and links to represent the objects and their relationships created during the execution of a rule-based expert system. HESDE operates transparently on top of the CLIPS (C Language Integrated Production System) rule-based system environment and is used during the knowledge base debugging process. During the execution process, HESDE builds an execution trace. The facts, rules, and values used are automatically stored in a Hypertext network for each execution cycle. After the execution process, the knowledge engineer may access the Hypertext network and browse the network created. The network may be viewed in terms of rules, facts, and values. An experiment was conducted to compare HESDE with a graphical debugging environment. Subjects were given representative tasks. For speed and accuracy, in eight of the eleven tasks given to subjects, HESDE was significantly better.

  15. 48 CFR 952.223-71 - Integration of environment, safety, and health into work planning and execution.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Integration of environment, safety, and health into work planning and execution. 952.223-71 Section 952.223-71 Federal Acquisition... Provisions and Clauses 952.223-71 Integration of environment, safety, and health into work planning and...

  16. Research into software executives for space operations support

    NASA Technical Reports Server (NTRS)

    Collier, Mark D.

    1990-01-01

    Research concepts pertaining to a software (workstation) executive which will support a distributed processing command and control system characterized by high-performance graphics workstations used as computing nodes are presented. Although a workstation-based distributed processing environment offers many advantages, it also introduces a number of new concerns. In order to solve these problems, allow the environment to function as an integrated system, and present a functional development environment to application programmers, it is necessary to develop an additional layer of software. This 'executive' software integrates the system, provides real-time capabilities, and provides the tools necessary to support the application requirements.

  17. Can reactivity to stress and family environment explain memory and executive function performance in early and middle childhood?

    PubMed

    Piccolo, Luciane da Rosa; Salles, Jerusa Fumagalli de; Falceto, Olga Garcia; Fernandes, Carmen Luiza; Grassi-Oliveira, Rodrigo

    2016-01-01

    According to the literature, children's overall reactivity to stress is associated with their socioeconomic status and family environment. In turn, it has been shown that reactivity to stress is associated with cognitive performance. However, few studies have systematically tested these three constructs together. To investigate the relationship between family environment, salivary cortisol measurements and children's memory and executive function performance. Salivary cortisol levels of 70 children aged 9 or 10 years were measured before and after performing tasks designed to assess memory and executive functions. Questionnaires on socioeconomic issues, family environment and maternal psychopathologies were administered to participants' families during the children's early childhood and again when they reached school age. Data were analyzed by calculating correlations between variables and conducting hierarchical regression. High cortisol levels were associated with poorer working memory and worse performance in tasks involving executive functions, and were also associated with high scores for maternal psychopathology (during early childhood and school age) and family dysfunction. Family environment variables and changes in cortisol levels explain around 20% of the variance in performance of cognitive tasks. Family functioning and maternal psychopathology in early and middle childhood and children's stress levels were associated with children's working memory and executive functioning.

  18. The Environment for Application Software Integration and Execution (EASIE) version 1.0. Volume 1: Executive overview

    NASA Technical Reports Server (NTRS)

    Rowell, Lawrence F.; Davis, John S.

    1989-01-01

    The Environment for Application Software Integration and Execution (EASIE) provides a methodology and a set of software utility programs to ease the task of coordinating engineering design and analysis codes. EASIE was designed to meet the needs of conceptual design engineers that face the task of integrating many stand-alone engineering analysis programs. Using EASIE, programs are integrated through a relational database management system. Volume 1, Executive Overview, gives an overview of the functions provided by EASIE and describes their use. Three operational design systems based upon the EASIE software are briefly described.

  19. Turtlegraphics: A Comparison of Logo and Turbo Pascal.

    ERIC Educational Resources Information Center

    VanLengen, Craig A.

    1989-01-01

    The integrated compiler of the Turbo Pascal environment allows the execution of a completed program independent of the developed environment and with greater execution speed, in comparison with LOGO. Conversion table of turtle-graphic commands for the two languages is presented. (Author/YP)

  20. Deployment of a Prototype Plant GFP Imager at the Arthur Clarke Mars Greenhouse of the Haughton Mars Project.

    PubMed

    Paul, Anna-Lisa; Bamsey, Matthew; Berinstain, Alain; Braham, Stephen; Neron, Philip; Murdoch, Trevor; Graham, Thomas; Ferl, Robert J

    2008-04-18

    The use of engineered plants as biosensors has made elegant strides in the past decades, providing keen insights into the health of plants in general and particularly in the nature and cellular location of stress responses. However, most of the analytical procedures involve laboratory examination of the biosensor plants. With the advent of the green fluorescent protein (GFP) as a biosensor molecule, it became at least theoretically possible for analyses of gene expression to occur telemetrically, with the gene expression information of the plant delivered to the investigator over large distances simply as properly processed fluorescence images. Spaceflight and other extraterrestrial environments provide unique challenges to plant life, challenges that often require changes at the gene expression level to accommodate adaptation and survival. Having previously deployed transgenic plant biosensors to evaluate responses to orbital spaceflight, we wished to develop the plants and especially the imaging devices required to conduct such experiments robotically, without operator intervention, within extraterrestrial environments. This requires the development of an autonomous and remotely operated plant GFP imaging system and concomitant development of the communications infrastructure to manage dataflow from the imaging device. Here we report the results of deploying a prototype GFP imaging system within the Arthur Clarke Mars Greenhouse (ACMG), an autonomously operated greenhouse located within the Haughton Mars Project in the Canadian High Arctic. The results both demonstrate the applicability of the fundamental GFP biosensor technology and highlight the difficulties in collecting and managing telemetric data from challenging deployment environments.

  1. Design and implementation of the GLIF3 guideline execution engine.

    PubMed

    Wang, Dongwen; Peleg, Mor; Tu, Samson W; Boxwala, Aziz A; Ogunyemi, Omolola; Zeng, Qing; Greenes, Robert A; Patel, Vimla L; Shortliffe, Edward H

    2004-10-01

    We have developed the GLIF3 Guideline Execution Engine (GLEE) as a tool for executing guidelines encoded in the GLIF3 format. In addition to serving as an interface to the GLIF3 guideline representation model to support the specified functions, GLEE provides defined interfaces to electronic medical records (EMRs) and other clinical applications to facilitate its integration with the clinical information system at a local institution. The execution model of GLEE takes the "system suggests, user controls" approach. A tracing system is used to record an individual patient's state when a guideline is applied to that patient. GLEE can also support an event-driven execution model once it is linked to the clinical event monitor in a local environment. Evaluation has shown that GLEE can be used effectively for proper execution of guidelines encoded in the GLIF3 format. When using it to execute each guideline in the evaluation, GLEE's performance duplicated that of the reference systems implementing the same guideline but taking different approaches. The execution flexibility and generality provided by GLEE, and its integration with a local environment, need to be further evaluated in clinical settings. Integration of GLEE with a specific event-monitoring and order-entry environment is the next step of our work to demonstrate its use for clinical decision support. Potential uses of GLEE also include quality assurance, guideline development, and medical education.
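
    A minimal sketch of the "system suggests, user controls" loop with per-patient tracing; the guideline steps, decision callback, and trace structure are hypothetical and much simpler than the GLIF3/GLEE interfaces described above.

    ```python
    guideline = [
        {"step": "check_hba1c", "suggest": "Order an HbA1c test"},
        {"step": "adjust_dose", "suggest": "Increase metformin if HbA1c > 7%"},
    ]

    def execute_guideline(steps, patient_id, user_decides, trace):
        """System suggests each step; the user accepts or declines; all is traced."""
        for node in steps:
            decision = user_decides(node["suggest"])      # 'accept' or 'decline'
            trace.setdefault(patient_id, []).append(
                {"step": node["step"], "suggested": node["suggest"],
                 "decision": decision})
            if decision == "accept":
                pass   # here the engine would hand the action to the EMR / order entry

    trace = {}
    execute_guideline(guideline, "patient-42",
                      user_decides=lambda text: "accept", trace=trace)
    for entry in trace["patient-42"]:
        print(entry)
    ```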

  2. Mission Command In A Communications Denied Environment

    DTIC Science & Technology

    2017-02-16

    AIR WAR COLLEGE, AIR UNIVERSITY. Mission Command in a Communications Denied Environment, by Ramon Ahrens, Lieutenant Colonel, GAF...centralized execution. Mission Command is particularly helpful in communication-denied environments. This paper shows the advantages in situations where...Mission Command needs to be practiced and executed in peacetime for it to work during real-world operations. The United States armed forces are

  3. Scalable and Accurate SMT-based Model Checking of Data Flow Systems

    DTIC Science & Technology

    2013-10-30

    guided by the semantics of the description language. In this project we developed instead a complementary and novel approach based on a somewhat brute...believe that our approach could help considerably in expanding the reach of abstract interpretation techniques to a variety of target languages, as...project. We worked on developing a framework for compositional verification that capitalizes on the fact that data-flow languages, such as Lustre, have

  4. Communication-Driven Codesign for Multiprocessor Systems

    DTIC Science & Technology

    2004-01-01

    processors, FPGA or ASIC subsystems, microprocessors, and microcontrollers. When a processor is embedded within a SLOT architecture, one or more...Broderson, Low-power CMOS digital design, IEEE Journal of Solid-State Circuits 27 (1992), no. 4, 473–484. [25] L. Chao and E. Sha, Scheduling data-flow...1997), 239–256. [82] P. K. Murthy, E. G. Cohen, and S. Rowland, System Canvas: A new design environment for embedded DSP and telecommunications

  5. Design of Arithmetic Circuits for Complex Binary Number System

    NASA Astrophysics Data System (ADS)

    Jamil, Tariq

    2011-08-01

    Complex numbers play an important role in various engineering applications. To represent these numbers efficiently for storage and manipulation, a (-1+j)-base complex binary number system (CBNS) has been proposed in the literature. In this paper, designs of nibble-size arithmetic circuits (adder, subtractor, multiplier, divider) are presented. These circuits can be incorporated within von Neumann and associative dataflow processors to achieve higher performance in both sequential and parallel computing paradigms.
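
    To make the (-1+j)-base representation concrete, the small routine below (illustrative, not from the paper) converts a Gaussian integer to its CBNS digits by emitting a 0 or 1 according to parity and then dividing by the base. For example, the real integer 2 comes out as 1100, since (2+2j) + (-2j) = 2.

    ```python
    def to_cbns(a, b):
        """Digits of the Gaussian integer a + bj in base (-1 + j), most significant first."""
        digits = []
        while a != 0 or b != 0:
            if (a + b) % 2 != 0:          # a and b differ in parity -> least digit is 1
                digits.append(1)
                a -= 1                    # remove the digit before dividing
            else:
                digits.append(0)
            # (a + bj) / (-1 + j) = ((b - a) + (-(a + b))j) / 2, exact at this point
            a, b = (b - a) // 2, -(a + b) // 2
        return digits[::-1] or [0]

    print(to_cbns(2, 0))    # [1, 1, 0, 0]  because (2+2j) + (-2j) = 2
    print(to_cbns(0, 1))    # [1, 1]        because (-1+j) + 1 = j
    ```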

  6. Co Modeling and Co Synthesis of Safety Critical Multi threaded Embedded Software for Multi Core Embedded Platforms

    DTIC Science & Technology

    2017-03-20

    computation, Prime Implicates, Boolean Abstraction, real-time embedded software, software synthesis, correct-by-construction software design, model...types for time-dependent data-flow networks". J.-P. Talpin, P. Jouvelot, S. Shukla. ACM-IEEE Conference on Methods and Models for System Design...

  7. Checking for Circular Dependencies in Distributed Stream Programs

    DTIC Science & Technology

    2011-08-29

    extensions to express new complexities more convenient. Teleport messaging (TMG) in the StreamIt language [30] is an example. 1.1 StreamIt Language...dynamicities to an FIR computation. Thies et al. in [30] give a TMG model for distributed stream programs. TMG is a mechanism that implements control...messages for stream graphs. The TMG mechanism is designed not to interfere with original dataflow graphs’ structures and scheduling, therefore a key

  8. Topological Patterns for Scalable Representation and Analysis of Dataflow Graphs

    DTIC Science & Technology

    2011-11-01

    dimensional mesh structure. Such a structure is of particular use to model DSP architectures in which data flows across a network of processing elements...ACSSC.1998.751616 3. Andrews, J.G., Ghosh, A., Muhamed, R.: Fundamentals of WiMAX: understanding broadband wireless networking. Prentice Hall (2007...

  9. A strategy for automatically generating programs in the lucid programming language

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1987-01-01

    A strategy for automatically generating and verifying simple computer programs is described. The programs are specified by a precondition and a postcondition in predicate calculus. The programs generated are in the Lucid programming language, a high-level, data-flow language known for its attractive mathematical properties and ease of program verification. The Lucid programming language is described, and the automatic program generation strategy is described and applied to several example problems.

  10. Addressing Modeling Challenges in Cyber-Physical Systems

    DTIC Science & Technology

    2011-03-04

    A. Lee and Eleftherios Matsikoudis. The semantics of dataflow with firing. In Gérard Huet, Gordon Plotkin, Jean-Jacques Lévy, and Yves Bertot...Computer-Aided Design of Integrated Circuits and Systems, 20(3), 2001. [12] Luca P. Carloni, Roberto Passerone, Alessandro Pinto, and Alberto Sangiovanni...gst/fullpage.html?res= 9504EFDA1738F933A2575AC0A9679C8B63. 20 [15] Abhijit Davare, Douglas Densmore, Trevor Meyerowitz, Alessandro Pinto, Alberto

  11. Dataflow-Based Implementation of Layered Sensing Applications on High-Performance Embedded Processors

    DTIC Science & Technology

    2013-03-01

    [Table excerpt — columns: time (milliseconds), GFLOPS, comparison to GPU peak performance (%); Cascade Gaussian Filtering: 13, 45.19, 6.3; Difference of Gaussian: 0.512, 152...] values for the GPU-targeted actor implementations in terms of Giga Floating Point Operations Per Second (GFLOPS). Our GFLOPS calculation for an actor...kernels. The results for GFLOPS are provided in Table . The actors were implemented on an NVIDIA GTX260 GPU, which provides 715 GFLOPS as peak

  12. Mapping Parameterized Dataflow Graphs onto FPGA Platforms (Preprint)

    DTIC Science & Technology

    2014-02-01

    Shen, Nimish Sane, William Plishker, Shuvra S. Bhattacharyya (University of Maryland), Hojin Kee (National Instruments)...Rodyushkin, A. Kuranov, and V. Eruhimov. Computer vision workload analysis: Case study of video surveillance systems. Intel Technology Journal, 9, 2005...Prototyping, pages 1–7, Fairfax, Virginia, June 2010. [56] H. Wu, C. Shen, S. S. Bhattacharyya, K. Compton, M. Schulte, M. Wolf, and T. Zhang. Design and

  13. Security model for picture archiving and communication systems.

    PubMed

    Harding, D B; Gac, R J; Reynolds, C T; Romlein, J; Chacko, A K

    2000-05-01

    The modern information revolution has facilitated a metamorphosis of health care delivery fraught with the challenges of securing patient-sensitive data. To accommodate this reality, Congress passed the Health Insurance Portability and Accountability Act (HIPAA). While final guidance has not fully been resolved at this time, it is up to the health care community to develop and implement comprehensive security strategies founded on procedural, hardware and software solutions in preparation for future controls. The Virtual Radiology Environment (VRE) Project, a landmark US Army picture archiving and communications system (PACS) implemented across 10 geographically dispersed medical facilities, has addressed that challenge by planning for the secure transmission of medical images and reports over their local (LAN) and wide area network (WAN) infrastructure. Their model, which is transferable to general PACS implementations, encompasses a strategy of application risk and dataflow identification, data auditing, security policy definition, and procedural controls. When combined with hardware and software solutions that are both non-performance-limiting and scalable, the comprehensive approach will not only sufficiently address the current security requirements, but also accommodate the natural evolution of the enterprise security model.

  14. Contribution of Family Environment to Pediatric Cochlear Implant Users’ Speech and Language Outcomes: Some Preliminary Findings

    PubMed Central

    Holt, Rachael Frush; Beer, Jessica; Kronenberger, William G.; Pisoni, David B.; Lalonde, Kaylah

    2012-01-01

    Purpose To evaluate the family environments of children with cochlear implants and to examine relationships between family environment and post-implant language development and executive function. Method Forty-five families of children with cochlear implants completed a self-report family environment questionnaire (FES) and an inventory of executive function (BRIEF/BRIEF-P). Children’s receptive vocabulary (PPVT-4) and global language skills (PLS-4/CELF-4) were also evaluated. Results The family environments of children with cochlear implants differed from those of normal-hearing children, but not in clinically significant ways. Language development and executive function were found to be atypical, but not uncharacteristic of this clinical population. Families with higher levels of self-reported control had children with smaller vocabularies. Families reporting a higher emphasis on achievement had children with fewer executive function and working memory problems. Finally, families reporting a higher emphasis on organization had children with fewer problems related to inhibition. Conclusions Some of the variability in cochlear implantation outcomes that have protracted periods of development is related to family environment. Because family environment can be modified and enhanced by therapy or education, these preliminary findings hold promise for future work in helping families to create robust language-learning environments that can maximize their child’s potential with a cochlear implant. PMID:22232387

  15. Children's Elementary School Social Experience and Executive Functions Development: Introduction to a Special Section.

    PubMed

    van Lier, Pol A C; Deater-Deckard, Kirby

    2016-01-01

    Children's executive functions, encompassing inhibitory control, working memory, and attention, are vital for their self-regulation. With the transition to formal schooling, children need to learn to manage their emotions and behavior in a new and complex social environment in which the intensity of social interactions with peers and teachers increases with age. Stronger executive function skills facilitate children's social development. In addition, new experiences in the social environments of school also may influence executive function development. The focus of this special section is on this potential impact of elementary school social experiences with peers and teachers on the development of children's executive functions. The collection of papers encompasses various aspects of peer and teacher social environments, and covers broad as well as specific facets and measures of executive functions, including neural responses. The collection of papers samples developmental periods that span preschool through mid-adolescence. In this introduction, we summarize and highlight the main findings of each of the papers, organized around social interactions with peers and interactions with teachers. We conclude our synopsis with implications for future research, and a specific focus on prevention and intervention.

  16. Intelligent Rover Execution for Detecting Life in the Atacama Desert

    NASA Technical Reports Server (NTRS)

    Baskaran, Vijayakumar; Muscettola, Nicola; Rijsman, David; Plaunt, Chris; Fry, Chuck

    2006-01-01

    On-board supervisory execution is crucial for the deployment of more capable and autonomous remote explorers. Planetary science is considering robotic explorers operating for long periods of time without ground supervision while interacting with a changing and often hostile environment. Effective and robust operations require on-board supervisory control with a high level of awareness of the principles of functioning of the environment and of the numerous internal subsystems that need to be coordinated. We describe an on-board rover executive that was deployed on a rover as part of the "Limits of Life in the Atacama Desert (LITA)" field campaign sponsored by the NASA ASTEP program. The executive was built using the Intelligent Distributed Execution Architecture (IDEA), an execution framework that uses model-based and plan-based supervisory control as its fundamental computational paradigm. We present the results of the third field experiment conducted in the Atacama desert (Chile) in August-October 2005.

  17. Linking innovative measurement technologies (ConMon and Dataflow© systems) for high-resolution temporal and spatial dissolved oxygen criteria assessment.

    PubMed

    O'Leary, C A; Perry, E; Bayard, A; Wainger, L; Boynton, W R

    2015-10-01

    One consequence of nutrient-induced eutrophication in shallow estuarine waters is the occurrence of hypoxia and anoxia that has serious impacts on biota, habitats, and biogeochemical cycles of important elements. Because of the important role of dissolved oxygen (DO) on these ecosystem features, a variety of DO criteria have been established as indicators of system condition. However, DO dynamics are complex and vary on time scales ranging from diel to decadal and spatial scales from meters to multiple kilometers. Because of these complexities, determining DO criteria attainment or failure remains difficult. We propose a method for linking two common measurement technologies for shallow water DO criteria assessment using a Chesapeake Bay tributary as a test case. Dataflow© is a spatially intensive (30-60-m collection intervals) system used to map surface water conditions at the whole estuary scale, and ConMon is a high-frequency (15-min collection intervals) fixed station approach. The former technology is effective with spatial descriptions but poor regarding temporal resolution, while the latter provides excellent temporal but very limited spatial resolution. Our methodology for combining the strengths of these measurement technologies involved a sequence of steps. First, a statistical model of surface water DO dynamics, based on temporally intense ConMon data, was developed. The results of this model were used to calculate daily DO minimum concentrations. Second, this model was then inserted into Dataflow©-generated spatial maps of DO conditions and used to adjust measured DO concentrations to daily minimum concentrations. This information was used to assess DO criteria compliance at the full tributary scale. Model results indicated that it is vital to consider the short-term time scale DO criteria across both space and time concurrently. Large fluctuations in DO occurred within a 24-h time period, and DO dynamics varied across the length and width of the tributary. The overall result provided a more detailed and realistic characterization of the shallow water DO minimum conditions that have the potential to be extended to other tributaries and regions. Broader applications of this model include instantaneous DO criteria assessment, utilizing this model in combination with aerial remote sensing, and developing DO amplitude as an indicator of impaired water bodies.
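
    A rough sketch of the linkage idea, assuming a very simple diel-minimum adjustment rather than the authors' statistical model: the high-frequency record supplies an offset between daytime readings and the daily minimum, and that offset is applied to each spatial survey value before comparison with an illustrative criterion.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # One day of 15-minute ConMon-style DO readings (mg/L) at a fixed station.
    t = np.arange(0, 24, 0.25)
    do_conmon = 6 + 2 * np.sin(2 * np.pi * (t - 14) / 24) + rng.normal(0, 0.2, t.size)

    # Crude stand-in for the fitted model: offset from daytime mean to daily minimum.
    daytime_mean = do_conmon[(t > 10) & (t < 16)].mean()
    offset = do_conmon.min() - daytime_mean

    # Spatially intensive Dataflow-style surface readings collected in daytime.
    do_dataflow = np.array([7.1, 5.4, 4.8, 6.3, 5.0])
    estimated_daily_min = do_dataflow + offset

    criterion = 3.0   # illustrative threshold, not a regulatory value
    print("estimated daily minima:", np.round(estimated_daily_min, 2))
    print("fails criterion:", estimated_daily_min < criterion)
    ```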

  18. First 3 years of operation of RIACS (Research Institute for Advanced Computer Science) (1983-1985)

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    The focus of the Research Institute for Advanced Computer Science (RIACS) is to explore matches between advanced computing architectures and the processes of scientific research. An architecture evaluation of the MIT static dataflow machine, specification of a graphical language for expressing distributed computations, and specification of an expert system for aiding in grid generation for two-dimensional flow problems were initiated. Research projects for 1984 and 1985 are summarized.

  19. eHive: An Artificial Intelligence workflow system for genomic analysis

    PubMed Central

    2010-01-01

    Background The Ensembl project produces updates to its comparative genomics resources with each of its several releases per year. During each release cycle approximately two weeks are allocated to generate all the genomic alignments and the protein homology predictions. The number of calculations required for this task grows approximately quadratically with the number of species. We currently support 50 species in Ensembl and we expect the number to continue to grow in the future. Results We present eHive, a new fault tolerant distributed processing system initially designed to support comparative genomic analysis, based on blackboard systems, network distributed autonomous agents, dataflow graphs and block-branch diagrams. In the eHive system a MySQL database serves as the central blackboard and the autonomous agent, a Perl script, queries the system and runs jobs as required. The system allows us to define dataflow and branching rules to suit all our production pipelines. We describe the implementation of three pipelines: (1) pairwise whole genome alignments, (2) multiple whole genome alignments and (3) gene trees with protein homology inference. Finally, we show the efficiency of the system in real case scenarios. Conclusions eHive allows us to produce computationally demanding results in a reliable and efficient way with minimal supervision and high throughput. Further documentation is available at: http://www.ensembl.org/info/docs/eHive/. PMID:20459813

  20. A Collaborative Extensible User Environment for Simulation and Knowledge Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freedman, Vicky L.; Lansing, Carina S.; Porter, Ellen A.

    2015-06-01

    In scientific simulation, scientists use measured data to create numerical models, execute simulations and analyze results from advanced simulators executing on high-performance computing platforms. This process usually requires a team of scientists collaborating on data collection, model creation and analysis, and on authorship of publications and data. This paper shows that scientific teams can benefit from a user environment called Akuna that permits subsurface scientists in disparate locations to collaborate on numerical modeling and analysis projects. The Akuna user environment is built on the Velo framework that provides both a rich client environment for conducting and analyzing simulations and a Web environment for data sharing and annotation. Akuna is an extensible toolset that integrates with Velo, and is designed to support any type of simulator. This is achieved through data-driven user interface generation, use of a customizable knowledge management platform, and an extensible framework for simulation execution, monitoring and analysis. This paper describes how the customized Velo content management system and the Akuna toolset are used to integrate and enhance an effective collaborative research and application environment. The extensible architecture of Akuna is also described and demonstrates its usage for creation and execution of a 3D subsurface simulation.

  1. English Business Communication Needs of Mexican Executives in a Distance-Learning Class

    ERIC Educational Resources Information Center

    Grosse, Christine Uber

    2004-01-01

    Many firms within and outside the United States operate in multilingual environments that require executives to do business in English as well as in other languages. Executives for whom English is a second language often face special challenges communicating in such settings. This study examines how 115 executives in a distance-learning business…

  2. Executive Perceptions on International Education in a Globalized Environment: The Travel Industry's Point of View

    ERIC Educational Resources Information Center

    Munoz, J. Mark; Katsioloudes, Marios I.

    2004-01-01

    Research on globalization has determined travel executives' perceptions of the psychological implications brought about by an interconnected global environment and the implications on international education. With the concepts of Clyne and Rizvi (1998) and Pittaway, Ferguson, and Breen (1998) on the value of cross-cultural interaction as a…

  3. 48 CFR 952.223-71 - Integration of environment, safety, and health into work planning and execution.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Provisions and Clauses 952.223-71 Integration of environment, safety, and health into work planning and..., safety, and health into work planning and execution. 952.223-71 Section 952.223-71 Federal Acquisition... safety and health standards applicable to the work conditions of contractor and subcontractor employees...

  4. 48 CFR 952.223-71 - Integration of environment, safety, and health into work planning and execution.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Provisions and Clauses 952.223-71 Integration of environment, safety, and health into work planning and..., safety, and health into work planning and execution. 952.223-71 Section 952.223-71 Federal Acquisition... safety and health standards applicable to the work conditions of contractor and subcontractor employees...

  5. 48 CFR 952.223-71 - Integration of environment, safety, and health into work planning and execution.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Provisions and Clauses 952.223-71 Integration of environment, safety, and health into work planning and..., safety, and health into work planning and execution. 952.223-71 Section 952.223-71 Federal Acquisition... safety and health standards applicable to the work conditions of contractor and subcontractor employees...

  6. 48 CFR 952.223-71 - Integration of environment, safety, and health into work planning and execution.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Provisions and Clauses 952.223-71 Integration of environment, safety, and health into work planning and..., safety, and health into work planning and execution. 952.223-71 Section 952.223-71 Federal Acquisition... safety and health standards applicable to the work conditions of contractor and subcontractor employees...

  7. Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System.

    PubMed

    Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C; Parisot, Sarah; Rueckert, Daniel

    2017-01-01

    OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI).

  8. Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System

    PubMed Central

    Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C.; Parisot, Sarah; Rueckert, Daniel

    2017-01-01

    OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI). PMID:28381997

  9. Executive Information Systems for Providing Next Generation Strategic Information: An Evaluation of EIS (Executive Information System) Software and Recommended Applicability within the FAA Computing Environment

    DTIC Science & Technology

    1989-01-01

    the FAA Computing Environment...him in advance by analysts and developers -- an electronic version of the Performance Indicators report. Ease of Use. pcEXPRESS has an automatic link...overcome within the required timeframe. These advanced features of the EXPRESS system allow the fastest possible response to changing executive information

  10. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

  11. Evaluation of Supported Placements in Integrated Community Environments Project (SPICE). Executive Summary of the Final Report.

    ERIC Educational Resources Information Center

    Wilson, Leslie; And Others

    This executive summary presents highlights of a study which sought to determine whether participants in the Supported Placements in Integrated Community Environments project were better off after moving to community homes from intermediate care facilities and skilled nursing facilities, and to determine the variables that contribute to quality…

  12. Social Factors in the Development of Early Executive Functioning: A Closer Look at the Caregiving Environment

    ERIC Educational Resources Information Center

    Bernier, Annie; Carlson, Stephanie M.; Deschenes, Marie; Matte-Gagne, Celia

    2012-01-01

    This study investigated prospective links between quality of the early caregiving environment and children's subsequent executive functioning (EF). Sixty-two families were met on five occasions, allowing for assessment of maternal interactive behavior, paternal interactive behavior, and child attachment security between 1 and 2 years of age, and…

  13. FRIEDA: Flexible Robust Intelligent Elastic Data Management Framework

    DOE PAGES

    Ghoshal, Devarshi; Hendrix, Valerie; Fox, William; ...

    2017-02-01

    Scientific applications are increasingly using cloud resources for their data analysis workflows. However, managing data effectively and efficiently over these cloud resources is challenging due to the myriad storage choices with different performance, cost trade-offs, complex application choices and complexity associated with elasticity, failure rates in these environments. The different data access patterns for data-intensive scientific applications require a more flexible and robust data management solution than the ones currently in existence. FRIEDA is a Flexible Robust Intelligent Elastic Data Management framework that employs a range of data management strategies in cloud environments. FRIEDA can manage storage and data lifecycle of applications in cloud environments. There are four different stages in the data management lifecycle of FRIEDA – (i) storage planning, (ii) provisioning and preparation, (iii) data placement, and (iv) execution. FRIEDA defines a data control plane and an execution plane. The data control plane defines the data partition and distribution strategy, whereas the execution plane manages the execution of the application using a master-worker paradigm. FRIEDA also provides different data management strategies, either to partition the data in real-time, or predetermine the data partitions prior to application execution.
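    The split described above between a data control plane (which decides how data is partitioned and distributed) and an execution plane (which runs the application master-worker style) can be sketched as follows. This is an illustrative reduction, not FRIEDA's actual interfaces; the strategy and class names are hypothetical.

```python
# Illustrative sketch of a control plane choosing a data-partition strategy
# and an execution plane running the partitions master-worker style.
# Names and strategies are hypothetical, not FRIEDA's actual interfaces.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List, Sequence


def fixed_partitions(data: Sequence, n: int) -> List[Sequence]:
    """Predetermine partitions before execution."""
    size = max(1, len(data) // n)
    return [data[i:i + size] for i in range(0, len(data), size)]


def realtime_partitions(data: Sequence, chunk: int) -> List[Sequence]:
    """Hand the data out in small chunks as workers become free."""
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]


class ControlPlane:
    def __init__(self, strategy: Callable[..., List[Sequence]], **params):
        self.strategy, self.params = strategy, params

    def plan(self, data: Sequence) -> List[Sequence]:
        return self.strategy(data, **self.params)


class ExecutionPlane:
    """Master-worker execution of one task over the planned partitions."""
    def __init__(self, workers: int = 4):
        self.workers = workers

    def run(self, task: Callable, partitions: List[Sequence]) -> list:
        with ThreadPoolExecutor(max_workers=self.workers) as pool:
            return list(pool.map(task, partitions))


if __name__ == "__main__":
    data = list(range(100))
    control = ControlPlane(fixed_partitions, n=4)   # partition/distribution decision
    # or: control = ControlPlane(realtime_partitions, chunk=10)
    execute = ExecutionPlane(workers=4)             # master-worker execution
    print(execute.run(sum, control.plan(data)))     # e.g. [300, 925, 1550, 2175]
```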

  14. FRIEDA: Flexible Robust Intelligent Elastic Data Management Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoshal, Devarshi; Hendrix, Valerie; Fox, William

    Scientific applications are increasingly using cloud resources for their data analysis workflows. However, managing data effectively and efficiently over these cloud resources is challenging due to the myriad storage choices with different performance, cost trade-offs, complex application choices and complexity associated with elasticity, failure rates in these environments. The different data access patterns for data-intensive scientific applications require a more flexible and robust data management solution than the ones currently in existence. FRIEDA is a Flexible Robust Intelligent Elastic Data Management framework that employs a range of data management strategies in cloud environments. FRIEDA can manage storage and data lifecycle of applications in cloud environments. There are four different stages in the data management lifecycle of FRIEDA – (i) storage planning, (ii) provisioning and preparation, (iii) data placement, and (iv) execution. FRIEDA defines a data control plane and an execution plane. The data control plane defines the data partition and distribution strategy, whereas the execution plane manages the execution of the application using a master-worker paradigm. FRIEDA also provides different data management strategies, either to partition the data in real-time, or predetermine the data partitions prior to application execution.

  15. Executive and Intellectual Functioning in School-Aged Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Kuusisto, Marika A.; Nieminen, Pirkko E.; Helminen, Mika T.; Kleemola, Leenamaija

    2017-01-01

    Background: Earlier research and clinical practice show that specific language impairment (SLI) is often associated with nonverbal cognitive deficits and weakened skills in executive functions (EFs). Executive deficits may have a remarkable influence on a child's everyday activities in the home and school environments. However, research…

  16. Democratizing Authority in the Built Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, Michael P; Kolb, John; Chen, Kaifei

    Operating systems and applications in the built environment have relied upon central authorization and management mechanisms which restrict their scalability, especially with respect to administrative overhead. We propose a new set of primitives encompassing syndication, security, and service execution that unifies the management of applications and services across the built environment, while enabling participants to individually delegate privilege across multiple administrative domains with no loss of security or manageability. We show how to leverage a decentralized authorization syndication platform to extend the design of building operating systems beyond the single administrative domain of a building. The authorization system leveraged is based on blockchain smart contracts to permit decentralized and democratized delegation of authorization without central trust. Upon this, a publish/subscribe syndication tier and a containerized service execution environment are constructed. Combined, these mechanisms solve problems of delegation, federation, device protection and service execution that arise throughout the built environment. We leverage a high-fidelity city-scale emulation to verify the scalability of the authorization tier, and briefly describe a prototypical democratized operating system for the built environment using this foundation.

  17. A Multiple Case Study: Gauging the Effects of Poverty on School Readiness amongst Preschoolers

    ERIC Educational Resources Information Center

    Onesto, Melissa J.

    2017-01-01

    The home environment, which includes the level of organization and stability in the home, plays a crucial role in the development of executive function and oral language skills. For children who live in a low-SES environment, executive function and oral language acquisition are inferior compared to that of students living at other economic levels.…

  18. 24 CFR 58.5 - Related Federal laws and authorities.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...). (2) Executive Order 11593, Protection and Enhancement of the Cultural Environment, May 13, 1971 (36... section 3 (16 U.S.C. 469a-1). (b) Floodplain management and wetland protection. (1) Executive Order 11988...) Executive Order 11990, Protection of Wetlands, May 24, 1977 (42 FR 26961), 3 CFR, 1977 Comp., p. 121...

  19. 24 CFR 58.5 - Related Federal laws and authorities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...). (2) Executive Order 11593, Protection and Enhancement of the Cultural Environment, May 13, 1971 (36... section 3 (16 U.S.C. 469a-1). (b) Floodplain management and wetland protection. (1) Executive Order 11988...) Executive Order 11990, Protection of Wetlands, May 24, 1977 (42 FR 26961), 3 CFR, 1977 Comp., p. 121...

  20. The Action Execution Process Implemented in Different Cognitive Architectures: A Review

    NASA Astrophysics Data System (ADS)

    Dong, Daqi; Franklin, Stan

    2014-12-01

    An agent achieves its goals by interacting with its environment, cyclically choosing and executing suitable actions. An action execution process is a reasonable and critical part of an entire cognitive architecture, because the process of generating executable motor commands is not only driven by low-level environmental information, but is also initiated and affected by the agent's high-level mental processes. This review focuses on cognitive models of action, or more specifically, of the action execution process, as implemented in a set of popular cognitive architectures. We examine the representations and procedures inside the action execution process, as well as the cooperation between action execution and other high-level cognitive modules. We finally conclude with some general observations regarding the nature of action execution.

  1. Partial volume segmentation in 3D of lesions and tissues in magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Johnston, Brian; Atkins, M. Stella; Booth, Kellogg S.

    1994-05-01

    An important first step in diagnosis and treatment planning using tomographic imaging is differentiating and quantifying diseased as well as healthy tissue. One of the difficulties encountered in solving this problem to date has been distinguishing the partial volume constituents of each voxel in the image volume. Most proposed solutions to this problem involve analysis of planar images, in sequence, in two dimensions only. We have extended a model-based method of image segmentation which applies the technique of iterated conditional modes in three dimensions. A minimum of user intervention is required to train the algorithm. Partial volume estimates for each voxel in the image are obtained, yielding fractional compositions of multiple tissue types for individual voxels. A multispectral approach is applied where spatially registered data sets are available. The algorithm is simple and has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment dual echo MRI data sets of multiple sclerosis patients using lesions, gray matter, white matter, and cerebrospinal fluid as the partial volume constituents. The results of applying the algorithm to these data sets are presented and compared to the manual lesion segmentation of the same data.
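    As a rough illustration of the notion of fractional tissue composition per voxel, the sketch below estimates partial-volume fractions for a single dual-echo voxel as a mixture of hypothetical class mean intensities constrained to sum to one. It does not reproduce the paper's 3D iterated-conditional-modes model; the intensity values and the least-squares formulation are assumptions for demonstration only.

```python
# Rough illustration of per-voxel fractional tissue composition: model a
# dual-echo voxel as a mixture of class mean intensities with fractions that
# sum to one, solved by (augmented) least squares. Values are hypothetical and
# this does not reproduce the paper's 3D iterated-conditional-modes method.
import numpy as np

# Hypothetical mean intensities (rows: tissue classes; columns: echo 1, echo 2).
TISSUE_MEANS = np.array([
    [850.0, 320.0],   # white matter
    [700.0, 500.0],   # gray matter
    [350.0, 950.0],   # cerebrospinal fluid
])
TISSUES = ["WM", "GM", "CSF"]


def partial_volume_fractions(voxel: np.ndarray, weight: float = 1e3) -> np.ndarray:
    """Solve voxel ~= fractions @ TISSUE_MEANS with the fractions summing to 1."""
    # Augment the two intensity equations with a heavily weighted sum-to-one row.
    A = np.vstack([TISSUE_MEANS.T, weight * np.ones(len(TISSUE_MEANS))])
    b = np.concatenate([voxel, [weight]])
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    fractions = np.clip(fractions, 0.0, None)   # crude non-negativity
    return fractions / fractions.sum()


if __name__ == "__main__":
    mixed_voxel = 0.7 * TISSUE_MEANS[0] + 0.3 * TISSUE_MEANS[1]   # 70% WM, 30% GM
    for name, frac in zip(TISSUES, partial_volume_fractions(mixed_voxel)):
        print(f"{name}: {frac:.2f}")
```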

  2. A very simple, re-executable neuroimaging publication

    PubMed Central

    Ghosh, Satrajit S.; Poline, Jean-Baptiste; Keator, David B.; Halchenko, Yaroslav O.; Thomas, Adam G.; Kessler, Daniel A.; Kennedy, David N.

    2017-01-01

    Reproducible research is a key element of the scientific process. Re-executability of neuroimaging workflows that lead to the conclusions arrived at in the literature has not yet been sufficiently addressed and adopted by the neuroimaging community. In this paper, we document a set of procedures, which include supplemental additions to a manuscript, that unambiguously define the data, workflow, execution environment and results of a neuroimaging analysis, in order to generate a verifiable re-executable publication. Re-executability provides a starting point for examination of the generalizability and reproducibility of a given finding. PMID:28781753

  3. Directed Hidden-Code Extractor for Environment-Sensitive Malwares

    NASA Astrophysics Data System (ADS)

    Jia, Chunfu; Wang, Zhi; Lu, Kai; Liu, Xinhai; Liu, Xin

    Malware writers often use packing techniques to hide the malicious payload. A number of dynamic unpacking tools are designed to identify and extract the hidden code in packed malware. However, such unpacking methods are all based on a highly controlled environment that is vulnerable to various anti-unpacking techniques. If the execution environment is suspicious, malware may stay inactive for a long time or stop execution immediately to evade detection. In this paper, we propose a novel approach that automatically reasons about the environment requirements imposed by malware, then directs an unpacking tool to change the controlled environment and extract the hidden code in the new environment. The experimental results show that our approach significantly increases the resilience of traditional unpacking tools to environment-sensitive malware.

  4. 40 CFR 52.353 - Section 110(a)(2) infrastructure requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) infrastructure requirements. (a) On January 7, 2008, James B. Martin, Executive Director of the Colorado... 4, 2008 James B. Martin, Executive Director, Colorado Department of Public Health and Environment...

  5. Cognitive correlates of spatial navigation: Associations between executive functioning and the virtual Morris Water Task.

    PubMed

    Korthauer, L E; Nowak, N T; Frahmand, M; Driscoll, I

    2017-01-15

    Although effective spatial navigation requires memory for objects and locations, navigating a novel environment may also require considerable executive resources. The present study investigated associations between performance on the virtual Morris Water Task (vMWT), an analog version of a nonhuman spatial navigation task, and neuropsychological tests of executive functioning and spatial performance in 75 healthy young adults. More effective vMWT performance (e.g., lower latency and distance to reach hidden platform, greater distance in goal quadrant on a probe trial, fewer path intersections) was associated with better verbal fluency, set switching, response inhibition, and ability to mentally rotate objects. Findings also support a male advantage in spatial navigation, with sex moderating several associations between vMWT performance and executive abilities. Overall, we report a robust relationship between executive functioning and navigational skill, with some evidence that men and women may differentially recruit cognitive abilities when navigating a novel environment. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Investigating the Contextual Interference Effect Using Combination Sports Skills in Open and Closed Skill Environments

    PubMed Central

    Cheong, Jadeera P.G.; Lay, Brendan; Razman, Rizal

    2016-01-01

    This study attempted to present conditions that were closer to the real-world setting of team sports. The primary purpose was to examine the effects of blocked, random and game-based training practice schedules on the learning of the field hockey trap, close dribble and push pass that were practiced in combination. The secondary purpose was to investigate the effects of predictability of the environment on the learning of field hockey sport skills according to different practice schedules. A game-based training protocol represented a form of random practice in an unstable environment and was compared against a blocked and a traditional random practice schedule. In general, all groups improved dribble and push accuracy performance during the acquisition phase when assessed in a closed environment. In the retention phase, there were no differences between the three groups. When assessed in an open skills environment, all groups improved their percentage of successful executions for trapping and passing execution, and improved total number of attempts and total number of successful executions for both dribbling and shooting execution. Between-group differences were detected for dribbling execution with the game-based group scoring a higher number of dribbling successes. The CI effect did not emerge when practicing and assessing multiple sport skills in a closed skill environment, even when the skills were practiced in combination. However, when skill assessment was conducted in a real-world situation, there appeared to be some support for the CI effect. Key points The contextual interference effect was not supported when practicing several skills in combination when the sports skills were assessed in a closed skill environment. There appeared to be some support for the contextual interference effect when sports skills were assessed in an open skill environment, which were similar to a real game situation. A game-based training schedule can be used as an alternative practice schedule as it displayed superior learning compared to a blocked practice schedule when assessed by the game performance test (real-world setting). The game-based training schedule also matched the blocked and random practice schedules in the other tests. PMID:26957940

  7. Investigating the Contextual Interference Effect Using Combination Sports Skills in Open and Closed Skill Environments.

    PubMed

    Cheong, Jadeera P G; Lay, Brendan; Razman, Rizal

    2016-03-01

    This study attempted to present conditions that were closer to the real-world setting of team sports. The primary purpose was to examine the effects of blocked, random and game-based training practice schedules on the learning of the field hockey trap, close dribble and push pass that were practiced in combination. The secondary purpose was to investigate the effects of predictability of the environment on the learning of field hockey sport skills according to different practice schedules. A game-based training protocol represented a form of random practice in an unstable environment and was compared against a blocked and a traditional random practice schedule. In general, all groups improved dribble and push accuracy performance during the acquisition phase when assessed in a closed environment. In the retention phase, there were no differences between the three groups. When assessed in an open skills environment, all groups improved their percentage of successful executions for trapping and passing execution, and improved total number of attempts and total number of successful executions for both dribbling and shooting execution. Between-group differences were detected for dribbling execution, with the game-based group scoring a higher number of dribbling successes. The CI effect did not emerge when practicing and assessing multiple sport skills in a closed skill environment, even when the skills were practiced in combination. However, when skill assessment was conducted in a real-world situation, there appeared to be some support for the CI effect. Key points: The contextual interference effect was not supported when practicing several skills in combination when the sports skills were assessed in a closed skill environment. There appeared to be some support for the contextual interference effect when sports skills were assessed in an open skill environment, which was similar to a real game situation. A game-based training schedule can be used as an alternative practice schedule as it displayed superior learning compared to a blocked practice schedule when assessed by the game performance test (real-world setting). The game-based training schedule also matched the blocked and random practice schedules in the other tests.

  8. The Influence of Family Factors on the Executive Functioning of Adult Children of Alcoholics in College

    ERIC Educational Resources Information Center

    Schroeder, Valarie M.; Kelley, Michelle L.

    2008-01-01

    This study examined executive functioning in college aged adult children of alcoholics (ACOAs; n = 84) and non-ACOAs (188). We examined whether characteristics of the family environment and family responsibility in one's family of origin were associated with executive functioning above the contribution of ACOA status. ACOAs reported more…

  9. Execution environment for intelligent real-time control systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, Janos

    1987-01-01

    Modern telerobot control technology requires the integration of symbolic and non-symbolic programming techniques, different models of parallel computations, and various programming paradigms. The Multigraph Architecture, which has been developed for the implementation of intelligent real-time control systems is described. The layered architecture includes specific computational models, integrated execution environment and various high-level tools. A special feature of the architecture is the tight coupling between the symbolic and non-symbolic computations. It supports not only a data interface, but also the integration of the control structures in a parallel computing environment.

  10. Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC) environments

    NASA Astrophysics Data System (ADS)

    Kintsakis, Athanassios M.; Psomopoulos, Fotis E.; Symeonidis, Andreas L.; Mitkas, Pericles A.

    Hermes introduces a new "describe once, run anywhere" paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.

  11. Assessing the professional development needs of experienced nurse executive leaders.

    PubMed

    Leach, Linda Searle; McFarland, Patricia

    2014-01-01

    The objective of this study was to identify the professional development topics that senior nurse leaders believe are important to their advancement and success. Senior/experienced nurse leaders at the executive level are able to influence the work environment of nurses and institutional and health policy. Their development needs are likely to reflect this and other contemporary healthcare issues and may be different from middle and frontline managers. A systematic way of assessing professional development needs for these nurse leaders is needed. A descriptive study using an online survey was distributed to a convenience sample of nurse leaders who were members of the Association of California Nurse Leaders (ACNL) or have participated in an ACNL program. Visionary leadership, leading complexity, and effective teams were the highest ranked leadership topics. Leading change, advancing health: The future of nursing, healthy work environments, and healthcare reform were also highly ranked topics. Executive-level nurse leaders are important to nurse retention, effective work environments, and leading change. Regular assessment and attention to the distinct professional development needs of executive-level nurse leaders are a valuable human capital investment.

  12. Concurrent Image Processing Executive (CIPE). Volume 3: User's guide

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.; Kong, Mih-Seh

    1990-01-01

    CIPE (the Concurrent Image Processing Executive) is both an executive which organizes the parameter inputs for hypercube applications and an environment which provides temporary data workspace and simple real-time function definition facilities for image analysis. CIPE provides two types of user interface. The Command Line Interface (CLI) provides a simple command-driven environment allowing interactive function definition and evaluation of algebraic expressions. The menu interface employs a hierarchical screen-oriented menu system where the user is led through a menu tree to any specific application and then given a formatted panel screen for parameter entry. How to initialize the system through the setup function, how to read data into CIPE symbols, how to manipulate and display data through the use of executive functions, and how to run an application in either user interface mode, are described.

  13. General-Purpose Electronic System Tests Aircraft

    NASA Technical Reports Server (NTRS)

    Glover, Richard D.

    1989-01-01

    Versatile digital equipment supports research, development, and maintenance. Extended aircraft interrogation and display system is general-purpose assembly of digital electronic equipment on ground for testing of digital electronic systems on advanced aircraft. Many advanced features, including multiple 16-bit microprocessors, pipeline data-flow architecture, advanced operating system, and resident software-development tools. Basic collection of software includes program for handling many types of data and for displays in various formats. User easily extends basic software library. Hardware and software interfaces to subsystems provided by user designed for flexibility in configuration to meet user's requirements.

  14. Eager protocol on a cache pipeline dataflow

    DOEpatents

    Ohmacht, Martin; Sugavanam, Krishnan

    2012-11-13

    A master device sends a request to communicate with a slave device to a switch. The master device waits for a period of cycles the switch takes to decide whether the master device can communicate with the slave device, and the master device sends data associated with the request to communicate at least after the period of cycles has passed since the master device sent the request to communicate to the switch without waiting to receive an acknowledgment from the switch that the master device can communicate with the slave device.
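    A cycle-level sketch may make the timing of this eager handshake clearer: the master issues its request, waits only the fixed number of cycles the switch needs to arbitrate, and then pushes the data without waiting for an acknowledgment. The cycle count and class names below are illustrative, not taken from the patent.

```python
# Minimal cycle-level sketch of the "eager" handshake: the master issues its
# request, waits only the fixed arbitration latency of the switch, then sends
# the data without waiting for an explicit acknowledgment. Illustrative only.
ARBITRATION_CYCLES = 3   # assumed fixed decision latency of the switch


class Switch:
    def __init__(self):
        self.granted_at = None

    def receive_request(self, cycle: int, master: str, slave: str) -> None:
        # The switch will have decided by cycle + ARBITRATION_CYCLES.
        self.granted_at = cycle + ARBITRATION_CYCLES
        print(f"cycle {cycle}: switch got request {master} -> {slave}")

    def receive_data(self, cycle: int, payload: str) -> None:
        assert self.granted_at is not None and cycle >= self.granted_at, \
            "data arrived before arbitration could have completed"
        print(f"cycle {cycle}: switch forwards data {payload!r}")


class Master:
    def __init__(self, name: str, switch: Switch):
        self.name, self.switch = name, switch

    def eager_transfer(self, start_cycle: int, slave: str, payload: str) -> None:
        self.switch.receive_request(start_cycle, self.name, slave)
        # Eager: wait only for the known arbitration latency, not for an ACK.
        send_cycle = start_cycle + ARBITRATION_CYCLES
        self.switch.receive_data(send_cycle, payload)


if __name__ == "__main__":
    Master("M0", Switch()).eager_transfer(start_cycle=0, slave="S1",
                                          payload="cache line 0x40")
```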

  15. Proceedings of the International Conference on Parallel Architectures and Compilation Techniques Held 24-26 August 1994 in Montreal, Canada

    DTIC Science & Technology

    1994-08-26

    an Integrated Circuit Global Router. In Proc. of PPEARS 88, pages 138-145, 1988. [7] S. Sakai, Y. Yamaguchi, K. Hiraki, Y. Kodama, and T. Yuba. An...Computer Architecture, 1992. [5] S. Sakai, Y. Yamaguchi, K. Hiraki, Y. Kodama, and T. Yuba. An architecture of a data-flow single chip processor. In Int...EM-4 and sparing time for technical discussions. We also thank Prof. Kei Hiraki at the Univ. of Tokyo for his helpful comments. Hidehiko Masuhara's

  16. Simulation of economic agents interaction in a trade chain

    NASA Astrophysics Data System (ADS)

    Gimanova, I. A.; Dulesov, A. S.; Litvin, N. V.

    2017-01-01

    A mathematical model of the interaction of economic agents is offered in this work. It allows considering the dynamics of price and sales volumes arising from the process of purchase and sale in the single-product market of a trade and intermediary network. The description of data-flow processes is based on a continuous dynamic market model. The use of ordinary differential equations in the simulation allows one to determine the regions of the coefficients - the characteristics of the agents - and to investigate the stability of their interaction in the chain.
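    Since the cited model's equations are not given here, the sketch below uses a generic, assumed pair of ordinary differential equations for price and sales volume, integrated with explicit Euler steps, purely to illustrate how agent-characteristic coefficients enter such a continuous market model and how stability can be explored numerically.

```python
# Illustrative only: a generic pair of ODEs for price p(t) and sales volume
# q(t), with assumed agent-characteristic coefficients, integrated by explicit
# Euler steps. The cited model's actual equations are not reproduced here.

def simulate(alpha=0.5, gamma=0.2, delta=0.4,
             p0=10.0, q0=5.0, p_ref=12.0, q_ref=6.0,
             dt=0.01, steps=2001):
    """Assumed dynamics:
       dp/dt = alpha * (q_ref - q)                        # price reacts to excess demand
       dq/dt = gamma * (p - p_ref) - delta * (q - q_ref)  # volume follows price
    """
    p, q = p0, q0
    samples = []
    for step in range(steps):
        if step % 500 == 0:
            samples.append((round(step * dt, 2), round(p, 3), round(q, 3)))
        dp = alpha * (q_ref - q)
        dq = gamma * (p - p_ref) - delta * (q - q_ref)
        p, q = p + dt * dp, q + dt * dq
    return samples


if __name__ == "__main__":
    # With these coefficients the (p_ref, q_ref) equilibrium is a stable spiral.
    for t, p, q in simulate():
        print(f"t={t:>5}  price={p:.3f}  volume={q:.3f}")
```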

  17. Contemporary nurse executive practice: one framework, one dozen cautions.

    PubMed

    Fralic, Maryann F

    2010-03-01

    How does today's nurse executive function effectively within an incredibly complex health care environment? Does it require different skills, new competencies, new behaviors? Can nurse executives, irrespective of setting, who have always been successful in the past, move forward with the same strategic and operational behaviors? Is there "new work" associated with a new context for executive practice? To answer these questions, this article considers key contemporary issues. Copyright 2010 Elsevier Inc. All rights reserved.

  18. VIPER: Virtual Intelligent Planetary Exploration Rover

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Flueckiger, Lorenzo; Nguyen, Laurent; Washington, Richard

    2001-01-01

    Simulation and visualization of rover behavior are critical capabilities for scientists and rover operators to construct, test, and validate plans for commanding a remote rover. The VIPER system links these capabilities, using a high-fidelity virtual-reality (VR) environment, a kinematically accurate simulator, and a flexible plan executive to allow users to simulate and visualize possible execution outcomes of a plan under development. This work is part of a larger vision of a science-centered rover control environment, where a scientist may inspect and explore the environment via VR tools, specify science goals, and visualize the expected and actual behavior of the remote rover. The VIPER system is constructed from three generic systems, linked together via a minimal amount of customization into the integrated system. The complete system points out the power of combining plan execution, simulation, and visualization for envisioning rover behavior; it also demonstrates the utility of developing generic technologies, which can be combined in novel and useful ways.

  19. Family matters: Intergenerational and interpersonal processes of executive function and attentive behavior

    PubMed Central

    Deater-Deckard, Kirby

    2014-01-01

    Individual differences in self-regulation include executive function (EF) components that serve self-regulation of attentive behavior by modulating reactive responses to the environment. These factors “run in families”. The purpose of this review is to summarize a program of research that addresses familial inter-generational transmission and inter-personal processes in development. Self-regulation of attentive behavior involves inter-related aspects of executive function (EF) including attention, inhibitory control, and working memory. Individual differences in EF skills develop in systematic ways over childhood, resulting in moderately stable differences between people by early adolescence. Through complex gene-environment transactions, EF is transmitted across generations within parent-child relationships that provide powerful socialization and experiential contexts in which EF and related attentive behavior are forged and practiced. Families matter as parents regulate home environments and themselves as best they can while also supporting cognitive self-regulation of attentive behavior in their children. PMID:25197171

  20. 41 CFR 102-79.10 - What basic assignment and utilization of space policy governs an Executive agency?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... must provide a quality workplace environment that supports program operations, preserves the value of... fitness facilities in the workplace when adequately justified. An Executive agency must promote maximum...

  1. Application driven interface generation for EASIE. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kao, Ya-Chen

    1992-01-01

    The Environment for Application Software Integration and Execution (EASIE) provides a user interface and a set of utility programs which support the rapid integration and execution of analysis programs about a central relational database. EASIE provides users with two basic modes of execution. One of them is a menu-driven execution mode, called Application-Driven Execution (ADE), which provides sufficient guidance to review data, select a menu action item, and execute an application program. The other mode of execution, called Complete Control Execution (CCE), provides an extended executive interface which allows in-depth control of the design process. Currently, the EASIE system is based on alphanumeric techniques only. It is the purpose of this project to extend the flexibility of the EASIE system in the ADE mode by implementing it in a window system. Secondly, a set of utilities will be developed to assist the experienced engineer in the generation of an ADE application.

  2. Java PathExplorer: A Runtime Verification Tool

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)

    2001-01-01

    We describe recent work on designing an environment called Java PathExplorer for monitoring the execution of Java programs. This environment facilitates the testing of execution traces against high level specifications, including temporal logic formulae. In addition, it contains algorithms for detecting classical error patterns in concurrent programs, such as deadlocks and data races. An initial prototype of the tool has been applied to the executive module of the planetary Rover K9, developed at NASA Ames. In this paper we describe the background and motivation for the development of this tool, including comments on how it relates to formal methods tools as well as to traditional testing, and we then present the tool itself.
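    The core idea of checking a recorded execution trace against a high-level property can be shown with a small sketch. The property below ("every lock acquisition is eventually followed by a release") and the trace format are invented for illustration; they are not Java PathExplorer's specification language or algorithms.

```python
# Minimal sketch of checking a recorded execution trace against a simple
# temporal property, in the spirit of runtime verification: here, "every
# 'acquire' of a lock is eventually followed by a matching 'release'".
# This illustrates trace checking in general, not Java PathExplorer itself.
from typing import List, Tuple

Event = Tuple[str, str]   # (action, lock name)


def check_acquire_eventually_released(trace: List[Event]) -> List[str]:
    """Return the locks that were acquired but never released later."""
    pending = {}
    for index, (action, lock) in enumerate(trace):
        if action == "acquire":
            pending[lock] = index
        elif action == "release":
            pending.pop(lock, None)
    return sorted(pending)


if __name__ == "__main__":
    trace = [("acquire", "A"), ("acquire", "B"),
             ("release", "B"), ("log", "-")]
    violations = check_acquire_eventually_released(trace)
    print("violations:", violations)    # -> ['A']: acquired, never released
```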

  3. Synthesizing information-update functions using off-line symbolic processing

    NASA Technical Reports Server (NTRS)

    Rosenschein, Stanley J.

    1990-01-01

    This paper explores the synthesis of programs that track dynamic conditions in their environment. An approach is proposed in which the designer specifies, in a declarative language, aspects of the environment in which the program will be embedded. This specification is then automatically compiled into a program that, when executed, updates internal data structures so as to maintain as an invariant a desired correspondence between internal data structures and states of the external environment. This approach retains much of the flexibility of declarative programming while guaranteeing a hard bound on the execution time of information-update functions.
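    A toy sketch of the underlying idea, compiling declarative condition definitions into a single update function that keeps internal state in correspondence with the environment using bounded work per percept, is given below. The rule format and condition names are invented; the paper's off-line symbolic processing is far more involved.

```python
# Hedged sketch of compiling declarative condition definitions into an update
# function that keeps internal state in correspondence with the environment.
# The rule format and names are invented for illustration only.
from typing import Callable, Dict

# Declarative layer: derived conditions defined as functions of raw percepts.
RULES: Dict[str, Callable[[dict], bool]] = {
    "door_open":    lambda p: p.get("door_angle", 0.0) > 5.0,
    "room_lit":     lambda p: p.get("lux", 0.0) > 50.0,
    "safe_to_move": lambda p: p.get("door_angle", 0.0) > 5.0 and
                              p.get("obstacle_distance", 0.0) > 1.0,
}


def compile_update(rules: Dict[str, Callable[[dict], bool]]):
    """'Compile' the declarative rules, once and off-line, into a single
    update function that refreshes every tracked condition per percept."""
    def update(state: dict, percept: dict) -> dict:
        state.update({name: rule(percept) for name, rule in rules.items()})
        return state
    return update


if __name__ == "__main__":
    update = compile_update(RULES)          # done once, off-line
    state: dict = {}
    for percept in [{"door_angle": 12.0, "lux": 80.0, "obstacle_distance": 2.5},
                    {"door_angle": 0.0, "lux": 80.0, "obstacle_distance": 2.5}]:
        state = update(state, percept)      # bounded work per percept
        print(state)
```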

  4. The use of emulator-based simulators for on-board software maintenance

    NASA Astrophysics Data System (ADS)

    Irvine, M. M.; Dartnell, A.

    2002-07-01

    Traditionally, onboard software maintenance activities within the space sector are performed using hardware-based facilities. These facilities are developed around the use of hardware emulation or breadboards containing target processors. Some sort of environment is provided around the hardware to support the maintenance actives. However, these environments are not easy to use to set-up the required test scenarios, particularly when the onboard software executes in a dynamic I/O environment, e.g. attitude control software, or data handling software. In addition, the hardware and/or environment may not support the test set-up required during investigations into software anomalies, e.g. raise spurious interrupt, fail memory, etc, and the overall "visibility" of the software executing may be limited. The Software Maintenance Simulator (SOMSIM) is a tool that can support the traditional maintenance facilities. The following list contains some of the main benefits that SOMSIM can provide: Low cost flexible extension to existing product - operational simulator containing software processor emulator; System-level high-fidelity test-bed in which software "executes"; Provides a high degree of control/configuration over the entire "system", including contingency conditions perhaps not possible with real hardware; High visibility and control over execution of emulated software. This paper describes the SOMSIM concept in more detail, and also describes the SOMSIM study being carried out for ESA/ESOC by VEGA IT GmbH.

  5. A distributed computing environment with support for constraint-based task scheduling and scientific experimentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L.

    1997-04-01

    This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies show that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.
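    The flavor of search-based scheduling under user constraints can be illustrated with a small backtracking sketch: tasks are assigned to processors subject to a pinned assignment and a per-processor task cap. The constraint forms here are hypothetical and far simpler than the SQL-based constraint language described above.

```python
# Illustrative sketch of search-based scheduling under user constraints:
# assign tasks to processors subject to pinned assignments and a per-processor
# task cap, by simple backtracking. The constraints are hypothetical and much
# simpler than the SQL-like constraint language described in the abstract.
from typing import Dict, List, Optional

TASKS = ["decode", "filter", "fit", "render"]
PROCESSORS = ["p0", "p1"]
PINNED = {"render": "p1"}        # user constraint: fixed task assignment
MAX_PER_PROCESSOR = 2            # user constraint: utilization level


def schedule(index: int = 0,
             assignment: Optional[Dict[str, str]] = None) -> Optional[Dict[str, str]]:
    assignment = {} if assignment is None else assignment
    if index == len(TASKS):
        return dict(assignment)
    task = TASKS[index]
    candidates = [PINNED[task]] if task in PINNED else PROCESSORS
    for proc in candidates:
        load = sum(1 for p in assignment.values() if p == proc)
        if load < MAX_PER_PROCESSOR:
            assignment[task] = proc
            result = schedule(index + 1, assignment)
            if result is not None:
                return result
            del assignment[task]          # backtrack
    return None


if __name__ == "__main__":
    # e.g. {'decode': 'p0', 'filter': 'p0', 'fit': 'p1', 'render': 'p1'}
    print(schedule())
```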

  6. 3 CFR - Long-Term Gulf Coast Restoration Support Plan

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... President (collectively, executive branch components). Specifically, I direct the following: Section 1. As..., science-based restoration of the ecosystem and environment, public health and safety efforts, and support... memorandum, executive branch components shall make available information and other resources, including...

  7. Hardware Assisted Stealthy Diversity (CHECKMATE)

    DTIC Science & Technology

    2013-09-01

    applicable across multiple architectures. Figure 29 shows an example of an attack against an interpreted environment with a Java executable. CHECKMATE can... a user executes “/usr/bin/wget...Server 1 - Administration; Server 2 - Database (mySQL); Server 3 - Web server (Mongoose); Server 4 - File server (SSH); Server 5 - Email server

  8. Online SVT Commissioning and Monitoring using a Service-Oriented Architecture Framework

    NASA Astrophysics Data System (ADS)

    Ruger, Justin; Gotra, Yuri; Weygand, Dennis; Ziegler, Veronique; Heddle, David; Gore, David

    2014-03-01

    Silicon Vertex Tracker detectors are devices used in high energy experiments for precision measurement of charged tracks close to the collision point. Early detection of faulty hardware is essential, and therefore the development of monitoring and commissioning software is critical. The computing framework for the CLAS12 experiment at Jefferson Lab is a service-oriented architecture that allows efficient data-flow from one service to another through loose coupling. I will present the strategy and development of services for the CLAS12 Silicon Tracker data monitoring and commissioning within this framework, as well as preliminary results using test data.

  9. The CSM testbed matrix processors internal logic and dataflow descriptions

    NASA Technical Reports Server (NTRS)

    Regelbrugge, Marc E.; Wright, Mary A.

    1988-01-01

    This report constitutes the final report for subtask 1 of Task 5 of NASA Contract NAS1-18444, Computational Structural Mechanics (CSM) Research. This report contains a detailed description of the coded workings of selected CSM Testbed matrix processors (i.e., TOPO, K, INV, SSOL) and of the arithmetic utility processor AUS. These processors and the current sparse matrix data structures are studied and documented. Items examined include: details of the data structures, interdependence of data structures, data-blocking logic in the data structures, processor data flow and architecture, and processor algorithmic logic flow.

  10. Requirements Specification Language (RSL) and supporting tools

    NASA Technical Reports Server (NTRS)

    Frincke, Deborah; Wolber, Dave; Fisher, Gene; Cohen, Gerald C.

    1992-01-01

    This document describes a general purpose Requirement Specification Language (RSL). RSL is a hybrid of features found in several popular requirement specification languages. The purpose of RSL is to describe precisely the external structure of a system comprised of hardware, software, and human processing elements. To overcome the deficiencies of informal specification languages, RSL includes facilities for mathematical specification. Two RSL interface tools are described. The Browser view contains a complete document with all details of the objects and operations. The Dataflow view is a specialized, operation-centered depiction of a specification that shows how specified operations relate in terms of inputs and outputs.

  11. Portable data flow in UNIX

    NASA Astrophysics Data System (ADS)

    Fox, R.; Molen, A. Vander; Hannuschke, S.

    1994-02-01

    We describe the dataflow of a nuclear physics data acquisition system. The system features a high speed active routing subsystem which allows an arbitrary number of data producers to contribute data to the system. Data are then routed to an arbitrary number of data consumers. Low overhead route-by-reference mechanisms are used to allow high rate operations. The system has been ported to a variety of UNIX systems. Timings are given for the routing component of the system on several systems. Finally, we give an example of a set of programs which can be added to the system to produce a complete data acquisition system.
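    The route-by-reference idea can be sketched simply: a producer hands the router one buffer object, and the router passes that same object (rather than a copy) to every subscribed consumer. The class names below are illustrative, not the system's API.

```python
# Small sketch of route-by-reference: producers hand the router one buffer
# object, and the router passes that same object (not a copy) to every
# subscribed consumer. Class names are illustrative, not the system's API.
from typing import Callable, Dict, List

Consumer = Callable[[bytearray], None]


class Router:
    def __init__(self):
        self.consumers: Dict[str, List[Consumer]] = {}

    def subscribe(self, stream: str, consumer: Consumer) -> None:
        self.consumers.setdefault(stream, []).append(consumer)

    def route(self, stream: str, buffer: bytearray) -> None:
        # Route by reference: every consumer sees the same buffer object.
        for consumer in self.consumers.get(stream, []):
            consumer(buffer)


if __name__ == "__main__":
    router = Router()
    router.subscribe("events", lambda b: print("histogrammer got", len(b), "bytes"))
    router.subscribe("events", lambda b: print("tape writer   got", len(b), "bytes"))
    event = bytearray(b"\x01\x02\x03\x04")      # produced once ...
    router.route("events", event)               # ... consumed twice, no copies
```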

  12. American Organization of Nurse Executives Care Innovation and Transformation program: improving care and practice environments.

    PubMed

    Oberlies, Amanda Stefancyk

    2014-09-01

    The American Organization of Nurse Executives conducted an evaluation of the hospitals participating in the Care Innovation and Transformation (CIT) program. A total of 24 hospitals participated in the 2-year CIT program from 2012 to 2013. Reported outcomes include increased patient satisfaction, decreased falls, and reductions in nurse turnover and overtime. Nurses reported statistically significant improvements in 4 domains of the principles and elements of a healthful practice environment developed by the Nursing Organizations Alliance.

  13. Identifying Executable Plans

    NASA Technical Reports Server (NTRS)

    Bedrax-Weiss, Tania; Jonsson, Ari K.; Frank, Jeremy D.; McGann, Conor

    2003-01-01

    Generating plans for execution imposes a different set of requirements on the planning process than those imposed by planning alone. In highly unpredictable execution environments, a fully-grounded plan may become inconsistent frequently when the world fails to behave as expected. Intelligent execution permits making decisions when the most up-to-date information is available, ensuring fewer failures. Planning should acknowledge the capabilities of the execution system, both to ensure robust execution in the face of uncertainty and to relieve the planner of the burden of making premature commitments. We present Plan Identification Functions (PIFs), which formalize what it means for a plan to be executable, and are used in conjunction with a complete model of system behavior to halt the planning process when an executable plan is found. We describe the implementation of plan identification functions for a temporal, constraint-based planner. This particular implementation allows the description of many different plan identification functions. Depending on the characteristics of the execution environment, the best plan to hand to the execution system will contain more or less commitment and information.
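    A toy sketch of how a plan identification function might be used to halt planning is given below: the planner refines (grounds) steps only until the PIF judges the partial plan executable, leaving the remaining choices to the execution system. The plan representation and the executability rule are invented for illustration.

```python
# Toy sketch of a plan identification function (PIF): the planner stops
# refining as soon as the PIF judges the partial plan executable, leaving the
# remaining choices to the executive. Representation and rule are invented.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Step:
    name: str
    grounded: bool = False                 # fully committed by the planner?
    runtime_resolvable: bool = False       # executive can decide at run time?


@dataclass
class Plan:
    steps: List[Step] = field(default_factory=list)


def pif_executable(plan: Plan) -> bool:
    """Executable iff every step is either grounded now or safely resolvable
    by the execution system when up-to-date information is available."""
    return all(s.grounded or s.runtime_resolvable for s in plan.steps)


def plan_until_executable(plan: Plan) -> Plan:
    # Refine (ground) steps one at a time, but halt as soon as the PIF is
    # satisfied instead of grounding everything prematurely.
    for step in plan.steps:
        if pif_executable(plan):
            break
        if not step.grounded and not step.runtime_resolvable:
            step.grounded = True
    return plan


if __name__ == "__main__":
    plan = Plan([Step("drive_to_rock"),
                 Step("choose_camera", runtime_resolvable=True),
                 Step("take_image", runtime_resolvable=True)])
    done = plan_until_executable(plan)
    print([(s.name, s.grounded, s.runtime_resolvable) for s in done.steps])
```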

  14. The President's Environmental Program, 1977.

    ERIC Educational Resources Information Center

    Council on Environmental Quality, Washington, DC.

    This government publication contains, in the order given, President Carter's Message on the Environment; a Fact Sheet explaining the background and details of the President's proposed legislation, Executive orders, and directives; the Executive orders themselves; and a brief explanation of the Administration position on the Clean Air Act…

  15. Creating system engineering products with executable models in a model-based engineering environment

    NASA Astrophysics Data System (ADS)

    Karban, Robert; Dekens, Frank G.; Herzig, Sebastian; Elaasar, Maged; Jankevičius, Nerijus

    2016-08-01

    Applying systems engineering across the life-cycle results in a number of products built from interdependent sources of information using different kinds of system level analysis. This paper focuses on leveraging the Executable System Engineering Method (ESEM) [1] [2], which automates requirements verification (e.g. power and mass budget margins and duration analysis of operational modes) using executable SysML [3] models. The particular value proposition is to integrate requirements, and executable behavior and performance models for certain types of system level analysis. The models are created with modeling patterns that involve structural, behavioral and parametric diagrams, and are managed by an open source Model Based Engineering Environment (named OpenMBEE [4]). This paper demonstrates how the ESEM is applied in conjunction with OpenMBEE to create key engineering products (e.g. operational concept document) for the Alignment and Phasing System (APS) within the Thirty Meter Telescope (TMT) project [5], which is under development by the TMT International Observatory (TIO) [5].
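    The kind of requirement verification mentioned above, such as a mass budget margin check, reduces to simple arithmetic once the model parameters are rolled up. The sketch below shows only that arithmetic; the component values and margin rule are hypothetical, and the actual method operates on executable SysML models rather than ad hoc scripts.

```python
# Minimal sketch of a budget-margin check of the kind the abstract says is
# automated from executable SysML models: roll up component masses and verify
# the margin against an allocation. Values and the margin rule are illustrative.
COMPONENT_MASS_KG = {        # hypothetical subsystem masses
    "optics_bench": 410.0,
    "electronics": 95.0,
    "harness": 38.0,
    "structure": 260.0,
}
ALLOCATED_MASS_KG = 900.0
REQUIRED_MARGIN = 0.10       # require at least 10% margin on the allocation


def mass_margin(masses: dict, allocation: float) -> float:
    total = sum(masses.values())
    return (allocation - total) / allocation


if __name__ == "__main__":
    margin = mass_margin(COMPONENT_MASS_KG, ALLOCATED_MASS_KG)
    verdict = "PASS" if margin >= REQUIRED_MARGIN else "FAIL"
    print(f"current best estimate: {sum(COMPONENT_MASS_KG.values()):.1f} kg")
    print(f"margin: {margin:.1%} (required {REQUIRED_MARGIN:.0%}) -> {verdict}")
```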

  16. Designing an Easy-to-use Executive Conference Room Control System

    NASA Astrophysics Data System (ADS)

    Back, Maribeth; Golovchinsky, Gene; Qvarfordt, Pernilla; van Melle, William; Boreczky, John; Dunnigan, Tony; Carter, Scott

    The Usable Smart Environment project (USE) aims at designing easy-to-use, highly functional, next-generation conference rooms. Our first design prototype focuses on creating a “no wizards” room for an American executive; that is, a room the executive could walk into and use by himself, without help from a technologist. A key idea in the USE framework is that customization is one of the best ways to create a smooth user experience. As the system needs to fit both with the personal leadership style of the executive and the corporation’s meeting culture, we began the design process by exploring the work flow in and around meetings attended by the executive.

  17. Ensuring a C2 Level of Trust and Interoperability in a Networked Windows NT Environment

    DTIC Science & Technology

    1996-09-01

    addition, it should be noted that the device drivers, microkernel, memory manager, and Hardware Abstraction Layer are all hardware dependent. a. The...Executive The executive is further divided into three conceptual layers, which are referred to as the Hardware Abstraction Layer (HAL), the Microkernel, and... Figure 3 (layered diagram: executive subsystems, I/O manager, cache manager, file systems, microkernel, device drivers, Hardware Abstraction Layer, hardware)

  18. Contribution of Family Environment to Pediatric Cochlear Implant Users' Speech and Language Outcomes: Some Preliminary Findings

    ERIC Educational Resources Information Center

    Holt, Rachael Frush; Beer, Jessica; Kronenberger, William G.; Pisoni, David B.; Lalonde, Kaylah

    2012-01-01

    Purpose: To evaluate the family environments of children with cochlear implants and to examine relationships between family environment and postimplant language development and executive function. Method: Forty-five families of children with cochlear implants completed a self-report family environment questionnaire (Family Environment Scale-Fourth…

  19. Shyness and Vocabulary: The Roles of Executive Functioning and Home Environmental Stimulation

    PubMed Central

    Nayena Blankson, A.; O’Brien, Marion; Leerkes, Esther M.; Marcovitch, Stuart; Calkins, Susan D.

    2010-01-01

    Although shyness has often been found to be negatively related to vocabulary, few studies have examined the processes that produce or modify this relation. The present study examined executive functioning skills and home environmental stimulation as potential mediating and moderating mechanisms. A sample of 3.5-year-old children (N=254) were administered executive functioning tasks and a vocabulary test during a laboratory visit. Mothers completed questionnaires assessing child shyness and home environmental stimulation. Our primary hypothesis was that executive functioning mediates the association between shyness and vocabulary, and home environmental stimulation moderates the relation between executive functioning and vocabulary. Alternative hypotheses were also tested. Results indicated that children with better executive functioning skills developed stronger vocabularies when reared in more, versus less, stimulating environments. Implications of these results are discussed in terms of the role of shyness, executive functioning, and home environmental stimulation in early vocabulary development. PMID:22096267

  20. Implementation of an Ada real-time executive: A case study

    NASA Technical Reports Server (NTRS)

    Laird, James D.; Burton, Bruce A.; Koppes, Mary R.

    1986-01-01

    Current Ada language implementations and runtime environments are immature, unproven, and a key risk area for real-time embedded computer systems (ECS). A test-case environment is provided in which the concerns of the real-time ECS community are addressed. A priority-driven executive is selected to be implemented in the Ada programming language. The model selected is representative of real-time executives tailored for embedded systems used in missile, spacecraft, and avionics applications. An Ada-based design methodology is utilized, and two designs are considered. The first of these designs requires the use of vendor-supplied runtime and tasking support. An alternative high-level design is also considered for an implementation requiring no vendor-supplied runtime or tasking support. The former approach is carried through to implementation.
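    As a language-agnostic illustration of a priority-driven executive (in Python rather than Ada, and not the cited implementation), the sketch below dispatches ready tasks highest priority first. Task names and priority values are invented.

```python
# Language-agnostic sketch (Python, not Ada) of a priority-driven executive:
# ready tasks are dispatched highest priority first. Task names and priority
# values are illustrative; the cited work is a real-time Ada implementation.
import heapq
from typing import Callable, List, Tuple


class PriorityExecutive:
    def __init__(self):
        self._ready: List[Tuple[int, int, str, Callable[[], None]]] = []
        self._seq = 0                       # tie-breaker for equal priorities

    def make_ready(self, priority: int, name: str, body: Callable[[], None]) -> None:
        # heapq is a min-heap, so negate the priority to pop the highest first.
        heapq.heappush(self._ready, (-priority, self._seq, name, body))
        self._seq += 1

    def dispatch(self) -> None:
        while self._ready:
            _, _, name, body = heapq.heappop(self._ready)
            print(f"dispatching {name}")
            body()


if __name__ == "__main__":
    executive = PriorityExecutive()
    executive.make_ready(5, "telemetry", lambda: None)
    executive.make_ready(20, "attitude_control", lambda: None)
    executive.make_ready(10, "thermal", lambda: None)
    executive.dispatch()    # attitude_control, then thermal, then telemetry
```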

  1. Using Planning, Scheduling and Execution for Autonomous Mars Rover Operations

    NASA Technical Reports Server (NTRS)

    Estlin, Tara A.; Gaines, Daniel M.; Chouinard, Caroline M.; Fisher, Forest W.; Castano, Rebecca; Judd, Michele J.; Nesnas, Issa A.

    2006-01-01

    With each new rover mission to Mars, rovers are traveling significantly longer distances. This distance increase raises not only the opportunities for science data collection, but also amplifies the amount of environment and rover state uncertainty that must be handled in rover operations. This paper describes how planning, scheduling and execution techniques can be used onboard a rover to autonomously generate and execute rover activities and in particular to handle new science opportunities that have been identified dynamically. We also discuss some of the particular challenges we face in supporting autonomous rover decision-making. These include interaction with rover navigation and path-planning software and handling large amounts of uncertainty in state and resource estimations. Finally, we describe our experiences in testing this work using several Mars rover prototypes in a realistic environment.

  2. GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.

    PubMed

    Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A

    2017-03-01

    We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, and build and execute an array of image analysis routines, and provides a mechanism to include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting the brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.

  3. How an Active Learning Classroom Transformed IT Executive Management

    ERIC Educational Resources Information Center

    Connolly, Amy; Lampe, Michael

    2016-01-01

    This article describes how our university built a unique classroom environment specifically for active learning. This classroom changed students' experience in the undergraduate executive information technology (IT) management class. Every college graduate should learn to think critically, solve problems, and communicate solutions, but 90% of…

  4. Family Environments and Children's Executive Function: The Mediating Role of Children's Affective State and Stress.

    PubMed

    He, Zhong-Hua; Yin, Wen-Gang

    2016-09-01

    There is increasing evidence that inadequate family environments (family material environment and family psychosocial environment) are not only social problems but also factors contributing to adverse neurocognitive outcomes. In the present study, the authors investigated the relationship among family environments, children's naturalistic affective state, self-reported stress, and executive functions in a sample of 157 Chinese families. The findings revealed that in inadequate family material environments, reduced cognitive flexibility in children is associated with increased naturalistic negative affectivity and self-reported stress. In addition, naturalistic negative affectivity mediated the association between family expressiveness and children's cognitive flexibility. The authors used a structural equation model to examine the mediation hypothesis, and the results confirmed the mediating roles of naturalistic negative affectivity and self-reported stress between family environments and the cognitive flexibility of Chinese children. These findings indicate the importance of reducing stress and negative emotional states for improving cognitive functions in children of low socioeconomic status.

  5. Enabling Flexible and Continuous Capability Invocation in Mobile Prosumer Environments

    PubMed Central

    Alcarria, Ramon; Robles, Tomas; Morales, Augusto; López-de-Ipiña, Diego; Aguilera, Unai

    2012-01-01

    Mobile prosumer environments require communication with heterogeneous devices during the execution of mobile services. These environments integrate sensors, actuators and smart devices, whose availability continuously changes. The aim of this paper is to design a reference architecture for implementing a model for continuous service execution and access to capabilities, i.e., the functionalities provided by these devices. The defined architecture follows a set of software engineering patterns and includes some communication paradigms to cope with the heterogeneity of sensors, actuators, controllers and other devices in the environment. In addition, we stress the importance of flexibility in capability invocation by allowing the communication middleware to select the access technology and change the communication paradigm when dealing with smart devices, and by describing and evaluating two algorithms for resource access management. PMID:23012526

  6. 32 CFR 643.28 - Policy-Historic and cultural environment.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 4 2013-07-01 2013-07-01 false Policy-Historic and cultural environment. 643.28... PROPERTY REAL ESTATE Policy § 643.28 Policy—Historic and cultural environment. (a) Executive Order 11593... leadership in preserving, restoring and maintaining the historic and cultural environment of the Nation; that...

  7. 32 CFR 643.28 - Policy-Historic and cultural environment.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 4 2012-07-01 2011-07-01 true Policy-Historic and cultural environment. 643.28... PROPERTY REAL ESTATE Policy § 643.28 Policy—Historic and cultural environment. (a) Executive Order 11593... leadership in preserving, restoring and maintaining the historic and cultural environment of the Nation; that...

  8. 32 CFR 643.28 - Policy-Historic and cultural environment.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 4 2014-07-01 2013-07-01 true Policy-Historic and cultural environment. 643.28... PROPERTY REAL ESTATE Policy § 643.28 Policy—Historic and cultural environment. (a) Executive Order 11593... leadership in preserving, restoring and maintaining the historic and cultural environment of the Nation; that...

  9. 32 CFR 643.28 - Policy-Historic and cultural environment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 4 2011-07-01 2011-07-01 false Policy-Historic and cultural environment. 643.28... PROPERTY REAL ESTATE Policy § 643.28 Policy—Historic and cultural environment. (a) Executive Order 11593... leadership in preserving, restoring and maintaining the historic and cultural environment of the Nation; that...

  10. 32 CFR 643.28 - Policy-Historic and cultural environment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 4 2010-07-01 2010-07-01 true Policy-Historic and cultural environment. 643.28... PROPERTY REAL ESTATE Policy § 643.28 Policy—Historic and cultural environment. (a) Executive Order 11593... leadership in preserving, restoring and maintaining the historic and cultural environment of the Nation; that...

  11. Reducing acquisition risk through integrated systems of systems engineering

    NASA Astrophysics Data System (ADS)

    Gross, Andrew; Hobson, Brian; Bouwens, Christina

    2016-05-01

    In the fall of 2015, the Joint Staff J7 (JS J7) sponsored the Bold Quest (BQ) 15.2 event and conducted planning and coordination to combine this event into a joint event with the Army Warfighting Assessment (AWA) 16.1 sponsored by the U.S. Army. This multipurpose event combined a Joint/Coalition exercise (JS J7) with components of testing, training, and experimentation required by the Army. In support of Assistant Secretary of the Army for Acquisition, Logistics, and Technology (ASA(ALT)) System of Systems Engineering and Integration (SoSE&I), Always On-On Demand (AO-OD) used a system of systems (SoS) engineering approach to develop a live, virtual, constructive distributed environment (LVC-DE) to support risk mitigation utilizing this complex and challenging exercise environment for a system preparing to enter limited user test (LUT). AO-OD executed a requirements-based SoS engineering process starting with user needs and objectives from Army Integrated Air and Missile Defense (AIAMD), Patriot units, Coalition Intelligence, Surveillance and Reconnaissance (CISR), Focused End State 4 (FES4) Mission Command (MC) Interoperability with Unified Action Partners (UAP), and Mission Partner Environment (MPE) Integration and Training, Tactics and Procedures (TTP) assessment. The SoS engineering process decomposed the common operational, analytical, and technical requirements, while utilizing the Institute of Electrical and Electronics Engineers (IEEE) Distributed Simulation Engineering and Execution Process (DSEEP) to provide structured accountability for the integration and execution of the AO-OD LVC-DE. As a result of this process implementation, AO-OD successfully planned for, prepared, and executed a distributed simulation support environment that responsively satisfied user needs and objectives, demonstrating the viability of an LVC-DE environment to support multiple user objectives and support risk mitigation activities for systems in the acquisition process.

  12. Autism Spectrum Disorder and intact executive functioning.

    PubMed

    Ferrara, R; Ansermet, F; Massoni, F; Petrone, L; Onofri, E; Ricci, P; Archer, T; Ricci, S

    2016-01-01

    Earliest notions concerning autism (Autism Spectrum Disorders, ASD) describe a disturbance in executive functioning. Despite altered definitions, executive functioning, expressed as the higher cognitive skills required for complex behaviors linked to the prefrontal cortex, is held to be defective in autism. Specific difficulties at the level of executive functioning have been identified in children presenting autism or verbal disabilities. Nevertheless, the developmental deficit of executive functioning in autism is highly diversified, with huge individual variation, and may even be absent. The aim of the present study was to examine the current standing of intact executive functioning in ASD. Analysis of ASD populations, whether high-functioning, Asperger's, or autism Broad Phenotype, studied over a range of executive functions including response inhibition, planning, cognitive flexibility, cognitive inhibition, and alerting networks, indicates an absence of damage/impairment compared to typically developing normal control subjects. These findings of intact executive functioning in ASD subjects provide a strong foundation on which to construct applications for growth environments and the rehabilitation of autistic subjects.

  13. [The role of university hospital executive board members].

    PubMed

    Debatin, J F; Rehr, J

    2009-09-01

    Demographic changes and medical progress in combination with vastly altered regulatory and economic environments have forced considerable change in the structure of German university hospitals in recent years. These changes have affected medical care as well as research and medical school training. To allow for more flexibility and a higher level of reactivity to the changing environment German university hospitals were transferred from state agencies to independent corporate structures. All but one remains wholly owned by the respective state governments. The governing structure of these independent medical hospitals consists of an executive board, generally made up of a medical director, a financial director, a director for nursing, and the dean of the medical faculty. In most hospitals, the medical director serves as chief executive officer. The regulations governing the composition and responsibility of the members of the executive board differ from state to state. These differences do affect to some degree the interactive effectiveness of the members of the executive boards. Modalities that stress the overall responsibility for all board members seem to work better than those that define clear portfolio limits. Even more than organizational and regulatory differences, the effectiveness of the work of the executive boards is influenced by the personality of the board members themselves. Success appears to be a clear function of the willingness of all members to work together.

  14. The Effect of Early Deprivation on Executive Attention in Middle Childhood

    ERIC Educational Resources Information Center

    Loman, Michelle M.; Johnson, Anna E.; Westerlund, Alissa; Pollak, Seth D.; Nelson, Charles A.; Gunnar, Megan R.

    2013-01-01

    Background: Children reared in deprived environments, such as institutions for the care of orphaned or abandoned children, are at increased risk for attention and behavior regulation difficulties. This study examined the neurobehavioral correlates of executive attention in post institutionalized (PI) children. Methods: The performance and…

  15. Leadership and Team Dynamics in Senior Executive Leadership Teams

    ERIC Educational Resources Information Center

    Barnett, Kerry; McCormick, John

    2012-01-01

    As secondary school environments become increasingly complex, shifts are occurring in the way leadership is being practised. New leadership practices emphasize shared or distributed leadership. A senior executive leadership team with responsibility for school leadership is likely to be one of the many, varied forms of new leadership practices…

  16. Putting time into proof outlines

    NASA Technical Reports Server (NTRS)

    Schneider, Fred B.; Bloom, Bard; Marzullo, Keith

    1991-01-01

    A logic for reasoning about timing of concurrent programs is presented. The logic is based on proof outlines and can handle maximal parallelism as well as resource-constrained execution environments. The correctness proof for a mutual exclusion protocol that uses execution timings in a subtle way illustrates the logic in action.

  17. A Visual Database System for Image Analysis on Parallel Computers and its Application to the EOS Amazon Project

    NASA Technical Reports Server (NTRS)

    Shapiro, Linda G.; Tanimoto, Steven L.; Ahrens, James P.

    1996-01-01

    The goal of this task was to create a design and prototype implementation of a database environment that is particularly suited for handling the image, vision and scientific data associated with NASA's EOS Amazon project. The focus was on a data model and query facilities that are designed to execute efficiently on parallel computers. A key feature of the environment is an interface which allows a scientist to specify high-level directives about how query execution should occur.

  18. CASPER Version 2.0

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Rabideau, Gregg; Tran, Daniel; Knight, Russell; Chouinard, Caroline; Estlin, Tara; Gaines, Daniel; Clement, Bradley; Barrett, Anthony

    2007-01-01

    CASPER is designed to perform automated planning of interdependent activities within a system subject to requirements, constraints, and limitations on resources. In contradistinction to the traditional concept of batch planning followed by execution, CASPER implements a concept of continuous planning and replanning in response to unanticipated changes (including failures), integrated with execution. Improvements over other, similar software that have been incorporated into CASPER version 2.0 include an enhanced executable interface to facilitate integration with a wide range of execution software systems and supporting software libraries; features to support execution while reasoning about urgency, importance, and impending deadlines; features that enable accommodation to a wide range of computing environments that include various central processing units and random-access-memory capacities; and improved generic time-server and time-control features.

  19. Executive and Perceptual Distraction in Visual Working Memory

    PubMed Central

    2017-01-01

    The contents of visual working memory are likely to reflect the influence of both executive control resources and information present in the environment. We investigated whether executive attention is critical in the ability to exclude unwanted stimuli by introducing concurrent potentially distracting irrelevant items to a visual working memory paradigm, and manipulating executive load using simple or more demanding secondary verbal tasks. Across 7 experiments varying in presentation format, timing, stimulus set, and distractor number, we observed clear disruptive effects of executive load and visual distraction, but relatively minimal evidence supporting an interactive relationship between these factors. These findings are in line with recent evidence using delay-based interference, and suggest that different forms of attentional selection operate relatively independently in visual working memory. PMID:28414499

  20. Autonomous mission management for UAVs using soar intelligent agents

    NASA Astrophysics Data System (ADS)

    Gunetti, Paolo; Thompson, Haydn; Dodd, Tony

    2013-05-01

    State-of-the-art unmanned aerial vehicles (UAVs) are typically able to autonomously execute a pre-planned mission. However, UAVs usually fly in a very dynamic environment which requires dynamic changes to the flight plan; this mission management activity is usually tasked to human supervision. Within this article, a software system that autonomously accomplishes the mission management task for a UAV will be proposed. The system is based on a set of theoretical concepts which allow the description of a flight plan and implemented using a combination of Soar intelligent agents and traditional control techniques. The system is capable of automatically generating and then executing an entire flight plan after being assigned a set of objectives. This article thoroughly describes all system components and then presents the results of tests that were executed using a realistic simulation environment.

  1. Cooperative mission execution and planning

    NASA Astrophysics Data System (ADS)

    Flann, Nicholas S.; Saunders, Kevin S.; Pells, Larry

    1998-08-01

    Utilizing multiple cooperating autonomous vehicles to perform tasks enhances robustness and efficiency over the use of a single vehicle. Furthermore, because autonomous vehicles can be controlled precisely and their status known accurately in real time, new types of cooperative behaviors are possible. This paper presents a working system called MEPS that plans and executes missions for multiple autonomous vehicles in large structured environments. Two generic spatial tasks are supported: to sweep an area and to visit a location while activating on-board equipment. Tasks can be entered both initially by the user and dynamically during mission execution by both users and vehicles. Sensor data and task achievement data are shared among the vehicles, enabling them to cooperatively adapt to changing environmental, vehicle, and task conditions. The system has been successfully applied to control ATV and micro-robotic vehicles in precision agriculture and waste-site characterization environments.

  2. Method for resource control in parallel environments using program organization and run-time support

    NASA Technical Reports Server (NTRS)

    Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)

    2001-01-01

    A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
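
    The patent text quoted above contains no code; as a loose illustration of its central point that interactions with the resource manager happen only at well-defined points in the program, the Python sketch below polls for a new processor count at iteration boundaries and redistributes the data there. The manager policy, names, and data layout are all hypothetical.

```python
# Toy sketch: the application checks for reconfiguration only at safe points
# (iteration boundaries), where repartitioning the data is straightforward.
def partition(data, n_workers):
    return [data[i::n_workers] for i in range(n_workers)]

class ResourceManager:
    def requested_workers(self, step):
        # Hypothetical policy: the system grants more processors from step 3 onward.
        return 4 if step >= 3 else 2

def run(data, steps=6):
    manager = ResourceManager()
    workers = manager.requested_workers(0)
    chunks = partition(data, workers)
    for step in range(steps):
        new_workers = manager.requested_workers(step)   # safe point: ask about reconfiguration
        if new_workers != workers:
            workers = new_workers
            flat = [x for chunk in chunks for x in chunk]
            chunks = partition(flat, workers)           # redistribute the data
        chunks = [[x + 1 for x in chunk] for chunk in chunks]  # one compute step per partition
    return workers, chunks

print(run(list(range(8))))
```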

  3. Method for resource control in parallel environments using program organization and run-time support

    NASA Technical Reports Server (NTRS)

    Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)

    1999-01-01

    A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.

  4. Retooling the nurse executive for 21st century practice: decision support systems.

    PubMed

    Fralic, M F; Denby, C B

    2000-01-01

    Health care financing and care delivery systems are changing at almost warp speed. This requires new responses and new capabilities from contemporary nurse executives and calls for new approaches to the preparation of the next generation of nursing leaders. The premise of this article is that, in these highly unstable environments, the nurse executive faces the need to make high-impact decisions in relatively short time frames. A standardized process for objective decision making becomes essential. This article describes that process.

  5. Mal-Xtract: Hidden Code Extraction using Memory Analysis

    NASA Astrophysics Data System (ADS)

    Lim, Charles; Syailendra Kotualubun, Yohanes; Suryadi; Ramli, Kalamullah

    2017-01-01

    Software packers have been used effectively to hide the original code inside a binary executable, making it more difficult for existing signature-based anti-malware software to detect malicious code inside the executable. A new method based on written and rewritten memory sections is introduced to detect the exact end time of the unpacking routine and to extract the original code from a packed binary executable using memory analysis running in a software-emulated environment. Our experimental results show that at least 97% of the original code could be extracted from various binary executables packed with different software packers. The proposed method has also successfully extracted hidden code from recent malware family samples.
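
    The paper's implementation runs inside a software emulator and its exact bookkeeping is not reproduced in the abstract; the Python toy below only illustrates the generic write-then-execute heuristic that such unpacker detectors rely on (remember which addresses were written, and flag the moment control transfers into one of them). The trace and addresses are hypothetical.

```python
# Toy write-then-execute tracer: the first time control transfers into memory the
# program itself wrote, the unpacking stub has likely finished and the freshly
# written region can be dumped as candidate original code.
class WriteExecuteTracer:
    def __init__(self):
        self.written = set()

    def on_write(self, address):
        self.written.add(address)

    def on_execute(self, address):
        return address in self.written   # True when previously written memory executes

# Hypothetical trace: the stub writes decoded bytes to 0x5000-0x5002, then jumps there.
trace = [("exec", 0x1000), ("write", 0x5000), ("write", 0x5001),
         ("write", 0x5002), ("exec", 0x1004), ("exec", 0x5000)]

tracer = WriteExecuteTracer()
for kind, address in trace:
    if kind == "write":
        tracer.on_write(address)
    elif tracer.on_execute(address):
        print(f"unpacking appears to end at {hex(address)}; dump the written region")
        break
```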

  6. Data Grid Management Systems

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.; Jagatheesan, Arun; Rajasekar, Arcot; Wan, Michael; Schroeder, Wayne

    2004-01-01

    The "Grid" is an emerging infrastructure for coordinating access across autonomous organizations to distributed, heterogeneous computation and data resources. Data grids are being built around the world as the next generation data handling systems for sharing, publishing, and preserving data residing on storage systems located in multiple administrative domains. A data grid provides logical namespaces for users, digital entities and storage resources to create persistent identifiers for controlling access, enabling discovery, and managing wide area latencies. This paper introduces data grids and describes data grid use cases. The relevance of data grids to digital libraries and persistent archives is demonstrated, and research issues in data grids and grid dataflow management systems are discussed.

  7. SEPAC flight software detailed design specifications, volume 1

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The detailed design specifications (as built) for the SEPAC Flight Software are defined. The design includes a description of the total software system and of each individual module within the system. The design specifications describe the decomposition of the software system into its major components. The system structure is expressed in the following forms: the control-flow hierarchy of the system, the data-flow structure of the system, the task hierarchy, the memory structure, and the software-to-hardware configuration mapping. The component design description includes details on the following elements: register conventions, module (subroutine) invocation, module functions, interrupt servicing, data definitions, and database structure.

  8. mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    PubMed Central

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-01-01

    Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet. PMID:16539707
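
    mGrid itself is built from Matlab, PHP scripts, and the Apache web server, none of which is reproduced here; the Python toy below is only a language-neutral illustration of the packaging idea the abstract describes, where user code is shipped together with its packed run-time variables and executed by a worker. There is no networking in the sketch and all names are hypothetical.

```python
import pickle

# "Client" side: bundle the user-defined code with its run-time variables.
user_code = """
def analyse(values):
    return sum(values) / len(values)

result = analyse(values)
"""
job = pickle.dumps({"code": user_code, "variables": {"values": [1.0, 2.0, 6.0]}})

# "Worker" side: unpack the bundle and execute the code with the shipped variables.
payload = pickle.loads(job)
namespace = dict(payload["variables"])
exec(payload["code"], namespace)   # in a real system this would run on a remote machine
print(namespace["result"])         # -> 3.0
```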

  9. mGrid: a load-balanced distributed computing environment for the remote execution of the user-defined Matlab code.

    PubMed

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-03-15

    Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet.

  10. 3 CFR 13610 - Executive Order 13610 of May 10, 2012. Identifying and Reducing Regulatory Burdens

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., welfare, safety, and our environment, but they can also impose significant burdens and costs. During... in light of changed circumstances, including the rise of new technologies. Executive Order 13563 of.... Significantly larger savings are anticipated as the plans are implemented and as action is taken on additional...

  11. Hands On, Minds On: How Executive Function, Motor, and Spatial Skills Foster School Readiness

    ERIC Educational Resources Information Center

    Cameron, Claire E.

    2018-01-01

    A growing body of research indicates that three foundational cognitive skills--executive function, motor skills, and spatial skills--form the basis for children to make a strong academic, behavioral, and social transition to formal school. Given inequitable early learning environments or "opportunity gaps" in the United States, these…

  12. Language Implications for Advertising in International Markets: A Model for Message Content and Message Execution.

    ERIC Educational Resources Information Center

    Beard, John; Yaprak, Attila

    A content analysis model for assessing advertising themes and messages generated primarily for United States markets to overcome barriers in the cultural environment of international markets was developed and tested. The model is based on three primary categories for generating, evaluating, and executing advertisements: rational, emotional, and…

  13. Selection at the Top: An Annotated Bibliography.

    ERIC Educational Resources Information Center

    Sessa, Valerie I.; Campbell, Richard J.

    In this era of rapidly changing organizational environments, the task of executive selection is critical. Practitioners clearly need help with such essential questions as: What does it mean to be successful in today's organizations? How can we select executives who are more likely to perform successfully in them? This book seeks to address those…

  14. The procedure execution manager and its application to Advanced Photon Source operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borland, M.

    1997-06-01

    The Procedure Execution Manager (PEM) combines a complete scripting environment for coding accelerator operation procedures with a manager application for executing and monitoring the procedures. PEM is based on Tcl/Tk, a supporting widget library, and the dp-tcl extension for distributed processing. The scripting environment provides support for distributed, parallel execution of procedures along with join and abort operations. Nesting of procedures is supported, permitting the same code to run as a top-level procedure under operator control or as a subroutine under control of another procedure. The manager application allows an operator to execute one or more procedures in automatic, semi-automatic, or manual modes. It also provides a standard way for operators to interact with procedures. A number of successful applications of PEM to accelerator operations have been made to date. These include start-up, shutdown, and other control of the positron accumulator ring (PAR), low-energy transport (LET) lines, and the booster rf systems. The PAR/LET procedures make nested use of PEM's ability to run parallel procedures. There are also a number of procedures to guide and assist tune-up operations, to make accelerator physics measurements, and to diagnose equipment. Because of the success of the existing procedures, expanded use of PEM is planned.
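
    PEM's procedures are Tcl scripts and its actual join/abort machinery is not shown in the abstract; purely as an illustration of the general pattern, the Python sketch below starts two stand-in procedures in parallel, joins on their completion, and cancels whatever has not started if one of them raises. The procedure names are hypothetical.

```python
from concurrent.futures import FIRST_EXCEPTION, ThreadPoolExecutor, wait

# Toy stand-ins for operations procedures; in PEM these would be Tcl procedures.
def condition_rf():
    return "rf ready"

def ramp_magnets():
    return "magnets at setpoint"

def run_parallel(procedures):
    """Start procedures in parallel, join on completion, abort the rest if one fails."""
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(proc): proc.__name__ for proc in procedures}
        done, not_done = wait(futures, return_when=FIRST_EXCEPTION)
        for future in not_done:
            future.cancel()            # "abort" procedures that have not started yet
        return {futures[f]: f.result() for f in done if f.exception() is None}

print(run_parallel([condition_rf, ramp_magnets]))
```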

  15. FOX: A Fault-Oblivious Extreme-Scale Execution Environment Boston University Final Report Project Number: DE-SC0005365

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appavoo, Jonathan

    Exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. Systems software for exascale machines must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. The FOX project explored systems software and runtime support for a new approach to the data and work distribution for fault oblivious application execution. Our major OS work at Boston University focused on developing a new light-weight operating systems model that provides an appropriate context for both multi-core and multi-node application development. This work is discussed in section 1. Early on in the FOX project BU developed infrastructure for prototyping dynamic HPC environments in which the sets of nodes that an application is run on can be dynamically grown or shrunk. This work was an extension of the Kittyhawk project and is discussed in section 2. Section 3 documents the publications and software repositories that we have produced. To put our work in context of the complete FOX project contribution we include in section 4 an extended version of a paper that documents the complete work of the FOX team.

  16. Workflows and Provenance: Toward Information Science Solutions for the Natural Sciences.

    PubMed

    Gryk, Michael R; Ludäscher, Bertram

    2017-01-01

    The era of big data and ubiquitous computation has brought with it concerns about ensuring reproducibility in this new research environment. It is easy to assume computational methods self-document by their very nature of being exact, deterministic processes. However, similar to laboratory experiments, ensuring reproducibility in the computational realm requires the documentation of both the protocols used (workflows) as well as a detailed description of the computational environment: algorithms, implementations, software environments as well as the data ingested and execution logs of the computation. These two aspects of computational reproducibility (workflows and execution details) are discussed in the context of biomolecular Nuclear Magnetic Resonance spectroscopy (bioNMR) as well as the PRIMAD model for computational reproducibility.

  17. Energy and Environment Guide to Action- Executive Summary

    EPA Pesticide Factsheets

    Summarizes the key messages and purpose of the Energy and Environment Guide to Action, which describes the latest best practices and opportunities that states are using to invest in energy efficiency, renewable energy, and CHP.

  18. Evaluating transformational leadership skills of hospice executives.

    PubMed

    Longenecker, Paul D

    2006-01-01

    Health care is a rapidly changing environment requiring a high level of leadership skills by executive level personnel. The hospice industry is experiencing the same rapid changes; however, the changes have been experienced over the brief span of 25 years. Highly skilled hospice executives are a necessity for the growth and long-term survival of hospice care. This descriptive study was conducted to evaluate the leadership skills of hospice executives. The study population consisted of hospice executives who were members of the state hospice organization in Ohio and/or licensed by the state (88 hospice providers). Three questionnaires were utilized for collecting data. These questionnaires collected data on transformational leadership skills of participants, participants' personal demographics, and their employer's organizational demographics. Forty-seven hospice executives responded (53%). Key findings reported were high levels of transformational leadership skills (mean, 3.39), increased use of laissez-faire skills with years of hospice experience (P = .57), and positive reward being a frequent leadership technique utilized (mean, 3.29). In addition, this was the first study of leadership skills of hospice executives and the first formal collection of personal demographic data about hospice executives.

  19. Executive control systems in the engineering design environment

    NASA Technical Reports Server (NTRS)

    Hurst, P. W.; Pratt, T. W.

    1985-01-01

    Executive Control Systems (ECSs) are software structures for the unification of various engineering design application programs into comprehensive systems with a central user interface (uniform access) method and a data management facility. Attention is presently given to the most significant findings of a research program that examined 24 ECSs used in government and industry engineering design environments to integrate CAD/CAE application programs. Characterizations are given for the systems' major architectural components and the alternative design approaches considered in their development. Attention is given to ECS development prospects in the areas of interdisciplinary usage, standardization, knowledge utilization, and computer science technology transfer.

  20. Nurse executives' values and leadership behaviors. Conflict or coexistence?

    PubMed

    Perkel, Linda K

    2002-01-01

    Nurse leaders struggle to provide for the delivery of humanistic and holistic healthcare that is consistent with nursing values in a changing economic environment. There is concern that nurse executives find it increasingly difficult to reconcile the differences between organizational economics and their personal and professional identities. The purpose of this study was to examine the relationship between nurse executives' perceived personal and organizational value congruence and their leadership behaviors (i.e., transformational, transactional, and laissez-faire). Four hundred and eleven nurse executives employed by American Hospital Association hospitals located east of the Mississippi participated in the study. Findings provide insight into the values held by nurse executives, personal and organizational value congruence and conflict perceived by nurse executives, and the leadership behaviors used by nurse executives. For example, the findings indicate there is a moderate degree of value congruence between nurse executives' personal and organizational values; however, the degree to which specific values are important is significantly different. Nurse executives report that they most often engage in transformational leadership behaviors, but there was no relationship between their leadership behavior and the degree of personal and organizational value congruence. Implications for nursing and nursing research are discussed.

  1. Parenting style is related to executive dysfunction after brain injury in children.

    PubMed

    Potter, Jennifer L; Wade, Shari L; Walz, Nicolay C; Cassedy, Amy; Stevens, M Hank; Yeates, Keith O; Taylor, H Gerry

    2011-11-01

    The goal of this study was to examine how parenting style (authoritarian, authoritative, permissive) and family functioning are related to behavioral aspects of executive function following traumatic brain injury (TBI) in young children. Participants included 75 children with TBI and 97 children with orthopedic injuries (OI), ages 3-7 years at injury. Pre-injury parenting behavior and family functioning were assessed shortly after injury, and postinjury executive functions were assessed using the Behavior Rating Inventory of Executive Functioning (BRIEF; Gioia & Isquith, 2004) at 6, 12, and 18 months postinjury. Mixed model analyses, using pre-injury executive functioning (assessed by the BRIEF at baseline) as a covariate, examined the relationship of parenting style and family characteristics to executive functioning in children with moderate and severe TBI compared to OI. Among children with moderate TBI, higher levels of authoritarian parenting were associated with greater executive difficulties at 12 and 18 months following injury. Permissive and authoritative parenting styles were not significantly associated with postinjury executive skills. Finally, fewer family resources predicted more executive deficits across all of the groups, regardless of injury type. These findings provide additional evidence regarding the role of the social and familial environment in emerging behavior problems following childhood TBI.

  2. ScyFlow: An Environment for the Visual Specification and Execution of Scientific Workflows

    NASA Technical Reports Server (NTRS)

    McCann, Karen M.; Yarrow, Maurice; DeVivo, Adrian; Mehrotra, Piyush

    2004-01-01

    With the advent of grid technologies, scientists and engineers are building more and more complex applications to utilize distributed grid resources. The core grid services provide a path for accessing and utilizing these resources in a secure and seamless fashion. However, what the scientists need is an environment that will allow them to specify their application runs at a high organizational level, and then support efficient execution across any given set or sets of resources. We have been designing and implementing ScyFlow, a dual-interface architecture (both GUI and API) that addresses this problem. The scientist/user specifies the application tasks along with the necessary control and data flow, and monitors and manages the execution of the resulting workflow across the distributed resources. In this paper, we utilize two scenarios to provide the details of the two modules of the project, the visual editor and the runtime workflow engine.
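
    ScyFlow's actual GUI and API are not shown in the abstract; the Python toy below only illustrates the underlying notion of a task graph with control and data dependencies that is executed in dependency order, with each ready task handed to some resource. The task names and the dictionary format are hypothetical.

```python
# Toy workflow: each task lists the tasks it depends on; execution proceeds in
# dependency order, which is the essence of a control/data-flow specification.
workflow = {
    "mesh":     {"deps": [],           "run": lambda: "mesh built"},
    "flow_sim": {"deps": ["mesh"],     "run": lambda: "flow solved"},
    "post":     {"deps": ["flow_sim"], "run": lambda: "plots written"},
}

def execute(workflow):
    finished, results = set(), {}
    while len(finished) < len(workflow):
        for name, task in workflow.items():
            if name not in finished and all(dep in finished for dep in task["deps"]):
                results[name] = task["run"]()   # a real engine would submit to a grid resource
                finished.add(name)
    return results

print(execute(workflow))
```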

  3. System architecture for asynchronous multi-processor robotic control system

    NASA Technical Reports Server (NTRS)

    Steele, Robert D.; Long, Mark; Backes, Paul

    1993-01-01

    The architecture for the Modular Telerobot Task Execution System (MOTES) as implemented in the Supervisory Telerobotics (STELER) Laboratory is described. MOTES is the software component of the remote site of a local-remote telerobotic system which is being developed for NASA for space applications, in particular Space Station Freedom applications. The system is being developed to provide control and supervised autonomous control to support both space based operation and ground-remote control with time delay. The local-remote architecture places task planning responsibilities at the local site and task execution responsibilities at the remote site. This separation allows the remote site to be designed to optimize task execution capability within a limited computational environment such as is expected in flight systems. The local site task planning system could be placed on the ground where few computational limitations are expected. MOTES is written in the Ada programming language for a multiprocessor environment.

  4. Acceptance and Use of Lecture Capture System (LCS) in Executive Business Studies: Extending UTAUT2

    ERIC Educational Resources Information Center

    Farooq, Muhammad Shoaib; Salam, Maimoona; Jaafar, Norizan; Fayolle, Alain; Ayupp, Kartinah; Radovic-Markovic, Mirjana; Sajid, Ali

    2017-01-01

    Purpose: Adoption of latest technological advancements (e.g. lecture capture system) is a hallmark of market-driven private universities. Among many other distinguishing features, lecture capture system (LCS) is the one which is being offered to enhance the flexibility of learning environment for attracting executive business students. Majority of…

  5. Predictors of Behavioral Regulation in Kindergarten: Household Chaos, Parenting, and Early Executive Functions

    ERIC Educational Resources Information Center

    Vernon-Feagans, Lynne; Garrett-Peters, Patricia; Willoughby, Michael

    2016-01-01

    Behavioral regulation is an important school readiness skill that has been linked to early executive function (EF) and later success in learning and school achievement. Although poverty and related risks, as well as negative parenting, have been associated with poorer EF and behavioral regulation, chaotic home environments may also play a role in…

  6. The Turnaround Mindset: Aligning Leadership for Student Success

    ERIC Educational Resources Information Center

    Fairchild, Tierney Temple; DeMary, Jo Lynne

    2011-01-01

    This book provides a valuable balance between what one must know and what one must do to turn around low-performing schools. The 3-E framework simplifies this complex process by focusing resources on the environment, the executive, and the execution of the turnaround plan. Central to each of these components is a spotlight on the values supporting…

  7. Examining Executive Function in the Second Year of Life: Coherence, Stability, and Relations to Joint Attention and Language

    ERIC Educational Resources Information Center

    Miller, Stephanie E.; Marcovitch, Stuart

    2015-01-01

    Several theories of executive function (EF) propose that EF development corresponds to children's ability to form representations and reflect on represented stimuli in the environment. However, research on early EF is primarily conducted with preschoolers, despite the fact that important developments in representation (e.g., language, gesture,…

  8. General Temporal Knowledge for Planning and Data Mining

    NASA Technical Reports Server (NTRS)

    Morris, Robert; Khatib, Lina

    2001-01-01

    We consider the architecture of systems that combine temporal planning and plan execution and introduce a layer of temporal reasoning that potentially improves both the communication between humans and such systems, and the performance of the temporal planner itself. In particular, this additional layer simultaneously supports more flexibility in specifying and maintaining temporal constraints on plans within an uncertain and changing execution environment, and the ability to understand and trace the progress of plan execution. It is shown how a representation based on a single set of abstractions of temporal information can be used to characterize the reasoning underlying plan generation and execution interpretation. The complexity of such reasoning is discussed.
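
    The abstract does not spell out the paper's representation; one common formalism for flexible temporal constraints on plan events is the simple temporal network, sketched below in Python purely as an illustration. The event names and bounds are hypothetical, and consistency is checked with the standard negative-cycle test on the distance graph.

```python
import math

# Simple temporal network sketch: an edge (u, v, w) means time(v) - time(u) <= w.
# The constraint set is consistent iff the distance graph has no negative cycle,
# which Floyd-Warshall detects as a negative entry on the diagonal.
def consistent(n_events, constraints):
    dist = [[0 if i == j else math.inf for j in range(n_events)] for i in range(n_events)]
    for u, v, w in constraints:
        dist[u][v] = min(dist[u][v], w)
    for k in range(n_events):
        for i in range(n_events):
            for j in range(n_events):
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    return all(dist[i][i] >= 0 for i in range(n_events))

# Hypothetical events: 0 = plan start, 1 = activity start, 2 = activity end.
# The activity lasts 5 to 10 time units and must end within 8 units of plan start.
constraints = [(1, 2, 10), (2, 1, -5),   # 5 <= end - start <= 10
               (0, 2, 8), (2, 0, 0)]     # 0 <= end - plan_start <= 8
print(consistent(3, constraints))        # True: e.g. start at 0, end at 8
```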

  9. The moderating effect of ANKK1 on the association of family environment with longitudinal executive function following traumatic brain injury in early childhood: A preliminary study.

    PubMed

    Smith-Paine, Julia; Wade, Shari L; Treble-Barna, Amery; Zhang, Nanhua; Zang, Huaiyu; Martin, Lisa J; Yeates, Keith Owen; Taylor, H Gerry; Kurowski, Brad G

    2018-05-02

    This study examined whether the ankyrin repeat and kinase domain containing 1 gene (ANKK1) C/T single-nucleotide polymorphism (SNP) rs1800497 moderated the association of family environment with long-term executive function (EF) following traumatic injury in early childhood. Caregivers of children with traumatic brain injury (TBI) and children with orthopedic injury (OI) completed the Behavior Rating Inventory of Executive Function (BRIEF) at post injury visits. DNA was collected to identify the rs1800497 genotype in the ANKK1 gene. General linear models examined gene-environment interactions as moderators of the effects of TBI on EF at two times post injury (12 months and 7 years). At 12 months post injury, analyses revealed a significant 3-way interaction of genotype with level of permissive parenting and injury type. Post-hoc analyses showed genetic effects were more pronounced for children with TBI from more positive family environments, such that children with TBI who were carriers of the risk allele (T-allele) had significantly poorer EF compared to non-carriers only when they were from more advantaged environments. At 7 years post injury, analyses revealed a significant 2-way interaction of genotype with level of authoritarian parenting. Post-hoc analyses found that carriers of the risk allele had significantly poorer EF compared to non-carriers only when they were from more advantaged environments. These results suggest a gene-environment interaction involving the ANKK1 gene as a predictor of EF in a pediatric injury population. The findings highlight the importance of considering environmental influences in future genetic studies on recovery following TBI and other traumatic injuries in childhood.

  10. 10 CFR Appendix A to Subpart B of... - General Statement of Safety Basis Policy

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... with DOE Policy 450.2A, “Identifying, Implementing and Complying with Environment, Safety and Health..., safety, and health into work planning and execution (48 CFR 970.5223-1, Integration of Environment...) Using the method in DOE-STD-1120-98, Integration of Environment, Safety, and Health into Facility...

  11. 10 CFR Appendix A to Subpart B of... - General Statement of Safety Basis Policy

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... with DOE Policy 450.2A, “Identifying, Implementing and Complying with Environment, Safety and Health..., safety, and health into work planning and execution (48 CFR 970.5223-1, Integration of Environment...) Using the method in DOE-STD-1120-98, Integration of Environment, Safety, and Health into Facility...

  12. 10 CFR Appendix A to Subpart B of... - General Statement of Safety Basis Policy

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... with DOE Policy 450.2A, “Identifying, Implementing and Complying with Environment, Safety and Health..., safety, and health into work planning and execution (48 CFR 970.5223-1, Integration of Environment...) Using the method in DOE-STD-1120-98, Integration of Environment, Safety, and Health into Facility...

  13. JGOMAS: New Approach to AI Teaching

    ERIC Educational Resources Information Center

    Barella, A.; Valero, S.; Carrascosa, C.

    2009-01-01

    This paper presents a new environment for teaching practical work in AI subjects. The main purpose of this environment is to make AI techniques more appealing to students and to facilitate the use of the toolkits which are currently widely used in research and development. This new environment has a toolkit for developing and executing agents,…

  14. Positioning marketing in the hospital's power structure.

    PubMed

    Beckham, D

    1984-08-01

    Although hospitals are increasingly recognizing the importance of marketing, many have difficulty assimilating what has been primarily an industrial concern into a health care environment. The author explains the function of marketing in health care, the outlook and expectations of a good marketing executive, and why hospital management and the medical staff may have difficulty accepting marketing and the expectations of the marketing executive.

  15. FPGA based charge acquisition algorithm for soft x-ray diagnostics system

    NASA Astrophysics Data System (ADS)

    Wojenski, A.; Kasprowicz, G.; Pozniak, K. T.; Zabolotny, W.; Byszuk, A.; Juszczyk, B.; Kolasinski, P.; Krawczyk, R. D.; Zienkiewicz, P.; Chernyshova, M.; Czarski, T.

    2015-09-01

    Soft X-ray (SXR) measurement systems working in tokamaks or with laser-generated plasma can expect high photon fluxes. It is therefore necessary to focus on the data processing algorithms to achieve the best possible efficiency in terms of processed photon events per second. This paper describes the recently designed algorithm and dataflow for the implementation of charge data acquisition in an FPGA. The algorithms are currently at the implementation stage for the soft X-ray diagnostics system. In addition to the charge processing algorithm, the paper also gives a general firmware overview and describes the data storage methods and other key components of the measurement system. The simulation section presents the algorithm's performance and the expected maximum photon rate.
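
    The actual algorithm runs in FPGA firmware and is not reproduced in the abstract; the Python toy below only illustrates one generic charge-extraction idea (find threshold crossings in the sampled waveform and integrate each pulse), not the authors' implementation. The threshold, baseline, and waveform are hypothetical.

```python
# Toy, software-only charge extraction: detect threshold crossings in a sampled
# waveform and sum the above-baseline samples of each pulse as its charge.
def extract_charges(samples, baseline=0.0, threshold=5.0):
    charges, in_pulse, charge = [], False, 0.0
    for sample in samples:
        if sample - baseline > threshold:
            in_pulse = True
            charge += sample - baseline
        elif in_pulse:                    # pulse just ended: record the integrated charge
            charges.append(charge)
            in_pulse, charge = False, 0.0
    if in_pulse:
        charges.append(charge)
    return charges

waveform = [0, 1, 9, 14, 8, 1, 0, 0, 7, 12, 6, 0]   # two hypothetical photon pulses
print(extract_charges(waveform))                     # -> [31.0, 25.0]
```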

  16. Fault tolerant architectures for integrated aircraft electronics systems, task 2

    NASA Technical Reports Server (NTRS)

    Levitt, K. N.; Melliar-Smith, P. M.; Schwartz, R. L.

    1984-01-01

    The architectural basis for an advanced fault tolerant on-board computer to succeed the current generation of fault tolerant computers is examined. The network error tolerant system architecture is studied with particular attention to intercluster configurations and communication protocols, and to refined reliability estimates. The diagnosis of faults, so that appropriate choices for reconfiguration can be made is discussed. The analysis relates particularly to the recognition of transient faults in a system with tasks at many levels of priority. The demand driven data-flow architecture, which appears to have possible application in fault tolerant systems is described and work investigating the feasibility of automatic generation of aircraft flight control programs from abstract specifications is reported.

  17. On-line data analysis and monitoring for H1 drift chambers

    NASA Astrophysics Data System (ADS)

    Düllmann, Dirk

    1992-05-01

    The on-line monitoring, slow control, and calibration of the H1 central jet chamber use a VME multiprocessor system to perform the analysis and a connected Macintosh computer as the graphical interface for the operator on shift. Tasks of this system are: analysis of event data including an on-line track search; on-line calibration from normal events and test-pulse events; control of the high voltage and monitoring of settings and currents; and monitoring of the temperature, pressure, and mixture of the chamber gas. A program package is described which controls the dataflow between data acquisition, the different VME CPUs, and the Macintosh. It allows off-line-style programs to be run for the different tasks.

  18. BPELPower—A BPEL execution engine for geospatial web services

    NASA Astrophysics Data System (ADS)

    Yu, Genong (Eugene); Zhao, Peisheng; Di, Liping; Chen, Aijun; Deng, Meixia; Bai, Yuqi

    2012-10-01

    The Business Process Execution Language (BPEL) has become a popular choice for orchestrating and executing workflows in the Web environment. As one special kind of scientific workflow, geospatial Web processing workflows are data-intensive, deal with complex structures in data and geographic features, and execute automatically with limited human intervention. To enable the proper execution and coordination of geospatial workflows, a specially enhanced BPEL execution engine is required. BPELPower was designed, developed, and implemented as a generic BPEL execution engine with enhancements for executing geospatial workflows. The enhancements are especially in its capabilities in handling Geography Markup Language (GML) and standard geospatial Web services, such as the Web Processing Service (WPS) and the Web Feature Service (WFS). BPELPower has been used in several demonstrations over the decade. Two scenarios were discussed in detail to demonstrate the capabilities of BPELPower. That study showed a standard-compliant, Web-based approach for properly supporting geospatial processing, with the only enhancement at the implementation level. Pattern-based evaluation and performance improvement of the engine are discussed: BPELPower directly supports 22 workflow control patterns and 17 workflow data patterns. In the future, the engine will be enhanced with high performance parallel processing and broad Web paradigms.

  19. DDDAS for space applications

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Pham, Khanh D.; Shen, Dan; Chen, Genshe

    2018-05-01

    The dynamic data-driven applications systems (DDDAS) paradigm is meant to inject measurements into the execution model for enhanced systems performance. One area of interest in DDDAS is space situation awareness (SSA). For SSA, data is collected about the space environment to determine object motions, environments, and model updates. Dynamic coupling between the data and models enhances the capabilities of each system by complementing models with data for system control, execution, and sensor management. The paper overviews some of the recent developments in SSA made possible from DDDAS techniques, which are for object detection, resident space object tracking, atmospheric models for enhanced sensing, cyber protection, and information management.

  20. XML-Based Visual Specification of Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad

    2001-01-01

    The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.

  1. Incorporating Brokers within Collaboration Environments

    NASA Astrophysics Data System (ADS)

    Rajasekar, A.; Moore, R.; de Torcy, A.

    2013-12-01

    A collaboration environment, such as the integrated Rule Oriented Data System (iRODS - http://irods.diceresearch.org), provides interoperability mechanisms for accessing storage systems, authentication systems, messaging systems, information catalogs, networks, and policy engines from a wide variety of clients. The interoperability mechanisms function as brokers, translating actions requested by clients to the protocol required by a specific technology. The iRODS data grid is used to enable collaborative research within hydrology, seismology, earth science, climate, oceanography, plant biology, astronomy, physics, and genomics disciplines. Although each domain has unique resources, data formats, semantics, and protocols, the iRODS system provides a generic framework that is capable of managing collaborative research initiatives that span multiple disciplines. Each interoperability mechanism (broker) is linked to a name space that enables unified access across the heterogeneous systems. The collaboration environment provides not only support for brokers, but also support for virtualization of name spaces for users, files, collections, storage systems, metadata, and policies. The broker enables access to data or information in a remote system using the appropriate protocol, while the collaboration environment provides a uniform naming convention for accessing and manipulating each object. Within the NSF DataNet Federation Consortium project (http://www.datafed.org), three basic types of interoperability mechanisms have been identified and applied: 1) drivers for managing manipulation at the remote resource (such as data subsetting), 2) micro-services that execute the protocol required by the remote resource, and 3) policies for controlling the execution. For example, drivers have been written for manipulating NetCDF and HDF formatted files within THREDDS servers. Micro-services have been written that manage interactions with the CUAHSI data repository, the DataONE information catalog, and the GeoBrain broker. Policies have been written that manage transfer of messages between an iRODS message queue and the Advanced Message Queuing Protocol. Examples of these brokering mechanisms will be presented. The DFC collaboration environment serves as the intermediary between community resources and compute grids, enabling reproducible data-driven research. It is possible to create an analysis workflow that retrieves data subsets from a remote server, assemble the required input files, automate the execution of the workflow, automatically track the provenance of the workflow, and share the input files, workflow, and output files. A collaborator can re-execute a shared workflow, compare results, change input files, and re-execute an analysis.
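
    None of the iRODS driver, micro-service, or policy code is shown in the abstract; the Python toy below only illustrates the brokering pattern it describes, in which client requests use one uniform interface and a registry of technology-specific drivers does the protocol work. The driver classes and method names are hypothetical.

```python
# Toy broker: one uniform "subset" request, dispatched to a format-specific driver.
class NetCDFDriver:
    def subset(self, path, variable):
        return f"NetCDF subset of {variable} from {path}"

class HDFDriver:
    def subset(self, path, variable):
        return f"HDF subset of {variable} from {path}"

class Broker:
    def __init__(self):
        self.drivers = {".nc": NetCDFDriver(), ".h5": HDFDriver()}

    def subset(self, path, variable):
        for suffix, driver in self.drivers.items():
            if path.endswith(suffix):
                return driver.subset(path, variable)   # translate to the native protocol
        raise ValueError(f"no driver registered for {path}")

print(Broker().subset("/archive/ocean_temp.nc", "sst"))
```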

  2. 40 CFR 11.2 - Background.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Background. 11.2 Section 11.2 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL SECURITY CLASSIFICATION REGULATIONS PURSUANT TO EXECUTIVE ORDER 11652 § 11.2 Background. While the Environmental Protection Agency does not...

  3. The restless mind.

    PubMed

    Smallwood, Jonathan; Schooler, Jonathan W

    2006-11-01

    This article reviews the hypothesis that mind wandering can be integrated into executive models of attention. Evidence suggests that mind wandering shares many similarities with traditional notions of executive control. When mind wandering occurs, the executive components of attention appear to shift away from the primary task, leading to failures in task performance and superficial representations of the external environment. One challenge for incorporating mind wandering into standard executive models is that it often occurs in the absence of explicit intention--a hallmark of controlled processing. However, mind wandering, like other goal-related processes, can be engaged without explicit awareness; thus, mind wandering can be seen as a goal-driven process, albeit one that is not directed toward the primary task. (c) 2006 APA, All Rights Reserved.

  4. Parenting Style Is Related to Executive Dysfunction After Brain Injury in Children

    PubMed Central

    Potter, Jennifer L.; Wade, Shari L.; Walz, Nicolay C.; Cassedy, Amy; Yeates, Keith O.; Stevens, M. Hank; Taylor, H. Gerry

    2013-01-01

    Objective The goal of this study was to examine how parenting style (authoritarian, authoritative, permissive) and family functioning are related to behavioral aspects of executive function following traumatic brain injury (TBI) in young children. Method Participants included 75 children with TBI and 97 children with orthopedic injuries (OI), ages 3–7 years at injury. Pre-injury parenting behavior and family functioning were assessed shortly after injury, and postinjury executive functions were assessed using the Behavior Rating Inventory of Executive Functioning (BRIEF; Gioia & Isquith, 2004) at 6, 12, and 18 months postinjury. Mixed model analyses, using pre-injury executive functioning (assessed by the BRIEF at baseline) as a covariate, examined the relationship of parenting style and family characteristics to executive functioning in children with moderate and severe TBI compared to OI. Results Among children with moderate TBI, higher levels of authoritarian parenting were associated with greater executive difficulties at 12 and 18 months following injury. Permissive and authoritative parenting styles were not significantly associated with postinjury executive skills. Finally, fewer family resources predicted more executive deficits across all of the groups, regardless of injury type. Conclusion These findings provide additional evidence regarding the role of the social and familial environment in emerging behavior problems following childhood TBI. PMID:21928918

  5. Boys have caught up, family influences still continue: Influences on executive functioning and behavioral self-regulation in elementary students in Germany.

    PubMed

    Gunzenhauser, Catherine; Saalbach, Henrik; von Suchodoletz, Antje

    2017-03-01

    The development of self-regulation is influenced by various child-level and family-level characteristics. Previous research focusing on the preschool period has reported a female advantage in self-regulation and negative effects of various adverse features of the family environment on self-regulation. The present study aimed to investigate growth in self-regulation (i.e., executive functioning and behavioral self-regulation) over 1 school year during early elementary school and to explore the influences of child sex, the level of home chaos, and family educational resources on self-regulation. Participants were 263 German children (51% boys; mean age 8.59 years, SD = 0.56 years). Data were collected during the fall and spring of the school year. A computer-based standardized test battery was used to assess executive functioning. Caregiver ratings assessed children's behavioral self-regulation and information on the family's home environment (chaotic home environment and educational resources). Results suggest growth in elementary school children's executive functioning over the course of the school year. However, there were no significant changes in children's behavioral self-regulation between the beginning and the end of Grade 3. Sex differences in executive functioning and behavioral self-regulation were found, suggesting an advantage for boys. Educational resources in the family but not chaotic family environment were significantly related to self-regulation at both time-points. Children from families with more educational resources scored higher on self-regulation measures compared to their counterparts from less advantaged families. We did not find evidence for child-level or family-level characteristics predicting self-regulation growth over time. Findings suggest that the male disadvantage in self-regulation documented in previous studies might be specific to characteristics of the sample and the context in which the data were collected. Adequate self-regulation skills should be fostered in both girls and boys. Results also add to the importance of supporting self-regulation development in children from disadvantaged family backgrounds early in life. © 2017 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  6. Localized Fault Recovery for Nested Fork-Join Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kestor, Gokcen; Krishnamoorthy, Sriram; Ma, Wenjing

    Nested fork-join programs scheduled using work stealing can automatically balance load and adapt to changes in the execution environment. In this paper, we design an approach to efficiently recover from faults encountered by these programs. Specifically, we focus on localized recovery of the task space in the presence of fail-stop failures. We present an approach to efficiently track, under work stealing, the relationships between the work executed by various threads. This information is used to identify and schedule the tasks to be re-executed without interfering with normal task execution. The algorithm precisely computes the work lost, incurs minimal re-execution overhead, and can recover from an arbitrary number of failures. Experimental evaluation demonstrates low overheads in the absence of failures, recovery overheads on the same order as the lost work, and much lower recovery costs than alternative strategies.
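
    The following toy Python sketch illustrates the general idea of localized recovery in a fork-join task tree: each task remembers which worker ran it, so when a worker fails only that worker's subtrees are re-executed. It is a simplified illustration with invented task and worker names, not the paper's steal-relationship tracking algorithm.

    ```python
    # Simplified sketch of localized recovery for a fork-join task tree: each
    # task records the worker that executed it, so a worker failure triggers
    # re-execution of only that worker's tasks and their descendants.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        children: list = field(default_factory=list)
        worker: int = None
        done: bool = False

    def execute(task, worker):
        """Pretend to run a task (and, recursively, its children) on a worker."""
        task.worker, task.done = worker, True
        for i, child in enumerate(task.children):
            execute(child, worker if i == 0 else worker + 1)   # crude "steal"

    def recover(task, failed_worker):
        """Re-execute only the subtrees lost to the failed worker."""
        if task.worker == failed_worker:
            execute(task, worker=0)          # a surviving worker re-runs the subtree
            return
        for child in task.children:
            recover(child, failed_worker)

    root = Task("root", [Task("left", [Task("l0"), Task("l1")]), Task("right")])
    execute(root, worker=0)
    recover(root, failed_worker=1)           # only the stolen subtrees are redone
    ```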

  7. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
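
    To make the scheduler-synthesis idea concrete, the toy Python sketch below enumerates the outcomes of a two-branch program, attaches probabilities to the probabilistic input, and selects the nondeterministic choice that maximizes the probability of reaching the target. The program, probabilities, and choice labels are invented for illustration; this is not Symbolic PathFinder.

    ```python
    # Toy illustration of resolving nondeterminism to maximize the probability
    # of reaching a target event: exhaustively score each nondeterministic
    # choice against the probabilistic branches of a tiny invented program.
    P_SMALL = 0.7   # assumed probability that the input is "small"

    def reaches_target(choice, x_small):
        # Nondeterministic choice 'A' reaches the target on small inputs,
        # choice 'B' reaches it on large inputs (invented semantics).
        return (choice == "A" and x_small) or (choice == "B" and not x_small)

    def success_probability(choice):
        prob = 0.0
        for x_small, p in [(True, P_SMALL), (False, 1.0 - P_SMALL)]:
            if reaches_target(choice, x_small):
                prob += p
        return prob

    best = max(["A", "B"], key=success_probability)
    print(best, success_probability(best))   # scheduler choice maximizing success
    ```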

  8. ADAMS executive and operating system

    NASA Technical Reports Server (NTRS)

    Pittman, W. D.

    1981-01-01

    The ADAMS Executive and Operating System is described: a multitasking environment under which a variety of data reduction, display, and utility programs are executed, and which provides a high level of isolation between programs, allowing them to be developed and modified independently. The Airborne Data Analysis/Monitor System (ADAMS) was developed to provide a real-time data monitoring and analysis capability onboard Boeing commercial airplanes during flight testing. It inputs sensor data from an airplane, derives performance data by applying transforms to the collected sensor data, and presents these data to test personnel via various display media. Current utilization and future development are addressed.

  9. Executive Orders and the Trump Administration: A Guide for Social Workers.

    PubMed

    Lens, Vicki

    2018-07-01

    With the election of Donald Trump, policies antithetical to our clients' well-being, in areas as diverse as criminal justice, the environment, health care, and immigration, are being proposed at a rapid rate. Many of these policies are being transmitted through executive orders (EOs), a mechanism for exercising executive power less familiar to social workers. This article analyzes EOs issued by the Trump administration during its first five months, describing their purpose, content, and potential for policy change. Strategies for resistance and points of intervention for social workers and other advocates are also identified.

  10. AIDA: An Integrated Authoring Environment for Educational Software.

    ERIC Educational Resources Information Center

    Mendes, Antonio Jose; Mendes, Teresa

    1996-01-01

    Describes an integrated authoring environment, AIDA ("Ambiente Integrado de Desenvolvimento de Aplicacoes educacionais"), that was developed at the University of Coimbra (Portugal) for educational software. Highlights include the design module, a prototyping tool that allows for multimedia, simulations, and modularity; execution module;…

  11. Polytechnics in a "Postmarket" Environment.

    ERIC Educational Resources Information Center

    McNae, Denny

    2002-01-01

    Interviews with chief executive officers of three New Zealand polytechnics elicited themes regarding polytechnics' role in a "postmarket" environment: (1) collaboration is possible amidst fierce competition; (2) change is slow, externally driven, and complex; and (3) the polytechnic sector is in disarray. (Contains 27 references.) (SK)

  12. Mobile Wastewater Treatment Technology for Contingency Bases

    DTIC Science & Technology

    2012-05-24

    National Defense Center for Energy and Environment. Contingency Base Wastewater Treatment Options (Option, Advantages, Disadvantages): Tanking and Trucking Offsite, Low...National Defense Center for Energy and Environment, Mobile Wastewater Treatment for Contingency Bases, May 2012...DoD Executive Agent, Mobile Wastewater Treatment Technology for Contingency Bases. Shan Abeywickrama, NDCEE/CTC; Elizabeth Keysar

  13. BioContainers: an open-source and community-driven framework for software standardization.

    PubMed

    da Veiga Leprevost, Felipe; Grüning, Björn A; Alves Aflitos, Saulo; Röst, Hannes L; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I; Perez-Riverol, Yasset

    2017-08-15

    BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software, and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt frameworks, which allow software to be installed and executed in an isolated and controlled environment. It also provides infrastructure and basic guidelines to create, manage, and distribute bioinformatics containers, with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). The software is freely available at github.com/BioContainers/. Contact: yperez@ebi.ac.uk. © The Author(s) 2017. Published by Oxford University Press.
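
    A containerized tool of this kind is typically executed by mounting a host directory into the container and invoking the tool inside it. The minimal Python sketch below wraps the standard Docker CLI for that purpose; the image name is a hypothetical placeholder rather than a specific BioContainers image.

    ```python
    # Minimal sketch of executing a containerized tool in an isolated
    # environment via the Docker CLI. The image name below is a hypothetical
    # placeholder, not a specific BioContainers image.
    import subprocess

    def run_in_container(image, command, workdir="/data", host_dir="."):
        """Run `command` inside `image`, mounting host_dir at workdir."""
        return subprocess.run(
            ["docker", "run", "--rm",
             "-v", f"{host_dir}:{workdir}", "-w", workdir,
             image] + command,
            capture_output=True, text=True, check=False)

    # result = run_in_container("example/aligner:1.0", ["aligner", "--version"])
    # print(result.stdout)
    ```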

  14. BioContainers: an open-source and community-driven framework for software standardization

    PubMed Central

    da Veiga Leprevost, Felipe; Grüning, Björn A.; Alves Aflitos, Saulo; Röst, Hannes L.; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C.; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I.; Perez-Riverol, Yasset

    2017-01-01

    Motivation: BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software, and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt frameworks, which allow software to be installed and executed in an isolated and controlled environment. It also provides infrastructure and basic guidelines to create, manage, and distribute bioinformatics containers, with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). Availability and Implementation: The software is freely available at github.com/BioContainers/. Contact: yperez@ebi.ac.uk PMID:28379341

  15. [Effect of parents' occupational and life environment exposure during six months before pregnancy on executive function of preschool children].

    PubMed

    Ni, Lingling; Shao, Ting; Tao, Huihui; Sun, Yanli; Yan, Shuangqin; Gu, Chunli; Cao, Hui; Huang, Kun; Tao, Fangbiao; Tong, Shilu

    2016-02-01

    To examine the effect of parents' occupational and life exposures during the six months before pregnancy on the executive function of preschool children. Pregnant women involved in the study came from the Ma'anshan Birth Cohort Study, a part of the China-Anhui Birth Cohort Study. Between October 2008 and October 2010, pregnant women who received pregnancy care in four municipal medical and health institutions in Ma'anshan city were recruited as study subjects. A total of 5,084 pregnant women and 4,669 singleton live births entered this cohort. Between April 2014 and April 2015, a total of 3,803 pre-school children were followed up. After excluding 32 preschool children without an executive function (EF) evaluation, 3,771 children were included in this study. Using a self-designed "Maternal health handbook", we collected parents' general demographic characteristics and their life and occupational exposures during the six months before pregnancy. Preschool children's executive function was assessed with the Behavior Rating Inventory of Executive Function-Preschool Version (BRIEF-P). Univariate and multivariate statistical methods were used to analyze the association between parents' life and occupational exposures during the six months before pregnancy and preschool children's EF. Among the 3,771 preschool children, the detected rates of poor development of the inhibitory self-control index (ISCI), flexibility index (FI), emergent metacognition index (EMI), and global executive composite (GEC) were 4.8% (182), 2.3% (88), 16.5% (623) and 8.6% (324), respectively. Children whose parents lived in a noisy environment during the six months before pregnancy (OR=1.86, 95% CI: 1.36-2.54) or whose mothers were exposed to pesticides (OR=3.60, 95% CI: 1.45-8.95) were at increased risk of poor ISCI development. Children whose mothers were exposed to pesticides (OR=6.72, 95% CI: 2.50-18.07) or whose fathers were exposed to occupational lead (OR=2.10, 95% CI: 1.25-3.54) were at increased risk of poor FI development. Children whose parents lived in a noisy environment (OR=1.42, 95% CI: 1.18-1.71) or whose fathers were exposed to occupational lead (OR=1.30, 95% CI: 1.02-1.65) were at increased risk of poor EMI development. Children whose parents lived in a noisy environment (OR=1.58, 95% CI: 1.24-2.01) or whose mothers were exposed to pesticides (OR=2.39, 95% CI: 1.02-5.58) were at increased risk of poor GEC development. The development of executive function is poorer among preschool children whose parents lived in a noisy environment, whose mothers were exposed to pesticides, or whose fathers were exposed to occupational lead during the six months before pregnancy.

  16. The Emerging Importance of Business Process Standards in the Federal Government

    DTIC Science & Technology

    2006-02-23

    delivers enough value for its commercialization into the general industry. Today, we are seeing standards such as SOA, BPMN and BPEL hit that...Process Modeling Notation (BPMN) and the Business Process Execution Language (BPEL). BPMN provides a standard representation for capturing and...execution. The combination of BPMN and BPEL offers organizations the potential to standardize processes in a distributed environment, enabling

  17. Understanding the Situation in the Urban Environment

    DTIC Science & Technology

    2001-05-15

    second type of information, termed executable information, communicates a clearly understood vision of the operation and desired outcome after a decision...information necessary for the commander as situational awareness information, which creates understanding, and execution information, which communicates a...technological advances yet to take place in such fields as computers or remotely controlled sensors...will be less opaque

  18. Putting time into proof outlines

    NASA Technical Reports Server (NTRS)

    Schneider, Fred B.; Bloom, Bard; Marzullo, Keith

    1993-01-01

    A logic for reasoning about timing properties of concurrent programs is presented. The logic is based on Hoare-style proof outlines and can handle maximal parallelism as well as certain resource-constrained execution environments. The correctness proof for a mutual exclusion protocol that uses execution timings in a subtle way illustrates the logic in action. A soundness proof using structural operational semantics is outlined in the appendix.

  19. Are the deficits in navigational abilities present in the Williams syndrome related to deficits in the backward inhibition?

    PubMed Central

    Foti, Francesca; Sdoia, Stefano; Menghini, Deny; Mandolesi, Laura; Vicari, Stefano; Ferlazzo, Fabio; Petrosini, Laura

    2015-01-01

    Williams syndrome (WS) is associated with a distinct profile of relatively proficient skills within the verbal domain compared to the severe impairment of visuo-spatial processing. Abnormalities in executive functions and deficits in planning ability and spatial working memory have been described. However, to date little is known about the influence of executive function deficits on navigational abilities in WS. This study aimed to analyze, in WS individuals, a specific executive function, backward inhibition (BI), which allows individuals to flexibly adapt to continuously changing environments. A group of WS individuals and a mental age- and gender-matched group of typically developing children were subjected to three task-switching experiments requiring visuospatial or verbal material to be processed. Results showed that WS individuals exhibited clear BI deficits during the visuospatial task-switching paradigms and a normal BI effect during the verbal task-switching paradigm. Overall, the present results suggest that the involvement of BI in updating environment representations during navigation may influence WS navigational abilities. PMID:25852605

  20. The role of metrics and measurements in a software intensive total quality management environment

    NASA Technical Reports Server (NTRS)

    Daniels, Charles B.

    1992-01-01

    Paramax Space Systems began its mission as a member of the Rockwell Space Operations Company (RSOC) team which was the successful bidder on a massive operations consolidation contract for the Mission Operations Directorate (MOD) at JSC. The contract awarded to the team was the Space Transportation System Operations Contract (STSOC). Our initial challenge was to accept responsibility for a very large, highly complex and fragmented collection of software from eleven different contractors and transform it into a coherent, operational baseline. Concurrently, we had to integrate a diverse group of people from eleven different companies into a single, cohesive team. Paramax executives recognized the absolute necessity to develop a business culture based on the concept of employee involvement to execute and improve the complex process of our new environment. Our executives clearly understood that management needed to set the example and lead the way to quality improvement. The total quality management policy and the metrics used in this endeavor are presented.

  1. Technical integration of hippocampus, Basal Ganglia and physical models for spatial navigation.

    PubMed

    Fox, Charles; Humphries, Mark; Mitchinson, Ben; Kiss, Tamas; Somogyvari, Zoltan; Prescott, Tony

    2009-01-01

    Computational neuroscience is increasingly moving beyond modeling individual neurons or neural systems to consider the integration of multiple models, often constructed by different research groups. We report on our preliminary technical integration of recent hippocampal formation, basal ganglia and physical environment models, together with visualisation tools, as a case study in the use of Python across the modelling tool-chain. We do not present new modeling results here. The architecture incorporates leaky-integrator and rate-coded neurons, a 3D environment with collision detection and tactile sensors, 3D graphics and 2D plots. We found Python to be a flexible platform, offering a significant reduction in development time, without a corresponding significant increase in execution time. We illustrate this by implementing a part of the model in various alternative languages and coding styles, and comparing their execution times. For very large-scale system integration, communication with other languages and parallel execution may be required, which we demonstrate using the BRAHMS framework's Python bindings.
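
    As a flavor of the kind of component such an integration composes, the short Python/NumPy sketch below simulates a single leaky-integrator rate neuron; the parameter values are illustrative and are not taken from the published models.

    ```python
    # Minimal leaky-integrator rate neuron in plain Python/NumPy, of the kind
    # the integrated model composes at much larger scale; parameter values
    # are illustrative, not those of the published models.
    import numpy as np

    def simulate_leaky_integrator(inputs, dt=0.001, tau=0.02, gain=1.0):
        """Integrate dy/dt = (-y + gain*u)/tau and return the rate trace."""
        y = 0.0
        trace = np.empty(len(inputs))
        for t, u in enumerate(inputs):
            y += dt * (-y + gain * u) / tau
            trace[t] = max(y, 0.0)          # rate-coded output is non-negative
        return trace

    drive = np.concatenate([np.zeros(100), np.ones(300), np.zeros(100)])
    rates = simulate_leaky_integrator(drive)
    print(rates.max())                       # approaches gain * input amplitude
    ```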

  2. Executive control systems in the engineering design environment. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hurst, P. W.

    1985-01-01

    An executive control system (ECS) is a software structure for unifying various application codes into a comprehensive system. It provides a library of applications, a uniform access method through a central user interface, and a data management facility. A survey of twenty-four executive control systems designed to unify various CAD/CAE applications for use in diverse engineering design environments within government and industry was conducted. The goals of this research were to establish system requirements, to survey state-of-the-art architectural design approaches, and to provide an overview of the historical evolution of these systems. Foundations for design are presented and include environmental settings, system requirements, major architectural components, and a system classification scheme based on knowledge of the supported engineering domain(s). An overview of the design approaches used in developing the major architectural components of an ECS is presented with examples taken from the surveyed systems. Attention is drawn to four major areas of ECS development: interdisciplinary usage; standardization; knowledge utilization; and computer science technology transfer.

  3. A DICOM-based 2nd generation Molecular Imaging Data Grid implementing the IHE XDS-i integration profile.

    PubMed

    Lee, Jasper; Zhang, Jianguo; Park, Ryan; Dagliyan, Grant; Liu, Brent; Huang, H K

    2012-07-01

    A Molecular Imaging Data Grid (MIDG) was developed to address current informatics challenges in archival, sharing, search, and distribution of preclinical imaging studies between animal imaging facilities and investigator sites. This manuscript presents a 2nd generation MIDG replacing the Globus Toolkit with a new system architecture that implements the IHE XDS-i integration profile. Implementation and evaluation were conducted using a 3-site interdisciplinary test-bed at the University of Southern California. The 2nd generation MIDG design architecture replaces the initial design's Globus Toolkit with dedicated web services and XML-based messaging for dedicated management and delivery of multi-modality DICOM imaging datasets. The Cross-enterprise Document Sharing for Imaging (XDS-i) integration profile from the field of enterprise radiology informatics was adopted into the MIDG design because streamlined image registration, management, and distribution dataflow are likewise needed in preclinical imaging informatics systems as in enterprise PACS application. Implementation of the MIDG is demonstrated at the University of Southern California Molecular Imaging Center (MIC) and two other sites with specified hardware, software, and network bandwidth. Evaluation of the MIDG involves data upload, download, and fault-tolerance testing scenarios using multi-modality animal imaging datasets collected at the USC Molecular Imaging Center. The upload, download, and fault-tolerance tests of the MIDG were performed multiple times using 12 collected animal study datasets. Upload and download times demonstrated reproducibility and improved real-world performance. Fault-tolerance tests showed that automated failover between Grid Node Servers has minimal impact on normal download times. Building upon the 1st generation concepts and experiences, the 2nd generation MIDG system improves accessibility of disparate animal-model molecular imaging datasets to users outside a molecular imaging facility's LAN using a new architecture, dataflow, and dedicated DICOM-based management web services. Productivity and efficiency of preclinical research for translational sciences investigators has been further streamlined for multi-center study data registration, management, and distribution.

  4. Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK

    PubMed Central

    2014-01-01

    Background: Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system’s set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This “code-based” approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution. Results: As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations. Conclusions: The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. Construction of the block-diagram does not require high-level programming skills, and the graphical interface lends itself to easy modification and use by non-experts. PMID:24725437

  5. Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK.

    PubMed

    Wang, Kaier; Steyn-Ross, Moira L; Steyn-Ross, D Alistair; Wilson, Marcus T; Sleigh, Jamie W; Shiraishi, Yoichi

    2014-04-11

    Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system's set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This "code-based" approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution. As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations. The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. Construction of the block-diagram does not require high-level programming skills, and the graphical interface lends itself to easy modification and use by non-experts.
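
    For readers who want a runnable reference point, the sketch below solves the van der Pol equations in the code-based style described above, using Python and SciPy rather than Matlab; the mu value and time span are illustrative choices.

    ```python
    # Code-based solution of the van der Pol oscillator, analogous to the
    # Matlab approach described in the abstract, sketched with SciPy's
    # solve_ivp; mu and the time span are illustrative choices.
    import numpy as np
    from scipy.integrate import solve_ivp

    def van_der_pol(t, y, mu=1.0):
        x, v = y
        return [v, mu * (1.0 - x**2) * v - x]

    sol = solve_ivp(van_der_pol, (0.0, 20.0), [2.0, 0.0],
                    t_eval=np.linspace(0.0, 20.0, 2000))
    print(sol.y[0, -1])    # position at the end of the run (limit-cycle regime)
    ```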

  6. CEOs, Information, and Decision Making: Scanning the Environment for Strategic Advantage.

    ERIC Educational Resources Information Center

    Auster, Ethel; Choo, Chun Wei

    1994-01-01

    Describes a study that investigated how CEOs (Chief Executive Officers) in the Canadian publishing and telecommunications industries acquire and use information about the business environment. Topics discussed include environmental scanning; perceived environmental uncertainty; information sources; information use in decision making; and a…

  7. A Digital Repository and Execution Platform for Interactive Scholarly Publications in Neuroscience.

    PubMed

    Hodge, Victoria; Jessop, Mark; Fletcher, Martyn; Weeks, Michael; Turner, Aaron; Jackson, Tom; Ingram, Colin; Smith, Leslie; Austin, Jim

    2016-01-01

    The CARMEN Virtual Laboratory (VL) is a cloud-based platform which allows neuroscientists to store, share, develop, execute, reproduce and publicise their work. This paper describes new functionality in the CARMEN VL: an interactive publications repository. This new facility allows users to link data and software to publications. This enables other users to examine data and software associated with the publication and execute the associated software within the VL using the same data as the authors used in the publication. The cloud-based architecture and SaaS (Software as a Service) framework allows vast data sets to be uploaded and analysed using software services. Thus, this new interactive publications facility allows others to build on research results through reuse. This aligns with recent developments by funding agencies, institutions, and publishers with a move to open access research. Open access provides reproducibility and verification of research resources and results. Publications and their associated data and software will be assured of long-term preservation and curation in the repository. Further, analysing research data and the evaluations described in publications frequently requires a number of execution stages many of which are iterative. The VL provides a scientific workflow environment to combine software services into a processing tree. These workflows can also be associated with publications and executed by users. The VL also provides a secure environment where users can decide the access rights for each resource to ensure copyright and privacy restrictions are met.

  8. Classification of Movement and Inhibition Using a Hybrid BCI.

    PubMed

    Chmura, Jennifer; Rosing, Joshua; Collazos, Steven; Goodwin, Shikha J

    2017-01-01

    Brain-computer interfaces (BCIs) are an emerging technology that are capable of turning brain electrical activity into commands for an external device. Motor imagery (MI)-when a person imagines a motion without executing it-is widely employed in BCI devices for motor control because of the endogenous origin of its neural control mechanisms, and the similarity in brain activation to actual movements. Challenges with translating a MI-BCI into a practical device used outside laboratories include the extensive training required, often due to poor user engagement and visual feedback response delays; poor user flexibility/freedom to time the execution/inhibition of their movements, and to control the movement type (right arm vs. left leg) and characteristics (reaching vs. grabbing); and high false positive rates of motion control. Solutions to improve sensorimotor activation and user performance of MI-BCIs have been explored. Virtual reality (VR) motor-execution tasks have replaced simpler visual feedback (smiling faces, arrows) and have solved this problem to an extent. Hybrid BCIs (hBCIs) implementing an additional control signal to MI have improved user control capabilities to a limited extent. These hBCIs either fail to allow the patients to gain asynchronous control of their movements, or have a high false positive rate. We propose an immersive VR environment which provides visual feedback that is both engaging and immediate, but also uniquely engages a different cognitive process in the patient that generates event-related potentials (ERPs). These ERPs provide a key executive function for the users to execute/inhibit movements. Additionally, we propose signal processing strategies and machine learning algorithms to move BCIs toward developing long-term signal stability in patients with distinctive brain signals and capabilities to control motor signals. The hBCI itself and the VR environment we propose would help to move BCI technology outside laboratory environments for motor rehabilitation in hospitals, and potentially for controlling a prosthetic.

  9. Classification of Movement and Inhibition Using a Hybrid BCI

    PubMed Central

    Chmura, Jennifer; Rosing, Joshua; Collazos, Steven; Goodwin, Shikha J.

    2017-01-01

    Brain-computer interfaces (BCIs) are an emerging technology that are capable of turning brain electrical activity into commands for an external device. Motor imagery (MI)—when a person imagines a motion without executing it—is widely employed in BCI devices for motor control because of the endogenous origin of its neural control mechanisms, and the similarity in brain activation to actual movements. Challenges with translating a MI-BCI into a practical device used outside laboratories include the extensive training required, often due to poor user engagement and visual feedback response delays; poor user flexibility/freedom to time the execution/inhibition of their movements, and to control the movement type (right arm vs. left leg) and characteristics (reaching vs. grabbing); and high false positive rates of motion control. Solutions to improve sensorimotor activation and user performance of MI-BCIs have been explored. Virtual reality (VR) motor-execution tasks have replaced simpler visual feedback (smiling faces, arrows) and have solved this problem to an extent. Hybrid BCIs (hBCIs) implementing an additional control signal to MI have improved user control capabilities to a limited extent. These hBCIs either fail to allow the patients to gain asynchronous control of their movements, or have a high false positive rate. We propose an immersive VR environment which provides visual feedback that is both engaging and immediate, but also uniquely engages a different cognitive process in the patient that generates event-related potentials (ERPs). These ERPs provide a key executive function for the users to execute/inhibit movements. Additionally, we propose signal processing strategies and machine learning algorithms to move BCIs toward developing long-term signal stability in patients with distinctive brain signals and capabilities to control motor signals. The hBCI itself and the VR environment we propose would help to move BCI technology outside laboratory environments for motor rehabilitation in hospitals, and potentially for controlling a prosthetic. PMID:28860986

  10. The effect of healthy dietary consumption on executive cognitive functioning in children and adolescents: a systematic review.

    PubMed

    Cohen, J F W; Gorski, M T; Gruber, S A; Kurdziel, L B F; Rimm, E B

    2016-09-01

    A systematic review was conducted to evaluate whether healthier dietary consumption among children and adolescents impacts executive functioning. PubMed, Education Resources Information Center, PsycINFO and Thomson Reuters' Web of Science databases were searched, and studies of executive functioning among children or adolescents aged 6-18 years, which examined food quality, macronutrients and/or foods, were included. Study quality was also assessed. In all, twenty-one studies met inclusion criteria. Among the twelve studies examining food quality (n = 9) or macronutrient intakes (n = 4), studies examining longer-term diet (n = 6) showed positive associations between healthier overall diet quality and executive functioning, whereas the studies examining the acute impact of diet (n = 6) were inconsistent but suggestive of improvements in executive functioning with better food quality. Among the ten studies examining foods, overall, there was a positive association between healthier foods (e.g. whole grains, fish, fruits and/or vegetables) and executive function, whereas less-healthy snack foods, sugar-sweetened beverages and red/processed meats were inversely associated with executive functioning. Taken together, evidence suggests a positive association between healthy dietary consumption and executive functioning. Additional studies examining the effects of healthier food consumption, as well as macronutrients, on executive functioning are warranted. These studies should ideally be conducted in controlled environments and use validated cognitive tests.

  11. Haagen-Smit Prize 2014

    NASA Astrophysics Data System (ADS)

    2015-02-01

    The Executive Editors and the Publisher of Atmospheric Environment take great pleasure in announcing the 2014 "Haagen-Smit Prize", designed to recognize outstanding papers published in Atmospheric Environment. The Prize is named in honor of Prof. Arie Jan Haagen-Smit, a pioneer in the field of air pollution and one of the first editors of the International Journal of Air Pollution, a predecessor to Atmospheric Environment.

  12. Haagen-Smit Prize 2015

    NASA Astrophysics Data System (ADS)

    2016-01-01

    The Executive Editors and the Publisher of Atmospheric Environment take great pleasure in announcing the 2015 "Haagen-Smit Prize", designed to recognize outstanding papers published in Atmospheric Environment. The Prize is named in honor of Prof. Arie Jan Haagen-Smit, a pioneer in the field of air pollution and one of the first editors of the International Journal of Air Pollution, a predecessor to Atmospheric Environment.

  13. Haagen-Smit Prize 2016

    NASA Astrophysics Data System (ADS)

    Singh, Hanwant

    2017-03-01

    The Executive Editors and the Publisher of Atmospheric Environment take great pleasure in announcing the 2016 "Haagen-Smit Prize", designed to recognize outstanding papers published in Atmospheric Environment. The Prize is named in honor of Prof. Arie Jan Haagen-Smit, a pioneer in the field of air pollution and one of the first editors of the International Journal of Air Pollution, a predecessor to Atmospheric Environment.

  14. Do Hours Spent Viewing Television at Ages 3 and 4 Predict Vocabulary and Executive Functioning at Age 5?

    ERIC Educational Resources Information Center

    Blankson, A. Nayena; O'Brien, Marion; Leerkes, Esther M.; Calkins, Susan D.; Marcovitch, Stuart D.

    2015-01-01

    We examined the impact of television viewing at ages 3 and 4 on vocabulary and at age 5 on executive functioning in the context of home learning environment and parental scaffolding. Children (N = 263) were seen in the lab when they were 3 years old and then again at ages 4 and 5. Parents completed measures assessing child television viewing and…

  15. Intelligent Tutoring Methods for Optimizing Learning Outcomes with Embedded Training

    DTIC Science & Technology

    2009-10-01

    after action review. Particularly with free-play virtual environments, it is important to constrain the development task for constructing an...evaluation approach. Attempts to model all possible variations of correct performance can be prohibitive in free-play scenarios, and so for such conditions...member R for proper execution during free-play execution. In the first tier, the evaluation must know when it applies, or more specifically, when

  16. Suppression of cognitive function in hyperthermia; From the viewpoint of executive and inhibitive cognitive processing

    NASA Astrophysics Data System (ADS)

    Shibasaki, Manabu; Namba, Mari; Oshiro, Misaki; Kakigi, Ryusuke; Nakata, Hiroki

    2017-03-01

    Climate change has had a widespread impact on humans and natural systems. Heat stroke is a life-threatening condition in severe environments. The execution or inhibition of decision making is critical for survival in a hot environment. We hypothesized that, even with mild heat stress, not only executive processing, but also inhibitory processing may be impaired, and investigated the effectiveness of body cooling approaches on these processes using the Go/No-go task with electroencephalographic event-related potentials. Passive heat stress increased esophageal temperature (Tes) by 1.30 ± 0.24 °C and decreased cerebral perfusion and thermal comfort. Mild heat stress reduced the amplitudes of the Go-P300 component (i.e. execution) and No-go-P300 component (i.e. inhibition). Cerebral perfusion and thermal comfort recovered following face/head cooling, however, the amplitudes of the Go-P300 and No-go-P300 components remained reduced. During whole-body cooling, the amplitude of the Go-P300 component returned to the pre-heat baseline, whereas that of the No-go-P300 component remained reduced. These results suggest that local cooling of the face and head does not restore impaired cognitive processing during mild heat stress, and response inhibition remains impaired despite the return to normothermia.

  17. 48 CFR 970.0470-1 - General.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... establish the environment, safety, and health portion of the list identified in paragraph (b) of this section. (d) Environmental, safety, and health (ES&H) requirements appropriate for work conducted under a..., Integration of Environment, Safety, and Health into Work Planning and Execution. When such a process is used...

  18. 48 CFR 970.0470-1 - General.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... establish the environment, safety, and health portion of the list identified in paragraph (b) of this section. (d) Environmental, safety, and health (ES&H) requirements appropriate for work conducted under a..., Integration of Environment, Safety, and Health into Work Planning and Execution. When such a process is used...

  19. 48 CFR 970.0470-1 - General.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... establish the environment, safety, and health portion of the list identified in paragraph (b) of this section. (d) Environmental, safety, and health (ES&H) requirements appropriate for work conducted under a..., Integration of Environment, Safety, and Health into Work Planning and Execution. When such a process is used...

  20. 48 CFR 970.0470-1 - General.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... establish the environment, safety, and health portion of the list identified in paragraph (b) of this section. (d) Environmental, safety, and health (ES&H) requirements appropriate for work conducted under a..., Integration of Environment, Safety, and Health into Work Planning and Execution. When such a process is used...

  1. 40 CFR 13.26 - Payment of compromised claims.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Payment of compromised claims. 13.26 Section 13.26 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL CLAIMS COLLECTION... will be required to execute a confess-judgment agreement which accelerates payment of the balance due...

  2. 7 CFR 1940.301 - Purpose.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... requirements and policies: (1) The National Environmental Policy Act, 42 U.S.C. 4321; (2) Safe Drinking Water...) Executive Order 11593, Protection and Enhancement of the Cultural Environment (See subpart F of part 1901 of... Cultural Environment (See subpart F of part 1901 of this chapter for more specific implementation...

  3. 78 FR 30733 - Modernizing Federal Infrastructure Review and Permitting Regulations, Policies, and Procedures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-22

    ... Heads of Executive Departments and Agencies Reliable, safe, and resilient infrastructure is the backbone... and agencies (agencies) have achieved better outcomes for communities and the environment and realized... major infrastructure projects by half, while also improving outcomes for communities and the environment...

  4. [Neuropsychological evaluation of the executive functions by means of virtual reality].

    PubMed

    Climent-Martínez, Gema; Luna-Lario, Pilar; Bombín-González, Igor; Cifuentes-Rodríguez, Alicia; Tirapu-Ustárroz, Javier; Díaz-Orueta, Unai

    2014-05-16

    Executive functions include a wide range of self regulatory functions that allow control, organization and coordination of other cognitive functions, emotional responses and behaviours. The traditional approach to evaluate these functions, by means of paper and pencil neuropsychological tests, shows a greater than expected performance within the normal range for patients whose daily life difficulties would predict an inferior performance. These discrepancies suggest that classical neuropsychological tests may not adequately reproduce the complexity and dynamic nature of real life situations. Latest developments in the field of virtual reality offer interesting options for the neuropsychological assessment of many cognitive processes. Virtual reality reproduces three-dimensional environments with which the patient interacts in a dynamic way, with a sense of immersion in the environment similar to the presence and exposure to a real environment. Furthermore, the presentation of these stimuli, as well as distractors and other variables, may be controlled in a systematic way. Moreover, more consistent and precise answers may be obtained, and an in-depth analysis of them is possible. The present review shows current problems in neuropsychological evaluation of executive functions and latest advances in the consecution of higher preciseness and validity of the evaluation by means of new technologies and virtual reality, with special mention to some developments performed in Spain.

  5. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1987-01-01

    The results of ongoing research directed at developing a graph-theoretical model for describing the data and control flow associated with the execution of large-grained algorithms in a spatially distributed computer environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM-based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
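
    The data-driven firing rule that such a model formalizes can be illustrated compactly: a node may execute only when every one of its input edges holds a token, and its result is forwarded along its output edges. The Python toy below uses an invented three-node graph for illustration and is not the ATAMM specification itself.

    ```python
    # Toy data-driven firing rule: a node may execute only when a token is
    # available on every one of its input edges. The graph shape and token
    # values are invented for illustration.
    from collections import defaultdict

    edges = {("A", "C"), ("B", "C"), ("C", "D")}          # directed data edges
    tokens = defaultdict(list)                            # edge -> queued tokens

    def inputs_of(node):
        return [e for e in edges if e[1] == node]

    def fire_if_ready(node, compute):
        """Fire `node` when every input edge holds a token; emit on output edges."""
        ins = inputs_of(node)
        if ins and all(tokens[e] for e in ins):
            args = [tokens[e].pop(0) for e in ins]
            result = compute(*args)
            for e in (e for e in edges if e[0] == node):
                tokens[e].append(result)
            return result

    tokens[("A", "C")].append(2)
    tokens[("B", "C")].append(3)
    print(fire_if_ready("C", lambda a, b: a + b))   # 5: both inputs present
    print(fire_if_ready("D", lambda x: x))          # 5: fires on C's output token
    ```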

  6. Creating Supportive Environments and Thriving in a Volatile, Uncertain, Complex, and Ambiguous World.

    PubMed

    Pabico, Christine

    2015-10-01

    Nurse executives (NEs) are operating in a volatile, uncertain, complex, and ambiguous world. NEs must create supportive environments that promote staff empowerment, resilience, and alignment, to ensure organizational success. In addition, NEs need to be transparent and create a culture of partnership with their staff. The ability of NEs to create and sustain this environment is vital in supporting teams to successfully navigate in today's healthcare environment.

  7. Grid heterogeneity in in-silico experiments: an exploration of drug screening using DOCK on cloud environments.

    PubMed

    Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason

    2010-01-01

    Large-scale in-silico screening is a necessary part of drug discovery and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environments characteristic of a Grid. In our study, we have found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test with in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for establishing a more complete solution to provide the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables, across a Grid environment composed of different clusters, with and without virtualization. The uniform computer environment provided by virtual machines eliminated inconsistent DOCK VS results caused by heterogeneous clusters, however, the execution time for the DOCK VS increased. In our particular experiments, overhead costs were found to be an average of 41% and 2% in execution time for two different clusters, while the actual magnitudes of the execution time costs were minimal. Despite the increase in overhead, virtual clusters are an ideal solution for Grid heterogeneity. With greater development of virtual cluster technology in Grid environments, the problem of platform heterogeneity may be eliminated through virtualization, allowing greater usage of VS, and will benefit all Grid applications in general.

  8. Administrative Effectiveness in a Political Environment.

    ERIC Educational Resources Information Center

    Isherwood, G. B.; And Others

    Of 35 prominent Chief Executive Officers (CEO's) from 10 Canadian provinces participating in this study, 31 were interviewed by telephone and 4 in writing. The vast majority of CEO's (82 percent) agreed that they work in an increasingly political environment. Many CEO's perform a "screening function" between community groups and the…

  9. The Contribution of Visualization to Learning Computer Architecture

    ERIC Educational Resources Information Center

    Yehezkel, Cecile; Ben-Ari, Mordechai; Dreyfus, Tommy

    2007-01-01

    This paper describes a visualization environment and associated learning activities designed to improve learning of computer architecture. The environment, EasyCPU, displays a model of the components of a computer and the dynamic processes involved in program execution. We present the results of a research program that analysed the contribution of…

  10. Strategic Opportunities for Cooperative Extension. Executive Summary

    ERIC Educational Resources Information Center

    National Association of State Universities and Land-Grant Colleges, 2007

    2007-01-01

    In this new century, opportunities exist to help advance America's greatness in the midst of many challenges. Energy, water, food, environment, health, economic productivity, global competitiveness, and the quality of the living environments are all paramount to the future. Extension is, as a part of higher education, prepared to create new…

  11. 40 CFR 11.6 - Access by historical researchers and former Government officials.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Access by historical researchers and former Government officials. 11.6 Section 11.6 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL SECURITY CLASSIFICATION REGULATIONS PURSUANT TO EXECUTIVE ORDER 11652 § 11.6 Access by historical...

  12. Metacomponential Development in a Logo Programming Environment.

    ERIC Educational Resources Information Center

    Clements, Douglas H.

    1990-01-01

    Effects of a theoretically based LOGO programing environment on executive metacognitive abilities were studied for 48 third graders who took pretests and posttests after LOGO training or no training. The LOGO group scored higher than comparisons on two metacomponential measures: correctness of response and use of an individual metacomponent. (SLD)

  13. Technology assessment in the Executive Office of the President

    NASA Technical Reports Server (NTRS)

    Kidd, C. V.

    1972-01-01

    The involvement of the President with technology, directly and indirectly, and the best way in which his responsibilities can be discharged are discussed. Technology assessment is considered essential at all levels of the Executive agencies, but the capacity of the agencies for assessment is limited and needs to be supplemented within the Executive Branch. Complete centralization of technological assessment is felt to be ineffective. The role of the Executive Office in initiating proposals for Presidential action and sustaining links with Congress are outlined, and the apparatus for technology assessment is described, emphasizing the Office of Science and Technology. A significant area of technology assessment for the Executive Office is the field of environmental quality, and the duties of the Environmental Quality Council are summarized. It is suggested that it may be more effective to set up a separate organization for the restoration and protection of the environment and to define the task in terms of what is to be protected rather than in terms of technology.

  14. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.

    PubMed

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-08-30

    Distributed Computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collecting and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data of each task have drawn interest, and a detailed analysis report has been made. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
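
    The abstract above does not spell out the two-phase regression (TPR) formulation, so the following is only a minimal sketch of the general idea under stated assumptions: fit a linear progress-versus-time model over the samples belonging to the task's current phase and extrapolate to full progress. The phase split point and the function name estimate_finish_time are illustrative, not taken from the paper.

```python
# Minimal sketch of a two-phase regression (TPR) style estimator for the
# finishing time of a running task. The abstract does not give the exact
# formulation, so the phase split point, the linear model, and all names
# here (estimate_finish_time, PHASE_SPLIT) are illustrative assumptions.
import numpy as np

PHASE_SPLIT = 0.5  # hypothetical progress fraction separating the two phases

def estimate_finish_time(timestamps, progress):
    """Extrapolate the time at which progress reaches 1.0.

    timestamps -- elapsed seconds at each sample
    progress   -- task progress in [0, 1] at each sample
    """
    t = np.asarray(timestamps, dtype=float)
    p = np.asarray(progress, dtype=float)
    # Choose the samples belonging to the current phase: early samples while
    # progress < PHASE_SPLIT, later samples once the task is past the split.
    mask = p >= PHASE_SPLIT if p[-1] >= PHASE_SPLIT else p < PHASE_SPLIT
    # Fit progress as a linear function of time within the active phase.
    slope, intercept = np.polyfit(t[mask], p[mask], 1)
    if slope <= 0:
        return float("inf")  # no measurable progress; cannot extrapolate
    return (1.0 - intercept) / slope

if __name__ == "__main__":
    # A map task that speeds up in its second phase.
    times = [10, 20, 30, 40, 50, 60]
    prog = [0.05, 0.15, 0.30, 0.50, 0.68, 0.86]
    print("estimated finish at %.1f s" % estimate_finish_time(times, prog))
```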

  15. A Scalable and Dynamic Testbed for Conducting Penetration-Test Training in a Laboratory Environment

    DTIC Science & Technology

    2015-03-01

    entry point through which to execute a payload to accomplish a higher-level goal: executing arbitrary code, escalating privileges, pivoting...Mobile Ad Hoc Network Emulator (EMANE) can emulate the entire network stack (physical to application-layer protocols). 2. Methodology: To build a...to host Windows, Linux, MacOS, Android, and other operating systems without much effort. E. A simple and automatic "restore" function: Many

  16. Report on Activities and Programs for Countering Proliferation and NBC Terrorism. Volume 1, Executive Summary, Addendum to 2011 Report

    DTIC Science & Technology

    2013-06-01

    Executive Branch report on research, development, and acquisition (RDA) programs to Combat Weapons of Mass Destruction (WMD). Other interagency committees...characterize, secure, disable, and/or destroy a state or non-state actor's WMD programs and related capabilities in hostile or uncertain environments. Threat...special operations, and security operations to defend against conventionally and unconventionally delivered WMD. WMD Consequence Management: Actions

  17. PUP: An Architecture to Exploit Parallel Unification in Prolog

    DTIC Science & Technology

    1988-03-01

    environment stacking model similar to the Warren Abstract Machine [23] since it has been shown to be superior to other known models (see [21]). The storage...execute in groups of independent operations. Unifications belonging to different groups may not overlap. Also unification operations belonging to the...since all parallel operations on the unification units must complete before any of the units can start executing the next group of parallel

  18. Productive work groups in complex hospital units. Proposed contributions of the nurse executive.

    PubMed

    Sheafor, M

    1991-05-01

    The Fiedler and Garcia cognitive resources contingency model of leadership offers a new approach for nurse executives to influence the productivity of work groups led by nurse managers. The author offers recommendations toward achieving the relatively stress-free environment for nurse managers specified by the model using Schmeiding's application of Orlando's communication theory to nursing administration. Suggestions for incorporating these insights into graduate education for nursing administration follow.

  19. Developmental Effects of Family Environment on Outcomes in Pediatric Cochlear Implant Recipients

    PubMed Central

    Holt, Rachael Frush; Beer, Jessica; Kronenberger, William G.; Pisoni, David B.

    2012-01-01

    Objective To examine and compare the family environment of preschool- and school-age children with cochlear implants and assess its influence on children’s executive function and spoken language skills. Study Design Retrospective between-subjects design. Setting Outpatient research laboratory. Patients Prelingually deaf children with cochlear implants and no additional disabilities, and their families. Intervention(s) Cochlear implantation and speech-language therapy. Main Outcome Measures Parents completed the Family Environment Scale and the Behavior Rating Inventory of Executive Function (or the preschool version). Children were tested using the Peabody Picture Vocabulary Test-4 and either the Preschool Language Scales-4 or the Clinical Evaluation of Language Fundamentals–4. Results The family environments of children with cochlear implants differed from normative data obtained from hearing children, but average scores were within one standard deviation of norms on all subscales. Families of school-age children reported higher levels of control than those of preschool-age children. Preschool-age children had fewer problems with emotional control when families reported higher levels of support and lower levels of conflict. School-age children had fewer problems with inhibition but more problems with shifting of attention when families reported lower levels of conflict. School-age children’s receptive vocabularies were enhanced by families with lower levels of control and higher levels of organization. Conclusions Family environment and its relation to language skills and executive function development differed across the age groups in this sample of children with cochlear implants. Because family dynamics is one developmental/environmental factor that can be altered with therapy and education, the present results have important clinical implications for family-based interventions for deaf children with cochlear implants. PMID:23151776

  20. Adaptable state based control system

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Dvorak, Daniel L. (Inventor); Gostelow, Kim P. (Inventor); Starbird, Thomas W. (Inventor); Gat, Erann (Inventor); Chien, Steve Ankuo (Inventor); Keller, Robert M. (Inventor)

    2004-01-01

    An autonomous controller, composed of a state knowledge manager, a control executor, hardware proxies and a statistical estimator, collaborates with a goal elaborator, with which it shares common models of the behavior of the system and the controller. The elaborator uses the common models to generate, from temporally indeterminate sets of goals, executable goals to be executed by the controller. The controller may be updated to operate in a different system or environment than that for which it was originally designed by the replacement of shared statistical models and by the instantiation of a new set of state variable objects derived from a state variable class. The adaptation of the controller does not require substantial modification of the goal elaborator for its application to the new system or environment.
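
    As a rough illustration of the state-variable idea described in this record (and not the patented design itself), the sketch below pairs a state-variable object with a control executor that works toward an executable goal expressed as a target value and tolerance. All class names and the hardware-proxy stand-in are hypothetical.

```python
# Illustrative sketch (not the patented design) of the state-variable idea:
# a controller holds state-variable objects and a control executor that
# issues commands until each executable goal's constraint on a state
# variable is satisfied. All class and method names are hypothetical.
class StateVariable:
    def __init__(self, name, value):
        self.name = name
        self.value = value

class Goal:
    """An executable goal: drive one state variable to a target value."""
    def __init__(self, variable_name, target, tolerance=0.5):
        self.variable_name = variable_name
        self.target = target
        self.tolerance = tolerance

    def achieved(self, variable):
        return abs(variable.value - self.target) <= self.tolerance

class Controller:
    def __init__(self, variables):
        self.variables = {v.name: v for v in variables}

    def execute(self, goal, max_steps=100):
        var = self.variables[goal.variable_name]
        for _ in range(max_steps):
            if goal.achieved(var):
                return True
            # Hardware-proxy stand-in: nudge the state toward the target.
            var.value += 0.2 * (goal.target - var.value)
        return False

if __name__ == "__main__":
    controller = Controller([StateVariable("wheel_speed", 0.0)])
    goal = Goal("wheel_speed", target=10.0)
    print("goal achieved:", controller.execute(goal))
```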

  1. The Rapid Integration and Test Environment: A Process for Achieving Software Test Acceptance

    DTIC Science & Technology

    2010-05-01

    The Rapid Integration and Test Environment: A Process for Achieving Software Test Acceptance. Patrick V...was awarded the Bronze Star. Introduction: The Rapid Integration and Test Environment (RITE) initiative, implemented by the Program Executive Office

  2. Principles of Faithful Execution in the implementation of trusted objects.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarman, Thomas David; Campbell, Philip LaRoche; Pierson, Lyndon George

    2003-09-01

    We begin with the following definitions: Definition: A trusted volume is the computing machinery (including communication lines) within which data is assumed to be physically protected from an adversary. A trusted volume provides both integrity and privacy. Definition: Program integrity consists of the protection necessary to enable the detection of changes in the bits comprising a program as specified by the developer, for the entire time that the program is outside a trusted volume. For ease of discussion we consider program integrity to be the aggregation of two elements: instruction integrity (detection of changes in the bits within an instruction or block of instructions), and sequence integrity (detection of changes in the locations of instructions within a program). Definition: Faithful Execution (FE) is a type of software protection that begins when the software leaves the control of the developer and ends within the trusted volume of a target processor. That is, FE provides program integrity, even while the program is in execution. (As we will show below, FE schemes are a function of trusted volume size.) FE is a necessary quality for computing. Without it we cannot trust computations. In the early days of computing FE came for free since the software never left a trusted volume. At that time the execution environment was the same as the development environment. In some circles that environment was referred to as a ''closed shop'': all of the software that was used there was developed there. When an organization bought a large computer from a vendor the organization would run its own operating system on that computer, use only its own editors, only its own compilers, only its own debuggers, and so on. However, with the continuing maturity of computing technology, FE becomes increasingly difficult to achieve.

  3. Quality circles: the nurse executive as mentor.

    PubMed

    Flarey, D L

    1991-12-01

    Changes within and around the health care environment are forcing health care executives to reexamine their managerial and leadership styles to confront the resulting turbulence. The nurse executive is charged with the profound responsibility of directing the delivery of nursing care throughout the organization. Care delivered today must be of high quality. Declining financial resources as well as personnel shortages require the executive to be an effective innovator in meeting the increasing demands. Quality circles offer the nurse executive an avenue of recourse. Circles have been effectively implemented in the health care setting, as has been consistently documented over time. By way of a participative management approach, quality circles may lead to increased employee morale and productivity, cost savings, and decreased employee turnover rates, as well as realization of socialization and self-actualization needs. A most effective approach to their introduction would be implementation at the first-line manager level. This promotes an acceptance of the concept at the management level as well as a training course for managers to implement the process at the unit level. The nurse executive facilitates the process at the first-line manager level. This facilitation will cause a positive outcome to diffuse throughout the entire organization. Quality circles offer the nurse executive the opportunity to challenge the existing environmental turmoil and effect a positive and lasting change.

  4. The role of executive functioning in quality of life in pediatric intractable epilepsy.

    PubMed

    Love, Christina Eguizabal; Webbe, Frank; Kim, Gunha; Lee, Ki Hyeong; Westerveld, Michael; Salinas, Christine M

    2016-11-01

    Children with epilepsy are vulnerable to executive dysfunction, but the relationship between executive functioning (EF) and quality of life (QOL) in children with epilepsy is not fully delineated. This exploratory study elucidated the relationship between ecological EF and QOL in pediatric intractable epilepsy. Fifty-four consecutively referred pediatric epilepsy surgery candidates and their parents were administered IQ measures, the Behavior Rating Inventory of Executive Function (BRIEF), and the Quality of Life in Childhood Epilepsy (QOLCE) as part of a comprehensive neuropsychological evaluation. A significant difference was found in QOL between those with and without clinical impairments on the BRIEF [t(52) = 3.93; p < .001]. That is, children with executive dysfunction had lower overall QOL. All seizure variables and BRIEF scales were associated with overall QOL [F(12, 40) = 6.508; p = .001; R2 = .661]. Working memory from the BRIEF was the most frequently elevated scale in our sample (57%). Those with executive dysfunction had 9.7 times the risk of having poor QOL. Poor EF control according to behavior ratings is significantly related to QOL in intractable pediatric epilepsy. Identification of executive dysfunction in home environments is an essential component of presurgical evaluations and a target for intervention, which may improve QOL. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Supervising simulations with the Prodiguer Messaging Platform

    NASA Astrophysics Data System (ADS)

    Greenslade, Mark; Carenton, Nicolas; Denvil, Sebastien

    2015-04-01

    At any one moment in time, researchers affiliated with the Institut Pierre Simon Laplace (IPSL) climate modeling group are running hundreds of global climate simulations. These simulations execute upon a heterogeneous set of High Performance Computing (HPC) environments spread throughout France. The IPSL's simulation execution runtime is called libIGCM (library for IPSL Global Climate Modeling group). libIGCM has recently been enhanced so as to support real-time operational use cases. Such use cases include simulation monitoring, data publication, environment metrics collection, automated simulation control, and so on. At the core of this enhancement is the Prodiguer messaging platform. libIGCM now emits information, in the form of messages, for remote processing at IPSL servers in Paris. The remote message processing takes several forms, for example: 1. Persisting message content to database(s); 2. Notifying an operator of changes in a simulation's execution status; 3. Launching rollback jobs upon simulation failure; 4. Dynamically updating controlled vocabularies; 5. Notifying downstream applications such as the Prodiguer web portal. We will describe how the messaging platform has been implemented from a technical perspective and demonstrate the Prodiguer web portal receiving real-time notifications.
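
    The Prodiguer/libIGCM message formats are not given in the abstract, so the sketch below only illustrates the general pattern: a running simulation emits small status messages, and a remote handler persists them and triggers a rollback hook on failure. The message fields, the in-memory store, and the simulation identifier are illustrative assumptions.

```python
# Generic sketch of the message-driven supervision idea described above.
# This is not the Prodiguer or libIGCM API; message fields, the in-memory
# "store", and handler names are all illustrative assumptions.
import json
import time
import uuid

def build_message(simulation_id, event, detail=""):
    """Build a small status message a running simulation could emit."""
    return {
        "message_id": str(uuid.uuid4()),
        "simulation_id": simulation_id,
        "event": event,              # e.g. "started", "completed", "failed"
        "detail": detail,
        "timestamp": time.time(),
    }

def handle_message(message, store, on_failure):
    """Persist the message and trigger a rollback hook on failure."""
    store.append(message)                      # stand-in for a database insert
    if message["event"] == "failed":
        on_failure(message["simulation_id"])   # e.g. launch a rollback job

if __name__ == "__main__":
    store = []
    rollbacks = []
    for event in ("started", "failed"):
        msg = build_message("ipsl-run-042", event)   # hypothetical run id
        handle_message(msg, store, on_failure=rollbacks.append)
    print(json.dumps(store, indent=2))
    print("rollback requested for:", rollbacks)
```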

  6. Timeliner: Automating Procedures on the ISS

    NASA Technical Reports Server (NTRS)

    Brown, Robert; Braunstein, E.; Brunet, Rick; Grace, R.; Vu, T.; Zimpfer, Doug; Dwyer, William K.; Robinson, Emily

    2002-01-01

    Timeliner has been developed as a tool to automate procedural tasks. These tasks may be sequential tasks that would typically be performed by a human operator, or precisely ordered sequencing tasks that allow autonomous execution of a control process. The Timeliner system includes elements for compiling and executing sequences that are defined in the Timeliner language. The Timeliner language was specifically designed to allow easy definition of scripts that provide sequencing and control of complex systems. The execution environment provides real-time monitoring and control based on the commands and conditions defined in the Timeliner language. The Timeliner sequence control may be preprogrammed, compiled from Timeliner "scripts," or it may consist of real-time, interactive inputs from system operators. In general, the Timeliner system lowers the workload for mission or process control operations. In a mission environment, scripts can be used to automate spacecraft operations including autonomous or interactive vehicle control, performance of preflight and post-flight subsystem checkouts, or handling of failure detection and recovery. Timeliner may also be used for mission payload operations, such as stepping through pre-defined procedures of a scientific experiment.
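
    The Timeliner language itself is not reproduced in this record; the following Python stand-in only illustrates the pattern it describes, namely a precompiled sequence of steps, each waiting on a monitored condition before issuing a command. The step definitions and the fake telemetry update are illustrative.

```python
# Small stand-in for the general pattern described above: a compiled sequence
# of steps, each waiting on a monitored condition before commanding the
# system. This is not the Timeliner language; steps and telemetry are made up.
def run_sequence(steps, telemetry, command, max_ticks=100):
    """Execute steps in order; each step waits for its condition, then acts."""
    pending = list(steps)
    for tick in range(max_ticks):
        if not pending:
            return True
        condition, action = pending[0]
        if condition(telemetry):
            command(action)
            pending.pop(0)
        telemetry["tank_pressure"] += 1.0   # fake telemetry update per tick
    return False

if __name__ == "__main__":
    issued = []
    telemetry = {"tank_pressure": 0.0}
    steps = [
        (lambda t: t["tank_pressure"] > 5.0, "OPEN_VALVE"),
        (lambda t: t["tank_pressure"] > 12.0, "START_PUMP"),
    ]
    ok = run_sequence(steps, telemetry, command=issued.append)
    print("sequence complete:", ok, "commands issued:", issued)
```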

  7. Loyalty in managed care: a leadership system.

    PubMed

    Kerns, C D

    2000-01-01

    Healthcare executives are given a comprehensive and integrated ten-step system to lead their organization toward stabilizing a financial base, improving profitability, and differentiating themselves in the marketplace. This executive guide to implementing loyalty-based leadership can be adapted and used on an immediate basis by healthcare leaders. This article is a useful resource for healthcare executives as they move to make loyalty an organizational resource. Effectively managing the often-fragmented forces of loyalty can produce a healthier bottom line and improve the commitment among key stakeholders within a managed care environment. A brief loyalty-based leadership practices survey is included to serve as a catalyst for leaders and their teams to strategically discuss loyalty and retention in their organization.

  8. Optimal execution with price impact under Cumulative Prospect Theory

    NASA Astrophysics Data System (ADS)

    Zhao, Jingdong; Zhu, Hongliang; Li, Xindan

    2018-01-01

    Optimal execution of a stock (or portfolio) has been widely studied in academia and in practice over the past decade, and minimizing transaction costs is a critical point. However, few researchers consider the psychological factors for the traders. What are traders truly concerned with - buying low in the paper accounts or buying lower compared to others? We consider the optimal trading strategies in terms of the price impact and Cumulative Prospect Theory and identify some specific properties. Our analyses indicate that a large proportion of the execution volume is distributed at both ends of the transaction time. But the trader's optimal strategies may not be implemented at the same transaction size and speed in different market environments.

  9. cFE/CFS (Core Flight Executive/Core Flight System)

    NASA Technical Reports Server (NTRS)

    Wildermann, Charles P.

    2008-01-01

    This viewgraph presentation describes in detail the requirements and goals of the Core Flight Executive (cFE) and the Core Flight System (CFS). The Core Flight Software System is a mission-independent, platform-independent, Flight Software (FSW) environment integrating a reusable core flight executive (cFE). The CFS goals include: 1) Reduce time to deploy high quality flight software; 2) Reduce project schedule and cost uncertainty; 3) Directly facilitate formalized software reuse; 4) Enable collaboration across organizations; 5) Simplify sustaining engineering (a.k.a. FSW maintenance); 6) Scale from small instruments to System of Systems; 7) Platform for advanced concepts and prototyping; and 8) Common standards and tools across the branch and NASA-wide.

  10. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.

    PubMed

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2014-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow executions on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies.
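
    The four WFaaS heuristics are not detailed in the abstract, so the sketch below shows one plausible greedy rule of the kind such schedulers use: for each workflow request, pick the cheapest VM type whose estimated runtime still meets the request's deadline. The VM catalog, prices, and request parameters are made up for the example.

```python
# Sketch of one plausible cost-aware scheduling rule (not the paper's actual
# heuristics): choose the cheapest VM type whose estimated runtime meets the
# workflow's deadline. VM types, prices, and speeds are illustrative.
VM_TYPES = [
    # (name, relative speed, dollars per hour)
    ("small", 1.0, 0.05),
    ("medium", 2.0, 0.12),
    ("large", 4.0, 0.30),
]

def schedule(requests):
    """Map each (name, work_units, deadline_hours) request to a VM type."""
    plan = []
    for name, work, deadline in requests:
        best = None
        for vm_name, speed, price in VM_TYPES:
            runtime = work / speed
            if runtime <= deadline:
                cost = runtime * price
                if best is None or cost < best[2]:
                    best = (vm_name, runtime, cost)
        plan.append((name, best))  # best is None if no VM meets the deadline
    return plan

if __name__ == "__main__":
    requests = [("wf-a", 8.0, 10.0), ("wf-b", 8.0, 3.0), ("wf-c", 40.0, 5.0)]
    for name, decision in schedule(requests):
        print(name, "->", decision)
```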

  11. Exploring the Lived Experiences of Program Managers Regarding an Automated Logistics Environment

    ERIC Educational Resources Information Center

    Allen, Ronald Timothy

    2014-01-01

    Automated Logistics Environment (ALE) is a new term used by Navy and aerospace industry executives to describe the aggregate of logistics-related information systems that support modern aircraft weapon systems. The development of logistics information systems is not always well coordinated among programs, often resulting in solutions that cannot…

  12. Secure Cooperative Data Access in Multi-Cloud Environment

    ERIC Educational Resources Information Center

    Le, Meixing

    2013-01-01

    In this dissertation, we discuss the problem of enabling cooperative query execution in a multi-cloud environment where the data is owned and managed by multiple enterprises. Each enterprise maintains its own relational database using a private cloud. In order to implement desired business services, parties need to share selected portion of their…

  13. Barriers Experienced by Male Office Management Students in a Traditionally Nonmale Environment: A Comparative Study

    ERIC Educational Resources Information Center

    Ferreira, E.; van Antwerpen, S.

    2012-01-01

    Males are still underrepresented in the office management environment and this article pertains to the tendency to discriminate against men students studying towards administrative and office-related qualifications. The purpose of the study was to determine whether the perceptions (regarding various barriers in executing their studies) of male…

  14. Utility functions and resource management in an oversubscribed heterogeneous computing environment

    DOE PAGES

    Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; ...

    2014-09-26

    We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
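
    As a hedged illustration of the approach summarized above (not the paper's actual heuristics), the sketch below assigns queued tasks to the earliest-available machine in order of their maximum utility, computes the utility earned at the predicted completion time, and drops tasks whose expected utility falls below a threshold. The utility decay model and all task and machine parameters are assumptions.

```python
# Sketch of utility-aware mapping with dropping of low utility-earning tasks.
# The decay model, the threshold, and all parameters are illustrative; this
# is not one of the heuristics evaluated in the record above.
import heapq

def utility(task, completion_time):
    """Time-varying utility: full value before the soft deadline, then decay."""
    if completion_time <= task["deadline"]:
        return task["max_utility"]
    late = completion_time - task["deadline"]
    return max(0.0, task["max_utility"] - task["decay"] * late)

def schedule(tasks, machine_free_times, drop_threshold=1.0):
    """Greedy allocation; returns (assignments, dropped task names)."""
    free = [(t, m) for m, t in machine_free_times.items()]
    heapq.heapify(free)                       # earliest-available machine first
    assignments, dropped = [], []
    for task in sorted(tasks, key=lambda t: -t["max_utility"]):
        free_at, machine = heapq.heappop(free)
        finish = free_at + task["runtime"]
        earned = utility(task, finish)
        if earned < drop_threshold:
            dropped.append(task["name"])      # low utility: drop the task
            heapq.heappush(free, (free_at, machine))
        else:
            assignments.append((task["name"], machine, finish, earned))
            heapq.heappush(free, (finish, machine))
    return assignments, dropped

if __name__ == "__main__":
    tasks = [
        {"name": "t1", "runtime": 4, "max_utility": 10, "deadline": 5, "decay": 2},
        {"name": "t2", "runtime": 6, "max_utility": 8, "deadline": 6, "decay": 1},
        {"name": "t3", "runtime": 5, "max_utility": 3, "deadline": 2, "decay": 1},
    ]
    machines = {"m1": 0, "m2": 0}
    print(schedule(tasks, machines))
```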

  15. Marine light attack helicopter close air support trainer for situation awareness

    DTIC Science & Technology

    2017-06-01

    environmental elements outside the aircraft. The initial environment elements included in the trainer are those relating directly to the CAS execution...ambient environmental elements. These elements were limited to the few items required to create a virtual environment. The terrain is simulated to... In today's dynamic combat environment, the importance of Close Air Support (CAS) has increased significantly due to a greater need to avoid

  16. Negotiating for more than a slice of the pie.

    PubMed

    Blair, J D; Savage, G T; Whitehead, C I; Dymond, S B

    1991-01-01

    Negotiation is an important way for physician executives to manage conflict and to accomplish new projects. Because of the rapidly changing nature of the health care environment, as well as conflicts and politics within their organizations, managers need to effectively negotiate with a wide range of other parties. Managers should consider the relative importance of both the substantive and relationship outcomes of any potential negotiation. These two factors may guide the executive's selection of initial negotiation strategies.

  17. Dopamine and the Development of Executive Dysfunction in Autism Spectrum Disorders

    PubMed Central

    Kriete, Trenton; Noelle, David C.

    2015-01-01

    Persons with autism regularly exhibit executive dysfunction (ED), including problems with deliberate goal-directed behavior, planning, and flexible responding in changing environments. Indeed, this array of deficits is sufficiently prominent to have prompted a theory that executive dysfunction is at the heart of these disorders. A more detailed examination of these behaviors reveals, however, that some aspects of executive function remain developmentally appropriate. In particular, while people with autism often have difficulty with tasks requiring cognitive flexibility, their fundamental cognitive control capabilities, such as those involved in inhibiting an inappropriate but relatively automatic response, show no significant impairment on many tasks. In this article, an existing computational model of the prefrontal cortex and its role in executive control is shown to explain this dichotomous pattern of behavior by positing abnormalities in the dopamine-based modulation of frontal systems in individuals with autism. This model offers excellent qualitative and quantitative fits to performance on standard tests of cognitive control and cognitive flexibility in this clinical population. By simulating the development of the prefrontal cortex, the computational model also offers a potential explanation for an observed lack of executive dysfunction early in life. PMID:25811610

  18. Dopamine and the development of executive dysfunction in autism spectrum disorders.

    PubMed

    Kriete, Trenton; Noelle, David C

    2015-01-01

    Persons with autism regularly exhibit executive dysfunction (ED), including problems with deliberate goal-directed behavior, planning, and flexible responding in changing environments. Indeed, this array of deficits is sufficiently prominent to have prompted a theory that executive dysfunction is at the heart of these disorders. A more detailed examination of these behaviors reveals, however, that some aspects of executive function remain developmentally appropriate. In particular, while people with autism often have difficulty with tasks requiring cognitive flexibility, their fundamental cognitive control capabilities, such as those involved in inhibiting an inappropriate but relatively automatic response, show no significant impairment on many tasks. In this article, an existing computational model of the prefrontal cortex and its role in executive control is shown to explain this dichotomous pattern of behavior by positing abnormalities in the dopamine-based modulation of frontal systems in individuals with autism. This model offers excellent qualitative and quantitative fits to performance on standard tests of cognitive control and cognitive flexibility in this clinical population. By simulating the development of the prefrontal cortex, the computational model also offers a potential explanation for an observed lack of executive dysfunction early in life.

  19. Intelligent sensor and controller framework for the power grid

    DOEpatents

    Akyol, Bora A.; Haack, Jereme Nathan; Craig, Jr., Philip Allen; Tews, Cody William; Kulkarni, Anand V.; Carpenter, Brandon J.; Maiden, Wendy M.; Ciraci, Selim

    2015-07-28

    Disclosed below are representative embodiments of methods, apparatus, and systems for monitoring and using data in an electric power grid. For example, one disclosed embodiment comprises a sensor for measuring an electrical characteristic of a power line, electrical generator, or electrical device; a network interface; a processor; and one or more computer-readable storage media storing computer-executable instructions. In this embodiment, the computer-executable instructions include instructions for implementing an authorization and authentication module for validating a software agent received at the network interface; instructions for implementing one or more agent execution environments for executing agent code that is included with the software agent and that causes data from the sensor to be collected; and instructions for implementing an agent packaging and instantiation module for storing the collected data in a data container of the software agent and for transmitting the software agent, along with the stored data, to a next destination.

  20. Intelligent sensor and controller framework for the power grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akyol, Bora A.; Haack, Jereme Nathan; Craig, Jr., Philip Allen

    Disclosed below are representative embodiments of methods, apparatus, and systems for monitoring and using data in an electric power grid. For example, one disclosed embodiment comprises a sensor for measuring an electrical characteristic of a power line, electrical generator, or electrical device; a network interface; a processor; and one or more computer-readable storage media storing computer-executable instructions. In this embodiment, the computer-executable instructions include instructions for implementing an authorization and authentication module for validating a software agent received at the network interface; instructions for implementing one or more agent execution environments for executing agent code that is included with the software agent and that causes data from the sensor to be collected; and instructions for implementing an agent packaging and instantiation module for storing the collected data in a data container of the software agent and for transmitting the software agent, along with the stored data, to a next destination.
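
    The two records above describe the same agent execution flow: authenticate an incoming software agent, execute its collection code against the sensor, package the collected data into the agent's container, and forward it. The sketch below is an illustrative analogue of that flow, not the patented implementation; the HMAC check, the toy restricted exec environment, and the fake sensor reading are assumptions.

```python
# Illustrative analogue (not the patented implementation) of the flow above:
# validate an incoming software agent, run its code to collect sensor
# readings, package the data into the agent's container, and pass it on.
# The pre-shared key, HMAC check, and sensor stand-in are assumptions.
import hashlib
import hmac

SHARED_KEY = b"example-shared-key"   # hypothetical pre-shared key

def authenticate(agent):
    """Accept the agent only if its HMAC tag matches its code."""
    expected = hmac.new(SHARED_KEY, agent["code"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, agent["tag"])

def execute_agent(agent, read_sensor):
    """Run the agent's collection code with access only to the sensor reader."""
    env = {"read_sensor": read_sensor, "samples": []}
    # Toy restricted execution environment (empty builtins; not a real sandbox).
    exec(agent["code"], {"__builtins__": {}}, env)
    agent["data_container"] = env["samples"]
    return agent

if __name__ == "__main__":
    code = "for _ in (1, 2, 3):\n    samples.append(read_sensor())"
    agent = {
        "code": code,
        "tag": hmac.new(SHARED_KEY, code.encode(), hashlib.sha256).hexdigest(),
        "data_container": [],
    }
    if authenticate(agent):
        execute_agent(agent, read_sensor=lambda: 120.5)  # fake voltage reading
        print("forwarding agent with data:", agent["data_container"])
```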

  1. Influences on corporate executive decision behavior in government acquisitions

    NASA Technical Reports Server (NTRS)

    Wetherington, J. R.

    1986-01-01

    This paper presents extensive exploratory research which had as its primary objective the discovery and determination of major areas of concern exhibited by U.S. corporate executives in the preparation and submittal of proposals and bids to the Federal government. The existence of numerous unique concerns inherent in corporate strategies within the government market environment was established. A determination of the relationship of these concerns to each other was accomplished utilizing statistical factor analysis techniques, resulting in the identification of major groupings of management concerns. Finally, analysis of variance was used to examine the interrelationship of the factors with corporate demographics. The existence of separate and distinct concerns exhibited by corporate executives when contemplating sales and operations in the government marketplace was established. It was also demonstrated that quantifiable relationships exist between such variables and that the decision behavior exhibited by the responsible executives has an interrelationship to their company's demographics.

  2. Predicting Operator Execution Times Using CogTool

    NASA Technical Reports Server (NTRS)

    Santiago-Espada, Yamira; Latorella, Kara A.

    2013-01-01

    Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes the CogTool models for a two-pilot crew executing two different types of datalink clearance acceptance tasks on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in videotapes and registered in simulation files. Results indicate no statistically significant difference between empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.

  3. Route Generation for a Synthetic Character (BOT) Using a Partial or Incomplete Knowledge Route Generation Algorithm in UT2004 Virtual Environment

    NASA Technical Reports Server (NTRS)

    Hanold, Gregg T.; Hanold, David T.

    2010-01-01

    This paper presents a new Route Generation Algorithm that accurately and realistically represents human route planning and navigation for Military Operations in Urban Terrain (MOUT). The accuracy of this algorithm in representing human behavior is measured using the Unreal Tournament (Trademark) 2004 (UT2004) Game Engine to provide the simulation environment in which the differences between the routes taken by the human player and those of a Synthetic Agent (BOT) executing the A-star algorithm and the new Route Generation Algorithm can be compared. The new Route Generation Algorithm computes the BOT route based on partial or incomplete knowledge received from the UT2004 game engine during game play. To allow BOT navigation to occur continuously throughout the game play with incomplete knowledge of the terrain, a spatial network model of the UT2004 MOUT terrain is captured and stored in an Oracle 11g Spatial Data Object (SDO). The SDO allows a partial data query to be executed to generate continuous route updates based on the terrain knowledge, and stored dynamic BOT, Player and environmental parameters returned by the query. The partial data query permits the dynamic adjustment of the planned routes by the Route Generation Algorithm based on the current state of the environment during a simulation. The dynamic nature of this algorithm more accurately allows the BOT to mimic the routes taken by the human executing under the same conditions, thereby improving the realism of the BOT in a MOUT simulation environment.

  4. The Methodology for Developing Mobile Agent Application for Ubiquitous Environment

    NASA Astrophysics Data System (ADS)

    Matsuzaki, Kazutaka; Yoshioka, Nobukazu; Honiden, Shinichi

    A methodology that enables flexible and reusable development of mobile agent applications for a mobility-aware indoor environment is provided in this study. The methodology, named the Workflow-awareness model, is based on the concept of a pair of mobile agents cooperating to perform a given task. A monolithic mobile agent application with numerous concerns in a mobility-aware setting is divided into a master agent (MA) and a shadow agent (SA) according to the type of task. The MA executes the main application logic, which includes monitoring a user's physical movement and coordinating various services. The SA performs additional tasks depending on the environment to aid the MA in achieving efficient execution without losing application logic. "Workflow-awareness (WFA)" means that the SA knows the MA's execution state transitions so that the SA can provide the proper task at the proper time. A prototype implementation of the methodology makes practical use of AspectJ, which is used to automate WFA by weaving communication modules into both the MA and the SA. The usefulness of this methodology is analyzed with respect to efficiency and software engineering aspects. In terms of efficiency, the overhead of WFA is small relative to the whole execution time. From a software engineering perspective, WFA provides a mechanism to deploy one application in various situations.

  5. Telerobot local-remote control architecture for space flight program applications

    NASA Technical Reports Server (NTRS)

    Zimmerman, Wayne; Backes, Paul; Steele, Robert; Long, Mark; Bon, Bruce; Beahan, John

    1993-01-01

    The JPL Supervisory Telerobotics (STELER) Laboratory has developed and demonstrated a unique local-remote robot control architecture which enables management of intermittent communication bus latencies and delays such as those expected for ground-remote operation of Space Station robotic systems via the Tracking and Data Relay Satellite System (TDRSS) communication platform. The current work at JPL in this area has focused on enhancing the technologies and transferring the control architecture to hardware and software environments which are more compatible with projected ground and space operational environments. At the local site, the operator updates the remote worksite model using stereo video and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. This capability runs on a single Silicon Graphics Inc. machine. The operator can employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the intended object. The remote site controller, called the Modular Telerobot Task Execution System (MOTES), runs in a multi-processor VME environment and performs the task sequencing, task execution, trajectory generation, closed loop force/torque control, task parameter monitoring, and reflex action. This paper describes the new STELER architecture implementation, and also documents the results of the recent autonomous docking task execution using the local site and MOTES.

  6. Crystal Growth and Other Materials Physical Researches in Space Environment

    NASA Astrophysics Data System (ADS)

    Pan, Mingxiang

    Materials science research in the space environment is based on reducing the effects of buoyancy-driven transport, atomic oxygen, radiation, extremes of heat and cold, and the ultrahigh vacuum, so as to unveil the underlying fundamental phenomena, potentially lead to new materials or new industrial processes, and develop space techniques. Currently, a research program on materials sciences in the Chinese Manned Space Engineering (CMSE) program is under way. More than ten projects related to crystal growth and materials processing have been selected as candidates to be executed on the Shenzhou spacecraft, the Tiangong Space Laboratory and the Chinese Space Station. In this talk, we will present some examples of the projects, which are being prepared and executed in near-future flight tasks. They span both basic and applied research, from discovery to technology.

  7. Development of GEM gas detectors for X-ray crystal spectrometry

    NASA Astrophysics Data System (ADS)

    Chernyshova, M.; Czarski, T.; Dominik, W.; Jakubowska, K.; Rzadkiewicz, J.; Scholz, M.; Pozniak, K.; Kasprowicz, G.; Zabolotny, W.

    2014-03-01

    Two Triple Gas Electron Multiplier (Triple-GEM) detectors were developed for high-resolution X-ray spectroscopy measurements of tokamak plasma, to serve as plasma evolution monitors in the soft X-ray (SXR) region. They provide energy-resolved, fast dynamic plasma radiation imaging in the SXR with 0.1 kHz frequency. The detectors were designed and constructed for continuous-dataflow, precise energy and position measurement of plasma radiation emitted by metal impurities, W46+ and Ni26+ ions, at 2.4 keV and 7.8 keV photon energies, respectively. High counting-rate capability of the detecting units has been achieved with good position resolution. This article presents results of the laboratory and tokamak experiments together with the system performance under irradiation by photon flux from the plasma core.

  8. Metalevel programming in robotics: Some issues

    NASA Technical Reports Server (NTRS)

    Kumarn, A.; Parameswaran, N.

    1987-01-01

    Computing in robotics has two important requirements: efficiency and flexibility. Algorithms for robot actions are implemented usually in procedural languages such as VAL and AL. But, since their excessive bindings create inflexible structures of computation, it is proposed that Logic Programming is a more suitable language for robot programming due to its non-determinism, declarative nature, and provision for metalevel programming. Logic Programming, however, results in inefficient computations. As a solution to this problem, researchers discuss a framework in which controls can be described to improve efficiency. They have divided controls into: (1) in-code and (2) metalevel and discussed them with reference to selection of rules and dataflow. Researchers illustrated the merit of Logic Programming by modelling the motion of a robot from one point to another avoiding obstacles.

  9. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Chen, S.-K.; Fuchs, W. K.; Hwu, W.-M.

    1993-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper focuses on compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations indicate improved efficiency over previous hardware-based and compiler-based schemes.

  10. Dataflow computing approach in high-speed digital simulation

    NASA Technical Reports Server (NTRS)

    Ercegovac, M. D.; Karplus, W. J.

    1984-01-01

    New computational tools and methodologies for the digital simulation of continuous systems were explored. Programmability and cost-effective performance in multiprocessor organizations for real-time simulation were investigated. The approach is based on functional-style languages and data flow computing principles, which allow for the natural representation of parallelism in algorithms and provide a suitable basis for the design of cost-effective, high-performance distributed systems. The objectives of this research are to: (1) perform a comparative evaluation of several existing data flow languages and develop an experimental data flow language suitable for real-time simulation using multiprocessor systems; (2) investigate the main issues that arise in the architecture and organization of data flow multiprocessors for real-time simulation; and (3) develop and apply performance evaluation models in typical applications.
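
    To make the dataflow principle referenced above concrete, the sketch below implements the basic firing rule (a node executes once a token is present on every input arc) on a toy graph computing one explicit Euler step of x' = -x. It is a generic illustration, not the experimental dataflow language developed in the study.

```python
# Minimal sketch of the dataflow firing rule: a node executes as soon as a
# token is available on every input arc. The toy graph (one Euler step of
# x' = -x) is an illustrative example, not the study's dataflow language.
class Node:
    def __init__(self, name, n_inputs, func):
        self.name = name
        self.func = func
        self.inputs = [None] * n_inputs
        self.consumers = []          # list of (node, input_index) arcs

    def receive(self, index, token, fired_log):
        self.inputs[index] = token
        if all(v is not None for v in self.inputs):      # firing rule
            result = self.func(*self.inputs)
            fired_log.append((self.name, result))
            self.inputs = [None] * len(self.inputs)       # consume tokens
            for node, idx in self.consumers:
                node.receive(idx, result, fired_log)

if __name__ == "__main__":
    h = 0.1
    scale = Node("h*f(x)", 1, lambda fx: h * fx)
    add = Node("x + h*f(x)", 2, lambda x, hfx: x + hfx)
    f = Node("f(x) = -x", 1, lambda x: -x)
    f.consumers.append((scale, 0))
    scale.consumers.append((add, 1))

    fired = []
    x0 = 1.0
    f.receive(0, x0, fired)      # token for f(x)
    add.receive(0, x0, fired)    # token for x on the adder's first input
    print(fired)                 # last entry is one Euler step of x' = -x
```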

  11. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, Neal J.; Chen, Shyh-Kwei; Fuchs, W. Kent; Hwu, Wen-Mei W.

    1995-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper describes compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. The compiler-assisted scheme presented consists of hardware that is less complex than shadow files, history files, history buffers, or delayed write buffers, while experimental evaluation indicates performance improvement over compiler-based schemes.

  12. Jagged Tiling for Intra-tile Parallelism and Fine-Grain Multithreading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Sunil; Manzano Franco, Joseph B.; Marquez, Andres

    In this paper, we have developed a novel methodology that takes into consideration multithreaded many-core designs to better utilize memory/processing resources and improve memory residence on tileable applications. It takes advantage of polyhedral analysis and transformation in the form of PLUTO, combined with a highly optimized fine-grain tile runtime, to exploit parallelism at all levels. The main contributions of this paper include the introduction of multi-hierarchical tiling techniques that increase intra-tile parallelism, and a data-flow-inspired runtime library that allows the expression of parallel tiles with an efficient synchronization registry. Our current implementation shows performance improvements on an Intel Xeon Phi board of up to 32.25% against instances produced by state-of-the-art compiler frameworks for selected stencil applications.
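
    The sketch below illustrates plain two-level (hierarchical) loop tiling on a one-dimensional stencil sweep, which is the general idea the paper builds on; it is not PLUTO output and does not reproduce the jagged-tiling transformation or the fine-grain runtime. Tile sizes are arbitrary.

```python
# Sketch of two-level (hierarchical) tiling: an outer tile keeps data
# resident while inner tiles provide chunks that threads could work on.
# Plain illustration only; not PLUTO output or the paper's jagged tiling.
def stencil_tiled(a, outer_tile=64, inner_tile=8):
    """One 3-point stencil sweep over `a`, iterated in two tile levels."""
    n = len(a)
    out = a[:]                               # write to a copy (Jacobi-style sweep)
    for outer_start in range(1, n - 1, outer_tile):
        outer_end = min(outer_start + outer_tile, n - 1)
        for inner_start in range(outer_start, outer_end, inner_tile):
            inner_end = min(inner_start + inner_tile, outer_end)
            for i in range(inner_start, inner_end):      # innermost, contiguous work
                out[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0
    return out

if __name__ == "__main__":
    data = [float(i % 5) for i in range(200)]
    tiled = stencil_tiled(data)
    naive = data[:]
    for i in range(1, len(data) - 1):
        naive[i] = (data[i - 1] + data[i] + data[i + 1]) / 3.0
    print("tiled sweep matches untiled sweep:", tiled == naive)
```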

  13. Higher cortisol is associated with poorer executive functioning in preschool children: The role of parenting stress, parent coping and quality of daycare

    PubMed Central

    Wagner, Shannon L.; Cepeda, Ivan; Krieger, Dena; Maggi, Stefania; D’Angiulli, Amedeo; Weinberg, Joanne; Grunau, Ruth E.

    2016-01-01

    Child executive functions (cognitive flexibility, inhibitory control, working memory) are key to success in school. Cortisol, the primary stress hormone, is known to affect cognition; however, there is limited information about how child cortisol levels, parenting factors and child care context relate to executive functions in young children. The aim of this study was to examine relationships between child cortisol, parenting stress, parent coping, and daycare quality in relation to executive functions in children aged 3–5 years. We hypothesized that (1) poorer executive functioning would be related to higher child cortisol and higher parenting stress, and (2) positive daycare quality and positive parent coping style would buffer the effects of child cortisol and parenting stress on executive functions. A total of 101 children (53 girls, 48 boys, mean age 4.24 years ±0.74) with complete data on all measures were included. Three saliva samples to measure cortisol were collected at the child’s daycare/preschool in one morning. Parents completed the Behavior Rating Inventory of Executive Function – Preschool Version (BRIEF-P), Parenting Stress Index (PSI), and Ways of Coping Questionnaire (WCQ). The Early Childhood Environment Rating Scale – Revised (ECERS-R) was used to measure the quality of daycare. It was found that children with poorer executive functioning had higher levels of salivary cortisol, and their parents reported higher parenting stress. However, parent coping style and quality of daycare did not modulate these relationships. Identifying ways to promote child executive functioning is an important direction for improving school readiness. PMID:26335047

  14. Higher cortisol is associated with poorer executive functioning in preschool children: The role of parenting stress, parent coping and quality of daycare.

    PubMed

    Wagner, Shannon L; Cepeda, Ivan; Krieger, Dena; Maggi, Stefania; D'Angiulli, Amedeo; Weinberg, Joanne; Grunau, Ruth E

    2016-01-01

    Child executive functions (cognitive flexibility, inhibitory control, working memory) are key to success in school. Cortisol, the primary stress hormone, is known to affect cognition; however, there is limited information about how child cortisol levels, parenting factors and child care context relate to executive functions in young children. The aim of this study was to examine relationships between child cortisol, parenting stress, parent coping, and daycare quality in relation to executive functions in children aged 3-5 years. We hypothesized that (1) poorer executive functioning would be related to higher child cortisol and higher parenting stress, and (2) positive daycare quality and positive parent coping style would buffer the effects of child cortisol and parenting stress on executive functions. A total of 101 children (53 girls, 48 boys, mean age 4.24 years ±0.74) with complete data on all measures were included. Three saliva samples to measure cortisol were collected at the child's daycare/preschool in one morning. Parents completed the Behavior Rating Inventory of Executive Function - Preschool Version (BRIEF-P), Parenting Stress Index (PSI), and Ways of Coping Questionnaire (WCQ). The Early Childhood Environment Rating Scale - Revised (ECERS-R) was used to measure the quality of daycare. It was found that children with poorer executive functioning had higher levels of salivary cortisol, and their parents reported higher parenting stress. However, parent coping style and quality of daycare did not modulate these relationships. Identifying ways to promote child executive functioning is an important direction for improving school readiness.

  15. It's All About the Data: Workflow Systems and Weather

    NASA Astrophysics Data System (ADS)

    Plale, B.

    2009-05-01

    Digital data is fueling new advances in the computational sciences, particularly geospatial research as environmental sensing grows more practical through reduced technology costs, broader network coverage, and better instruments. e-Science research (i.e., cyberinfrastructure research) has responded to data intensive computing with tools, systems, and frameworks that support computationally oriented activities such as modeling, analysis, and data mining. Workflow systems support execution of sequences of tasks on behalf of a scientist. These systems, such as Taverna, Apache ODE, and Kepler, when built as part of a larger cyberinfrastructure framework, give the scientist tools to construct task graphs of execution sequences, often through a visual interface for connecting task boxes together with arcs representing control flow or data flow. Unlike business processing workflows, scientific workflows expose a high degree of detail and control during configuration and execution. Data-driven science imposes unique needs on workflow frameworks. Our research is focused on two issues. The first is the support for workflow-driven analysis over all kinds of data sets, including real time streaming data and locally owned and hosted data. The second is the essential role metadata/provenance collection plays in data driven science, for discovery, determining quality, for science reproducibility, and for long-term preservation. The research has been conducted over the last 6 years in the context of cyberinfrastructure for mesoscale weather research carried out as part of the Linked Environments for Atmospheric Discovery (LEAD) project. LEAD has pioneered new approaches for integrating complex weather data, assimilation, modeling, mining, and cyberinfrastructure systems. Workflow systems have the potential to generate huge volumes of data. Without some form of automated metadata capture, either metadata description becomes largely a manual task that is difficult if not impossible under high-volume conditions, or the searchability and manageability of the resulting data products is disappointingly low. The provenance of a data product is a record of its lineage, or trace of the execution history that resulted in the product. The provenance of a forecast model result, e.g., captures information about the executable version of the model, configuration parameters, input data products, execution environment, and owner. Provenance enables data to be properly attributed and captures critical parameters about the model run so the quality of the result can be ascertained. Proper provenance is essential to providing reproducible scientific computing results. Workflow languages used in science discovery are complete programming languages, and in theory can support any logic expressible by a programming language. The execution environments supporting the workflow engines, on the other hand, are subject to constraints on physical resources, and hence in practice the workflow task graphs used in science utilize relatively few of the cataloged workflow patterns. It is important to note that these workflows are executed on demand, and are executed once. Into this context is introduced the need for science discovery that is responsive to real time information. 
If we can use simple programming models and abstractions to make scientific discovery involving real-time data accessible to specialists who share and utilize data across scientific domains, we bring science one step closer to solving the largest of human problems.
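
    As a minimal sketch of the automated provenance capture discussed above, the code below wraps a workflow task so that its inputs, configuration, execution environment, owner, and timing are recorded alongside its output. The record fields are illustrative and do not follow the LEAD project's actual provenance schema.

```python
# Minimal sketch of automated provenance capture around a workflow task:
# record inputs, configuration, environment, owner, and timing next to the
# output. Fields are illustrative, not the LEAD provenance schema.
import getpass
import platform
import time

def run_with_provenance(task_name, task_fn, inputs, config):
    started = time.time()
    output = task_fn(inputs, config)
    record = {
        "task": task_name,
        "owner": getpass.getuser(),
        "execution_environment": platform.platform(),
        "inputs": inputs,
        "configuration": config,
        "started": started,
        "duration_s": time.time() - started,
    }
    return output, record

if __name__ == "__main__":
    def toy_forecast(inputs, config):
        # Stand-in for a model step: scale each observation by a config factor.
        return [x * config["scale"] for x in inputs["observations"]]

    result, provenance = run_with_provenance(
        "toy_forecast", toy_forecast,
        inputs={"observations": [1.0, 2.0, 3.0]},
        config={"scale": 0.5, "model_version": "example-1.0"},
    )
    print(result)
    print(provenance)
```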

  16. The Exposure Advantage: Early Exposure to a Multilingual Environment Promotes Effective Communication.

    PubMed

    Fan, Samantha P; Liberman, Zoe; Keysar, Boaz; Kinzler, Katherine D

    2015-07-01

    Early language exposure is essential to developing a formal language system, but may not be sufficient for communicating effectively. To understand a speaker's intention, one must take the speaker's perspective. Multilingual exposure may promote effective communication by enhancing perspective taking. We tested children on a task that required perspective taking to interpret a speaker's intended meaning. Monolingual children failed to interpret the speaker's meaning dramatically more often than both bilingual children and children who were exposed to a multilingual environment but were not bilingual themselves. Children who were merely exposed to a second language performed as well as bilingual children, despite having lower executive-function scores. Thus, the communicative advantages demonstrated by the bilinguals may be social in origin, and not due to enhanced executive control. For millennia, multilingual exposure has been the norm. Our study shows that such an environment may facilitate the development of perspective-taking tools that are critical for effective communication. © The Author(s) 2015.

  17. Definition and testing of the hydrologic component of the pilot land data system

    NASA Technical Reports Server (NTRS)

    Ragan, Robert M.; Sircar, Jayanta K.

    1987-01-01

    The specific aim was to develop, within the Pilot Land Data System (PLDS) software design environment, an easily implementable and user-friendly geometric correction procedure to readily enable the georeferencing of imagery data from the Advanced Very High Resolution Radiometer (AVHRR) onboard the NOAA series spacecraft. A software subsystem was developed within the guidelines set by the PLDS development environment utilizing NASA Goddard Space Flight Center (GSFC) Image Analysis Facility's (IAF's) Land Analysis Software (LAS) coding standards. The IAF's current program development environment, the Transportable Applications Executive (TAE), operates under a VAX VMS operating system and was used as the user interface. A brief overview of the ICARUS algorithm that was implemented in the set of functions developed is provided. The functional specifications description is provided, and the individual programs and directory names containing the source and executables installed in the IAF system are listed. A user guide, following the LAS system documentation format, is provided for the three functions developed.

  18. Application-oriented offloading in heterogeneous networks for mobile cloud computing

    NASA Astrophysics Data System (ADS)

    Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.

    2018-04-01

    Nowadays, Internet applications have become so complicated that a mobile device needs more computing resources for shorter execution time, but it is restricted by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite-resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments by using an offloading scheme. It is vital to MCC which tasks should be offloaded and how to offload them efficiently. In this paper, we formulate the offloading problem between the mobile device and the cloud data center and propose two application-oriented algorithms for minimum execution time, i.e., the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines that match the resource requirements of applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.
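
    The MOTM and METC algorithms are not specified in detail in the abstract; the sketch below only shows the basic comparison such offloading decisions rest on: run the task locally, or pay the transfer time over some link to run it on a faster remote machine, and choose whichever minimizes total time. All link bandwidths, clock rates, and task parameters are invented for the example.

```python
# Sketch of the basic local-versus-offload comparison underlying offloading
# schemes like those summarized above (not the MOTM/METC algorithms
# themselves). Link speeds, cycle counts, and task parameters are made up.
def offload_decision(task, links, cloud_speed_hz, local_speed_hz):
    """Return ('local', t) or ('offload', link_name, t), minimizing time."""
    local_time = task["cycles"] / local_speed_hz
    best = ("local", local_time)
    for link_name, bandwidth_bps in links.items():
        transfer = task["input_bytes"] * 8 / bandwidth_bps
        remote = transfer + task["cycles"] / cloud_speed_hz
        if remote < best[-1]:
            best = ("offload", link_name, remote)
    return best

if __name__ == "__main__":
    task = {"cycles": 4e9, "input_bytes": 5e6}   # 4 Gcycles, 5 MB of input
    links = {"wifi": 50e6, "lte": 10e6}          # bits per second
    print(offload_decision(task, links,
                           cloud_speed_hz=8e9,   # remote core speed
                           local_speed_hz=1e9))  # mobile core speed
```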

  19. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    PubMed Central

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-01-01

    Distributed computing has developed tremendously since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, and Big Data Analytics. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates allocating, processing and mining collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurately estimating the execution time of run-time tasks, which affects task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were collected and examined, and a detailed analysis report was produced. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs. PMID:27589753
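
    A minimal sketch of a two-phase-regression style estimate is shown below: it fits separate linear models to two phases of a task's observed progress and extrapolates the later phase to completion. The breakpoint and progress samples are invented for illustration, and the paper's actual TPR model may differ.

    ```python
    import numpy as np

    # Observed progress of a hypothetical task, sampled every two seconds.
    t = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
    progress = np.array([0.10, 0.20, 0.30, 0.40, 0.45, 0.50, 0.55])
    split = 4                                    # assumed phase boundary (index)

    # Fit a straight line (progress = a*t + b) to each phase separately.
    a1, b1 = np.polyfit(t[:split], progress[:split], 1)
    a2, b2 = np.polyfit(t[split:], progress[split:], 1)

    # Extrapolate the later phase to 100% progress to estimate the finishing time.
    estimated_finish = (1.0 - b2) / a2
    print(f"phase-1 rate {a1:.3f}/s, phase-2 rate {a2:.3f}/s, "
          f"estimated finish at {estimated_finish:.1f} s")
    ```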

  20. Integrating planning and reactive control

    NASA Technical Reports Server (NTRS)

    Wilkins, David E.; Myers, Karen L.

    1994-01-01

    Our research is developing persistent agents that can achieve complex tasks in dynamic and uncertain environments. We refer to such agents as taskable, reactive agents. An agent of this type requires a number of capabilities. The ability to execute complex tasks necessitates the use of strategic plans for accomplishing tasks; hence, the agent must be able to synthesize new plans at run time. The dynamic nature of the environment requires that the agent be able to deal with unpredictable changes in its world. As such, agents must be able to react to unanticipated events by taking appropriate actions in a timely manner, while continuing activities that support current goals. The unpredictability of the world could lead to failure of plans generated for individual tasks. Agents must have the ability to recover from failures by adapting their activities to the new situation, or replanning if the world changes sufficiently. Finally, the agent should be able to perform in the face of uncertainty. The Cypress system, described here, provides a framework for creating taskable, reactive agents. Several features distinguish our approach: (1) the generation and execution of complex plans with parallel actions; (2) the integration of goal-driven and event-driven activities during execution; (3) the use of evidential reasoning for dealing with uncertainty; and (4) the use of replanning to handle run-time execution problems. Our model for a taskable, reactive agent has two main intelligent components, an executor and a planner. The two components share a library of possible actions that the system can take. The library encompasses a full range of action representations, including plans, planning operators, and executable procedures such as predefined standard operating procedures (SOPs). These three classes of actions span multiple levels of abstraction.

  2. Centralized Command, Distributed Control, and Decentralized Execution - a Command and Control Solution to US Air Force A2/AD Challenges

    DTIC Science & Technology

    2017-04-28

    This report proposes moving from “Centralized Control, Decentralized Execution” to a new framework of “Centralized Command, Distributed Control, and Decentralized Execution” (CC-DC-DE). Its response to USAF C2 challenges in A2/AD environments describes a three-part CC-DC-DE solution, including a Regional Air Component Commander (the Leader) and a Distributed Theater Air Control System (the System).

  3. What Role Does The Executive Officer Play In Ensuring Senior Officer Success Building An Organization Of Trust Is Key

    DTIC Science & Technology

    2016-02-16

    A research report by Robert F. King, Lt Col, USAF. Trust is required for organizations to be highly efficient with high morale. It is incumbent upon the senior leader to envision and take steps toward… a leadership environment of trust, but because the executive officer sits at the nexus of crucial trust relationships and is often the “face” of the…

  4. Children's Environmental Health: 2007 Highlights. Environment, Health, and a Focus on Children

    ERIC Educational Resources Information Center

    US Environmental Protection Agency, 2007

    2007-01-01

    The U.S. Environmental Protection Agency (EPA) was created in 1970 to protect human health and the environment. The year 2007 marks 10 years of concerted Federal effort to address children's environmental health risks as mandated by Executive Order 13045, Protection of Children from Environmental Health Risks and Safety Risks. Much of the agency's…

  5. Institutional Change in a Higher Education Environment: Factors in the Adoption and Sustainability of Information Technology Project Management Best Practices

    ERIC Educational Resources Information Center

    LeTourneau, John

    2012-01-01

    The public higher education economic and competitive environments make it crucial that organizations react to the circumstances and make better use of available resources (Duderstadt, 2000; Floyd, 2008; Shulman, 2007; State Higher Education Executive Officers (SHEEO), 2009). Viewing higher education through the perspective of new institutionalism…

  6. Making Sense of the University Environment in Post-Apartheid South Africa: Administrators in the Executive Management Team

    ERIC Educational Resources Information Center

    Dominguez-Whitehead, Yasmine

    2010-01-01

    Higher education in post-apartheid South Africa has experienced a relatively rapid changing landscape (Cloete, Maassen, Fehnel, & Moja, 2006). As such, the organizational environment in which university administrators operate is an increasingly important area of study. This study is grounded in organizational theory and adopts an open systems…

  7. Working Together for a Healthy Environment: A Guide for Multi-Cultural Community Groups

    ERIC Educational Resources Information Center

    US Environmental Protection Agency, 2007

    2007-01-01

    As a community-based organization, community leader or activist, individuals are in a unique position to take the lead in raising awareness about resource conservation, good solid waste management, and safeguarding the environment for future generations. This paper is designed to help individuals plan and execute community events that promote the…

  8. Telemetric Technologies for the Assay of Gene Expression

    NASA Astrophysics Data System (ADS)

    Paul, Anna-Lisa; Bamsey, Matthew; Berinstain, Alain; Neron, Philip; Graham, Thomas; Ferl, Robert

    Telemetric data collection has been widely used in spaceflight applications where human participation is limited (orbital mission payloads) or unfeasible (planetary landers, satellites, and probes). The transmission of digital data from electronic sensors of typical environmental parameters, growth patterns and physical properties of materials is routine telemetry, and even the collection and transmission of deep space images is a standard tool of astrophysics. But telemetric imaging for current biological payloads has thus far been limited to the collection of standard white-light photography that is largely confined to reporting the surface characteristics of the specimens involved. Advances in imaging technologies that facilitate the collection of a variety of light wavelengths will expand the science return on biological payloads to include evaluations of the molecular genetic response of organisms to the spaceflight or extraterrestrial environment, with minimal or no human intervention. Advanced imaging technology in combination with biologically engineered sensor organisms can create a system that can report via telemetry on the patterns of gene expression required to adapt to a novel environment. The utilization of genetically engineered plants as biosensors has made elegant strides in the recent years, providing keen insights into the health of plants in general and particularly in the nature and cellular location of stress responses. Moreover, molecular responses to gravitational vectors have been elegantly analyzed with fluorescent tools. Green Fluorescence Protein (GFP) and other fluorophores have made it possible for analyses of gene expression and biological responses to occur telemetrically, with the information potentially delivered to the investigator over large distances as simple, preprocessed fluorescence images. Having previously deployed transgenic plant biosensors to evaluate responses to orbital spaceflight, we wish to develop both the plants and the imaging devices required to conduct such fluorescence imaging experiments robotically, without direct operator intervention, within the operational constraints of extraterrestrial environments. This requires the development of an autonomous and remotely operated plant fluorescence imaging system and concomitant development of the infrastructure to manage dataflow. Here we report the results of the deployment of our spaceflight prototype GFP imaging system within the Arthur Clarke Mars Greenhouse (ACMG), an autonomously operated greenhouse located within the Haughton Mars Project in the High Canadian Arctic (75° 22'N Latitude: 89° 41'W Longitude). Results demonstrate both the applicability of the fundamental GFP biosensor technology and highlight the difficulties in collecting and managing telemetric data from challenging deployment environments.

  9. 5 CFR 3801.103 - Designation of separate Departmental components.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... OF ETHICAL CONDUCT FOR EMPLOYEES OF THE DEPARTMENT OF JUSTICE § 3801.103 Designation of separate... Enforcement Administration Environment and Natural Resources Division Executive Office for Immigration Review...

  10. IpexT: Integrated Planning and Execution for Military Satellite Tele-Communications

    NASA Technical Reports Server (NTRS)

    Plaunt, Christian; Rajan, Kanna

    2004-01-01

    The next generation of military communications satellites may be designed as a fast packet-switched constellation of spacecraft able to withstand substantial bandwidth capacity fluctuation in the face of dynamic resource utilization and rapid environmental changes, including jamming of communication frequencies and unstable weather phenomena. We are in the process of designing an integrated scheduling and execution tool which will aid in the analysis of the design parameters needed for building such a distributed system for nominal and battlefield communications. This paper discusses the design of such a system based on a temporal constraint posting planner/scheduler and a smart executive which can cope with a dynamic environment to make better use of bandwidth than the current circuit-switched approach.

  11. Buildings, Barriers, and Breakthroughs: Bridging Gaps in the Health Care Enterprise.

    PubMed

    Kaelin, Karla; Okland, Kathy

    Health care architecture and design are critical resources that are often underestimated and overlooked. As we seek to extract every available resource at our disposal to serve patients and sustain the bottom line, it is vital that we consider the influence the building imposes on the patient and caregiver experiences. Buildings impact both caregiver behaviors and the economic enterprise and are, therefore, the business of health care executives. This understanding is not only an executive obligation, it is an executive opportunity. Furthermore, the built environment can be a source for innovation in an industry whose future depends on nurse leaders to champion ingenuity with simplicity and relevance. Nurse leaders are ideally positioned to bridge health care building design and best practice.

  12. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms

    PubMed Central

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2017-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies. PMID:29399237
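
    The sketch below is a minimal, hypothetical illustration of target-driven heuristic VM selection (fastest, cheapest, or best price/performance); it is not one of the paper's four algorithms, and the VM figures are made up.

    ```python
    # Choose a VM type for an incoming workflow request according to the
    # requested optimization target.  Speedups and prices are illustrative.

    VM_TYPES = [
        {"name": "small",  "speedup": 1.0, "price_per_hour": 0.05},
        {"name": "medium", "speedup": 2.0, "price_per_hour": 0.12},
        {"name": "large",  "speedup": 4.0, "price_per_hour": 0.30},
    ]

    def schedule(workflow_hours, target="price_performance"):
        """Pick a VM type for a workflow whose runtime on the 'small' type is known."""
        def runtime(vm):
            return workflow_hours / vm["speedup"]
        def cost(vm):
            return runtime(vm) * vm["price_per_hour"]
        if target == "fastest":
            return min(VM_TYPES, key=runtime)
        if target == "cheapest":
            return min(VM_TYPES, key=cost)
        # price/performance: cost times runtime, lower is better
        return min(VM_TYPES, key=lambda vm: cost(vm) * runtime(vm))

    print(schedule(10.0, "fastest")["name"], schedule(10.0, "cheapest")["name"])
    ```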

  13. 48 CFR 223.7302 - Authorities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... OF DEFENSE SOCIOECONOMIC PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY... Federal Environmental, Energy, and Transportation Management. (b) Executive Order 13514 of October 5, 2009, Federal Leadership in Environmental, Energy, and Economic Performance. ...

  14. 48 CFR 223.7302 - Authorities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... OF DEFENSE SOCIOECONOMIC PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY... Federal Environmental, Energy, and Transportation Management. (b) Executive Order 13514 of October 5, 2009, Federal Leadership in Environmental, Energy, and Economic Performance. ...

  15. 48 CFR 970.0470-1 - General.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... list of applicable requirements and providing it to the contracting officer for inclusion in the..., Integration of Environment, Safety, and Health into Work Planning and Execution. When such a process is used...

  16. Boys have not caught up, family influences still continue: Influences on executive functioning and behavioral self-regulation in elementary students in Germany.

    PubMed

    Gunzenhauser, Catherine; Saalbach, Henrik; von Suchodoletz, Antje

    2017-09-01

    The development of self-regulation is influenced by various child-level and family-level characteristics. Previous research focusing on the preschool period reported a female advantage in self-regulation and negative effects of various adverse features of the family environment on self-regulation. The present study aimed to investigate growth in self-regulation (i.e., executive functioning and behavioral self-regulation) over 1 school year during early elementary school and to explore the influences of child sex, the level of home chaos, and family educational resources on self-regulation. Participants were 263 German children (51% girls; mean age 8.59 years, SD = 0.56 years). Data were collected during the fall and spring of the school year. A computer-based standardized test battery was used to assess executive functioning. Caregiver ratings assessed children's behavioral self-regulation and information on the family's home environment (chaotic home environment and educational resources). Results suggest growth in elementary school children's executive functioning over the course of the school year. However, there were no significant changes in children's behavioral self-regulation between the beginning and the end of Grade 3. Sex differences in inhibitory control/cognitive flexibility and behavioral self-regulation were found, suggesting an advantage for girls. Educational resources in the family but not chaotic family environment were significantly related to self-regulation at both time-points. Children from families with more educational resources scored higher on self-regulation measures compared to their counterparts from less advantaged families. We did not find evidence for child-level or family-level characteristics predicting self-regulation growth over time. Findings add to the evidence of a gender gap in self-regulation skills, but suggest that it might not further widen towards the end of elementary school age. Adequate self-regulation skills should be fostered in both girls and boys. Results also add to the importance of supporting self-regulation development in children from disadvantaged family backgrounds early in life. © 2017 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  17. Probabilistic durability assessment of concrete structures in marine environments: Reliability and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Yu, Bo; Ning, Chao-lie; Li, Bing

    2017-03-01

    A probabilistic framework for durability assessment of concrete structures in marine environments was proposed in terms of reliability and sensitivity analysis, which takes into account the uncertainties under the environmental, material, structural and executional conditions. A time-dependent probabilistic model of chloride ingress was established first to consider the variations in various governing parameters, such as the chloride concentration, chloride diffusion coefficient, and age factor. Then the Nataf transformation was adopted to transform the non-normal random variables from the original physical space into the independent standard Normal space. After that the durability limit state function and its gradient vector with respect to the original physical parameters were derived analytically, based on which the first-order reliability method was adopted to analyze the time-dependent reliability and parametric sensitivity of concrete structures in marine environments. The accuracy of the proposed method was verified by comparing with the second-order reliability method and the Monte Carlo simulation. Finally, the influences of environmental conditions, material properties, structural parameters and execution conditions on the time-dependent reliability of concrete structures in marine environments were also investigated. The proposed probabilistic framework can be implemented in the decision-making algorithm for the maintenance and repair of deteriorating concrete structures in marine environments.
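
    As a rough illustration of such a durability limit state, the sketch below runs a plain Monte Carlo estimate of the probability that the chloride concentration at the rebar depth, from the standard error-function solution of Fick's second law with an ageing diffusion coefficient, exceeds a critical threshold. All distributions and parameter values are assumed for illustration; the paper itself uses the Nataf transformation and the first-order reliability method rather than Monte Carlo.

    ```python
    import math, random

    def chloride_at_depth(x, t, cs, d_ref, age_factor, t_ref=1.0):
        """Chloride concentration at depth x (m) after t years (erf solution)."""
        d_t = d_ref * (t_ref / t) ** age_factor          # time-dependent diffusivity
        return cs * (1.0 - math.erf(x / (2.0 * math.sqrt(d_t * t))))

    def failure_probability(t_years, n=20_000, c_crit=0.6):
        """Monte Carlo estimate of P(chloride at rebar depth > critical value)."""
        failures = 0
        for _ in range(n):
            cover = random.gauss(0.05, 0.005)                     # concrete cover, m
            cs = random.gauss(3.0, 0.5)                           # surface chloride
            d_ref = random.lognormvariate(math.log(1e-4), 0.3)    # m^2/year
            age = random.gauss(0.4, 0.08)                         # age factor
            if chloride_at_depth(cover, t_years, cs, d_ref, age) > c_crit:
                failures += 1
        return failures / n

    print(f"P(failure) at 50 years: {failure_probability(50.0):.3f}")
    ```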

  18. A method of demand-driven and data-centric Web service configuration for flexible business process implementation

    NASA Astrophysics Data System (ADS)

    Xu, Boyi; Xu, Li Da; Fei, Xiang; Jiang, Lihong; Cai, Hongming; Wang, Shuai

    2017-08-01

    In the face of rapidly changing business environments, implementing flexible business processes is crucial but difficult, especially in data-intensive application areas. This study aims to provide scalable and easily accessible information resources to leverage business process management. In this article, with a resource-oriented approach, enterprise data resources are represented as data-centric Web services, grouped on demand according to business requirements and configured dynamically to adapt to changing business processes. First, a configurable architecture, CIRPA, involving an information resource pool is proposed to act as a scalable and dynamic platform that virtualises enterprise information resources as data-centric Web services. By exposing data-centric resources as REST services at larger granularities, tenant-isolated information resources can be accessed during business process execution. Second, a dynamic information resource pool is designed to support configurable, on-demand data access during business process execution. CIRPA also isolates transaction data from the business process while supporting the composition of diverse business processes. Finally, a case study applying our method to a logistics application shows that CIRPA provides enhanced performance in both static service encapsulation and dynamic service execution in a cloud computing environment.
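
    A minimal sketch of the grouping idea is given below: data-centric resources from a pool are selected and instantiated on demand for a particular tenant and business process. The resource names and URI templates are hypothetical and much simpler than CIRPA's actual service model.

    ```python
    # Group enterprise data resources into coarse-grained, tenant-isolated
    # REST-style resources on demand of a business process (illustrative only).

    RESOURCE_POOL = {
        "order":    {"uri": "/tenants/{tenant}/orders",    "entities": ["Order", "OrderLine"]},
        "shipment": {"uri": "/tenants/{tenant}/shipments", "entities": ["Shipment", "Route"]},
        "invoice":  {"uri": "/tenants/{tenant}/invoices",  "entities": ["Invoice"]},
    }

    def configure_services(tenant, required_data):
        """Select and instantiate only the data services a business process needs."""
        return {
            name: spec["uri"].format(tenant=tenant)
            for name, spec in RESOURCE_POOL.items()
            if name in required_data
        }

    # A hypothetical logistics process needs order and shipment data only.
    print(configure_services("acme", {"order", "shipment"}))
    ```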

  19. Simulation Testing of Embedded Flight Software

    NASA Technical Reports Server (NTRS)

    Shahabuddin, Mohammad; Reinholtz, William

    2004-01-01

    Virtual Real Time (VRT) is a computer program for testing embedded flight software by computational simulation in a workstation, in contradistinction to testing it in its target central processing unit (CPU). The disadvantages of testing in the target CPU include the need for an expensive test bed, the necessity for testers and programmers to take turns using the test bed, and the lack of software tools for debugging in a real-time environment. By virtue of its architecture, most of the flight software of the type in question is amenable to development and testing on workstations, for which there is an abundance of commercially available debugging and analysis software tools. Unfortunately, the timing of a workstation differs from that of a target CPU in a test bed. VRT, in conjunction with closed-loop simulation software, provides a capability for executing embedded flight software on a workstation in a close-to-real-time environment. A scale factor is used to convert between execution time in VRT on a workstation and execution on a target CPU. VRT includes high-resolution operating-system timers that enable the synchronization of flight software with simulation software and ground software, all running on different workstations.
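
    The scale-factor idea can be illustrated with a small sketch like the one below, which times a code path on the workstation and converts the measurement to an estimated target-CPU time; the scale factor value is assumed and would be calibrated per platform in practice.

    ```python
    import time

    WORKSTATION_TO_TARGET_SCALE = 3.5   # assumed: target CPU ~3.5x slower

    def timed_on_workstation(fn, *args):
        """Run fn on the workstation and return its result and wall-clock time."""
        start = time.perf_counter()
        result = fn(*args)
        elapsed = time.perf_counter() - start
        return result, elapsed

    def estimate_target_time(workstation_seconds):
        """Convert a workstation measurement to an estimated target-CPU time."""
        return workstation_seconds * WORKSTATION_TO_TARGET_SCALE

    _, wall = timed_on_workstation(sum, range(1_000_000))
    print(f"workstation: {wall*1e3:.2f} ms, "
          f"estimated target CPU: {estimate_target_time(wall)*1e3:.2f} ms")
    ```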

  20. Experience with V-STORE: considerations on presence in virtual environments for effective neuropsychological rehabilitation of executive functions.

    PubMed

    Lo Priore, Corrado; Castelnuovo, Gianluca; Liccione, Diego; Liccione, Davide

    2003-06-01

    The paper discusses the use of immersive virtual reality systems for the cognitive rehabilitation of dysexecutive syndrome, usually caused by prefrontal brain injuries. With respect to classical P&P and flat-screen computer rehabilitative tools, IVR systems might prove capable of evoking a more intense and compelling sense of presence, thanks to the highly naturalistic subject-environment interaction allowed. Within a constructivist framework applied to holistic rehabilitation, we suggest that this difference might enhance the ecological validity of cognitive training, partly overcoming the implicit limits of a lab setting, which seem to affect non-immersive procedures especially when applied to dysexecutive symptoms. We tested presence in a pilot study applied to a new VR-based rehabilitation tool for executive functions, V-Store; it allows patients to explore a virtual environment where they solve six series of tasks, ordered for complexity and designed to stimulate executive functions, programming, categorical abstraction, short-term memory and attention. We compared sense of presence experienced by unskilled normal subjects, randomly assigned to immersive or non-immersive (flat screen) sessions of V-Store, through four different indexes: self-report questionnaire, psychophysiological (GSR, skin conductance), neuropsychological (incidental recall memory test related to auditory information coming from the "real" environment) and count of breaks in presence (BIPs). Preliminary results show in the immersive group a significantly higher GSR response during tasks; neuropsychological data (fewer recalled elements from "reality") and less BIPs only show a congruent but yet non-significant advantage for the immersive condition; no differences were evident from the self-report questionnaire. A larger experimental group is currently under examination to evaluate significance of these data, which also might prove interesting with respect to the question of objective-subjective measures of presence.

  1. President and Physical Plant Administrator Face Common Goal of Providing an Efficient, Effective Educational Environment.

    ERIC Educational Resources Information Center

    Hansen, Arthur G.

    1975-01-01

    Stresses faced by higher education as a result of both student and social criticism are examined and related to the specific role the physical plant department plays in the changing environment. The interface between the physical plant administrator and the chief executive officer is explored, and consideration is given to what each expects of the…

  2. Strategic Planning and Management. Report of the Annual Management Institute for College and University Executives (10th, Snowmass, Colorado, July 21-26, 1985).

    ERIC Educational Resources Information Center

    Groff, Warren H.; Cope, Robert G.

    Basic and advanced workshops on strategic planning and management for college personnel were held in 1985. Strategic planning and management includes: (1) assessing an institution's external environment to determine opportunities/threats; (2) auditing an institution's internal environment to determine strengths/weaknesses; (3) using these two sets…

  3. The Effect of a Digital Learning Environment on Children's Conceptions about the Protection of Endemic Plants

    ERIC Educational Resources Information Center

    Petrou, Stella; Korfiatis, Konstantinos

    2013-01-01

    This study presents the results of a pilot learning intervention for improving children's ideas about plant protection. The research was executed in two phases. The first phase aimed at exploring children's ideas about plant protection. These ideas were taken into account for the design and development of a digital learning environment. The second…

  4. First aid manual in an android environment.

    PubMed

    Theodoromanolakis, Panos; Zygouras, Nikolaos; Mantas, John

    2013-01-01

    The First Aid Manual is a detailed guide containing useful information and suggested actions for potential pathogenic conditions in everyday life, delivered in an Android environment. The aim of the project is to make First Aid information available through a widely used device such as a smartphone. For the project, a database was built into which the information was incorporated and then loaded into the Eclipse environment, where it received its final form as an executable file for Android cellphones. The executable file axx.apk produces an application which, by presenting the user with 6 main categories (definition, epidemiological evidence, aggravating factors, symptoms, what to do, what to avoid, acts), offers easy navigation and enables the user to provide first-level care without any previous experience. The increasingly advanced needs of modern lifestyles, combined with technological achievements, have created a complex social fabric, which of course also affects the area of human accidents. Hence, First Aid information delivered on a mobile phone can prove to be a useful tool for anyone in case of an accident.

  5. Working in Corporate France: A Cross-Cultural Challenge.

    ERIC Educational Resources Information Center

    Federico, Salvatore; Moore, Catherine

    1997-01-01

    Discusses the experience of an American executive working in Paris. Touches on the working environment, working hours and vacation, dress code, professional hierarchy, internal communication, benefits, and cultural attitudes. (Six references) (CK)

  6. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    Research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a special distributed computer environment is presented. This model is identified by the acronym ATAMM which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
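
    The data-driven firing rule underlying such large-grained dataflow execution can be sketched as below: a node fires only when a token is present on every input edge, and its result becomes a token for its successors. The graph, node functions, and values are illustrative only, not the ATAMM model itself.

    ```python
    from collections import defaultdict, deque

    graph = {                      # node -> list of downstream nodes
        "A": ["C"],
        "B": ["C"],
        "C": ["D"],
        "D": [],
    }
    inputs_needed = {"A": 0, "B": 0, "C": 2, "D": 1}   # in-degree of each node
    node_fn = {"A": lambda: 2, "B": lambda: 3,
               "C": lambda x, y: x + y, "D": lambda x: x * 10}

    tokens = defaultdict(list)
    ready = deque(n for n, k in inputs_needed.items() if k == 0)

    while ready:
        node = ready.popleft()
        result = node_fn[node](*tokens[node])          # fire: consume input tokens
        tokens[node].clear()
        for succ in graph[node]:                       # place output token downstream
            tokens[succ].append(result)
            if len(tokens[succ]) == inputs_needed[succ]:
                ready.append(succ)
        print(f"fired {node} -> {result}")
    ```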

  7. A heterogeneous computing environment for simulating astrophysical fluid flows

    NASA Technical Reports Server (NTRS)

    Cazes, J.

    1994-01-01

    In the Concurrent Computing Laboratory in the Department of Physics and Astronomy at Louisiana State University we have constructed a heterogeneous computing environment that permits us to routinely simulate complicated three-dimensional fluid flows and to readily visualize the results of each simulation via three-dimensional animation sequences. An 8192-node MasPar MP-1 computer with 0.5 GBytes of RAM provides 250 MFlops of execution speed for our fluid flow simulations. Utilizing the parallel virtual machine (PVM) language, at periodic intervals data is automatically transferred from the MP-1 to a cluster of workstations where individual three-dimensional images are rendered for inclusion in a single animation sequence. Work is underway to replace executions on the MP-1 with simulations performed on the 512-node CM-5 at NCSA and to simultaneously gain access to more potent volume rendering workstations.

  8. Identifying impact of software dependencies on replicability of biomedical workflows.

    PubMed

    Miksa, Tomasz; Rauber, Andreas; Mina, Eleni

    2016-12-01

    Complex data driven experiments form the basis of biomedical research. Recent findings warn that the context in which the software is run, that is the infrastructure and the third party dependencies, can have a crucial impact on the final results delivered by a computational experiment. This implies that in order to replicate the same result, not only the same data must be used, but also it must be run on an equivalent software stack. In this paper we present the VFramework that enables assessing replicability of workflows. It identifies whether any differences in software dependencies among two executions of the same workflow exist and whether they have impact on the produced results. We also conduct a case study in which we investigate the impact of software dependencies on replicability of Taverna workflows used in biomedical research of Huntington's disease. We re-execute analysed workflows in environments differing in operating system distribution and configuration. The results show that the VFramework can be used to identify the impact of software dependencies on the replicability of biomedical workflows. Furthermore, we observe that despite the fact that the workflows are executed in a controlled environment, they still depend on specific tools installed in the environment. The context model used by the VFramework improves the deficiencies of provenance traces and documents also such tools. Based on our findings we define guidelines for workflow owners that enable them to improve replicability of their workflows. Copyright © 2016 Elsevier Inc. All rights reserved.
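
    A minimal sketch of the dependency comparison step is shown below: the software stacks recorded for two executions of the same workflow are diffed, and packages whose versions differ are flagged for further inspection. The package names and versions are invented, and the VFramework's context model captures far more than this.

    ```python
    # Diff two recorded dependency manifests and report version differences.
    run_a = {"python": "2.7.6", "numpy": "1.8.2", "taverna-engine": "2.5.0", "libxml2": "2.9.1"}
    run_b = {"python": "2.7.12", "numpy": "1.8.2", "taverna-engine": "2.5.0", "libxml2": "2.9.4"}

    def diff_dependencies(a, b):
        only_a = sorted(set(a) - set(b))
        only_b = sorted(set(b) - set(a))
        changed = sorted(k for k in set(a) & set(b) if a[k] != b[k])
        return {"missing_in_b": only_a, "missing_in_a": only_b, "version_changed": changed}

    report = diff_dependencies(run_a, run_b)
    print(report)   # differing versions would then be checked against result differences
    ```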

  9. Simulation environment and graphical visualization environment: a COPD use-case.

    PubMed

    Huertas-Migueláñez, Mercedes; Mora, Daniel; Cano, Isaac; Maier, Dieter; Gomez-Cabrero, David; Lluch-Ariet, Magí; Miralles, Felip

    2014-11-28

    Today, many different tools are developed to execute and visualize physiological models that represent human physiology. Most of these tools run models written in very specific programming languages, which in turn simplifies communication among models. Nevertheless, not all of these tools are able to run models written in different programming languages, and interoperability between such models remains an unresolved issue. In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims at helping bio-researchers and medical students understand the internal mechanisms of the human body through the use of physiological models. The tool is composed of a graphical visualization environment, a web interface through which the user can interact with the models, and a simulation workflow management system composed of a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system. The data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. It has been shown that the simulation environment presented here allows the user to study the internal mechanisms of human physiology through the use of models via a graphical visualization environment. A new tool for bio-researchers is ready for deployment in various use-case scenarios.

  10. Applications of the pipeline environment for visual informatics and genomics computations

    PubMed Central

    2011-01-01

    Background Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges - graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources which provide a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions The LONI Pipeline environment http://pipeline.loni.ucla.edu provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The Pipeline client-server model provides computational power to a broad spectrum of informatics investigators - experienced developers and novice users, user with or without access to advanced computational-resources (e.g., Grid, data), as well as basic and translational scientists. The open development, validation and dissemination of computational networks (pipeline workflows) facilitates the sharing of knowledge, tools, protocols and best practices, and enables the unbiased validation and replication of scientific findings by the entire community. PMID:21791102

  11. Executive functioning deficits in young adult survivors of bronchopulmonary dysplasia.

    PubMed

    Gough, Aisling; Linden, Mark A; Spence, Dale; Halliday, Henry L; Patterson, Christopher C; McGarvey, Lorcan

    2015-01-01

    To assess long-term impairments of executive functioning in adult survivors of bronchopulmonary dysplasia (BPD). Participants were assessed on measures of executive functioning, health-related quality of life (HRQoL) and social functioning. Survivors of BPD (n = 63; 34 males; mean age 24.2 years) were compared with groups comprising preterm (without BPD) (<1500 g; n = 45) and full-term controls (n = 63). Analysis of variance was used to explore differences among groups for outcome measures. Multiple regression analyzes were performed to identify factors predictive of long-term outcomes. Significantly more BPD adults, compared with preterm and term controls, showed deficits in executive functioning relating to problem solving (OR: 5.1, CI: 1.4-19.3), awareness of behavior (OR: 12.7, CI: 1.5-106.4) and organization of their environment (OR: 13.0, CI: 1.6-107.1). Birth weight, HRQoL and social functioning were predictive of deficits in executive functioning. This study represents the largest sample of survivors into adulthood of BPD and is the first to show that deficits in executive functioning persist. Children with BPD should be assessed to identify cognitive impairments and allow early intervention aimed at ameliorating their effects. Implications for Rehabilitation Adults born preterm with very-low birth weight, and particularly those who develop BPD, are at increased risk of exhibiting defects in executive functioning. Clinicians and educators should be made aware of the impact that BPD can have on the long-term development of executive functions. Children and young adults identified as having BPD should be periodically monitored to identify the need for possible intervention.

  12. Exploring dual commitment among physician executives in managed care.

    PubMed

    Hoff, T J

    2001-01-01

    The growth of a medical management specialty is a significant event associated with managed care. Physician executives are lauded for their potential in bridging the clinical and managerial realms. They also serve as a countervailing force to help the medical profession and patients maintain a strong voice in healthcare decision making at the strategic level. However, little is known about their work loyalties. These attitudes are important to explore because they speak to whose interests physician executives consider and represent in their everyday management roles. If physician executives are to maximize their effectiveness in the healthcare workplace, both physicians and organizations must view them as credible sources of authority. This study examines organizational and professional commitment among a national sample of physician executives employed in managed care settings. Data used for the analysis come from a national survey conducted through the American College of Physician Executives in 1996. The findings support the notion that physician executives can and do express simultaneous loyalty to organizational and professional interests. This dual commitment is related to other work attitudes that contribute to success in the management role. In addition, it appears that situational factors increase the chances for dual commitment. These factors derive from a favorable work environment that includes both organizational and professional socialization in the management role. The results of the study are useful in specifying the training and socialization needs of physicians who wish to do management work. They also provide a rationale for collaboration between healthcare organizations and rank-and-file physicians aimed at cultivating physician executives who are credible leaders within the healthcare system.

  13. Video game practice optimizes executive control skills in dual-task and task switching situations.

    PubMed

    Strobach, Tilo; Frensch, Peter A; Schubert, Torsten

    2012-05-01

    We examined the relation of action video game practice and the optimization of executive control skills that are needed to coordinate two different tasks. As action video games are similar to real life situations and complex in nature, and include numerous concurrent actions, they may generate an ideal environment for practicing these skills (Green & Bavelier, 2008). For two types of experimental paradigms, dual-task and task switching respectively; we obtained performance advantages for experienced video gamers compared to non-gamers in situations in which two different tasks were processed simultaneously or sequentially. This advantage was absent in single-task situations. These findings indicate optimized executive control skills in video gamers. Similar findings in non-gamers after 15 h of action video game practice when compared to non-gamers with practice on a puzzle game clarified the causal relation between video game practice and the optimization of executive control skills. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. MARBLE: A system for executing expert systems in parallel

    NASA Technical Reports Server (NTRS)

    Myers, Leonard; Johnson, Coe; Johnson, Dean

    1990-01-01

    This paper details the MARBLE 2.0 system, which provides a parallel environment for cooperating expert systems. The work has been done in conjunction with the development of an intelligent computer-aided design system, ICADS, by the CAD Research Unit of the Design Institute at California Polytechnic State University. MARBLE (Multiple Accessed Rete Blackboard Linked Experts) is built on the C Language Integrated Production System (CLIPS) expert system tool. A copied blackboard is used for communication between the shells to establish an architecture that supports cooperating expert systems executing in parallel. The design of MARBLE is simple, but it supports a rich variety of configurations while making it relatively easy to demonstrate the correctness of its parallel execution features. In its most elementary configuration, individual CLIPS expert systems execute on their own processors and communicate with each other through a modified blackboard. Control of the system as a whole, and specifically of writing to the blackboard, is provided by one of the CLIPS expert systems, an expert control system.
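
    The communication pattern can be sketched as below: independent "expert" functions each read a copy of a shared blackboard, post new facts, and a simple control loop runs until no expert has anything to add. This Python stand-in only illustrates the copied-blackboard idea, not the CLIPS-based MARBLE implementation.

    ```python
    blackboard = {"room_area": 30.0}          # shared facts (illustrative)

    def lighting_expert(bb):
        if "room_area" in bb and "lumens_needed" not in bb:
            return {"lumens_needed": bb["room_area"] * 300}
        return {}

    def hvac_expert(bb):
        if "room_area" in bb and "cooling_btu" not in bb:
            return {"cooling_btu": bb["room_area"] * 600}
        return {}

    experts = [lighting_expert, hvac_expert]

    changed = True
    while changed:                             # control loop: run until quiescent
        changed = False
        for expert in experts:
            new_facts = expert(dict(blackboard))   # each expert sees a copy
            if new_facts:
                blackboard.update(new_facts)       # controller writes to the blackboard
                changed = True

    print(blackboard)
    ```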

  15. Multi-level manual and autonomous control superposition for intelligent telerobot

    NASA Technical Reports Server (NTRS)

    Hirai, Shigeoki; Sato, T.

    1989-01-01

    Space telerobots are recognized to require cooperation with human operators in various ways. Multi-level superposition of manual and autonomous control in telerobot task execution is described. The object model, the structured master-slave manipulation system, and the motion understanding system are proposed to realize the concept. The object model offers interfaces for task-level and object-level human intervention. The structured master-slave manipulation system offers interfaces for motion-level human intervention. The motion understanding system maintains the consistency of the knowledge across all levels, which supports robot autonomy while accepting human intervention. Superposed execution of the teleoperational task at multiple levels achieves intuitive and robust task execution for a wide variety of objects and in changing environments. The performance of several examples of operating chemical apparatuses is shown.

  16. Strategies for the nurse executive to keep the rural hospitals open.

    PubMed

    Shride, S E

    1997-01-01

    Rural hospitals are confronted with multiple challenges to survive in the competitive health care environment of today's world. Declining population, corporate mergers and downsizing, transportation, cost of technology, and health manpower shortages are only a few of the issues rural hospitals must be prepared to address in order to survive. Federal- and state-administered programs are available that can contribute to the survival of the rural hospital. The nurse executive has a key role in contributing to the planning, development, and implementation of survival strategies.

  17. Opportunities and strategies in contemporary health system executive leadership.

    PubMed

    McCausland, Maureen P

    2012-01-01

    The contemporary health care environment presents opportunities for nurse executive leadership that is patient and family centered, satisfying to professional nurses and their colleagues, and results in safe quality care that is fiscally responsible and evidence based. This article focuses on the strategic areas of systemness, people, performance, and innovation and offers strategies and tactics to help move nursing in integrated delivery systems from important entity-based services to a system approach where the nursing leadership team and entity chief nursing officers are recognized as major contributors to system success.

  18. Chief nursing officer turnover: chief nursing officers and healthcare recruiters tell their stories.

    PubMed

    Havens, Donna Sullivan; Thompson, Pamela A; Jones, Cheryl B

    2008-12-01

    Chief nursing officers (CNOs) develop environments in which quality patient care is delivered and nurses enjoy professional practice. Because of the growing turbulence in this vital role, the American Organization of Nurse Executives conducted a study to examine CNO turnover as described in interviews with CNOs and healthcare recruiters to inform the development of strategies to improve CNO recruitment and retention and ease transition for those who turn over. The authors present the findings from this research and describe American Organization of Nurse Executives' initiatives to address the identified needs.

  19. Association between Executive Function and Problematic Adolescent Driving

    PubMed Central

    Pope, Caitlin N.; Ross, Lesley A.; Stavrinos, Despina

    2016-01-01

    Objective Motor vehicle collisions (MVCs) are one of the leading causes of injury and death for adolescents. Driving is a complex activity that is highly reliant on executive function to safely navigate through the environment. Little research has examined the efficacy of using self-reported executive function measures for assessing adolescent driving risk. This study examined the Behavior Rating Inventory of Executive Function (BRIEF) questionnaire and performance based-executive function tasks as potential predictors of problematic driving outcomes in adolescents. Methods Forty-six adolescent drivers completed the (1) BRIEF, (2) Trail Making Test (TMT), (3) Backwards Digit Span, and (4) self-report on three problematic driving outcomes: the number of times of having been pulled over by a police officer, the number of tickets issued, and the number of MVCs. Results Greater self-reported difficulty with planning and organization was associated with greater odds of having a MVC, while inhibition difficulties were associated with greater odds of receiving a ticket. Greater self-reported difficulty across multiple BRIEF subscales was associated with greater odds of being pulled over. Conclusion Overall findings indicated that the BRIEF, an ecological measure of executive function, showed significant association with self-reported problematic driving outcomes in adolescents. No relationship was found between performance-based executive function measures and self-reported driving outcomes. The BRIEF could offer unique and quick insight into problematic driving behavior and potentially be an indicator of driving risk in adolescent drivers during clinical evaluations. PMID:27661394

  20. Secondary phase validation—Phase classification by polarization

    NASA Astrophysics Data System (ADS)

    Fedorenko, Yury V.; Matveeva, Tatiana; Beketova, Elena; Husebye, Eystein S.

    2008-07-01

    A long-standing problem in operational seismology is that of reliable focal depth estimation. Standard analyst practice is to pick and identify a 'phase' in the P-coda. This picking will always produce a depth estimate, but without any validation it cannot be trusted. In this article we 'hunt' for standard depth phases like pP, sP and/or PmP, but unlike the analyst we use Bayes statistics to classify the probability that the polarization characteristics of pickings belong to one of the mentioned depth phases given preliminary epicenter information. In this regard we describe a general-purpose PC implementation of the Bayesian methodology that can deal with complex nonlinear models in a flexible way. The models are represented by a data-flow diagram that may be manipulated by the analyst through a graphical-programming environment. An analytic signal representation is used, with the imaginary part being the Hilbert transform of the signal itself. The pickings are presented as a plot of posterior probabilities as a function of time for pP, sP or PmP being within the presumed azimuth and incidence-angle sectors for given preliminary epicenter locations. We have tested this novel focal depth estimation procedure on explosion and earthquake recordings from Cossack Ranger II stations in Karelia, NW Russia, with encouraging results. For example, pickings deviating more than 5° off the 'true' azimuth are rejected, while Pn incidence-angle estimates exhibit considerable scatter. A comprehensive test of our approach is not straightforward, as recordings from so-called Ground Truth events are elusive.
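
    The analytic-signal step can be sketched as below: each component is turned into a complex analytic signal with the Hilbert transform, and the azimuth and incidence of the dominant polarization are estimated from the complex covariance in a short window around the picking. The synthetic traces and window are illustrative; the Bayesian classification described in the article adds priors and phase models on top of such polarization attributes.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs = 100.0
    t = np.arange(0, 2.0, 1.0 / fs)
    # Synthetic rectilinear arrival polarized at ~40 deg azimuth, ~30 deg incidence.
    pulse = np.exp(-((t - 1.0) ** 2) / 0.01) * np.sin(2 * np.pi * 8 * t)
    az_true, inc_true = np.radians(40.0), np.radians(30.0)
    z = pulse * np.cos(inc_true)
    n = pulse * np.sin(inc_true) * np.cos(az_true)
    e = pulse * np.sin(inc_true) * np.sin(az_true)

    analytic = np.vstack([hilbert(z), hilbert(n), hilbert(e)])     # 3 x nsamples
    win = (t > 0.9) & (t < 1.1)                                    # window around pick
    cov = analytic[:, win] @ analytic[:, win].conj().T             # complex covariance
    vals, vecs = np.linalg.eigh(cov)
    p = np.abs(vecs[:, -1])                                        # dominant direction
    azimuth = np.degrees(np.arctan2(p[2], p[1]))                   # from E and N parts
    incidence = np.degrees(np.arccos(p[0] / np.linalg.norm(p)))    # from vertical part
    print(f"azimuth ~ {azimuth:.1f} deg, incidence ~ {incidence:.1f} deg")
    ```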

  1. Integration and validation testing for PhEDEx, DBS and DAS with the PhEDEx LifeCycle agent

    NASA Astrophysics Data System (ADS)

    Boeser, C.; Chwalek, T.; Giffels, M.; Kuznetsov, V.; Wildish, T.

    2014-06-01

    The ever-increasing amount of data handled by the CMS dataflow and workflow management tools poses new challenges for cross-validation among different systems within CMS experiment at LHC. To approach this problem we developed an integration test suite based on the LifeCycle agent, a tool originally conceived for stress-testing new releases of PhEDEx, the CMS data-placement tool. The LifeCycle agent provides a framework for customising the test workflow in arbitrary ways, and can scale to levels of activity well beyond those seen in normal running. This means we can run realistic performance tests at scales not likely to be seen by the experiment for some years, or with custom topologies to examine particular situations that may cause concern some time in the future. The LifeCycle agent has recently been enhanced to become a general purpose integration and validation testing tool for major CMS services. It allows cross-system integration tests of all three components to be performed in controlled environments, without interfering with production services. In this paper we discuss the design and implementation of the LifeCycle agent. We describe how it is used for small-scale debugging and validation tests, and how we extend that to large-scale tests of whole groups of sub-systems. We show how the LifeCycle agent can emulate the action of operators, physicists, or software agents external to the system under test, and how it can be scaled to large and complex systems.

  2. 48 CFR 223.7302 - Authorities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... OF DEFENSE SOCIOECONOMIC PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Minimizing the Use of Materials Containing... Federal Environmental, Energy, and Transportation Management. (b) Executive Order 13514 of October 5, 2009...

  3. 48 CFR 23.102 - Authorities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Sustainable Acquisition Policy 23.102 Authorities. (a) Executive Order 13423 of January 24, 2007, Strengthening Federal Environmental, Energy, and Transportation Management. (b...

  4. 48 CFR 23.102 - Authorities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Sustainable Acquisition Policy 23.102 Authorities. (a) Executive Order 13423 of January 24, 2007, Strengthening Federal Environmental, Energy, and Transportation Management. (b...

  5. 48 CFR 23.102 - Authorities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND... January 24, 2007, Strengthening Federal Environmental, Energy, and Transportation Management. (b) Executive Order 13514 of October 5, 2009, Federal Leadership in Environmental, Energy, and Economic...

  6. 75 FR 67956 - Meeting of the Chief of Naval Operations Executive Panel

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-04

    ... Latin America and the Caribbean, 2010 Subcommittee study. The meeting will consist of open and closed... the political, social and economic environment of Latin America and the Caribbean, focusing on crime...

  7. 48 CFR 23.1001 - Authorities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND... January 24, 2007, Strengthening Federal Environmental, Energy, and Transportation Management. (d) Executive Order 13514 of October 5, 2009, Federal Leadership in Environmental, Energy, and Economic...

  8. 48 CFR 23.102 - Authorities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND... January 24, 2007, Strengthening Federal Environmental, Energy, and Transportation Management. (b) Executive Order 13514 of October 5, 2009, Federal Leadership in Environmental, Energy, and Economic...

  9. 48 CFR 23.1001 - Authorities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND... January 24, 2007, Strengthening Federal Environmental, Energy, and Transportation Management. (d) Executive Order 13514 of October 5, 2009, Federal Leadership in Environmental, Energy, and Economic...

  10. 48 CFR 23.801 - Authorities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND... Environmental, Energy, and Transportation Management. (d) Executive Order 13514 of October 5, 2009, Federal Leadership in Environmental, Energy, and Economic Performance. (e) Environmental Protection Agency (EPA...

  11. The ATLAS Data Acquisition System: from Run 1 to Run 2

    NASA Astrophysics Data System (ADS)

    Panduro Vazquez, William; ATLAS Collaboration

    2016-04-01

    The experience gained during the first period of very successful data taking of the ATLAS experiment (Run 1) has inspired a number of ideas for improvement of the Data Acquisition (DAQ) system that are being put in place during the so-called Long Shutdown 1 of the Large Hadron Collider (LHC), in 2013/14. We have updated the data-flow architecture, rewritten an important fraction of the software and replaced hardware, profiting from state of the art technologies. This paper summarizes the main changes that have been applied to the ATLAS DAQ system and highlights the expected performance and functional improvements that will be available for the LHC Run 2. Particular emphasis will be put on explaining the reasons for our architectural and technical choices, as well as on the simulation and testing approach used to validate this system.

  12. Superfund: evaluating the impact of executive order 12898.

    PubMed

    O'Neil, Sandra George

    2007-07-01

    The U.S. Environmental Protection Agency (EPA) addresses uncontrolled and abandoned hazardous waste sites throughout the country. Sites that are perceived to be a significant threat to both surrounding populations and the environment can be placed on the U.S. EPA Superfund list and qualify for federal cleanup funds. The equitability of the Superfund program has been questioned; the representation of minority and low-income populations in this cleanup program is lower than would be expected. Thus, minorities and low-income populations may not be benefiting proportionately from this environmental cleanup program. In 1994 President Clinton signed Executive Order 12898 requiring that the U.S. EPA and other federal agencies implement environmental justice policies. These policies were to specifically address the disproportionate environmental effects of federal programs and policies on minority and low-income populations. I use event history analysis to evaluate the impact of Executive Order 12898 on the equitability of the Superfund program. Findings suggest that despite environmental justice legislation, Superfund site listings in minority and poor areas are even less likely for sites discovered since the 1994 Executive Order. The results of this study indicate that Executive Order 12898 for environmental justice has not increased the equitability of the Superfund program.

  13. Texas hospital chief executive officers evaluate content areas in health administration education.

    PubMed

    Harkins, L T; Herkimer, A G

    1995-01-01

    Health care executives are confronted by a working environment that is increasingly difficult to manage. Skyrocketing health care costs, with shrinking reimbursement, threaten the existence of hospitals. A successful hospital chief executive officer (CEO) is one who can effectively manage his/her hospital in spite of industry challenges and problems. Graduate programs in health services administration must be designed to meet the needs of future health care executives. Many times, educators are criticized for not addressing "real world" issues within the curricular structure. The present study was conducted to gather information from executives who are the experts on what to expect in the health care industry regarding the appropriateness of curricular topics. Results indicate that practicing CEOs believe those curricular areas which focus on financial planning, budgeting, medical-legal issues, and strategic planning are more important than those that deal with international health care, epidemiology, or research methods. The information gathered in this study may be useful as a guide for educators, to evaluate and revise existing graduate programs in health care administration. Data presented here may also be used to assist in long-range planning for new health administration programs.

  14. Self-Efficacy in the Context of Online Learning Environments: A Review of the Literature and Directions for Research

    ERIC Educational Resources Information Center

    Hodges, Charles B.

    2008-01-01

    The purpose of this paper is to examine the construct of self-efficacy in the context of online learning environments. Self-efficacy is defined as "beliefs in one's capabilities to organize and execute the courses of action required to produce given attainments" (Bandura, [1997], p. 3). Traditionally, the four main sources of self-efficacy…

  15. An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing

    DTIC Science & Technology

    2002-08-01

    simulation and actual execution. KEYWORDS: Model Continuity, Modeling, Simulation, Experimental Frame, Real Time Systems, Intelligent Systems...the methodology for a stand-alone real time system. Then it will scale up to distributed real time systems. For both systems, step-wise simulation...MODEL CONTINUITY Intelligent real time systems monitor, respond to, or control, an external environment. This environment is connected to the digital

  16. Dakota Graphical User Interface v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman-Hill, Ernest; Glickman, Matthew; Gibson, Marcus

    Graphical analysis environment for Sandia’s Dakota software for optimization and uncertainty quantification. The Dakota GUI is an interactive graphical analysis environment for creating, running, and interpreting Dakota optimization and uncertainty quantification studies. It includes problem (Dakota study) set-up, option specification, simulation interfacing, analysis execution, and results visualization. Through the use of wizards, templates, and views, the Dakota GUI helps users navigate Dakota’s complex capability landscape.

  17. Airland Battlefield Environment (ALBE) Tactical Decision Aid (TDA) Demonstration Program,

    DTIC Science & Technology

    1987-11-12

    Management System (DBMS) software, GKS graphics libraries, and user interface software. These components of the ATB system software architecture will be... knowledge base and augment the decision making process by providing information useful in the formulation and execution of battlefield strategies...Topographic Laboratories as an Engineer. Ms. Capps is managing the software development of the AirLand Battlefield Environment (ALBE) geographic

  18. E3: Organizing for Environment, Energy, and the Economy in the Executive Branch of the U.S. Government.

    ERIC Educational Resources Information Center

    Carnegie Commission on Science, Technology, and Government, New York, NY.

    A Task Force created in 1989 was asked to provide the Carnegie Commission on Science, Technology, and Government with a brief statement outlining both functional needs in environment and energy and institutional forms to enhance the government's capability to address the emergent issues. One key need the Task Force has identified is for a greater…

  19. Graph Partitioning for Parallel Applications in Heterogeneous Grid Environments

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Kumar, Shailendra; Das, Sajal K.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The problem of partitioning irregular graphs and meshes for parallel computations on homogeneous systems has been extensively studied. However, these partitioning schemes fail when the target system architecture exhibits heterogeneity in resource characteristics. With the emergence of technologies such as the Grid, it is imperative to study the partitioning problem taking into consideration the differing capabilities of such distributed heterogeneous systems. In our model, the heterogeneous system consists of processors with varying processing power and an underlying non-uniform communication network. We present in this paper a novel multilevel partitioning scheme for irregular graphs and meshes, that takes into account issues pertinent to Grid computing environments. Our partitioning algorithm, called MiniMax, generates and maps partitions onto a heterogeneous system with the objective of minimizing the maximum execution time of the parallel distributed application. For experimental performance study, we have considered both a realistic mesh problem from NASA as well as synthetic workloads. Simulation results demonstrate that MiniMax generates high quality partitions for various classes of applications targeted for parallel execution in a distributed heterogeneous environment.
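
    The min-max objective can be illustrated with a deliberately simplified greedy sketch; this is not the multilevel MiniMax algorithm itself, and the task weights and processor speeds below are invented for the example:

        # Simplified illustration of the min-max mapping objective: assign weighted
        # tasks to processors of differing speed so that the most-loaded processor
        # finishes as early as possible. This greedy heuristic is NOT the multilevel
        # MiniMax algorithm from the paper; it only demonstrates the objective.
        def map_tasks(task_weights, proc_speeds):
            finish = [0.0] * len(proc_speeds)          # completion time per processor
            assignment = [[] for _ in proc_speeds]
            for task, w in sorted(enumerate(task_weights), key=lambda x: -x[1]):
                # place the task where it increases the maximum finish time the least
                best = min(range(len(proc_speeds)),
                           key=lambda p: finish[p] + w / proc_speeds[p])
                finish[best] += w / proc_speeds[best]
                assignment[best].append(task)
            return assignment, max(finish)


        parts, makespan = map_tasks([4, 3, 3, 2, 2, 1], proc_speeds=[2.0, 1.0, 1.0])
        print(parts, round(makespan, 2))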

  20. An Expert Supervisor For A Robotic Work Cell

    NASA Astrophysics Data System (ADS)

    Moed, M. C.; Kelley, R. B.

    1988-02-01

    To increase task flexibility in a robotic assembly environment, a hierarchical planning and execution system is being developed which will map user specified 3D part assembly tasks into various target robotic work cells, and execute these tasks efficiently using manipulators and sensors available in the work cell. One level of this hierarchy, the Supervisor, is responsible for assigning subtasks of a system generated Task Plan to a set of task specific Specialists and on-line coordination of the activity of these Specialists to accomplish the user specified assembly. The design of the Supervisor can be broken down into five major functional blocks: resource management; concurrency detection; task scheduling; error recovery; and interprocess communication. The Supervisor implementation has been completed on a VAX 11/750 under a Unix environment. PC card Pick-Insert experiments were performed to test this implementation. To test the robustness of the architecture, the Supervisor was then transported to a new work cell under a VMS environment. The experiments performed under Supervisor control in both implementations are discussed after a brief explanation of the functional blocks of the Supervisor and the other levels in the hierarchy.

  1. Visualisation methods for large provenance collections in data-intensive collaborative platforms

    NASA Astrophysics Data System (ADS)

    Spinuso, Alessandro; Filgueira, Rosa; Atkinson, Malcolm; Gemuend, Andre

    2016-04-01

    This work investigates improving the methods of visually representing provenance information in the context of modern data-driven scientific research. It explores scenarios where data-intensive workflow systems serve communities of researchers within collaborative environments, supporting the sharing of data and methods, and offering a variety of computation facilities, including HPC, HTC and Cloud. It focuses on the exploration of big-data visualization techniques aimed at producing comprehensive and interactive views on top of large and heterogeneous provenance data. The same approach is applicable to control-flow and data-flow workflows or to combinations of the two. This flexibility is achieved using the W3C-PROV recommendation as a reference model, especially its workflow-oriented profiles such as D-PROV (Missier et al. 2013). Our implementation is based on the provenance records produced by the dispel4py data-intensive processing library (Filgueira et al. 2015). dispel4py is an open-source Python framework for describing abstract stream-based workflows for distributed data-intensive applications, developed during the VERCE project. dispel4py enables scientists to develop their scientific methods and applications on their laptop and then run them at scale on a wide range of e-Infrastructures (Cloud, Cluster, etc.) without making changes. Users can therefore focus on designing their workflows at an abstract level, describing actions, input and output streams, and how they are connected. The dispel4py system then maps these descriptions to the enactment platforms, such as MPI, Storm, or multiprocessing. It provides a mechanism which allows users to determine the provenance information to be collected and to analyze it at runtime. For this work we consider alternative visualisation methods for provenance data, from infinite lists and localised interactive graphs to radial views. The latter technique has been explored positively in many fields, from text data visualisation to genomics and social-network analysis. Its adoption for provenance has been presented in the literature (Borkin et al. 2013) in the context of parent-child relationships across processes, constructed from control-flow information. Computer graphics research has focused on the advantage of this radial distribution of interlinked information and on ways to improve the visual efficiency and tunability of such representations, such as the Hierarchical Edge Bundles visualisation method (Holten et al. 2006), which aims at reducing the visual clutter of highly connected structures via the generation of bundles. Our approach explores the potential of combining these methods. It serves environments where the size of the provenance collection, coupled with the diversity of the infrastructures and the domain metadata, makes the extrapolation of usage trends extremely challenging. Applications of such visualisation systems can engage groups of scientists, data providers and computational engineers by serving visual snapshots that highlight relationships between an item and its connected processes. We will present examples of comprehensive views of the distribution of processing and data transfers during a workflow's execution on HPC, as well as cross-workflow interactions and internal dynamics, the latter in the context of faceted searches over ranges of domain metadata values. These are obtained from the analysis of real provenance data generated by the processing of seismic traces performed through the VERCE platform.
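
    As a self-contained illustration of the radial idea only (not the actual dispel4py provenance or visualisation code), the sketch below places provenance records on a circle, grouped into contiguous angular sectors by the process that produced them:

        # Minimal sketch of a radial layout for provenance records: each data item is
        # placed on a circle, grouped by the process that produced it. Purely
        # illustrative; it is not the visualisation code used by the authors.
        import math
        from collections import defaultdict

        # (data_id, producing_process) pairs, e.g. derived from PROV "wasGeneratedBy"
        records = [("d1", "read"), ("d2", "read"), ("d3", "filter"),
                   ("d4", "filter"), ("d5", "correlate")]

        by_process = defaultdict(list)
        for data_id, proc in records:
            by_process[proc].append(data_id)

        positions = {}
        n = len(records)
        i = 0
        for proc in sorted(by_process):        # contiguous angular sector per process
            for data_id in by_process[proc]:
                angle = 2 * math.pi * i / n
                positions[data_id] = (math.cos(angle), math.sin(angle), proc)
                i += 1

        for data_id, (x, y, proc) in positions.items():
            print(f"{data_id} ({proc}): x={x:+.2f} y={y:+.2f}")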

  2. OCSEGen: Open Components and Systems Environment Generator

    NASA Technical Reports Server (NTRS)

    Tkachuk, Oksana

    2014-01-01

    To analyze a large system, one often needs to break it into smaller components. To analyze a component or unit under analysis, one needs to model its context of execution, called the environment, which represents the components with which the unit interacts. Environment generation is a challenging problem, because the environment needs to be general enough to uncover unit errors, yet precise enough to make the analysis tractable. In this paper, we present a tool for automated environment generation for open components and systems. The tool, called OCSEGen, is implemented on top of the Soot framework. We present the tool's current support and discuss its possible future extensions.

  3. DIaaS: Data-Intensive workflows as a service - Enabling easy composition and deployment of data-intensive workflows on Virtual Research Environments

    NASA Astrophysics Data System (ADS)

    Filgueira, R.; Ferreira da Silva, R.; Deelman, E.; Atkinson, M.

    2016-12-01

    We present the Data-Intensive workflows as a Service (DIaaS) model for enabling easy data-intensive workflow composition and deployment on clouds using containers. The backbone of the DIaaS model is Asterism, an integrated solution for running data-intensive stream-based applications on heterogeneous systems, which combines the benefits of the dispel4py and Pegasus workflow systems. The stream-based executions of an Asterism workflow are managed by dispel4py, while the data movement between different e-Infrastructures and the coordination of the application execution are automatically managed by Pegasus. DIaaS combines the Asterism framework with Docker containers to provide an integrated, complete, easy-to-use, portable approach to running data-intensive workflows on distributed platforms. Three containers make up the DIaaS model: a Pegasus node, an MPI cluster, and an Apache Storm cluster. Container images are described as Dockerfiles (available online at http://github.com/dispel4py/pegasus_dispel4py), linked to Docker Hub for continuous integration (automated image builds) and for image storing and sharing. In this model, all the software (workflow systems and execution engines) required to run scientific applications is packed into the containers, which significantly reduces the effort (and possible human errors) required by scientists or VRE administrators to build such systems. The most common use of DIaaS will be to act as a backend of VREs or Scientific Gateways to run data-intensive applications, deploying cloud resources upon request. We have demonstrated the feasibility of DIaaS using the data-intensive seismic ambient-noise cross-correlation application (Figure 1). The application preprocesses (Phase 1) and cross-correlates (Phase 2) traces from several seismic stations. The application is submitted via Pegasus (Container 1), and Phase 1 and Phase 2 are executed in the MPI (Container 2) and Storm (Container 3) clusters respectively. Although both phases could be executed within the same environment, this setup demonstrates the flexibility of DIaaS to run applications across e-Infrastructures. In summary, DIaaS delivers specialized software to execute data-intensive applications in a scalable, efficient, and robust manner, reducing both engineering time and computational cost.
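
    A hypothetical sketch of the phase-to-container mapping described above; the container names and the command line are illustrative assumptions, not the published Dockerfiles or the Pegasus submission interface:

        # Hypothetical sketch of the DIaaS idea: each workflow phase is dispatched to
        # a different execution environment (here just labelled containers). Container
        # names and commands are illustrative, not taken from the published Dockerfiles.
        PHASES = [
            {"name": "preprocess",      "engine": "mpi",   "container": "dispel4py-mpi"},
            {"name": "cross-correlate", "engine": "storm", "container": "dispel4py-storm"},
        ]


        def submit(phase):
            # In the real system Pegasus plans and submits the jobs; here we only print
            # the docker-style command that such a submission might resemble.
            cmd = (f"docker run {phase['container']} "
                   f"run-workflow --target {phase['engine']} {phase['name']}")
            print(cmd)


        for phase in PHASES:
            submit(phase)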

  4. 48 CFR 23.901 - Authority.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND DRUG-FREE... 13423 of January 24, 2007, Strengthening Federal Environmental, Energy, and Transportation Management. (b) Executive Order 13514 of October 5, 2009, Federal Leadership in Environmental, Energy, and...

  5. 48 CFR 23.901 - Authority.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND DRUG-FREE... 13423 of January 24, 2007, Strengthening Federal Environmental, Energy, and Transportation Management. (b) Executive Order 13514 of October 5, 2009, Federal Leadership in Environmental, Energy, and...

  6. 40 CFR 31.11 - State plans.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND COOPERATIVE AGREEMENTS TO STATE AND LOCAL GOVERNMENTS Pre-Award... before receiving grants. Under regulations implementing Executive Order 12372, “Intergovernmental Review...

  7. 48 CFR 23.901 - Authority.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND DRUG-FREE... 13423 of January 24, 2007, Strengthening Federal Environmental, Energy, and Transportation Management. (b) Executive Order 13514 of October 5, 2009, Federal Leadership in Environmental, Energy, and...

  8. Florida spaceports : an analysis of the regulatory framework : summary.

    DOT National Transportation Integrated Search

    2010-12-01

    Until recently, government control has restricted space flight to a few highly trained persons executing missions in the public interest using a very limited number of facilities and vehicles. This environment is changing. Imaging and c...

  9. TARDEC Occupant Protection Seat

    DTIC Science & Technology

    2012-08-28

    Office of the Assistant Secretary of the Army Installations, Energy and Environment DoD Executive Agent TARDEC Occupant Protection Seat...Installations, Energy and Environment Technology Transition – Supporting DoD Readiness, Sustainability, and the Warfighter UNCLASSIFIED: Distribution

  10. Moles: Tool-Assisted Environment Isolation with Closures

    NASA Astrophysics Data System (ADS)

    de Halleux, Jonathan; Tillmann, Nikolai

    Isolating test cases from environment dependencies is often desirable, as it increases test reliability and reduces test execution time. However, code that calls non-virtual methods or consumes sealed classes is often impossible to test in isolation. Moles is a new lightweight framework which addresses this problem. For any .NET method, Moles allows test-code to provide alternative implementations, given as .NET delegates, for which C# provides very concise syntax while capturing local variables in a closure object. Using code instrumentation, the Moles framework will redirect calls to provided delegates instead of the original methods. The Moles framework is designed to work together with the dynamic symbolic execution tool Pex to enable automated test generation. In a case study, testing code programmed against the Microsoft SharePoint Foundation API, we achieved full code coverage while running tests in isolation without an actual SharePoint server. The Moles framework integrates with .NET and Visual Studio.
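
    For readers outside the .NET world, a rough Python analogue of the same isolation idea (detouring an environment-dependent call to a test-supplied implementation) can be written with unittest.mock; this is only an analogy, not the Moles API:

        # Rough Python analogue of the detour/closure idea: replace an environment-
        # dependent call with a test-supplied implementation. This is standard
        # unittest.mock usage, shown only as an analogy to the Moles delegates.
        from unittest.mock import patch
        import socket


        def lookup_host(name):
            return socket.gethostbyname(name)    # normally hits DNS (the "environment")


        def test_lookup_host_isolated():
            with patch("socket.gethostbyname", lambda name: "127.0.0.1"):
                assert lookup_host("example.org") == "127.0.0.1"


        if __name__ == "__main__":
            test_lookup_host_isolated()
            print("ok")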

  11. [Liquidation of barriers: realization issues and legislative aspects].

    PubMed

    Półchłopek, T

    1998-01-01

    Designing for handicapped persons, with the aim of removing barriers, is now an essential part of an architect's activity. This follows from the fact that the issue of handicapped persons has become an interdisciplinary one. The architect, being responsible for creating living space and the surrounding environment, is expected to design an environment that is friendly to handicapped persons. Space that is favourable for the handicapped is favourable for all. There are many aspects of designing for the handicapped; legislative and execution issues are two examples. The legislative aspect is presented in this paper on the basis of the contemporary legal rules of the Polish Republic, whereas the execution aspect is introduced and discussed on the basis of two projects designed by the Design Bureau in Cracow and currently under realization. These are: a housing and service unit (Boruty-Spiechowicza Str., Cracow) and the Faculty of Philosophy complex at the Jesuits College (Kopernika Str., Cracow).

  12. A Web-Based Monitoring System for Multidisciplinary Design Projects

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Salas, Andrea O.; Weston, Robert P.

    1998-01-01

    In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary computational environments, is defined as a hardware and software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, displaying, monitoring, and controlling the design process. The objective of this research is to explore how Web technology, integrated with an existing framework, can improve these areas of weakness. This paper describes a Web-based system that optimizes and controls the execution sequence of design processes, and monitors project status and results. The three-stage evolution of the system with increasingly complex problems demonstrates the feasibility of this approach.

  13. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2(sup 3) is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  14. Introducing Triquetrum, A Possible Future for Kepler and Ptolemy II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooks, Christopher; Billings, Jay Jay

    Triquetrum is an open platform for managing and executing scientific workflows that is under development as an Eclipse project. Both Triquetrum and Kepler use Ptolemy II as their execution engine. Triquetrum presents opportunities and risks for the Kepler community. The opportunities include a possibly larger community for interaction and a path for Kepler to move from Kepler's one-off ant-based build environment towards a more common OSGi-based environment and a way to maintain a stable Ptolemy II core. The risks include the fact that Triquetrum is a fork of Ptolemy II that would result in package name changes and other possible changes. In addition, Triquetrum is licensed under the Eclipse Public License v1.0, which includes a patent clause that could conflict with the University of California patent clause. This paper describes these opportunities and risks.

  15. Network testbed creation and validation

    DOEpatents

    Thai, Tan Q.; Urias, Vincent; Van Leeuwen, Brian P.; Watts, Kristopher K.; Sweeney, Andrew John

    2017-03-21

    Embodiments of network testbed creation and validation processes are described herein. A "network testbed" is a replicated environment used to validate a target network or an aspect of its design. Embodiments describe a network testbed that comprises virtual testbed nodes executed via a plurality of physical infrastructure nodes. The virtual testbed nodes utilize these hardware resources as a network "fabric," thereby enabling rapid configuration and reconfiguration of the virtual testbed nodes without requiring reconfiguration of the physical infrastructure nodes. Thus, in contrast to prior art solutions which require a tester manually build an emulated environment of physically connected network devices, embodiments receive or derive a target network description and build out a replica of this description using virtual testbed nodes executed via the physical infrastructure nodes. This process allows for the creation of very large (e.g., tens of thousands of network elements) and/or very topologically complex test networks.
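
    A toy sketch of the "derive a replica from a target network description" step; the description format and the round-robin placement rule are illustrative assumptions, not the patented mechanism:

        # Toy sketch: place virtual testbed nodes from a target-network description
        # onto a small pool of physical infrastructure hosts (round-robin). The
        # description format and placement policy are illustrative assumptions only.
        import itertools

        target_network = {
            "router-1": {"links": ["switch-1"]},
            "switch-1": {"links": ["host-a", "host-b"]},
            "host-a": {"links": []},
            "host-b": {"links": []},
        }

        physical_hosts = ["infra-01", "infra-02"]

        placement = {
            node: host
            for node, host in zip(sorted(target_network), itertools.cycle(physical_hosts))
        }

        for node, host in placement.items():
            print(f"virtual node {node:<9} -> physical host {host}")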

  16. Concurrent Image Processing Executive (CIPE). Volume 2: Programmer's guide

    NASA Technical Reports Server (NTRS)

    Williams, Winifred I.

    1990-01-01

    This manual is intended as a guide for application programmers using the Concurrent Image Processing Executive (CIPE). CIPE is intended to become the support system software for a prototype high performance science analysis workstation. In its current configuration CIPE utilizes a JPL/Caltech Mark 3fp Hypercube with a Sun-4 host. CIPE's design is capable of incorporating other concurrent architectures as well. CIPE provides a programming environment to applications' programmers to shield them from various user interfaces, file transactions, and architectural complexities. A programmer may choose to write applications to use only the Sun-4 or to use the Sun-4 with the hypercube. A hypercube program will use the hypercube's data processors and optionally the Weitek floating point accelerators. The CIPE programming environment provides a simple set of subroutines to activate user interface functions, specify data distributions, activate hypercube resident applications, and to communicate parameters to and from the hypercube.

  17. Robot Task Commander with Extensible Programming Environment

    NASA Technical Reports Server (NTRS)

    Hart, Stephen W (Inventor); Wightman, Brian J (Inventor); Dinh, Duy Paul (Inventor); Yamokoski, John D. (Inventor); Gooding, Dustin R (Inventor)

    2014-01-01

    A system for developing distributed robot application-level software includes a robot having an associated control module which controls motion of the robot in response to a commanded task, and a robot task commander (RTC) in networked communication with the control module over a network transport layer (NTL). The RTC includes a script engine(s) and a GUI, with a processor and a centralized library of library blocks constructed from an interpretive computer programming code and having input and output connections. The GUI provides access to a Visual Programming Language (VPL) environment and a text editor. In executing a method, the VPL is opened, a task for the robot is built from the code library blocks, and data is assigned to input and output connections identifying input and output data for each block. A task sequence(s) is sent to the control module(s) over the NTL to command execution of the task.

  18. Network testbed creation and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thai, Tan Q.; Urias, Vincent; Van Leeuwen, Brian P.

    Embodiments of network testbed creation and validation processes are described herein. A "network testbed" is a replicated environment used to validate a target network or an aspect of its design. Embodiments describe a network testbed that comprises virtual testbed nodes executed via a plurality of physical infrastructure nodes. The virtual testbed nodes utilize these hardware resources as a network "fabric," thereby enabling rapid configuration and reconfiguration of the virtual testbed nodes without requiring reconfiguration of the physical infrastructure nodes. Thus, in contrast to prior art solutions which require a tester manually build an emulated environment of physically connected network devices, embodiments receive or derive a target network description and build out a replica of this description using virtual testbed nodes executed via the physical infrastructure nodes. This process allows for the creation of very large (e.g., tens of thousands of network elements) and/or very topologically complex test networks.

  19. Development of a novel visuomotor integration paradigm by integrating a virtual environment with mobile eye-tracking and motion-capture systems

    PubMed Central

    Miller, Haylie L.; Bugnariu, Nicoleta; Patterson, Rita M.; Wijayasinghe, Indika; Popa, Dan O.

    2018-01-01

    Visuomotor integration (VMI), the use of visual information to guide motor planning, execution, and modification, is necessary for a wide range of functional tasks. To comprehensively, quantitatively assess VMI, we developed a paradigm integrating virtual environments, motion-capture, and mobile eye-tracking. Virtual environments enable tasks to be repeatable, naturalistic, and varied in complexity. Mobile eye-tracking and minimally-restricted movement enable observation of natural strategies for interacting with the environment. This paradigm yields a rich dataset that may inform our understanding of VMI in typical and atypical development. PMID:29876370

  20. Myria: Scalable Analytics as a Service

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
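
    The phrase "relational algebra extended with iteration" can be illustrated with a tiny self-contained example that computes graph reachability by repeatedly joining an edge relation with the result so far; MyriaL syntax is deliberately not reproduced here:

        # Tiny illustration of "relational algebra extended with iteration": compute
        # reachability by joining an edge relation with the result until a fixpoint.
        # This mirrors the style of iterative dataflow queries; it is not MyriaL.
        edges = {("a", "b"), ("b", "c"), ("c", "d")}

        reach = set(edges)                 # initial relation
        while True:
            # join reach(x, y) with edges(y, z) and project to (x, z)
            new = {(x, z) for (x, y) in reach for (y2, z) in edges if y == y2}
            if new <= reach:               # fixpoint: no new tuples produced
                break
            reach |= new

        print(sorted(reach))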

  1. From positive emotionality to internalizing problems: the role of executive functioning in preschoolers.

    PubMed

    Ghassabian, Akhgar; Székely, Eszter; Herba, Catherine M; Jaddoe, Vincent W; Hofman, Albert; Oldehinkel, Albertine J; Verhulst, Frank C; Tiemeier, Henning

    2014-09-01

    Temperament and psychopathology are intimately related; however, research on the prospective associations between positive emotionality, defined as a child's positive mood states and high engagement with the environment, and psychopathology is inconclusive. We examined the longitudinal relation between positive emotionality and internalizing problems in young children from the general population. Furthermore, we explored whether executive functioning mediates any observed association. Within a population-based Dutch birth cohort, we observed positive emotionality in 802 children using the laboratory temperament assessment battery at age 3 years. Child behavior checklist (CBCL) internalizing problems (consisting of Emotionally Reactive, Anxious/Depressed, and Withdrawn scales) were assessed at age 6 years. Parents rated their children's executive functioning at age 4 years. Children with a lower positive emotionality at age 3 had a higher risk of withdrawn problems at age 6 years (OR = 1.20 per SD decrease in positive emotionality score, 95 % CI: 1.01, 1.42). This effect was not explained by preexisting internalizing problems. This association was partly mediated by more problems in the shifting domain of executive functioning (p < 0.001). We did not find any relation between positive emotionality and the CBCL emotionally reactive or anxious/depressed scales. Although the effect sizes were moderate, our results suggest that low levels of positive emotionality at preschool age can result in children's inflexibility and rigidity later in life. The inflexibility and rigidity are likely to affect the child's drive to engage with the environment, and thereby lead to withdrawn problems. Further research is needed to replicate these findings.

  2. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  3. UBioLab: a web-LABoratory for Ubiquitous in-silico experiments.

    PubMed

    Bartocci, E; Di Berardini, M R; Merelli, E; Vito, L

    2012-03-01

    The huge and dynamic collection of bioinformatic resources (e.g., data and tools) available today on the Internet represents a major challenge for biologists, as regards their management and visualization, and for bioinformaticians, as regards the possibility of rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming to integrate such resources as in a physical laboratory must tackle, and ideally handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype following the above objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in that direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows, and an intelligent agent-based technology for their distributed execution allows UBioLab to act as a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.

  4. The importance of knowledge-based technology.

    PubMed

    Cipriano, Pamela F

    2012-01-01

    Nurse executives are responsible for a workforce that can provide safer and more efficient care in a complex sociotechnical environment. National quality priorities rely on technologies to provide data collection, share information, and leverage analytic capabilities to interpret findings and inform approaches to care that will achieve better outcomes. As a key steward for quality, the nurse executive exercises leadership to provide the infrastructure to build and manage nursing knowledge and instill accountability for following evidence-based practices. These actions contribute to a learning health system where new knowledge is captured as a by-product of care delivery enabled by knowledge-based electronic systems. The learning health system also relies on rigorous scientific evidence embedded into practice at the point of care. The nurse executive optimizes use of knowledge-based technologies, integrated throughout the organization, that have the capacity to help transform health care.

  5. A virtual data language and system for scientific workflow management in data grid environments

    NASA Astrophysics Data System (ADS)

    Zhao, Yong

    With advances in scientific instrumentation and simulation, scientific data is growing fast in both size and analysis complexity. So-called Data Grids aim to provide high performance, distributed data analysis infrastructure for data-intensive sciences, where scientists distributed worldwide need to extract information from large collections of data, and to share both data products and the resources needed to produce and store them. However, the description, composition, and execution of even logically simple scientific workflows are often complicated by the need to deal with "messy" issues like heterogeneous storage formats and ad-hoc file system structures. We show how these difficulties can be overcome via a typed workflow notation called virtual data language, within which issues of physical representation are cleanly separated from logical typing, and by the implementation of this notation within the context of a powerful virtual data system that supports distributed execution. The resulting language and system are capable of expressing complex workflows in a simple compact form, enacting those workflows in distributed environments, monitoring and recording the execution processes, and tracing the derivation history of data products. We describe the motivation, design, implementation, and evaluation of the virtual data language and system, and the application of the virtual data paradigm in various science disciplines, including astronomy and cognitive neuroscience.
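
    A minimal sketch of the core separation described above (logical typing kept apart from physical representation); the class and field names are invented for illustration and are not VDL syntax:

        # Illustrative-only sketch of separating a workflow's logical types from the
        # physical representation of the data; names are invented, not VDL syntax.
        from dataclasses import dataclass


        @dataclass
        class LogicalType:
            name: str                      # e.g. "Image", "Spectrum"


        @dataclass
        class PhysicalMapping:
            logical: LogicalType
            path_pattern: str              # messy, site-specific layout lives here
            fmt: str


        @dataclass
        class Step:
            name: str
            inputs: list
            outputs: list                  # both expressed only in logical types


        image = LogicalType("Image")
        mapping = PhysicalMapping(image, "/archive/{run}/img_{id}.fits", "FITS")
        align = Step("align", inputs=[image], outputs=[image])

        print(align)
        print(mapping)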

  6. Conception through build of an automated liquids processing system for compound management in a low-humidity environment.

    PubMed

    Belval, Richard; Alamir, Ab; Corte, Christopher; DiValentino, Justin; Fernandes, James; Frerking, Stuart; Jenkins, Derek; Rogers, George; Sanville-Ross, Mary; Sledziona, Cindy; Taylor, Paul

    2012-12-01

    Boehringer Ingelheim's Automated Liquids Processing System (ALPS) in Ridgefield, Connecticut, was built to accommodate all compound solution-based operations following dissolution in neat DMSO. Process analysis resulted in the design of two nearly identical conveyor-based subsystems, each capable of executing 1400 × 384-well plate or punch tube replicates per batch. The two parallel-positioned subsystems are capable of independent execution, or can alternatively be executed as a unified system for more complex or higher-throughput processes. Primary ALPS functions include creation of high-throughput screening plates, concentration-response plates, and reformatted master stock plates (e.g., 384-well plates from 96-well plates). Integrated operations included centrifugation, unsealing/piercing, broadcast diluent addition, barcode print/application, compound transfer/mix via disposable pipette tips, and plate sealing. ALPS key features included instrument pooling for increased capacity or fail-over situations, programming constructs to associate one source plate with an array of replicate plates, and stacked collation of completed plates. Due to the hygroscopic nature of DMSO, ALPS was designed to operate within a 10% relative humidity environment. The activities described are the collaborative efforts that contributed to the specification, build, delivery, and acceptance testing between Boehringer Ingelheim Pharmaceuticals, Inc. and the automation integration vendor, Thermo Scientific Laboratory Automation (Burlington, ON, Canada).

  7. An ontology-based semantic configuration approach to constructing Data as a Service for enterprises

    NASA Astrophysics Data System (ADS)

    Cai, Hongming; Xie, Cheng; Jiang, Lihong; Fang, Lu; Huang, Chenxi

    2016-03-01

    To align business strategies with IT systems, enterprises should rapidly implement new applications based on existing information with complex associations to adapt to the continually changing external business environment. Thus, Data as a Service (DaaS) has become an enabling technology for enterprises through information integration and the configuration of existing distributed enterprise systems and heterogeneous data sources. However, business modelling, system configuration and model alignment face challenges at the design and execution stages. To provide a comprehensive solution to facilitate data-centric application design in a highly complex and large-scale situation, a configurable ontology-based service integrated platform (COSIP) is proposed to support business modelling, system configuration and execution management. First, a meta-resource model is constructed and used to describe and encapsulate information resources by way of multi-view business modelling. Then, based on ontologies, three semantic configuration patterns, namely composite resource configuration, business scene configuration and runtime environment configuration, are designed to systematically connect business goals with executable applications. Finally, a software architecture based on model-view-controller (MVC) is provided and used to assemble components for software implementation. The result of the case study demonstrates that the proposed approach provides a flexible method of implementing data-centric applications.

  8. Detecting Heap-Spraying Code Injection Attacks in Malicious Web Pages Using Runtime Execution

    NASA Astrophysics Data System (ADS)

    Choi, Younghan; Kim, Hyoungchun; Lee, Donghoon

    The growing use of web services is increasing web browser attacks exponentially. Most attacks use a technique called heap spraying because of its high success rate. Heap spraying executes a malicious code without indicating the exact address of the code by copying it into many heap objects. For this reason, the attack has a high potential to succeed provided the vulnerability is exploited. Thus, attackers have recently begun using this technique because it is easy to use JavaScript to allocate the heap memory area. This paper proposes a novel technique that detects heap spraying attacks by executing a heap object in a real environment, irrespective of the version and patch status of the web browser. This runtime execution is used to detect various forms of heap spraying attacks, such as encoding and polymorphism. Heap objects are executed after being filtered on the basis of patterns of heap spraying attacks, in order to reduce the overhead of the runtime execution. The patterns of heap spraying attacks are based on an analysis of how a web browser accesses benign web sites. The heap objects are executed forcibly by changing the instruction register to their address after they are loaded into memory. Thus, we can execute the malicious code without having to consider the version and patch status of the browser. An object is considered to contain malicious code if the execution reaches a call instruction and the instruction then accesses the API of system libraries, such as kernel32.dll and ws2_32.dll. To change registers and monitor execution flow, we used a debugger engine. A prototype, named HERAD (HEap spRAying Detector), is implemented and evaluated. In experiments, HERAD detects various forms of exploit code that emulation cannot detect, and some heap spraying attacks that NOZZLE cannot detect. Although it has an execution overhead, HERAD produces a low number of false alarms. The processing time of several minutes is negligible because our research focuses on detecting heap spraying. This research can be applied to existing systems that collect malicious codes, such as Honeypot.
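
    A much-simplified sketch of the filtering stage described above: flag heap objects whose contents contain a long run of a single repeated byte (a crude NOP-sled-style heuristic) before any costly forced execution. The threshold and heuristic are assumptions for illustration, not HERAD's actual patterns:

        # Highly simplified sketch of the filtering step: flag candidate heap objects
        # whose contents contain a long run of a single repeated byte (a crude NOP-
        # sled heuristic) before the expensive forced-execution stage. The threshold
        # and heuristic are illustrative assumptions, not HERAD's real patterns.
        def longest_run(data: bytes) -> int:
            best = run = 1
            for prev, cur in zip(data, data[1:]):
                run = run + 1 if cur == prev else 1
                best = max(best, run)
            return best if data else 0


        def looks_like_spray(heap_object: bytes, threshold: int = 4096) -> bool:
            return longest_run(heap_object) >= threshold


        if __name__ == "__main__":
            benign = bytes(range(256)) * 64
            sprayed = b"\x90" * 8192 + b"\xcc" * 16   # long NOP-like run, then a stub
            print(looks_like_spray(benign), looks_like_spray(sprayed))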

  9. Final Programmatic Environmental Assessment for the Short Range Air Drop Target System

    DTIC Science & Technology

    1998-05-01

    saltwater habitats such as estuaries, they are not typically found in marine environments. Numerous sensitive wildlife areas occur within the biomes and in...96090 Washington, DC 20090-6090 National Oceanic and Atmospheric Administration 1 copy via FedEx National Marine Fisheries Service Washington Science...Center, Building 5 60 10 Executive Boulevard Rockville, MD 20852 Environment and Safety 1 copy via FedEx Marine Environmental Protection Section

  10. Tool Integration and Environment Architectures

    DTIC Science & Technology

    1991-05-01

    include the Interactive Development Environment (IDE) Software Through Pictures (STP), Sabre-C and FrameMaker coalition, and the Verdix Ada Development...System (VADS) APSE, which includes the VADS compiler and choices of CADRE Teamwork or STP and FrameMaker or Interleaf. The key characteristic of...remote procedure execution to achieve a simulation of a homoge- neous repository (i.e., a simulation that the data in a FrameMaker document resides in one

  11. Review of Data Integrity Models in Multi-Level Security Environments

    DTIC Science & Technology

    2011-02-01

    2: (E-1 extension) Only executions described in a (User, TP, (CDIs)) relation are allowed • E-3: Users must be authenticated before allowing TP... authentication and verification procedures for upgrading the integrity of certain objects. The mechanism used to manage access to objects is primarily...that is, the self-consistency of interdependent data and the consistency of real-world environment data. The prevention of authorised users from making

  12. A graph-based evolutionary algorithm: Genetic Network Programming (GNP) and its extension using reinforcement learning.

    PubMed

    Mabu, Shingo; Hirasawa, Kotaro; Hu, Jinglu

    2007-01-01

    This paper proposes a graph-based evolutionary algorithm called Genetic Network Programming (GNP). Our goal is to develop GNP, which can deal with dynamic environments efficiently and effectively, based on the distinguished expression ability of the graph (network) structure. The characteristics of GNP are as follows. 1) GNP programs are composed of a number of nodes which execute simple judgment/processing, and these nodes are connected by directed links to each other. 2) The graph structure enables GNP to re-use nodes, thus the structure can be very compact. 3) The node transition of GNP is executed according to its node connections without any terminal nodes, thus the past history of the node transition affects the current node to be used and this characteristic works as an implicit memory function. These structural characteristics are useful for dealing with dynamic environments. Furthermore, we propose an extended algorithm, "GNP with Reinforcement Learning (GNPRL)" which combines evolution and reinforcement learning in order to create effective graph structures and obtain better results in dynamic environments. In this paper, we applied GNP to the problem of determining agents' behavior to evaluate its effectiveness. Tileworld was used as the simulation environment. The results show some advantages for GNP over conventional methods.
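
    A minimal sketch of the node-transition idea (judgment nodes branch on the environment, processing nodes act, and control simply follows directed links with no terminal nodes); the toy environment and node graph are invented for illustration:

        # Minimal sketch of Genetic Network Programming-style execution: judgment
        # nodes branch on the environment, processing nodes act, and control simply
        # follows directed links (no terminal nodes). The environment and node graph
        # here are invented purely for illustration.
        def judge_obstacle(env):
            return "turn" if env["obstacle_ahead"] else "move"


        NODES = {
            "judge":     {"kind": "judgment", "fn": judge_obstacle,
                          "links": {"turn": "turn_node", "move": "move_node"}},
            "turn_node": {"kind": "processing", "action": "turn-left",    "next": "judge"},
            "move_node": {"kind": "processing", "action": "move-forward", "next": "judge"},
        }


        def run(env, start="judge", steps=6):
            node = start
            for _ in range(steps):
                spec = NODES[node]
                if spec["kind"] == "judgment":
                    node = spec["links"][spec["fn"](env)]
                else:
                    print(spec["action"])
                    env["obstacle_ahead"] = not env["obstacle_ahead"]  # toy dynamics
                    node = spec["next"]


        run({"obstacle_ahead": False})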

  13. Using an architectural approach to integrate heterogeneous, distributed software components

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Purtilo, James M.

    1995-01-01

    Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.

  14. Distributed collaborative decision support environments for predictive awareness

    NASA Astrophysics Data System (ADS)

    McQuay, William K.; Stilman, Boris; Yakhnis, Vlad

    2005-05-01

    The past decade has produced significant changes in the conduct of military operations: asymmetric warfare, the reliance on dynamic coalitions, stringent rules of engagement, increased concern about collateral damage, and the need for sustained air operations. Mission commanders need to assimilate a tremendous amount of information, rapidly assess the enemy's course of action (eCOA) or possible actions and promulgate their own course of action (COA) - a need for predictive awareness. Decision support tools in a distributed collaborative environment offer the capability of decomposing complex multitask processes and distributing them over a dynamic set of execution assets that include modeling, simulations, and analysis tools. Revolutionary new approaches to strategy generation and assessment such as Linguistic Geometry (LG) permit the rapid development of COA vs. enemy COA (eCOA). LG tools automatically generate and permit the operators to take advantage of winning strategies and tactics for mission planning and execution in near real-time. LG is predictive and employs deep "look-ahead" from the current state and provides a realistic, reactive model of adversary reasoning and behavior. Collaborative environments provide the framework and integrate models, simulations, and domain specific decision support tools for the sharing and exchanging of data, information, knowledge, and actions. This paper describes ongoing research efforts in applying distributed collaborative environments to decision support for predictive mission awareness.

  15. 48 CFR 23.702 - Authorities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... July 31, 2001, Energy Efficient Standby Power Devices. (f) Executive Order 13423 of January 24, 2007... PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Contracting for Environmentally Preferable Products and Services 23.702 Authorities...

  16. 77 FR 44313 - 2011 Career Reserved Senior Executive Positions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-27

    ... High Performance Computing and Communications. Chief Financial Officer. Deputy Director, Acquisition... AGRICULTURE... Office of Deputy Director, Communications. Creative Development. Office of the Chief Associate... Officer. Chief Information Officer for NESDIS. Director, Space Environment Center. National Oceanic and...

  17. 48 CFR 23.402 - Authorities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND... Federal Environmental, Energy, and Transportation Management. (d) The Energy Policy Act of 2005, Public Law 109-58. (e) Executive Order 13514 of October 5, 2009, Federal Leadership in Environmental, Energy...

  18. Volume 1, Sources and migration of highway runoff pollutants--executive summary

    DOT National Transportation Integrated Search

    1984-05-01

    This report summarizes the research undertaken to identify the sources of highway pollutants, and to determine their deposition and accumulation within the highway system and subsequent removal from the highway system to the surrounding environment. ...

  19. THE EPA MULTIMEDIA INTEGRATED MODELING SYSTEM SOFTWARE SUITE

    EPA Science Inventory

    The U.S. EPA is developing a Multimedia Integrated Modeling System (MIMS) framework that will provide a software infrastructure or environment to support constructing, composing, executing, and evaluating complex modeling studies. The framework will include (1) common software ...

  20. 48 CFR 23.402 - Authorities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND... Federal Environmental, Energy, and Transportation Management. (d) The Energy Policy Act of 2005, Public Law 109-58. (e) Executive Order 13514 of October 5, 2009, Federal Leadership in Environmental, Energy...

  1. Milestones Since Last Workshop [Global Positioning System Adjacent Band Compatibility Assessment Workshop V, 10/14/2016

    DOT National Transportation Integrated Search

    2016-10-14

    Milestones Since Last Workshop - Finalized GPS/GNSS receiver test plan and test procedures - Coordinated government and manufacturer participation and executed Non-Disclosure Agreements (NDAs) - Developed/validated radiated RF test environment - Carr...

  2. Federal Report: Government Backs Environment

    ERIC Educational Resources Information Center

    Environmental Science and Technology, 1972

    1972-01-01

    A report of executive, legislative, and judicial action on environmental matters. Budget figures for air, water, and solid waste factors for 1972-1973 are compared in the areas of research, development, and demonstration; abatement and control; enforcement; and program support. (BL)

  3. A case-comparison study of executive functions in alcohol-dependent adults with maternal history of alcoholism.

    PubMed

    Cottencin, Olivier; Nandrino, Jean-Louis; Karila, Laurent; Mezerette, Caroline; Danel, Thierry

    2009-04-01

    As executive dysfunctions frequently accompany alcohol dependence, we suggest that reports of executive dysfunction in alcoholics are actually due, in some cases, to a maternal history of alcohol misuse (MHA+). A history of maternal alcohol dependence increases the risk of prenatal alcohol exposure (PAE) for unborn children, and such exposure likely contributes to executive dysfunction in adult alcoholics. To assess this problem, we propose a case-comparison study of alcohol-dependent subjects with and without an MHA. Ten alcohol-dependent subjects with both a maternal history of alcoholism (MHA) and a paternal history of alcoholism (PHA) were matched with 10 alcohol-dependent subjects with only a paternal history of alcoholism (PHA). Executive functions (cancellation, Stroop, and trail-making A and B tests) and the presence of a history of three mental disorders (attention deficit hyperactivity disorder, violent behavior while intoxicated, and suicidal behavior) were evaluated in both populations. Alcohol-dependent subjects with an MHA showed a significant alteration in executive functions and significantly more disorders related to these functions than PHA subjects. The main measure of executive functioning deficit was the time taken to complete each of the tests. Rates of ADHD and suicidality were higher in MHA patients than in the controls. A history of MHA, because of the high risk of PAE (in spite of potential confounding factors such as environment), must be scrupulously documented when evaluating mental and cognitive disorders in a general population of alcoholics to ensure better identification of these disorders. It would be helpful to replicate the study with more subjects.

  4. Simulation environment and graphical visualization environment: a COPD use-case

    PubMed Central

    2014-01-01

    Background Today, many different tools are developed to execute and visualize physiological models that represent the human physiology. Most of these tools run models written in very specific programming languages, which in turn simplifies the communication among models. Nevertheless, not all of these tools are able to run models written in different programming languages. In addition, interoperability between such models remains an unresolved issue. Results In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims to help bio-researchers and medical students understand the internal mechanisms of the human body through the use of physiological models. The tool is composed of a graphical visualization environment, which is a web interface through which the user can interact with the models, and a simulation workflow management system composed of a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system. The data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. This simulation environment has been validated with the integration of three models: two deterministic, i.e., based on linear and differential equations, and one probabilistic, i.e., based on probability theory. These models were selected based on the disease under study in this project, i.e., chronic obstructive pulmonary disease. Conclusion The work demonstrates that the simulation environment presented here allows the user to research and study the internal mechanisms of human physiology through the use of models via a graphical visualization environment. A new tool for bio-researchers is ready for deployment in various use-case scenarios. PMID:25471327
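    As a rough sketch of the control-module/data-warehouse split described above, the Python fragment below advances heterogeneous models one step at a time and exchanges their parameters through a shared store. All names (SharedStore, run_step, the two toy models) are assumptions made for illustration; the Synergy-COPD system itself is a web-based environment and considerably more elaborate.

        # Minimal sketch of a simulation workflow manager: a control module
        # advances each model one step and routes parameters through a store.

        class SharedStore:
            """Stand-in for the data warehouse manager."""
            def __init__(self, initial):
                self.values = dict(initial)

            def read(self, *keys):
                return {k: self.values[k] for k in keys}

            def write(self, updates):
                self.values.update(updates)

        def oxygen_transport_model(params):      # deterministic model (assumed)
            return {"pao2": 0.9 * params["fio2"] * 100.0}

        def disease_progression_model(params):   # probabilistic model (assumed)
            return {"exacerbation_risk": min(1.0, 40.0 / params["pao2"])}

        def run_step(store, models):
            """Control module: execute each model and persist its outputs."""
            for model, inputs in models:
                outputs = model(store.read(*inputs))
                store.write(outputs)

        store = SharedStore({"fio2": 0.21})
        run_step(store, [(oxygen_transport_model, ["fio2"]),
                         (disease_progression_model, ["pao2"])])
        print(store.values)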

  5. Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)

    NASA Technical Reports Server (NTRS)

    Gray, Justin S.; Briggs, Jeffery L.

    2008-01-01

    The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself. By doing so, ROSE frees the modeler to develop a library of standard modeling processes, such as Design of Experiments, optimizers, parameter studies, and sensitivity studies, which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well-defined API and object structure. Both the API and the object structure are presented here with enough detail to implement ROSE in any object-oriented language or modeling tool.
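    The separation ROSE enforces, reusable execution processes driving any model that exposes a fixed interface, can be pictured with the short Python sketch below. The Model methods, the ParameterStudy class, and the quadratic example are illustrative assumptions, not the ROSE API.

        # Sketch of divorcing the execution process from the model:
        # any object with set_inputs/execute/get_outputs can be driven
        # by any reusable process, here a simple parameter study.

        class QuadraticModel:
            def set_inputs(self, inputs):
                self.x = inputs["x"]
            def execute(self):
                self.y = self.x ** 2 - 3.0 * self.x + 2.0
            def get_outputs(self):
                return {"y": self.y}

        class ParameterStudy:
            """Reusable execution process: sweep one input over given values."""
            def __init__(self, name, values):
                self.name, self.values = name, values
            def run(self, model):
                results = []
                for v in self.values:
                    model.set_inputs({self.name: v})
                    model.execute()
                    results.append((v, model.get_outputs()))
                return results

        study = ParameterStudy("x", [0.0, 1.0, 2.0, 3.0])
        for x, out in study.run(QuadraticModel()):
            print(x, out["y"])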

  6. How to get things done. Period.

    PubMed

    Freed, D H

    2000-09-01

    In a competitive environment, hospitals are finding it more important than ever to satisfy their patients, support themselves, and improve their performances. However, experience repeatedly confirms that articulating these strategies is much simpler than actually executing them. While a variety of reasons can be offered, the fact is that effective execution--getting things done--is a strategic differential that brings about significant competitive advantage. This article examines a number of proven best practices for overcoming resistance and actually getting things done. Doing so requires more assumption of risk, but it is both personally and organizationally very invigorating and rewarding.

  7. Sequence design and software environment for real-time navigation of a wireless ferromagnetic device using MRI system and single echo 3D tracking.

    PubMed

    Chanu, A; Aboussouan, E; Tamaz, S; Martel, S

    2006-01-01

    Software architecture for the navigation of a ferromagnetic untethered device in a 1D and 2D phantom environment is briefly described. Navigation is achieved using the real-time capabilities of a Siemens 1.5 T Avanto MRI system coupled with a dedicated software environment and a specially developed 3D tracking pulse sequence. Real-time control of the magnetic core is executed through the implementation of a simple PID controller. 1D and 2D experimental results are presented.
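    A discrete PID controller of the kind mentioned above fits in a few lines of Python; the gains, time step, and the single update call below are placeholders chosen for illustration rather than the parameters of the cited system.

        class PID:
            """Simple discrete PID controller (illustrative gains only)."""
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, setpoint, measurement):
                error = setpoint - measurement
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return (self.kp * error
                        + self.ki * self.integral
                        + self.kd * derivative)

        # One corrective command toward a (fictitious) 10 mm target position.
        pid = PID(kp=1.2, ki=0.3, kd=0.05, dt=0.02)
        print(pid.update(setpoint=10.0, measurement=7.5))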

  8. Design and control of a macro-micro robot for precise force applications

    NASA Technical Reports Server (NTRS)

    Wang, Yulun; Mangaser, Amante; Laby, Keith; Jordan, Steve; Wilson, Jeff

    1993-01-01

    Creating a robot which can delicately interact with its environment has been the goal of much research. Primarily two difficulties have made this goal hard to attain. The execution of control strategies that enable precise force manipulations is difficult to implement in real time because such algorithms have been too computationally complex for available controllers. Also, a robot mechanism which can quickly and precisely execute a force command is difficult to design. Actuation joints must be sufficiently stiff, frictionless, and lightweight so that desired torques can be accurately applied. This paper describes a robotic system which is capable of delicate manipulations. A modular high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8-degree-of-freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load balanced for maximum execution speed on the multiprocessor system. Delicate force tasks such as polishing, finishing, cleaning, and deburring are the target applications of the robot.
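    The impedance control method referenced above relates the commanded force to position and velocity errors through a virtual stiffness and damping. The sketch below shows the textbook one-dimensional form with assumed gains; it is not the controllers derived in the paper.

        # One-dimensional impedance control law:
        #   f = k * (x_desired - x) + b * (v_desired - v)
        # where k is a virtual stiffness and b a virtual damping.

        def impedance_force(x_desired, x, v_desired, v, k=200.0, b=15.0):
            """Commanded force for one degree of freedom (illustrative gains)."""
            return k * (x_desired - x) + b * (v_desired - v)

        # Example: the tip is 2 mm short of the contact point and drifting slightly.
        print(impedance_force(x_desired=0.052, x=0.050, v_desired=0.0, v=0.01))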

  9. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution in which the application code is executed directly, while a discrete-event simulator models details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.
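    The heart of a direct-execution simulator is that computation is timed by running the real code while communication is modeled by a discrete-event simulator. The toy Python sketch below models only message delivery with an event queue; the latency and bandwidth constants and all names are invented and are not taken from LAPSE.

        import heapq

        # Toy discrete-event model of message delivery: computation is assumed
        # to be timed by direct execution; only the network is simulated.

        LATENCY = 5e-6      # assumed per-message latency (seconds)
        BANDWIDTH = 100e6   # assumed bytes/second

        class NetworkSim:
            def __init__(self):
                self.events = []            # (delivery_time, dest, payload)

            def send(self, now, dest, nbytes, payload):
                delivery = now + LATENCY + nbytes / BANDWIDTH
                heapq.heappush(self.events, (delivery, dest, payload))

            def next_delivery(self):
                return heapq.heappop(self.events) if self.events else None

        sim = NetworkSim()
        sim.send(now=0.0, dest=1, nbytes=4096, payload="halo exchange")
        sim.send(now=0.0, dest=2, nbytes=64, payload="ack")
        print(sim.next_delivery())   # the 64-byte message arrives first
        print(sim.next_delivery())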

  10. High level language-based robotic control system

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Inventor); Kruetz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)

    1994-01-01

    This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process, and dynamic reconfiguration is also possible. Two major advantages of the languages and system are that they allow the programming and control of multiple arms and the use of inward/outward spatial recursions, in which every computational step can be related to a transformation from one point in the mechanical robot to another.

  11. High level language-based robotic control system

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Inventor); Kreutz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)

    1996-01-01

    This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process, and dynamic reconfiguration is also possible. Two major advantages of the languages and system are that they allow the programming and control of multiple arms and the use of inward/outward spatial recursions, in which every computational step can be related to a transformation from one point in the mechanical robot to another.

  12. Execution of saccadic eye movements affects speed perception

    PubMed Central

    Goettker, Alexander; Braun, Doris I.; Schütz, Alexander C.; Gegenfurtner, Karl R.

    2018-01-01

    Due to the foveal organization of our visual system, we have to constantly move our eyes to gain precise information about our environment. Doing so massively alters the retinal input. This is problematic for the perception of moving objects, because physical motion and retinal motion become decoupled and the brain has to discount the eye movements to recover the speed of moving objects. Two different types of eye movements, pursuit and saccades, are combined for tracking. We investigated how the way we track moving targets can affect the perceived target speed. We found that the execution of corrective saccades during pursuit initiation modifies how fast the target is perceived compared with pure pursuit. When participants executed a forward (catch-up) saccade, they perceived the target to be moving faster. When they executed a backward saccade, they perceived the target to be moving more slowly. Variations in pursuit velocity without corrective saccades did not affect perceptual judgments. We present a model for these effects, assuming that the eye velocity signal for small corrective saccades gets integrated with the retinal velocity signal during pursuit. In our model, the execution of corrective saccades modulates the integration of these two signals by giving less weight to the retinal information around the time of corrective saccades. PMID:29440494
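    The integration scheme described above can be paraphrased as a weighted sum of the retinal and extraretinal (eye velocity) signals, with the retinal weight reduced around the time of a corrective saccade. The weighting function and numbers in the Python sketch below are illustrative assumptions, not the fitted parameters of the authors' model.

        def perceived_speed(retinal_speed, eye_speed, time_from_saccade,
                            base_retinal_weight=0.7, suppression_window=0.05):
            """Weighted integration of retinal and extraretinal signals.
            Near a corrective saccade (|t| < suppression_window) the retinal
            signal is given less weight, so the eye velocity signal dominates."""
            w = base_retinal_weight
            if abs(time_from_saccade) < suppression_window:
                w *= 0.5                     # assumed attenuation factor
            return w * retinal_speed + (1.0 - w) * eye_speed

        # Down-weighting retinal slip near a catch-up saccade raises the
        # contribution of the (faster) eye velocity signal.
        print(perceived_speed(retinal_speed=2.0, eye_speed=12.0, time_from_saccade=0.01))
        print(perceived_speed(retinal_speed=2.0, eye_speed=12.0, time_from_saccade=0.20))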

  13. Superfund: Evaluating the Impact of Executive Order 12898

    PubMed Central

    O’Neil, Sandra George

    2007-01-01

    Background The U.S. Environmental Protection Agency (EPA) addresses uncontrolled and abandoned hazardous waste sites throughout the country. Sites that are perceived to be a significant threat to both surrounding populations and the environment can be placed on the U.S. EPA Superfund list and qualify for federal cleanup funds. The equitability of the Superfund program has been questioned; the representation of minority and low-income populations in this cleanup program is lower than would be expected. Thus, minorities and low-income populations may not be benefiting proportionately from this environmental cleanup program. In 1994 President Clinton signed Executive Order 12898 requiring that the U.S. EPA and other federal agencies implement environmental justice policies. These policies were to specifically address the disproportionate environmental effects of federal programs and policies on minority and low-income populations. Objective and Methods I use event history analysis to evaluate the impact of Executive Order 12898 on the equitability of the Superfund program. Discussion Findings suggest that despite environmental justice legislation, Superfund site listings in minority and poor areas are even less likely for sites discovered since the 1994 Executive Order. Conclusion The results of this study indicate that Executive Order 12898 for environmental justice has not increased the equitability of the Superfund program. PMID:17637927

  14. Accelerating next generation sequencing data analysis with system level optimizations.

    PubMed

    Kathiresan, Nagarajan; Temanni, Ramzi; Almabrazi, Hakeem; Syed, Najeeb; Jithesh, Puthen V; Al-Ali, Rashid

    2017-08-22

    Next generation sequencing (NGS) data analysis is highly compute intensive. In-memory computing, vectorization, bulk data transfer, and CPU frequency scaling are some of the hardware features in modern computing architectures. To get the best execution time and utilize these hardware features, it is necessary to tune the system level parameters before running the application. We studied GATK HaplotypeCaller, a part of common NGS workflows that consumes more than 43% of the total execution time. Multiple GATK 3.x versions were benchmarked, and the execution time of HaplotypeCaller was optimized through various system level parameters, which included: (i) tuning the parallel garbage collection and kernel shared memory to simulate in-memory computing, (ii) architecture-specific tuning in the PairHMM library for vectorization, (iii) including Java 1.8 features through GATK source code compilation and building a runtime environment for parallel sorting and bulk data transfer, and (iv) replacing the default 'on-demand' CPU frequency scaling mode with 'performance' mode to accelerate the Java multi-threading. As a result, the HaplotypeCaller execution time was reduced by 82.66% in GATK 3.3 and 42.61% in GATK 3.7. Overall, the execution time of the NGS pipeline was reduced to 70.60% and 34.14% for GATK 3.3 and GATK 3.7, respectively.
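    For readers who want a concrete starting point, the sketch below assembles a GATK 3.x HaplotypeCaller command line with parallel garbage collection enabled; it reflects commonly used options rather than the exact configuration benchmarked in the study, and all file paths, heap sizes, and thread counts are placeholders.

        import subprocess

        # Illustrative GATK 3.x HaplotypeCaller invocation with JVM tuning
        # (parallel garbage collection, fixed heap). On Linux, the CPU
        # frequency governor can usually be switched with
        # "cpupower frequency-set -g performance" (requires root).
        cmd = [
            "java",
            "-Xmx32g",                       # heap size (placeholder)
            "-XX:+UseParallelGC",            # parallel garbage collection
            "-XX:ParallelGCThreads=8",       # GC thread count (placeholder)
            "-jar", "GenomeAnalysisTK.jar",
            "-T", "HaplotypeCaller",
            "-R", "reference.fasta",         # placeholder reference
            "-I", "sample.bam",              # placeholder alignment file
            "-o", "sample.vcf",              # placeholder output
            "-nct", "4",                     # CPU threads per data thread
        ]
        subprocess.run(cmd, check=True)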

  15. Calculation and use of an environment's characteristic software metric set

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.

    1985-01-01

    Since both cost/quality goals and production environments differ, this study presents an approach for customizing a characteristic set of software metrics to an environment. The approach is applied in the Software Engineering Laboratory (SEL), a NASA Goddard production environment, to 49 candidate process and product metrics of 652 modules from six projects of 51,000 to 112,000 lines. For this particular environment, the method yielded the characteristic metric set (source lines, fault correction effort per executable statement, design effort, code effort, number of I/O parameters, number of versions). The uses examined for a characteristic metric set include forecasting the effort for development, modification, and fault correction of modules based on historical data.
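    Forecasting module effort from a characteristic metric set, as described above, comes down to fitting historical module data and applying the fit to new modules. The least-squares sketch below uses made-up metric values and only shows the general shape of such a forecast, not the SEL data or models.

        import numpy as np

        # Historical module data (made up): columns are source lines and
        # number of versions; the target is modification effort in hours.
        X = np.array([[1200, 3],
                      [450, 1],
                      [2300, 7],
                      [800, 2],
                      [1500, 4]], dtype=float)
        effort = np.array([40.0, 12.0, 95.0, 25.0, 55.0])

        # Fit effort ~ a*lines + b*versions + c by linear least squares.
        A = np.column_stack([X, np.ones(len(X))])
        coef, *_ = np.linalg.lstsq(A, effort, rcond=None)

        # Forecast effort for a new module with 1,000 lines and 2 versions.
        new_module = np.array([1000.0, 2.0, 1.0])
        print(float(new_module @ coef))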

  16. 48 CFR 23.801 - Authorities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Ozone-Depleting Substances 23.801 Authorities. (a) Title VI of the Clean Air Act... Environmental, Energy, and Transportation Management. (d) Executive Order 13514 of October 5, 2009, Federal...

  17. 48 CFR 23.801 - Authorities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... PROGRAMS ENVIRONMENT, ENERGY AND WATER EFFICIENCY, RENEWABLE ENERGY TECHNOLOGIES, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Ozone-Depleting Substances 23.801 Authorities. (a) Title VI of the Clean Air Act... Environmental, Energy, and Transportation Management. (d) Executive Order 13514 of October 5, 2009, Federal...

  18. 40 CFR 1515.8 - Appeals.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Protection of Environment COUNCIL ON ENVIRONMENTAL QUALITY FREEDOM OF INFORMATION ACT PROCEDURES Procedures...: FOIA Appeals Officer, Council on Environmental Quality, Executive Office of the President, 722 Jackson... Appeals Officer to review the determination made by the Freedom of Information Officer. The letter should...

  19. 40 CFR 1515.8 - Appeals.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Protection of Environment COUNCIL ON ENVIRONMENTAL QUALITY FREEDOM OF INFORMATION ACT PROCEDURES Procedures...: FOIA Appeals Officer, Council on Environmental Quality, Executive Office of the President, 722 Jackson... Appeals Officer to review the determination made by the Freedom of Information Officer. The letter should...

  20. 40 CFR 1515.8 - Appeals.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Protection of Environment COUNCIL ON ENVIRONMENTAL QUALITY FREEDOM OF INFORMATION ACT PROCEDURES Procedures...: FOIA Appeals Officer, Council on Environmental Quality, Executive Office of the President, 722 Jackson... Appeals Officer to review the determination made by the Freedom of Information Officer. The letter should...

  1. 40 CFR 1515.8 - Appeals.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Protection of Environment COUNCIL ON ENVIRONMENTAL QUALITY FREEDOM OF INFORMATION ACT PROCEDURES Procedures...: FOIA Appeals Officer, Council on Environmental Quality, Executive Office of the President, 722 Jackson... Appeals Officer to review the determination made by the Freedom of Information Officer. The letter should...

  2. Report: EPA Needs to Conduct Environmental Justice Reviews of Its Programs, Policies, and Activities

    EPA Pesticide Factsheets

    Report #2006-P-00034, September 18, 2006. Our survey results showed that EPA senior management has not sufficiently directed program and regional offices to conduct environment justice reviews in accordance with Executive Order 12898.

  3. Environmental Professional’s Guide to Lean and Six Sigma: Executive Summary

    EPA Pesticide Factsheets

    Introduction to the guide that describes how Lean and Six Sigma relate to the environment and provides guidance on how environmental professionals can connect with Lean and Six Sigma activities to generate better environmental and operational results.

  4. 48 CFR 904.702 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Integration of Environment Safety, and Health into Work Planning and Execution, or the Radiation Protection... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Applicability. 904.702 Section 904.702 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL ADMINISTRATIVE MATTERS...

  5. Integrated Clinical Training for Space Flight Using a High-Fidelity Patient Simulator in a Simulated Microgravity Environment

    NASA Technical Reports Server (NTRS)

    Hurst, Victor; Doerr, Harold K.; Polk, J. D.; Schmid, Josef; Parazynksi, Scott; Kelly, Scott

    2007-01-01

    This viewgraph presentation reviews the use of telemedicine in a simulated microgravity environment using a patient simulator. For decades, telemedicine techniques have been used in terrestrial environments by many cohorts with varied clinical experience. The success of these techniques has recently been expanded to include microgravity environments aboard the International Space Station (ISS). In order to investigate how an astronaut crew medical officer will execute medical tasks in a microgravity environment while being remotely guided by a flight surgeon, the Medical Operation Support Team (MOST) used the simulated microgravity environment provided aboard DC-9 aircraft; teams of crew medical officers and remote flight surgeons performed several tasks on a patient simulator.

  6. Detecting and Characterizing Semantic Inconsistencies in Ported Code

    NASA Technical Reports Server (NTRS)

    Ray, Baishakhi; Kim, Miryung; Person, Suzette J.; Rungta, Neha

    2013-01-01

    Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.

  7. Detecting and Characterizing Semantic Inconsistencies in Ported Code

    NASA Technical Reports Server (NTRS)

    Ray, Baishakhi; Kim, Miryung; Person, Suzette; Rungta, Neha

    2013-01-01

    Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.

  8. Data-Flow Based Model Analysis

    NASA Technical Reports Server (NTRS)

    Saad, Christian; Bauer, Bernhard

    2010-01-01

    The concept of (meta) modeling combines an intuitive way of formalizing the structure of an application domain with a high expressiveness that makes it suitable for a wide variety of use cases, and it has therefore become an integral part of many areas in computer science. While the definition of modeling languages through the use of meta models, e.g., in the Unified Modeling Language (UML), is a well-understood process, their validation and the extraction of behavioral information is still a challenge. In this paper we present a novel approach for dynamic model analysis along with several fields of application. Examining the propagation of information along the edges and nodes of the model graph makes it possible to extend and simplify the definition of semantic constraints in comparison to the capabilities offered by, e.g., the Object Constraint Language. Performing a flow-based analysis also enables the simulation of dynamic behavior, thus providing an "abstract interpretation"-like analysis method for the modeling domain.
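    The flow-based analysis described above, propagating information along the edges of the model graph until nothing changes, follows the familiar dataflow fixpoint pattern. The sketch below propagates a reachability fact over an assumed adjacency list; both the graph and the propagated property are invented for illustration.

        # Fixpoint propagation of facts along the edges of a model graph.
        # Here the "fact" is simply the set of nodes known to reach each node.

        graph = {                     # assumed model graph (adjacency list)
            "A": ["B"],
            "B": ["C", "D"],
            "C": ["D"],
            "D": [],
        }

        facts = {node: {node} for node in graph}   # each node reaches itself

        changed = True
        while changed:                # iterate until a fixpoint is reached
            changed = False
            for node, successors in graph.items():
                for succ in successors:
                    merged = facts[succ] | facts[node]
                    if merged != facts[succ]:
                        facts[succ] = merged
                        changed = True

        print(facts["D"])             # {'A', 'B', 'C', 'D'} (order may vary)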

  9. "Towering genius disdains a beaten path" Abraham Lincoln.

    PubMed

    Ferguson-Paré, Mary; Mitchell, Gail J; Perkin, Karen; Stevenson, Lynn

    2002-01-01

    We see nursing leadership existing at all levels in nursing...all nurses leading. Nurse executives within academic health environments across Canada will be influencing health policy directions and dialogue within the profession nationally. They will be contributing to the development of a national agenda for nursing practice, education, research and leadership. These nurse executives will lead in a way that makes an invigorating impact on human service in health care environments and they will be dedicated to preparing the nursing leaders of tomorrow. The Academy of Canadian Executive Nurses will connect with the Office of Nursing Policy, Canadian Nurses Association, Canadian Association of University Schools of Nursing, Association of Canadian Academic Health Care Organizations and others to develop position papers regarding key issues such as patient safety, health human resource planning and leadership in the Canadian health care system. Our definition of professional nursing practice, fully integrated with education and research, will be advanced through these endeavours. The end result of a strong individual and collective voice will be improved patient outcomes supported by professional nursing practice in positive practice environments. This paper is intended to stimulate dialogue among nursing leaders in Canada, dislodge us from a long and traditional path, and place us firmly in a new millennium of leadership for the profession and practice of nursing, a style of leadership that is needed, wanted and supported by nurses and the clients we serve. It is the responsibility of those of us who lead in academic health science centres to be courageous for the students we support, the practitioners we lead and the renewal of the profession. We are the testing ground for nursing research, and need to be the source of innovation for nursing practice. It is incumbent on us to leap forward to engage a new vision of the professional practice of nursing with a reconfigured work design and work environment compatible with the new economy, workplace and workforce.

  10. Software environment for implementing engineering applications on MIMD computers

    NASA Technical Reports Server (NTRS)

    Lopez, L. A.; Valimohamed, K. A.; Schiff, S.

    1990-01-01

    In this paper the concept for a software environment for developing engineering application systems for multiprocessor hardware (MIMD) is presented. The philosophy employed is to solve the largest problems possible in a reasonable amount of time, rather than solve existing problems faster. In the proposed environment most of the problems concerning parallel computation and handling of large distributed data spaces are hidden from the application program developer, thereby facilitating the development of large-scale software applications. Applications developed under the environment can be executed on a variety of MIMD hardware; it protects the application software from the effects of a rapidly changing MIMD hardware technology.

  11. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    The Concurrent Image Processing Executive (CIPE) is a software system intended for developing and using image-processing application programs in a concurrent computing environment. Designed to shield the programmer from the complexities of the concurrent-system architecture, it provides an interactive image-processing environment for the end user. CIPE utilizes the architectural characteristics of a particular concurrent system to maximize efficiency while preserving architectural independence from the user and programmer. CIPE runs on a Mark-IIIfp 8-node hypercube computer and an associated SUN-4 host computer.

  12. A Thematic Analysis of Self-described Authentic Leadership Behaviors Among Experienced Nurse Executives.

    PubMed

    Alexander, Catherine; Lopez, Ruth Palan

    2018-01-01

    The aim of this study is to understand the behaviors experienced nurse executives use to create healthy work environments (HWEs). The constructs of authentic leadership formed the conceptual framework for the study. The American Association of Critical-Care Nurses recommends authentic leadership as the preferred style of leadership for creating and sustaining HWEs. Behaviors associated with authentic leadership in nursing are not well understood. A purposive sample of 17 experienced nurse executives were recruited from across the United States for this qualitative study. Thematic analysis was used to analyze the in-depth, semistructured interviews. Four constructs of authentic leaders were supported and suggest unique applications of each including self-awareness (a private and professional self), balanced processing (open hearted), transparency (limiting exposure), and moral leadership (nursing compass). Authentic leadership may provide a sound foundation to support nursing leadership practices; however, its application to the discipline requires additional investigation.

  13. The relationship between perceived social capital and the health promotion willingness of companies: a systematic telephone survey with chief executive officers in the information and communication technology sector.

    PubMed

    Jung, Julia; Nitzsche, Anika; Ernstmann, Nicole; Driller, Elke; Wasem, Jürgen; Stieler-Lorenz, Brigitte; Pfaff, Holger

    2011-03-01

    This study examines the association between perceived social capital and health promotion willingness (HPW) of companies from a chief executive officer's perspective. Data for the cross-sectional study were collected through telephone interviews with one chief executive officer from each of the randomly selected companies within the German information and communication technology sector. A hierarchical multivariate logistic regression analysis was performed. Results of the logistic regression analysis of data from a total of n = 522 interviews suggest that higher values of perceived social capital are associated with pronounced HPW in companies (odds ratio = 3.78; 95% confidence interval, 2.24 to 6.37). Our findings suggest that characteristics of high social capital, such as an established environment of trust as well as a feeling of common values and convictions, could help promote HPW.

  14. A Cloud-Based Simulation Architecture for Pandemic Influenza Simulation

    PubMed Central

    Eriksson, Henrik; Raciti, Massimiliano; Basile, Maurizio; Cunsolo, Alessandro; Fröberg, Anders; Leifler, Ola; Ekberg, Joakim; Timpka, Toomas

    2011-01-01

    High-fidelity simulations of pandemic outbreaks are resource consuming. Cluster-based solutions have been suggested for executing such complex computations. We present a cloud-based simulation architecture that utilizes computing resources both locally available and dynamically rented online. The approach uses the Condor framework for job distribution and management of the Amazon Elastic Compute Cloud (EC2) as well as local resources. The architecture has a web-based user interface that allows users to monitor and control simulation execution. In a benchmark test, the best cost-adjusted performance was recorded for the EC2 High-CPU Medium instance, while a field trial showed that the job configuration had significant influence on the execution time and that the network capacity of the master node could become a bottleneck. We conclude that it is possible to develop a scalable simulation environment that uses cloud-based solutions, while providing an easy-to-use graphical user interface. PMID:22195089

  15. Short-term memory, executive control, and children's route learning.

    PubMed

    Purser, Harry R M; Farran, Emily K; Courbois, Yannick; Lemahieu, Axelle; Mellier, Daniel; Sockeel, Pascal; Blades, Mark

    2012-10-01

    The aim of this study was to investigate route-learning ability in 67 children aged 5 to 11 years and to relate route-learning performance to the components of Baddeley's model of working memory. Children carried out tasks that included measures of verbal and visuospatial short-term memory and executive control, as well as measures of verbal and visuospatial long-term memory; the route-learning task was conducted using a maze in a virtual environment. In contrast to previous research, correlations were found between route-learning performance and both visuospatial and verbal memory tasks (the Corsi task, short-term pattern span, digit span, and visuospatial long-term memory). However, further analyses indicated that these relationships were mediated by executive control demands that were common to the tasks, with long-term memory explaining additional unique variance in route learning. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. BioBlocks: Programming Protocols in Biology Made Easier.

    PubMed

    Gupta, Vishal; Irimia, Jesús; Pau, Iván; Rodríguez-Patón, Alfonso

    2017-07-21

    The methods to execute biological experiments are evolving. Affordable fluid handling robots and on-demand biology enterprises are making automating entire experiments a reality. Automation offers the benefit of high-throughput experimentation, rapid prototyping, and improved reproducibility of results. However, learning to automate and codify experiments is a difficult task as it requires programming expertise. Here, we present a web-based visual development environment called BioBlocks for describing experimental protocols in biology. It is based on Google's Blockly and Scratch, and requires little or no experience in computer programming to automate the execution of experiments. The experiments can be specified, saved, modified, and shared between multiple users in an easy manner. BioBlocks is open-source and can be customized to execute protocols on local robotic platforms or remotely, that is, in the cloud. It aims to serve as a de facto open standard for programming protocols in Biology.

  17. Advanced Caution and Warning System, Final Report - 2011

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Aaseng, Gordon; Iverson, David; McCann, Robert S.; Robinson, Peter; Dittemore, Gary; Liolios, Sotirios; Baskaran, Vijay; Johnson, Jeremy; Lee, Charles

    2013-01-01

    The work described in this report is a continuation of the ACAWS work funded in fiscal year (FY) 2010 under the Exploration Technology Development Program (ETDP), Integrated Systems Health Management (ISHM) project. In FY 2010, we developed requirements for an ACAWS system and vetted the requirements with potential users via a concept demonstration system. In FY 2011, we developed a working prototype of aspects of that concept, with placeholders for technologies to be fully developed in future phases of the project. The objective is to develop general capability to assist operators with system health monitoring and failure diagnosis. Moreover, ACAWS was integrated with the Discrete Controls (DC) task of the Autonomous Systems and Avionics (ASA) project. The primary objective of DC is to demonstrate an electronic and interactive procedure display environment and multiple levels of automation (automatic execution by computer, execution by computer if the operator consents, and manual execution by the operator).

  18. The SERENITY Runtime Framework

    NASA Astrophysics Data System (ADS)

    Crespo, Beatriz Gallego-Nicasio; Piñuela, Ana; Soria-Rodriguez, Pedro; Serrano, Daniel; Maña, Antonio

    The SERENITY Runtime Framework (SRF) provides support for applications at runtime by managing S&D Solutions and monitoring the systems' context. The main functionality of the SRF, amongst others, is to provide S&D Solutions, by means of Executable Components, in response to applications' security requirements. The runtime environment is defined in the SRF through the S&D Library and Context Manager components. The S&D Library is a local S&D Artefact repository and stores S&D Classes, S&D Patterns, and S&D Implementations. The Context Manager component is in charge of storing and managing the information used by the SRF to select the most appropriate S&D Pattern for a given scenario. The SRF also manages the execution of Executable Components, the running realizations of the S&D Patterns, including their instantiation, deactivation, and control, and it provides communication and monitoring mechanisms as well as recovery and reconfiguration support.
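    The selection step sketched above, choosing an S&D Pattern whose provided properties satisfy an application's security requirements under the current context, can be illustrated as a simple filter over a pattern library. The pattern records and matching rule below are assumptions for the example and do not reproduce the SERENITY artefact model.

        # Toy S&D pattern selection: pick the first pattern in the library
        # whose provided properties cover the request and whose context
        # constraints hold in the current context.

        library = [
            {"name": "TLS-Channel", "provides": {"confidentiality", "integrity"},
             "requires_context": {"network": "online"}},
            {"name": "LocalSealedStorage", "provides": {"confidentiality"},
             "requires_context": {"network": "offline"}},
        ]

        def select_pattern(requested, context):
            for pattern in library:
                if requested <= pattern["provides"] and all(
                        context.get(k) == v
                        for k, v in pattern["requires_context"].items()):
                    return pattern["name"]
            return None

        print(select_pattern({"confidentiality"}, {"network": "online"}))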

  19. 40 CFR 1.33 - Office of Administration and Resources Management.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Management Interns, OHRM establishes policies; assesses and projects Agency executive needs and workforce... out human resources management projects of special interest to Agency management. The Office... Management. 1.33 Section 1.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL STATEMENT OF...

  20. 7 CFR 1942.310 - Other considerations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) Floodplains and wetlands. All projects must comply with Executive Order 11988 “Floodplain Management” and... grant program established by this subpart is to improve business, industry and employment in rural areas... projects that minimize the potential to adversely impact the environment. (2) Technical assistance. The...
