Sample records for distributed real-time dataflow

  1. Common spaceborne multicomputer operating system and development environment

    NASA Technical Reports Server (NTRS)

    Craymer, L. G.; Lewis, B. F.; Hayes, P. J.; Jones, R. L.

    1994-01-01

    A preliminary technical specification for a multicomputer operating system is developed. The operating system is targeted for spaceborne flight missions and provides a broad range of real-time functionality, dynamic remote code-patching capability, and system fault tolerance and long-term survivability features. Dataflow concepts are used for representing application algorithms. Functional features are included to ensure real-time predictability for a class of algorithms which require data-driven execution on an iterative steady state basis. The development environment supports the development of algorithm code, design of control parameters, performance analysis, simulation of real-time dataflow applications, and compiling and downloading of the resulting application.

  2. Performance analysis of a large-grain dataflow scheduling paradigm

    NASA Technical Reports Server (NTRS)

    Young, Steven D.; Wills, Robert W.

    1993-01-01

    A paradigm for scheduling computations on a network of multiprocessors using large-grain data flow scheduling at run time is described and analyzed. The computations to be scheduled must follow a static flow graph, while the schedule itself will be dynamic (i.e., determined at run time). Many applications characterized by static flow exist, and they include real-time control and digital signal processing. With the advent of computer-aided software engineering (CASE) tools for capturing software designs in dataflow-like structures, macro-dataflow scheduling becomes increasingly attractive, if not necessary. For parallel implementations, using the macro-dataflow method allows the scheduling to be insulated from the application designer and enables the maximum utilization of available resources. Further, by allowing multitasking, processor utilizations can approach 100 percent while they maintain maximum speedup. Extensive simulation studies are performed on 4-, 8-, and 16-processor architectures that reflect the effects of communication delays, scheduling delays, algorithm class, and multitasking on performance and speedup gains.
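
    The run-time behavior described above can be illustrated with a short sketch. The following is a generic illustration of large-grain (macro) dataflow dispatch, not the paper's simulated scheduler: the flow graph is static, but nodes are assigned to processors at run time, firing as soon as all of their inputs have arrived and a processor is free. The graph, node names, and processor count are invented for the example.

    ```python
    # Generic sketch of run-time large-grain dataflow dispatch (hypothetical graph).
    from collections import deque

    graph = {"sense": ["filter"], "filter": ["control", "log"], "control": [], "log": []}
    pending = {n: 0 for n in graph}              # number of unmet data dependencies
    for succs in graph.values():
        for s in succs:
            pending[s] += 1

    ready = deque(n for n, d in pending.items() if d == 0)
    free_processors, running, fired = 4, deque(), []

    while ready or running:
        # Dispatch: any ready node may start on any free processor.
        while ready and free_processors > 0:
            running.append(ready.popleft())
            free_processors -= 1
        # Completion: releasing a node's output tokens may enable its successors.
        node = running.popleft()
        free_processors += 1
        fired.append(node)
        for succ in graph[node]:
            pending[succ] -= 1
            if pending[succ] == 0:
                ready.append(succ)

    print(fired)   # one valid data-driven order, e.g. ['sense', 'filter', 'control', 'log']
    ```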

  3. Dataflow computing approach in high-speed digital simulation

    NASA Technical Reports Server (NTRS)

    Ercegovac, M. D.; Karplus, W. J.

    1984-01-01

    New computational tools and methodologies for the digital simulation of continuous systems were explored. Programmability and cost-effective performance in multiprocessor organizations for real-time simulation were investigated. The approach is based on functional-style languages and data flow computing principles, which allow for the natural representation of parallelism in algorithms and provide a suitable basis for the design of cost-effective, high-performance distributed systems. The objectives of this research are to: (1) perform a comparative evaluation of several existing data flow languages and develop an experimental data flow language suitable for real-time simulation using multiprocessor systems; (2) investigate the main issues that arise in the architecture and organization of data flow multiprocessors for real-time simulation; and (3) develop and apply performance evaluation models in typical applications.

  4. ATAMM enhancement and multiprocessing performance evaluation

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.

    1994-01-01

    The algorithm to architecture mapping model (ATAMM) is a Petri net based model which provides a strategy for periodic execution of a class of real-time algorithms on a multicomputer dataflow architecture. The execution of large-grained, decision-free algorithms on homogeneous processing elements is studied. The ATAMM provides an analytical basis for calculating performance bounds on throughput characteristics. Extension of the ATAMM as a strategy for cyclo-static scheduling provides for a truly distributed ATAMM multicomputer operating system. An ATAMM testbed consisting of a centralized graph manager and three processors, implemented with embedded firmware on 68HC11 microcontrollers, is described.

  5. Parceling the Power.

    ERIC Educational Resources Information Center

    Hiatt, Blanchard; Gwynne, Peter

    1984-01-01

    To make computing power broadly available and truly friendly, both soft and hard meshing and synchronization problems will have to be solved. Possible solutions and research related to these problems are discussed. Topics considered include compilers, parallelism, networks, distributed sensors, dataflow, CEDAR system (using dataflow principles),…

  6. Modeling heterogeneous processor scheduling for real time systems

    NASA Technical Reports Server (NTRS)

    Leathrum, J. F.; Mielke, R. R.; Stoughton, J. W.

    1994-01-01

    A new model is presented to describe dataflow algorithms implemented in a multiprocessing system. Called the resource/data flow graph (RDFG), the model explicitly represents cyclo-static processor schedules as circuits of processor arcs which reflect the order that processors execute graph nodes. The model also allows the guarantee of meeting hard real-time deadlines. When unfolded, the model identifies statically the processor schedule. The model therefore is useful for determining the throughput and latency of systems with heterogeneous processors. The applicability of the model is demonstrated using a space surveillance algorithm.

  7. Software Epistemology

    DTIC Science & Technology

    2016-03-01

    ...in-vitro decision to incubate a startup, Lexumo [7], which is developing a commercial Software as a Service (SaaS) vulnerability assessment... (acronym list excerpt:) LTS: Label Transition System; MUSE: Mining and Understanding Software Enclaves; RTEMS: Real-Time Executive for Multi-processor Systems; SaaS: Software as a Service; SSA: Static Single Assignment; SWE: Software Epistemology; UD/DU: Def-Use/Use-Def Chains (Dataflow Graph)

  8. Integrated Topside (InTop) Joint Navy - Industry Open Architecture Study

    DTIC Science & Technology

    2010-09-10

    Fig. 6.1-1 — Modified VRT dataflow key... Fig. 6.1-2 — Sample building block description using VRT nomenclature... ...converter (RF/IF) and the IF to RF converter (IF/RF) uses the VITA-49 format, also referred to as VRT (VITA Radio Transport), for real-time flow of signal...

  9. Co Modeling and Co Synthesis of Safety Critical Multi threaded Embedded Software for Multi Core Embedded Platforms

    DTIC Science & Technology

    2017-03-20

    ...computation, Prime Implicates, Boolean Abstraction, real-time embedded software, software synthesis, correct-by-construction software design, model... "...types for time-dependent data-flow networks". J.-P. Talpin, P. Jouvelot, S. Shukla. ACM-IEEE Conference on Methods and Models for System Design...

  10. Multiverse data-flow control.

    PubMed

    Schindler, Benjamin; Waser, Jürgen; Ribičić, Hrvoje; Fuchs, Raphael; Peikert, Ronald

    2013-06-01

    In this paper, we present a data-flow system which supports comparative analysis of time-dependent data and interactive simulation steering. The system creates data on-the-fly to allow for the exploration of different parameters and the investigation of multiple scenarios. Existing data-flow architectures provide no generic approach to handle modules that perform complex temporal processing such as particle tracing or statistical analysis over time. Moreover, there is no solution to create and manage module data, which is associated with alternative scenarios. Our solution is based on generic data-flow algorithms to automate this process, enabling elaborate data-flow procedures, such as simulation, temporal integration or data aggregation over many time steps in many worlds. To hide the complexity from the user, we extend the World Lines interaction techniques to control the novel data-flow architecture. The concept of multiple, special-purpose cursors is introduced to let users intuitively navigate through time and alternative scenarios. Users specify only what they want to see; the decision as to which data are required is handled automatically. The concepts are explained by taking the example of the simulation and analysis of material transport in levee-breach scenarios. To strengthen the general applicability, we demonstrate the investigation of vortices in an offline-simulated dam-break data set.

  11. A DICOM-based 2nd generation Molecular Imaging Data Grid implementing the IHE XDS-i integration profile.

    PubMed

    Lee, Jasper; Zhang, Jianguo; Park, Ryan; Dagliyan, Grant; Liu, Brent; Huang, H K

    2012-07-01

    A Molecular Imaging Data Grid (MIDG) was developed to address current informatics challenges in archival, sharing, search, and distribution of preclinical imaging studies between animal imaging facilities and investigator sites. This manuscript presents a 2nd generation MIDG replacing the Globus Toolkit with a new system architecture that implements the IHE XDS-i integration profile. Implementation and evaluation were conducted using a 3-site interdisciplinary test-bed at the University of Southern California. The 2nd generation MIDG design architecture replaces the initial design's Globus Toolkit with dedicated web services and XML-based messaging for dedicated management and delivery of multi-modality DICOM imaging datasets. The Cross-enterprise Document Sharing for Imaging (XDS-i) integration profile from the field of enterprise radiology informatics was adopted into the MIDG design because streamlined image registration, management, and distribution dataflow are likewise needed in preclinical imaging informatics systems as in enterprise PACS applications. Implementation of the MIDG is demonstrated at the University of Southern California Molecular Imaging Center (MIC) and two other sites with specified hardware, software, and network bandwidth. Evaluation of the MIDG involves data upload, download, and fault-tolerance testing scenarios using multi-modality animal imaging datasets collected at the USC Molecular Imaging Center. The upload, download, and fault-tolerance tests of the MIDG were performed multiple times using 12 collected animal study datasets. Upload and download times demonstrated reproducibility and improved real-world performance. Fault-tolerance tests showed that automated failover between Grid Node Servers has minimal impact on normal download times. Building upon the 1st generation concepts and experiences, the 2nd generation MIDG system improves accessibility of disparate animal-model molecular imaging datasets to users outside a molecular imaging facility's LAN using a new architecture, dataflow, and dedicated DICOM-based management web services. Productivity and efficiency of preclinical research for translational sciences investigators have been further streamlined for multi-center study data registration, management, and distribution.

  12. Compile-Time Schedulability Analysis of Communicating Concurrent Programs

    DTIC Science & Technology

    2006-06-28

    ...synchronize via the read and write operations on the FIFO channels. These operations have been implemented with the help of semaphores, which... (table-of-contents and figure-list fragments: 1.1.2 Synchronous Dataflow; 1.1.3 Boolean Dataflow; ...described by concurrent programs; 1.3 A synchronous dataflow model, its topology matrix, and repetition vector; 1.4 Select and...)

  13. PLAStiCC: Predictive Look-Ahead Scheduling for Continuous dataflows on Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor K.

    2014-05-27

    Scalable stream processing and continuous dataflow systems are gaining traction with the rise of big data due to the need for processing high velocity data in near real time. Unlike batch processing systems such as MapReduce and workflows, static scheduling strategies fall short for continuous dataflows due to the variations in the input data rates and the need for sustained throughput. The elastic resource provisioning of cloud infrastructure is valuable to meet the changing resource needs of such continuous applications. However, multi-tenant cloud resources introduce yet another dimension of performance variability that impacts the application’s throughput. In this paper we propose PLAStiCC, an adaptive scheduling algorithm that balances resource cost and application throughput using a prediction-based look-ahead approach. It addresses variations not only in the input data rates but also in the underlying cloud infrastructure. In addition, we also propose several simpler static scheduling heuristics that operate in the absence of an accurate performance prediction model. These static and adaptive heuristics are evaluated through extensive simulations using performance traces obtained from public and private IaaS clouds. Our results show an improvement of up to 20% in the overall profit as compared to the reactive adaptation algorithm.
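
    The kind of cost/throughput trade-off a prediction-based look-ahead planner weighs can be sketched in a few lines. This is not the PLAStiCC algorithm itself; all rates, capacities, and prices below are invented for illustration.

    ```python
    # Toy look-ahead provisioning decision for a continuous dataflow (invented numbers).
    predicted_rate = [120, 180, 260, 240, 150]    # predicted input msgs/sec per step
    vm_capacity = 100                             # msgs/sec one VM can sustain
    vm_cost_per_step = 0.05                       # cost of one VM for one step
    profit_per_msg = 0.001                        # value of each processed message

    def look_ahead_plan(rates):
        """Provision just enough VMs per step to cover the predicted load."""
        return [-(-r // vm_capacity) for r in rates]          # ceiling division

    def profit(rates, vms):
        served = sum(min(r, v * vm_capacity) for r, v in zip(rates, vms))
        return served * profit_per_msg - sum(vms) * vm_cost_per_step

    adaptive = look_ahead_plan(predicted_rate)
    static = [3] * len(predicted_rate)            # fixed peak provisioning for comparison
    print(profit(predicted_rate, adaptive), profit(predicted_rate, static))  # ~0.35 vs ~0.2
    ```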

  14. eHive: an artificial intelligence workflow system for genomic analysis.

    PubMed

    Severin, Jessica; Beal, Kathryn; Vilella, Albert J; Fitzgerald, Stephen; Schuster, Michael; Gordon, Leo; Ureta-Vidal, Abel; Flicek, Paul; Herrero, Javier

    2010-05-11

    The Ensembl project produces updates to its comparative genomics resources with each of its several releases per year. During each release cycle approximately two weeks are allocated to generate all the genomic alignments and the protein homology predictions. The number of calculations required for this task grows approximately quadratically with the number of species. We currently support 50 species in Ensembl and we expect the number to continue to grow in the future. We present eHive, a new fault tolerant distributed processing system initially designed to support comparative genomic analysis, based on blackboard systems, network distributed autonomous agents, dataflow graphs and block-branch diagrams. In the eHive system a MySQL database serves as the central blackboard and the autonomous agent, a Perl script, queries the system and runs jobs as required. The system allows us to define dataflow and branching rules to suit all our production pipelines. We describe the implementation of three pipelines: (1) pairwise whole genome alignments, (2) multiple whole genome alignments and (3) gene trees with protein homology inference. Finally, we show the efficiency of the system in real case scenarios. eHive allows us to produce computationally demanding results in a reliable and efficient way with minimal supervision and high throughput. Further documentation is available at: http://www.ensembl.org/info/docs/eHive/.
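
    The blackboard-plus-dataflow-rules idea described above can be sketched compactly. eHive itself uses a MySQL blackboard and Perl worker agents; the sketch below is only a stand-in illustration using the standard library's sqlite3 module, with invented table and analysis names.

    ```python
    # Stand-in sketch of a blackboard of jobs with dataflow rules (not the eHive schema).
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE job (id INTEGER PRIMARY KEY, analysis TEXT, status TEXT)")
    db.execute("INSERT INTO job (analysis, status) VALUES ('align_pair', 'READY')")

    # Dataflow rule: when an analysis finishes, which analysis to seed next (if any).
    dataflow_rules = {"align_pair": "build_tree", "build_tree": None}

    def worker(db):
        """Autonomous agent: repeatedly claim a READY job, run it, seed downstream jobs."""
        while True:
            row = db.execute("SELECT id, analysis FROM job WHERE status='READY' LIMIT 1").fetchone()
            if row is None:
                break
            job_id, analysis = row
            db.execute("UPDATE job SET status='RUN' WHERE id=?", (job_id,))
            # ... the actual analysis would run here ...
            db.execute("UPDATE job SET status='DONE' WHERE id=?", (job_id,))
            nxt = dataflow_rules.get(analysis)
            if nxt:
                db.execute("INSERT INTO job (analysis, status) VALUES (?, 'READY')", (nxt,))

    worker(db)
    print(db.execute("SELECT analysis, status FROM job").fetchall())
    ```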

  15. Master-slave mixed arrays for data-flow computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, T.L.; Fisher, P.D.

    1983-01-01

    Control cells (masters) and computation cells (slaves) are mixed in regular geometric patterns to form reconfigurable arrays known as master-slave mixed arrays (MSMAs). Interconnections of the corners and edges of the hexagonal control cells and the edges of the hexagonal computation cells are used to construct synchronous and asynchronous communication networks, which support local computation and local communication. Data-driven computations result in self-directed ring pipelines within the MSMA, and composite data-flow computations are executed in a pipelined fashion. By viewing an MSMA as a computing network of tightly-linked ring pipelines, data-flow programs can be uniformly distributed over these pipelines for efficient resource utilisation. 9 references.

  16. Portable inference engine: An extended CLIPS for real-time production systems

    NASA Technical Reports Server (NTRS)

    Le, Thach; Homeier, Peter

    1988-01-01

    The present C-Language Integrated Production System (CLIPS) architecture has not been optimized to deal with the constraints of real-time production systems. Matching in CLIPS is based on the Rete Net algorithm, whose assumption of working memory stability might fail to be satisfied in a system subject to real-time dataflow. Further, the CLIPS forward-chaining control mechanism with a predefined conflict resolution strategy may not effectively focus the system's attention on situation-dependent current priorities, or appropriately address different kinds of knowledge which might appear in a given application. Portable Inference Engine (PIE) is a production system architecture based on CLIPS which attempts to create a more general tool while addressing the problems of real-time expert systems. Features of the PIE design include a modular knowledge base, a modified Rete Net algorithm, a bi-directional control strategy, and multiple user-defined conflict resolution strategies. Problems associated with real-time applications are analyzed and an explanation is given for how the PIE architecture addresses these problems.

  17. Decaf: Decoupled Dataflows for In Situ High-Performance Workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dreher, M.; Peterka, T.

    Decaf is a dataflow system for the parallel communication of coupled tasks in an HPC workflow. The dataflow can perform arbitrary data transformations ranging from simply forwarding data to complex data redistribution. Decaf does this by allowing the user to allocate resources and execute custom code in the dataflow. All communication through the dataflow is efficient parallel message passing over MPI. The runtime for calling tasks is entirely message-driven; Decaf executes a task when all messages for the task have been received. Such a message-driven runtime allows cyclic task dependencies in the workflow graph, for example, to enact computational steering based on the result of downstream tasks. Decaf includes a simple Python API for describing the workflow graph. This allows Decaf to stand alone as a complete workflow system, but Decaf can also be used as the dataflow layer by one or more other workflow systems to form a heterogeneous task-based computing environment. In one experiment, we couple a molecular dynamics code with a visualization tool using the FlowVR and Damaris workflow systems and Decaf for the dataflow. In another experiment, we test the coupling of a cosmology code with Voronoi tessellation and density estimation codes using MPI for the simulation, the DIY programming model for the two analysis codes, and Decaf for the dataflow. Such workflows consisting of heterogeneous software infrastructures exist because components are developed separately with different programming models and runtimes, and this is the first time that such heterogeneous coupling of diverse components was demonstrated in situ on HPC systems.

  18. eHive: An Artificial Intelligence workflow system for genomic analysis

    PubMed Central

    2010-01-01

    Background The Ensembl project produces updates to its comparative genomics resources with each of its several releases per year. During each release cycle approximately two weeks are allocated to generate all the genomic alignments and the protein homology predictions. The number of calculations required for this task grows approximately quadratically with the number of species. We currently support 50 species in Ensembl and we expect the number to continue to grow in the future. Results We present eHive, a new fault tolerant distributed processing system initially designed to support comparative genomic analysis, based on blackboard systems, network distributed autonomous agents, dataflow graphs and block-branch diagrams. In the eHive system a MySQL database serves as the central blackboard and the autonomous agent, a Perl script, queries the system and runs jobs as required. The system allows us to define dataflow and branching rules to suit all our production pipelines. We describe the implementation of three pipelines: (1) pairwise whole genome alignments, (2) multiple whole genome alignments and (3) gene trees with protein homology inference. Finally, we show the efficiency of the system in real case scenarios. Conclusions eHive allows us to produce computationally demanding results in a reliable and efficient way with minimal supervision and high throughput. Further documentation is available at: http://www.ensembl.org/info/docs/eHive/. PMID:20459813

  19. Latency in Distributed Acquisition and Rendering for Telepresence Systems.

    PubMed

    Ohl, Stephan; Willert, Malte; Staadt, Oliver

    2015-12-01

    Telepresence systems use 3D techniques to create a more natural human-centered communication over long distances. This work concentrates on the analysis of latency in telepresence systems where acquisition and rendering are distributed. Keeping latency low is important to immerse users in the virtual environment. To better understand latency problems and to identify the source of such latency, we focus on the decomposition of system latency into sub-latencies. We contribute a model of latency and show how it can be used to estimate latencies in a complex telepresence dataflow network. To compare the estimates with real latencies in our prototype, we modify two common latency measurement methods. This presented methodology enables the developer to optimize the design, find implementation issues and gain deeper knowledge about specific sources of latency.

  20. Proceedings: Sisal `93

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feo, J.T.

    1993-10-01

    This report contains papers on: Programmability and performance issues; The case of an iterative partial differential equation solver; Implementing the kernel of the Australian Region Weather Prediction Model in Sisal; Even and quarter-even prime length symmetric FFTs and their Sisal implementations; Top-down thread generation for Sisal; Overlapping communications and computations on NUMA architectures; Compiling technique based on dataflow analysis for the functional programming language Valid; Copy elimination for true multidimensional arrays in Sisal 2.0; Increasing parallelism for an optimization that reduces copying in IF2 graphs; Caching in on Sisal; Cache performance of Sisal vs. FORTRAN; FFT algorithms on a shared-memory multiprocessor; A parallel implementation of nonnumeric search problems in Sisal; Computer vision algorithms in Sisal; Compilation of Sisal for a high-performance data driven vector processor; Sisal on distributed memory machines; A virtual shared addressing system for distributed memory Sisal; Developing a high-performance FFT algorithm in Sisal for a vector supercomputer; Implementation issues for IF2 on a static data-flow architecture; and Systematic control of parallelism in array-based data-flow computation. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  1. Automating the Processing of Earth Observation Data

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Pang, Wan-Lin; Nemani, Ramakrishna; Votava, Petr

    2003-01-01

    NASA's vision for Earth science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we are developing a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products.

  2. A logical model of cooperating rule-based systems

    NASA Technical Reports Server (NTRS)

    Bailin, Sidney C.; Moore, John M.; Hilberg, Robert H.; Murphy, Elizabeth D.; Bahder, Shari A.

    1989-01-01

    A model is developed to assist in the planning, specification, development, and verification of space information systems involving distributed rule-based systems. The model is based on an analysis of possible uses of rule-based systems in control centers. This analysis is summarized as a data-flow model for a hypothetical intelligent control center. From this data-flow model, the logical model of cooperating rule-based systems is extracted. This model consists of four layers of increasing capability: (1) communicating agents, (2) belief-sharing knowledge sources, (3) goal-sharing interest areas, and (4) task-sharing job roles.

  3. MAX - An advanced parallel computer for space applications

    NASA Technical Reports Server (NTRS)

    Lewis, Blair F.; Bunker, Robert L.

    1991-01-01

    MAX is a fault-tolerant multicomputer hardware and software architecture designed to meet the needs of NASA spacecraft systems. It consists of conventional computing modules (computers) connected via a dual network topology. One network is used to transfer data among the computers and between computers and I/O devices. This network's topology is arbitrary. The second network operates as a broadcast medium for operating system synchronization messages and supports the operating system's Byzantine resilience. A fully distributed operating system supports multitasking in an asynchronous event and data driven environment. A large grain dataflow paradigm is used to coordinate the multitasking and provide easy control of concurrency. It is the basis of the system's fault tolerance and allows both static and dynamic location of tasks. Redundant execution of tasks with software voting of results may be specified for critical tasks. The dataflow paradigm also supports simplified software design, test and maintenance. A unique feature is a method for reliably patching code in an executing dataflow application.

  4. Task scheduling in dataflow computer architectures

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict the transient and steady state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady state performance. The effect of such transformations on the resource requirements or under resource constraints requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine if new scheduling heuristics can be developed that can exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady state execution of a dataflow graph. A set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools previously developed. Initial studies on the performance of these algorithms were done to examine the effects of application algorithm transformations as measured by such quantities as number of processors, time between outputs, time between input and output, communication time, and memory size.

  5. Rapid Prototyping of High Performance Signal Processing Applications

    NASA Astrophysics Data System (ADS)

    Sane, Nimish

    Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this region of high performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We, thus, have a vast design space to explore based on performance trade-offs, and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration, and develop scalable and portable prototypes, model based design tools are increasingly used in design and implementation of embedded systems. These tools allow scalable high-level representations, model based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstractions and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows: 1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages. We have shown how an underlying design tool can systematically exploit a high-level application specification consisting of topological patterns in various aspects of the design flow. 2. We have formulated the core functional dataflow (CFDF) model of computation, which can be used to model a wide variety of deterministic dynamic dataflow behaviors. We have also presented key features of the CFDF model and tools based on these features. These tools provide support for heterogeneous dataflow behaviors, an intuitive and common framework for functional specification, support for functional simulation, portability from several existing dataflow models to CFDF, integrated emphasis on minimally-restricted specification of actor functionality, and support for efficient static, quasi-static, and dynamic scheduling techniques. 3. We have developed a generalized scheduling technique for CFDF graphs based on decomposition of a CFDF graph into static graphs that interact at run-time. Furthermore, we have refined this generalized scheduling technique using a new notion of "mode grouping," which better exposes the underlying static behavior. We have also developed a scheduling technique for a class of dynamic applications that generates parameterized looped schedules (PLSs), which can handle dynamic dataflow behavior without major limitations on compile-time predictability. 4. We have demonstrated the use of dataflow-based approaches for design and implementation of radio astronomy DSP systems using an application example of a tunable digital downconverter (TDD) for spectrometers. 
Design and implementation of this module has been an integral part of this thesis work. This thesis demonstrates a design flow that consists of a high-level software prototype, analysis, and simulation using the dataflow interchange format (DIF) tool, and integration of this design with the existing tool flow for the target implementation on an FPGA platform, called interconnect break-out board (IBOB). We have also explored the trade-off between low hardware cost for fixed configurations of digital downconverters and flexibility offered by TDD designs. 5. This thesis has contributed significantly to the development and release of the latest version of a graph package oriented toward models of computation (MoCGraph). Our enhancements to this package include support for tree data structures, and generalized schedule trees (GSTs), which provide a useful data structure for a wide variety of schedule representations. Our extensions to the MoCGraph package provided key support for the CFDF model, and functional simulation capabilities in the DIF package.
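
    The enable/invoke discipline behind CFDF-style dynamic dataflow actors can be rendered compactly. The sketch below is illustrative only and is not the DIF or LIDE API: each mode of the actor has fixed token consumption, enable() checks token availability for the current mode, and invoke() fires that mode and selects the next one. The switch-like actor and its I/O queues are invented for the example.

    ```python
    # Minimal CFDF-style actor with two modes (illustrative; not a real tool's API).
    from collections import deque

    class SwitchActor:
        """Routes a data token to one of two outputs based on a control token."""
        modes = {"control": {"ctrl": 1, "data": 0}, "route": {"ctrl": 0, "data": 1}}

        def __init__(self, ctrl_in, data_in, out_true, out_false):
            self.ctrl, self.data = ctrl_in, data_in
            self.outs = {True: out_true, False: out_false}
            self.mode = "control"
            self.flag = None

        def enable(self):
            need = self.modes[self.mode]
            return len(self.ctrl) >= need["ctrl"] and len(self.data) >= need["data"]

        def invoke(self):
            if self.mode == "control":
                self.flag = self.ctrl.popleft()   # consume one control token
                self.mode = "route"
            else:
                self.outs[self.flag].append(self.data.popleft())
                self.mode = "control"

    ctrl, data = deque([True, False]), deque([10, 20])
    out_t, out_f = deque(), deque()
    actor = SwitchActor(ctrl, data, out_t, out_f)
    while actor.enable():
        actor.invoke()
    print(list(out_t), list(out_f))    # [10] [20]
    ```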

  6. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    PubMed Central

    2011-01-01

    Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples. PMID:21352538

  7. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines.

    PubMed

    Cieślik, Marcin; Mura, Cameron

    2011-02-25

    Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.
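
    The "pipeline as nested higher-order maps over pooled resources" idea described in these two records can be sketched with nothing but the standard library. This is not the PaPy API; the component functions and data are invented, and a real pipeline would stream items rather than materialize each stage.

    ```python
    # A linear two-stage workflow is literally a nested map: gc_count(parse(x)).
    from multiprocessing import Pool

    def parse(record):                 # stage 1: a user-written, data-coupled component
        return record.strip().upper()

    def gc_count(sequence):            # stage 2: consumes stage 1 output via a "data-pipe"
        return sequence.count("G") + sequence.count("C")

    if __name__ == "__main__":
        records = ["acgt\n", "ggcc\n", "atta\n"]
        with Pool(processes=2) as pool:                      # flexibly pooled compute resources
            parsed = pool.map(parse, records, chunksize=1)   # batches of adjustable size
            scores = pool.map(gc_count, parsed, chunksize=1)
        print(scores)                                        # [2, 4, 0]
    ```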

  8. Isomorphisms between Petri nets and dataflow graphs

    NASA Technical Reports Server (NTRS)

    Kavi, Krishna M.; Buckles, Billy P.; Bhat, U. Narayan

    1987-01-01

    Dataflow graphs are a generalized model of computation. Uninterpreted dataflow graphs with nondeterminism resolved via probabilities are shown to be isomorphic to a class of Petri nets known as free choice nets. Petri net analysis methods are readily available in the literature and this result makes those methods accessible to dataflow research. Nevertheless, combinatorial explosion can render Petri net analysis inoperative. Using a previously known technique for decomposing free choice nets into smaller components, it is demonstrated that, in principle, it is possible to determine aspects of the overall behavior from the particular behavior of components.
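
    The free-choice restriction the result relies on is purely structural and easy to test. As a minimal sketch (the example net is made up): a net is free-choice when, for every place-to-transition arc, either that transition is the only output of the place or that place is the only input of the transition.

    ```python
    # Structural free-choice check for a small, made-up Petri net.
    place_outputs = {"p1": {"t1"}, "p2": {"t2", "t3"}, "p3": {"t3"}}      # p -> output transitions
    transition_inputs = {"t1": {"p1"}, "t2": {"p2"}, "t3": {"p2", "p3"}}  # t -> input places

    def is_free_choice(place_outputs, transition_inputs):
        for p, outs in place_outputs.items():
            for t in outs:
                # The arc (p, t) violates free choice if p has other outputs
                # and t has other inputs.
                if len(outs) > 1 and len(transition_inputs[t]) > 1:
                    return False
        return True

    print(is_free_choice(place_outputs, transition_inputs))   # False: the arc p2 -> t3 violates it
    ```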

  9. Dataflow Design Tool: User's Manual

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1996-01-01

    The Dataflow Design Tool is a software tool for selecting a multiprocessor scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. The software tool implements graph-search algorithms and analysis techniques based on the dataflow paradigm. Dataflow analyses provided by the software are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool provides performance optimization through the inclusion of artificial precedence constraints among the schedulable tasks. The user interface and tool capabilities are described. Examples are provided to demonstrate the analysis, scheduling, and optimization functions facilitated by the tool.
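
    Two of the classic bounds mentioned above reduce to simple graph computations for a repetitively executed acyclic dataflow graph: latency (time between input and output) is bounded below by the critical path, and time between outputs is bounded below by total work divided by processor count. The sketch below is illustrative only, with invented task times, and is not the Design Tool's own algorithm.

    ```python
    # Lower bounds on latency (TBIO) and output period (TBO) for a small, made-up graph.
    task_time = {"a": 3.0, "b": 5.0, "c": 2.0, "d": 4.0}
    edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
    processors = 2

    def critical_path(task_time, edges):
        succs = {t: [] for t in task_time}
        for u, v in edges:
            succs[u].append(v)
        memo = {}
        def longest_from(t):
            if t not in memo:
                memo[t] = task_time[t] + max((longest_from(s) for s in succs[t]), default=0.0)
            return memo[t]
        return max(longest_from(t) for t in task_time)

    tbio_bound = critical_path(task_time, edges)        # latency can be no smaller than this
    tbo_bound = sum(task_time.values()) / processors    # steady-state output period bound
    print(tbio_bound, tbo_bound)                        # 12.0 7.0
    ```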

  10. Compiler analysis for irregular problems in FORTRAN D

    NASA Technical Reports Server (NTRS)

    Vonhanxleden, Reinhard; Kennedy, Ken; Koelbel, Charles; Das, Raja; Saltz, Joel

    1992-01-01

    We developed a dataflow framework which provides a basis for rigorously defining strategies to make use of runtime preprocessing methods for distributed memory multiprocessors. In many programs, several loops access the same off-processor memory locations. Our runtime support gives us a mechanism for tracking and reusing copies of off-processor data. A key aspect of our compiler analysis strategy is to determine when it is safe to reuse copies of off-processor data. Another crucial function of the compiler analysis is to identify situations which allow runtime preprocessing overheads to be amortized. This dataflow analysis will make it possible to effectively use the results of interprocedural analysis in our efforts to reduce interprocessor communication and the need for runtime preprocessing.

  11. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers

    PubMed Central

    Filipovic, Nenad D.

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. Breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of the already existing algorithm for region-of-interest based image segmentation for mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As a dataflow engine (DFE) of such HPRDC, Maxeler's acceleration card is used. The experiments for examining the acceleration of that algorithm on the Reconfigurable Dataflow Computers (RDCs) are performed with two types of mammogram images with different resolutions. There were, also, several DFE configurations and each of them gave a different acceleration value of algorithm execution. Those acceleration values are presented and experimental results showed good acceleration. PMID:28611851

  12. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers.

    PubMed

    Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. Breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of the already existing algorithm for region-of-interest based image segmentation for mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As a dataflow engine (DFE) of such HPRDC, Maxeler's acceleration card is used. The experiments for examining the acceleration of that algorithm on the Reconfigurable Dataflow Computers (RDCs) are performed with two types of mammogram images with different resolutions. There were, also, several DFE configurations and each of them gave a different acceleration value of algorithm execution. Those acceleration values are presented and experimental results showed good acceleration.

  13. Vulnerability detection using data-flow graphs and SMT solvers

    DTIC Science & Technology

    2016-10-31

    ...concerns. The framework is modular and pipelined to allow scalable analysis on distributed systems. Our vulnerability detection framework employs machine... Design: We designed the framework to be modular to enable flexible reuse and extendibility. In its current form, our framework performs the following...

  14. Advanced Operating System Technologies

    NASA Astrophysics Data System (ADS)

    Cittolin, Sergio; Riccardi, Fabio; Vascotto, Sandro

    In this paper we describe an R&D effort to define an OS architecture suitable for the requirements of the Data Acquisition and Control of an LHC experiment. Large distributed computing systems are foreseen to be the core part of the DAQ and Control system of the future LHC experiments. Networks of thousands of processors, handling dataflows of several gigabytes per second, with very strict timing constraints (microseconds), will become a common experience in the following years. Problems like distributed scheduling, real-time communication protocols, failure tolerance, distributed monitoring and debugging will have to be faced. A solid software infrastructure will be required to manage this very complicated environment; at this moment neither does CERN have the necessary expertise to build it, nor does any similar commercial implementation exist. Fortunately these problems are not unique to particle and high energy physics experiments, and the current research work in the distributed systems field, especially in the distributed operating systems area, is trying to address many of the above mentioned issues. The world that we are going to face in the next ten years will be quite different and surely much more interconnected than the one we see now. Very ambitious projects exist, planning to link towns, nations and the world in a single "Data Highway". Teleconferencing, Video on Demand, and Distributed Multimedia Applications are just a few examples of the very demanding tasks to which the computer industry is committing itself. These projects are triggering a great research effort in the distributed, real-time, micro-kernel based operating systems field and in the software engineering area. The purpose of our group is to collect the outcome of these different research efforts, and to establish a working environment where the different ideas and techniques can be tested, evaluated and possibly extended, to address the requirements of a DAQ and Control System suitable for LHC. Our work started in the second half of 1994, with a research agreement between CERN and Chorus Systemes (France), world leader in micro-kernel OS technology. The Chorus OS is targeted at distributed real-time applications, and it can very efficiently support different "OS personalities" in the same environment, such as POSIX, UNIX, and a CORBA-compliant distributed object architecture. Projects are being set up to verify the suitability of our work for LHC applications: we are building a scaled-down prototype of the DAQ system foreseen for the CMS experiment at LHC, where we will directly test our protocols and where we will be able to make measurements and benchmarks, guiding our development and allowing us to build an analytical model of the system, suitable for simulation and large-scale verification.

  15. Solving Partial Differential Equations in a data-driven multiprocessor environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Lin, C.M.; Hosseiniyar, M.

    1988-12-31

    Partial differential equations can be found in a host of engineering and scientific problems. The emergence of new parallel architectures has spurred research in the definition of parallel PDE solvers. Concurrently, highly programmable systems such as data-flow architectures have been proposed for the exploitation of large scale parallelism. The implementation of some Partial Differential Equation solvers (such as the Jacobi method) on a tagged token data-flow graph is demonstrated here. Asynchronous methods (chaotic relaxation) are studied and new scheduling approaches (the Token No-Labeling scheme) are introduced in order to support the implementation of the asynchronous methods in a data-driven environment. New high-level data-flow language program constructs are introduced in order to handle chaotic operations. Finally, the performance of the program graphs is demonstrated by a deterministic simulation of a message passing data-flow multiprocessor. An analysis of the overhead in the data-flow graphs is undertaken to demonstrate the limits of parallel operations in dataflow PDE program graphs.
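
    As a concrete instance of the kind of iterative solver being mapped onto a dataflow graph, the sketch below runs a plain Jacobi relaxation on a 1-D Laplace problem (grid size and tolerance are arbitrary, and this is ordinary sequential Python, not a dataflow implementation). Each new value depends only on its neighbours' previous values, so all updates within a sweep are independent firings; the chaotic (asynchronous) variant relaxes the requirement that a full sweep complete before its values are consumed.

    ```python
    # Jacobi relaxation for a 1-D Laplace problem with fixed boundary values.
    u = [0.0] * 10
    u[0], u[-1] = 100.0, 0.0          # boundary conditions

    for sweep in range(10_000):
        new_u = u[:]
        for i in range(1, len(u) - 1):
            new_u[i] = 0.5 * (u[i - 1] + u[i + 1])   # depends only on old neighbour values
        if max(abs(a - b) for a, b in zip(new_u, u)) < 1e-6:
            break
        u = new_u

    print([round(x, 2) for x in u])   # approximately linear between the two boundaries
    ```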

  16. Information and Networking Technologies in Russian Libraries. UDT Occasional Paper #1.

    ERIC Educational Resources Information Center

    International Federation of Library Associations and Institutions, Ottawa (Ontario). International Office for Universal Dataflow & Telecommunications.

    The Universal Dataflow and Telecommunications (UDT) Occasional Papers distribute information on the use of networking, information technology and telecommunications by and of interest to the international library community. This occasional paper is comprised of three papers related to technologies in Russian libraries: (1) "The First Russian…

  17. A software tool for dataflow graph scheduling

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1994-01-01

    A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on multiple processors. The dataflow paradigm is very useful in exposing the parallelism inherent in algorithms. It provides a graphical and mathematical model which describes a partial ordering of algorithm tasks based on data precedence.

  18. Parallel design patterns for a low-power, software-defined compressed video encoder

    NASA Astrophysics Data System (ADS)

    Bruns, Michael W.; Hunt, Martin A.; Prasad, Durga; Gunupudi, Nageswara R.; Sonachalam, Sekar

    2011-06-01

    Video compression algorithms such as H.264 offer much potential for parallel processing that is not always exploited by the technology of a particular implementation. Consumer mobile encoding devices often achieve real-time performance and low power consumption through parallel processing in Application Specific Integrated Circuit (ASIC) technology, but many other applications require a software-defined encoder. High quality compression features needed for some applications such as 10-bit sample depth or 4:2:2 chroma format often go beyond the capability of a typical consumer electronics device. An application may also need to efficiently combine compression with other functions such as noise reduction, image stabilization, real time clocks, GPS data, mission/ESD/user data or software-defined radio in a low power, field upgradable implementation. Low power, software-defined encoders may be implemented using a massively parallel memory-network processor array with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. A dataflow programming methodology may be used to express all of the encoding processes including motion compensation, transform and quantization, and entropy coding. This is a declarative programming model in which the parallelism of the compression algorithm is expressed as a hierarchical graph of tasks with message communication. Data parallel and task parallel design patterns are supported without the need for explicit global synchronization control. An example is described of an H.264 encoder developed for a commercially available, massively parallel memory-network processor device.

  19. VORBrouter: A dynamic data routing system for Real-Time Seismic networks

    NASA Astrophysics Data System (ADS)

    Hansen, T.; Vernon, F.; Lindquist, K.; Orcutt, J.

    2004-12-01

    For anyone who has managed a moderately complex buffered real-time data transport system, the need for reliable adaptive data transport is clear. The ROADNet VORBrouter system, an extension to the ROADNet data catalog system [AGU-2003, Dynamic Dataflow Topology Monitoring for Real-time Seismic Networks], allows dynamic routing of real-time seismic data from sensor to end-user. Traditional networks consist of a series of data buffer computers with data transport interconnections configured by hand. This allows for arbitrarily complex data networks, which can often exceed full comprehension by network administrators, sometimes resulting in data loops or accidental data cutoff. In order to manage data transport systems in the event of a network failure, a network administrator must be called upon to change the data transport paths and to recover the missing data. Using VORBrouter, administrators can sleep at night while still providing 7/24 uninterrupted data streams at realistic cost. This software package uses information from the ROADNet data catalog system to route packets around failed link outages and to new consumers in real-time. Dynamic data routing protocols operating on top of the Antelope Data buffering layer allow authorized users to request data sets from their local buffer and to have them delivered from anywhere within the network of buffers. The VORBrouter software also allows for dynamic routing around network outages, and the elimination of duplicate data paths within the network, while maintaining the nearly lossless data transport features exhibited by the underlying Antelope system. We present the design of the VORBrouter system, its features, limitations and some future research directions.

  20. Checking for Circular Dependencies in Distributed Stream Programs

    DTIC Science & Technology

    2011-08-29

    ...extensions to express new complexities more convenient. Teleport messaging (TMG) in the StreamIt language [30] is an example. 1.1 StreamIt Language... dynamicities to an FIR computation... Thies et al. in [30] give a TMG model for distributed stream programs. TMG is a mechanism that implements control... messages for stream graphs. The TMG mechanism is designed not to interfere with original dataflow graphs' structures and scheduling, therefore a key...

  1. Study of Thread Level Parallelism in a Video Encoding Application for Chip Multiprocessor Design

    NASA Astrophysics Data System (ADS)

    Debes, Eric; Kaine, Greg

    2002-11-01

    In media applications there is a high level of available thread level parallelism (TLP). In this paper we study the intra TLP in a video encoder. We show that a well-distributed highly optimized encoder running on a symmetric multiprocessor (SMP) system can run 3.2 times faster on a 4-way SMP machine than on a single processor. The multithreaded encoder running on an SMP system is then used to understand the requirements of a chip multiprocessor (CMP) architecture, which is one possible architectural direction to better exploit TLP. In the framework of this study, we use a software approach to evaluate the dataflow between processors for the video encoder running on an SMP system. An estimation of the dataflow is done with L2 cache miss event counters using the Intel® VTune™ performance analyzer. The experimental measurements are compared to theoretical results.
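
    For reference, the quoted figure can be converted into parallel efficiency and an Amdahl's-law estimate of the parallelized fraction. This is back-of-envelope arithmetic only; the paper itself does not report these derived numbers.

    ```python
    # 3.2x speedup on a 4-way SMP: 80% parallel efficiency, ~92% parallel fraction by Amdahl.
    processors, speedup = 4, 3.2
    efficiency = speedup / processors                 # 0.8
    # Amdahl: speedup = 1 / ((1 - p) + p / N)  ->  solve for the parallel fraction p
    p = (1 - 1 / speedup) / (1 - 1 / processors)
    print(efficiency, round(p, 3))                    # 0.8 0.917
    ```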

  2. Modeling and prototyping of biometric systems using dataflow programming

    NASA Astrophysics Data System (ADS)

    Minakova, N.; Petrov, I.

    2018-01-01

    The development of biometric systems is a labor-intensive process. Therefore, the creation and analysis of approaches and techniques to support it is an urgent task. This article presents a technique for modeling and prototyping biometric systems based on dataflow programming. The technique includes three main stages: the development of functional blocks, the creation of a dataflow graph, and the generation of a prototype. A specially developed software modeling environment that implements this technique is described. As an example of the use of this technique, the implementation of an iris localization subsystem is demonstrated. A modification of dataflow programming is suggested to solve the problem of the undefined order of block activation. The main advantage of the presented technique is the ability to visually display and design the model of the biometric system, the rapid creation of a working prototype, and the reuse of previously developed functional blocks.

  3. Dataflow Architectures.

    DTIC Science & Technology

    1986-02-12

    ...of Electrical Engineering and Computer Science, MIT, Cambridge, MA, June 1983. 33. Hiraki, K., K. Nishida and T. Shimada. "Evaluation of Associative... J. R. Gurd. "A Practical Dataflow Computer". Computer 15, 2 (February 1982), 51-57. 50. Yuba, T., T. Shimada, K. Hiraki, and H. Kashiwagi. Sigma-1: A...

  4. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    PubMed

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-09

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described on a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
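
    The core dataflow idea, that tasks which declare their inputs explicitly can run in parallel as soon as those inputs exist, can be sketched in a few lines; the tiny executor and task names below are illustrative assumptions, not the Copernicus API:

      # Sketch: dependency-driven execution; anything whose inputs are ready runs in parallel.
      from concurrent.futures import ThreadPoolExecutor

      def run_dataflow(tasks, deps):
          """tasks: name -> callable(list_of_dependency_results); deps: name -> [names]."""
          done = {}
          remaining = dict(deps)
          with ThreadPoolExecutor() as pool:
              while remaining:
                  ready = [t for t, d in remaining.items() if all(x in done for x in d)]
                  futures = {t: pool.submit(tasks[t], [done[x] for x in remaining[t]])
                             for t in ready}
                  for t, f in futures.items():
                      done[t] = f.result()
                      del remaining[t]
          return done

      # Two independent simulations run concurrently; the analysis waits for both.
      tasks = {"sim_a": lambda _: 1.0, "sim_b": lambda _: 2.0,
               "analyse": lambda xs: sum(xs) / len(xs)}
      deps = {"sim_a": [], "sim_b": [], "analyse": ["sim_a", "sim_b"]}
      print(run_dataflow(tasks, deps)["analyse"])   # -> 1.5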

  5. Managing Parallelism and Resources in Scientific Dataflow Programs

    DTIC Science & Technology

    1990-03-01

    1983. [52] K. Hiraki, K. Nishida, S. Sekiguchi, and T. Shimada. Maintenance architecture and its LSI implementation of a dataflow computer with a... Hiraki, and K. Nishida. An architecture of a data flow machine and its evaluation. In Proceedings of CompCon 84, pages 486-490. IEEE, 1984. [84] N

  6. Automated Data Processing as an AI Planning Problem

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Pang, Wanlin; Nemani, Ramakrishna; Votava, Petr

    2003-01-01

    NASA's vision for Earth Science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we have developed a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products. Data processing domains are substantially different from other planning domains that have been explored, and this has led us to substantially different choices in terms of representation and algorithms. We discuss some of these differences and discuss the approach we have adopted.

  7. Simulator for heterogeneous dataflow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    1993-01-01

    A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable, steady-state, time-optimized performance. This simulator extends the ATAMM simulation capability from a homogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case of only one processor type. The simulator forms one tool in an ATAMM Integrated Environment which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.

  8. ATLAS DataFlow Infrastructure: Recent results from ATLAS cosmic and first-beam data-taking

    NASA Astrophysics Data System (ADS)

    Vandelli, Wainer; ATLAS TDAQ Collaboration

    2010-04-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to efficiently transport event data with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. Routing and streaming capabilities, as well as monitoring and data-accounting functionalities, are also fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented test-bed for the evaluation of the performance of the ATLAS DataFlow in terms of functionality, robustness and stability. In addition, operating the system far from its design specifications helped in exercising its flexibility and contributed to understanding its limitations. Moreover, the integration with the detector and the interfacing with the off-line data processing and management were able to take advantage of this extended data-taking period as well. In this paper we report on the usage of the DataFlow infrastructure during the ATLAS data-taking. These results, backed up by complementary performance tests, validate the architecture of the ATLAS DataFlow and prove that the system is robust, flexible and scalable enough to cope with the final requirements of the ATLAS experiment.

  9. An Advanced Commanding and Telemetry System

    NASA Astrophysics Data System (ADS)

    Hill, Maxwell G. G.

    The Loral Instrumentation System 500 configured as an Advanced Commanding and Telemetry System (ACTS) supports the acquisition of multiple telemetry downlink streams, and simultaneously supports multiple uplink command streams for today's satellite vehicles. By using industry and federal standards, the system is able to support, without relying on a host computer, a true distributed dataflow architecture that is complemented by state-of-the-art RISC-based workstations and file servers.

  10. Dataflow models for fault-tolerant control systems

    NASA Technical Reports Server (NTRS)

    Papadopoulos, G. M.

    1984-01-01

    Dataflow concepts are used to generate a unified hardware/software model of redundant physical systems which are prone to faults. Basic results in input congruence and synchronization are shown to reduce to a simple model of data exchanges between processing sites. Procedures are given for the construction of congruence schemata, the distinguishing features of any correctly designed redundant system.

  11. Wireless communication of real-time ultrasound data and control

    NASA Astrophysics Data System (ADS)

    Tobias, Richard J.

    2015-03-01

    The Internet of Things (IoT) is expected to grow to 26 billion connected devices by 2020, and the PC, smart phone, and tablet segment that includes mobile Health (mHealth) connected devices is projected to account for another 7.3 billion units by 2020. This paper explores some of the real-time constraints on the data-flow and control of a wirelessly connected ultrasound machine. The paper will define an ultrasound server and the capabilities necessary for real-time use of the device. The concept of an ultrasound server wirelessly (or over any network) connected to multiple lightweight clients on devices like an iPad, iPhone, or Android-based tablet, smartphone and other network-attached displays (e.g., Google Glass) is explored. Latency in the ultrasound data stream is one of the key areas to measure and to focus on keeping as small as possible (<30 ms) so that the ultrasound operator can see what is at the probe at that moment, instead of where the probe was a short period earlier. By keeping the latency below 30 ms, the operator will feel that the data shown on the wirelessly connected devices is running in real time with the operator. The second parameter is the management of bandwidth. At minimum, we need to be able to see 20 frames per second. It is possible to achieve ultrasound in triplex mode at >20 frames per second on a properly configured wireless network. The ultrasound server needs to be designed to accept multiple ultrasound data clients and multiple control clients. The server and some of its key features will be described.
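
    A back-of-the-envelope sketch of the bandwidth side of these constraints, assuming an illustrative frame size and compression ratio (neither figure is from the paper):

      # Sketch: rough per-client bandwidth needed to sustain 20 frames per second.
      FRAME_W, FRAME_H, BYTES_PER_PIXEL = 640, 480, 1    # assumed B-mode frame geometry
      COMPRESSION = 10                                    # assumed codec compression ratio

      def required_mbps(frames_per_second=20):
          bytes_per_frame = FRAME_W * FRAME_H * BYTES_PER_PIXEL / COMPRESSION
          return bytes_per_frame * frames_per_second * 8 / 1e6   # megabits per second

      print(f"~{required_mbps():.1f} Mbps per client at 20 fps")  # ~4.9 Mbps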

  12. SciFlo: Semantically-Enabled Grid Workflow for Collaborative Science

    NASA Astrophysics Data System (ADS)

    Yunck, T.; Wilson, B. D.; Raskin, R.; Manipon, G.

    2005-12-01

    SciFlo is a system for Scientific Knowledge Creation on the Grid using a Semantically-Enabled Dataflow Execution Environment. SciFlo leverages Simple Object Access Protocol (SOAP) Web Services and the Grid Computing standards (WS-* standards and the Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable SOAP Services, native executables, local command-line scripts, and python codes into a distributed computing flow (a graph of operators). SciFlo's XML dataflow documents can be a mixture of concrete operators (fully bound operations) and abstract template operators (late binding via semantic lookup). All data objects and operators can be both simply typed (simple and complex types in XML schema) and semantically typed using controlled vocabularies (linked to OWL ontologies such as SWEET). By exploiting ontology-enhanced search and inference, one can discover (and automatically invoke) Web Services and operators that have been semantically labeled as performing the desired transformation, and adapt a particular invocation to the proper interface (number, types, and meaning of inputs and outputs). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. A Visual Programming tool is also being developed, but it is not required. Once an analysis has been specified for a granule or day of data, it can be easily repeated with different control parameters and over months or years of data. SciFlo uses and preserves semantics, and also generates and infers new semantic annotations. Specifically, the SciFlo engine uses semantic metadata to understand (infer) what it is doing and potentially improve the data flow; preserves semantics by saving links to the semantics of (metadata describing) the input datasets, related datasets, and the data transformations (algorithms) used to generate downstream products; generates new metadata by allowing the user to add semantic annotations to the generated data products (or simply accept automatically generated provenance annotations); and infers new semantic metadata by understanding and applying logic to the semantics of the data and the transformations performed. Much ontology development still needs to be done but, nevertheless, SciFlo documents provide a substrate for using and preserving more semantics as ontologies develop. We will give a live demonstration of the growing SciFlo network using an example dataflow in which atmospheric temperature and water vapor profiles from three Earth Observing System (EOS) instruments are retrieved using SOAP (geo-location query & data access) services, co-registered, and visually & statistically compared on demand (see http://sciflo.jpl.nasa.gov for more information).

  13. Nodes on ropes: a comprehensive data and control flow for steering ensemble simulations.

    PubMed

    Waser, Jürgen; Ribičić, Hrvoje; Fuchs, Raphael; Hirsch, Christian; Schindler, Benjamin; Blöschl, Günther; Gröller, M Eduard

    2011-12-01

    Flood disasters are the most common natural risk and tremendous efforts are spent to improve their simulation and management. However, simulation-based investigation of actions that can be taken in case of flood emergencies is rarely done. This is in part due to the lack of a comprehensive framework which integrates and facilitates these efforts. In this paper, we tackle several problems which are related to steering a flood simulation. One issue is related to uncertainty. We need to account for uncertain knowledge about the environment, such as levee-breach locations. Furthermore, the steering process has to reveal how these uncertainties in the boundary conditions affect the confidence in the simulation outcome. Another important problem is that the simulation setup is often hidden in a black-box. We expose system internals and show that simulation steering can be comprehensible at the same time. This is important because the domain expert needs to be able to modify the simulation setup in order to include local knowledge and experience. In the proposed solution, users steer parameter studies through the World Lines interface to account for input uncertainties. The transport of steering information to the underlying data-flow components is handled by a novel meta-flow. The meta-flow is an extension to a standard data-flow network, comprising additional nodes and ropes to abstract parameter control. The meta-flow has a visual representation to inform the user about which control operations happen. Finally, we present the idea to use the data-flow diagram itself for visualizing steering information and simulation results. We discuss a case-study in collaboration with a domain expert who proposes different actions to protect a virtual city from imminent flooding. The key to choosing the best response strategy is the ability to compare different regions of the parameter space while retaining an understanding of what is happening inside the data-flow system. © 2011 IEEE

  14. First 3 years of operation of RIACS (Research Institute for Advanced Computer Science) (1983-1985)

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    The focus of the Research Institute for Advanced Computer Science (RIACS) is to explore matches between advanced computing architectures and the processes of scientific research. An architecture evaluation of the MIT static dataflow machine, specification of a graphical language for expressing distributed computations, and specification of an expert system for aiding in grid generation for two-dimensional flow problems were initiated. Research projects for 1984 and 1985 are summarized.

  15. The Many Ways Data Must Flow.

    ERIC Educational Resources Information Center

    La Brecque, Mort

    1984-01-01

    To break the bottleneck inherent in today's linear computer architectures, parallel schemes (which allow computers to perform multiple tasks at one time) are being devised. Several of these schemes are described. Dataflow devices, parallel number-crunchers, programing languages, and a device based on a neurological model are among the areas…

  16. Exploiting loop level parallelism in nonprocedural dataflow programs

    NASA Technical Reports Server (NTRS)

    Gokhale, Maya B.

    1987-01-01

    This paper discusses how loop-level parallelism is detected in a nonprocedural dataflow program, and how a procedural program with concurrent loops is scheduled. Also discussed is a program restructuring technique which may be applied to recursive equations so that concurrent loops may be generated for a seemingly iterative computation. A compiler which generates C code for the language described below has been implemented. The scheduling component of the compiler and the restructuring transformation are described.

  17. Sequencing and fan-out mechanism for causing a set of at least two sequential instructions to be performed in a dataflow processing computer

    DOEpatents

    Grafe, Victor G.; Hoch, James E.

    1993-01-01

    A sequencing and data fanout mechanism is provided for a dataflow processor. The mechanism is activated by an input token which causes a sequence of operations to occur by initiating a first instruction to act on data contained within the token and then executing a sequential thread of instructions identified either by a repeat count and an offset within the token, or by an offset within each preceding instruction.
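
    A hypothetical sketch of the sequencing idea, with an invented instruction memory and token layout (repeat count and offset), purely to illustrate the mechanism described in the abstract:

      # Sketch: an input token fires a first instruction, then a thread of follow-on
      # instructions selected by a repeat count and an offset.
      def fire(instructions, token):
          """token = (data, start_address, repeat_count, offset)."""
          data, addr, repeats, offset = token
          for _ in range(repeats + 1):          # the first instruction plus `repeats` more
              data = instructions[addr](data)   # each instruction acts on the token's data
              addr += offset                    # the offset selects the next instruction
          return data

      instructions = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
      print(fire(instructions, (10, 0, 2, 1)))  # ((10 + 1) * 2) - 3 = 19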

  18. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    NASA Astrophysics Data System (ADS)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

    Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It permits defining the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.

  19. Proceedings of the Second NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar (Editor)

    2010-01-01

    This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.

  20. A Metrics-Based Approach to Intrusion Detection System Evaluation for Distributed Real-Time Systems

    DTIC Science & Technology

    2002-04-01

    Based Approach to Intrusion Detection System Evaluation for Distributed Real-Time Systems. Authors: G. A. Fink, B. L. Chappell, T. G. Turner, and...Distributed, Security. 1 Introduction. Processing and cost requirements are driving future naval combat platforms to use distributed, real-time systems of...distributed, real-time systems. As these systems grow more complex, the timing requirements do not diminish; indeed, they may become more constrained

  1. Real-Time Optimization and Control of Next-Generation Distribution

    Science.gov Websites

    Real-Time Optimization and Control of Next-Generation Distribution Infrastructure | Grid Modernization | NREL. This project develops innovative, real-time optimization and control methods for next-generation distribution infrastructure.

  2. Prototyping scalable digital signal processing systems for radio astronomy using dataflow models

    NASA Astrophysics Data System (ADS)

    Sane, N.; Ford, J.; Harris, A. I.; Bhattacharyya, S. S.

    2012-05-01

    There is a growing trend toward using high-level tools for design and implementation of radio astronomy digital signal processing (DSP) systems. Such tools, for example, those from the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER), are usually platform-specific, and lack high-level, platform-independent, portable, scalable application specifications. This limits the designer's ability to experiment with designs at a high level of abstraction and early in the development cycle. We address some of these issues using a model-based design approach employing dataflow models. We demonstrate this approach by applying it to the design of a tunable digital downconverter (TDD) used for narrow-bandwidth spectroscopy. Our design is targeted toward an FPGA platform, called the Interconnect Break-out Board (IBOB), that is available from CASPER. We use the term TDD to refer to a digital downconverter for which the decimation factor and center frequency can be reconfigured without the need for regenerating the hardware code. Such a design is currently not available in the CASPER DSP library. The work presented in this paper focuses on two aspects. First, we introduce and demonstrate a dataflow-based design approach using the dataflow interchange format (DIF) tool for high-level application specification, and we integrate this approach with the CASPER tool flow. Second, we explore the trade-off between the flexibility of TDD designs and the low hardware cost of fixed-configuration digital downconverter (FDD) designs that use the available CASPER DSP library. We further explore this trade-off in the context of a two-stage downconversion scheme employing a combination of TDD or FDD designs.
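
    The signal-processing core of a tunable digital downconverter (mix to baseband at a runtime-selectable center frequency, low-pass filter, then decimate by a runtime-selectable factor) can be sketched as follows; this is a generic numpy illustration, not the IBOB/CASPER implementation:

      # Sketch: tunable digital downconverter with runtime center frequency and decimation.
      import numpy as np

      def tdd(signal, fs, center_hz, decimation, ntaps=129):
          n = np.arange(len(signal))
          baseband = signal * np.exp(-2j * np.pi * center_hz * n / fs)   # mix to baseband
          cutoff = 0.5 / decimation                                      # cycles/sample
          taps = np.sinc(2 * cutoff * (np.arange(ntaps) - (ntaps - 1) / 2))
          taps *= np.hamming(ntaps)
          taps /= taps.sum()                                             # unity DC gain
          filtered = np.convolve(baseband, taps, mode="same")            # anti-alias filter
          return filtered[::decimation]                                  # decimate

      fs = 1e6
      t = np.arange(4096) / fs
      x = np.cos(2 * np.pi * 200e3 * t)
      print(tdd(x, fs, center_hz=200e3, decimation=8).shape)             # (512,)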

  3. Geographically distributed real-time digital simulations using linear prediction

    DOE PAGES

    Liu, Ren; Mohanpurkar, Manish; Panwar, Mayank; ...

    2016-07-04

    Real-time simulation is a powerful tool for analyzing, planning, and operating modern power systems. For analyzing ever-evolving power systems and understanding complex dynamic and transient interactions, larger real-time computation capabilities are essential. These facilities are interspersed all over the globe, and to leverage such unique facilities, geographically distributed real-time co-simulation for analyzing power systems is pursued and presented here. However, the communication latency between different simulator locations may lead to inaccuracy in geographically distributed real-time co-simulations. In this paper, the effect of communication latency on geographically distributed real-time co-simulation is introduced and discussed. In order to reduce the effect of the communication latency, a real-time data predictor, based on linear curve fitting, is developed and integrated into the distributed real-time co-simulation. Two digital real-time simulators are used to perform dynamic and transient co-simulations with communication latency and the predictor. Results demonstrate the effect of the communication latency and the performance of the real-time data predictor in compensating for it.
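
    The latency-compensation idea can be sketched as a least-squares linear fit over the most recent samples, extrapolated forward by the measured delay; the window length and delay below are illustrative values, not those used in the paper:

      # Sketch: linear-prediction compensation of communication latency.
      import numpy as np

      def predict_ahead(times, values, latency, window=5):
          """Fit a line to the last `window` samples, then extrapolate by `latency`."""
          t = np.asarray(times[-window:])
          v = np.asarray(values[-window:])
          slope, intercept = np.polyfit(t, v, 1)
          return slope * (t[-1] + latency) + intercept

      # Signal received with 20 ms of network delay, sampled every 1 ms.
      times  = [0.000, 0.001, 0.002, 0.003, 0.004]
      values = [1.00, 1.02, 1.04, 1.06, 1.08]             # locally linear trend
      print(predict_ahead(times, values, latency=0.020))  # ~1.48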

  4. Geographically distributed real-time digital simulations using linear prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ren; Mohanpurkar, Manish; Panwar, Mayank

    Real-time simulation is a powerful tool for analyzing, planning, and operating modern power systems. For analyzing ever-evolving power systems and understanding complex dynamic and transient interactions, larger real-time computation capabilities are essential. These facilities are interspersed all over the globe, and to leverage such unique facilities, geographically distributed real-time co-simulation for analyzing power systems is pursued and presented here. However, the communication latency between different simulator locations may lead to inaccuracy in geographically distributed real-time co-simulations. In this paper, the effect of communication latency on geographically distributed real-time co-simulation is introduced and discussed. In order to reduce the effect of the communication latency, a real-time data predictor, based on linear curve fitting, is developed and integrated into the distributed real-time co-simulation. Two digital real-time simulators are used to perform dynamic and transient co-simulations with communication latency and the predictor. Results demonstrate the effect of the communication latency and the performance of the real-time data predictor in compensating for it.

  5. Linking innovative measurement technologies (ConMon and Dataflow© systems) for high-resolution temporal and spatial dissolved oxygen criteria assessment.

    PubMed

    O'Leary, C A; Perry, E; Bayard, A; Wainger, L; Boynton, W R

    2015-10-01

    One consequence of nutrient-induced eutrophication in shallow estuarine waters is the occurrence of hypoxia and anoxia that has serious impacts on biota, habitats, and biogeochemical cycles of important elements. Because of the important role of dissolved oxygen (DO) on these ecosystem features, a variety of DO criteria have been established as indicators of system condition. However, DO dynamics are complex and vary on time scales ranging from diel to decadal and spatial scales from meters to multiple kilometers. Because of these complexities, determining DO criteria attainment or failure remains difficult. We propose a method for linking two common measurement technologies for shallow water DO criteria assessment using a Chesapeake Bay tributary as a test case. Dataflow© is a spatially intensive (30-60-m collection intervals) system used to map surface water conditions at the whole estuary scale, and ConMon is a high-frequency (15-min collection intervals) fixed station approach. The former technology is effective with spatial descriptions but poor regarding temporal resolution, while the latter provides excellent temporal but very limited spatial resolution. Our methodology for combining the strengths of these measurement technologies involved a sequence of steps. First, a statistical model of surface water DO dynamics, based on temporally intense ConMon data, was developed. The results of this model were used to calculate daily DO minimum concentrations. Second, this model was then inserted into Dataflow©-generated spatial maps of DO conditions and used to adjust measured DO concentrations to daily minimum concentrations. This information was used to assess DO criteria compliance at the full tributary scale. Model results indicated that it is vital to consider the short-term time scale DO criteria across both space and time concurrently. Large fluctuations in DO occurred within a 24-h time period, and DO dynamics varied across the length and width of the tributary. The overall result provided a more detailed and realistic characterization of the shallow water DO minimum conditions that have the potential to be extended to other tributaries and regions. Broader applications of this model include instantaneous DO criteria assessment, utilizing this model in combination with aerial remote sensing, and developing DO amplitude as an indicator of impaired water bodies.

  6. Design tool for multiprocessor scheduling and evaluation of iterative dataflow algorithms

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1995-01-01

    A graph-theoretic design process and software tool is defined for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. Graph-search algorithms and analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool applies the design process to a given problem and includes performance optimization through the inclusion of additional precedence constraints among the schedulable tasks.
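
    As an illustration of the kind of bound such a graph-theoretic tool can compute: for a dataflow graph executed repetitively on R identical processors, the iteration period can be no smaller than the largest single task time or the total work divided by R. The task set below is invented for the example:

      # Sketch: lower bound on the iteration period of a repetitively executed dataflow graph.
      import math

      def iteration_period_lower_bound(task_times, num_processors):
          total = sum(task_times.values())
          return max(max(task_times.values()),            # one task cannot be split
                     math.ceil(total / num_processors))   # total work shared by R processors

      tasks = {"fft": 4, "filter": 3, "control_law": 5, "output": 2}   # time units
      print(iteration_period_lower_bound(tasks, num_processors=3))     # -> 5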

  7. Dataflow-Based Implementation of Layered Sensing Applications on High-Performance Embedded Processors

    DTIC Science & Technology

    2013-03-01

    [Table fragment: time (milliseconds), GFlops, comparison to GPU peak performance (%); Cascade Gaussian Filtering: 13, 45.19, 6.3; Difference of Gaussian: 0.512, 152...] values for the GPU-targeted actor implementations in terms of Giga Floating Point Operations Per Second (GFLOPS). Our GFLOPS calculation for an actor...kernels. The results for GFLOPS are provided in the table. The actors were implemented on an NVIDIA GTX260 GPU, which provides 715 GFLOPS as peak

  8. Research in Distributed Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.

    1997-01-01

    This document summarizes the progress we have made on our study of issues concerning the schedulability of real-time systems. Our study has produced several results in the scalability issues of distributed real-time systems. In particular, we have used our techniques to resolve schedulability issues in distributed systems with end-to-end requirements. During the next year (1997-98), we propose to extend the current work to address the modeling and workload characterization issues in distributed real-time systems. In particular, we propose to investigate the effect of different workload models and component models on the design and the subsequent performance of distributed real-time systems.

  9. The Case For Prediction-based Best-effort Real-time Systems.

    DTIC Science & Technology

    1999-01-01

    Real-time Systems. Peter A. Dinda, Loukas Kallivokas. January...The Case For Prediction-based Best-effort Real-time Systems. Peter...Mellon University, Pittsburgh, PA 15213. A version of this paper appeared in the Seventh Workshop on Parallel and Distributed Real-Time Systems

  10. Real-Time Embedded High Performance Computing: Communications Scheduling.

    DTIC Science & Technology

    1995-06-01

    real-time operating system must explicitly limit the degradation of the timing performance of all processes as the number of processes...adequately supported by a real-time operating system, could compound the development problems encountered in the past. Many experts feel that the...real-time operating system support for an MPP, although they all provide some support for distributed real-time applications. A distributed real

  11. Data Grid Management Systems

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.; Jagatheesan, Arun; Rajasekar, Arcot; Wan, Michael; Schroeder, Wayne

    2004-01-01

    The "Grid" is an emerging infrastructure for coordinating access across autonomous organizations to distributed, heterogeneous computation and data resources. Data grids are being built around the world as the next generation data handling systems for sharing, publishing, and preserving data residing on storage systems located in multiple administrative domains. A data grid provides logical namespaces for users, digital entities and storage resources to create persistent identifiers for controlling access, enabling discovery, and managing wide area latencies. This paper introduces data grids and describes data grid use cases. The relevance of data grids to digital libraries and persistent archives is demonstrated, and research issues in data grids and grid dataflow management systems are discussed.

  12. Parallel exploitation of a spatial-spectral classification approach for hyperspectral images on RVC-CAL

    NASA Astrophysics Data System (ADS)

    Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.

    2017-10-01

    Hyperspectral Imaging (HI) assembles high-resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, thus generating 3D data cubes in which each pixel gathers the spectral information of the reflectance of every spatial pixel. As a result, each image is composed of large volumes of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses. Hence, parallel processing becomes a necessity to achieve this requirement, so the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by using a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three different stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores presents a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.

  13. Collaborative Science Using Web Services and the SciFlo Grid Dataflow Engine

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.; Yunck, T.

    2006-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. Once an analysis has been specified for a chunk or day of data, it can be easily repeated with different control parameters or over months of data. Recently, the Earth Science Information Partners (ESIP) Federation sponsored a collaborative activity in which several ESIP members advertised their respective WMS/WCS and SOAP services, developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. For several scenarios, the same collaborative workflow was executed in three ways: using hand-coded scripts, by executing a SciFlo document, and by executing a BPEL workflow document. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, and further collaborations that are being pursued.

  14. Distributed Issues for Ada Real-Time Systems

    DTIC Science & Technology

    1990-07-23

    Distributed Issues for Ada Real-Time Systems, MDA 903-87-C-0056. Author: Thomas E. Griest...considerations. Adding to the problem of distributed real-time systems is the issue of maintaining a common sense of time among all of the processors...because someone is waiting for the final output of a very large set of computations. However, in real-time systems, consistent meeting of short-term

  15. A High Performance VLSI Computer Architecture For Computer Graphics

    NASA Astrophysics Data System (ADS)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g., high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e., object domain and space domain, to fully utilize the data-independence characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high-density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  16. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    PubMed

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the Data Distribution Service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority, as well as the experiment on real robots, validate the effectiveness of this work.

  17. NASA Data Acquisition System Software Development for Rocket Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Herbert, Phillip W., Sr.; Elliot, Alex C.; Graves, Andrew R.

    2015-01-01

    Current NASA propulsion test facilities include Stennis Space Center in Mississippi, Marshall Space Flight Center in Alabama, Plum Brook Station in Ohio, and White Sands Test Facility in New Mexico. Within and across these centers, a diverse set of data acquisition systems exist with different hardware and software platforms. The NASA Data Acquisition System (NDAS) is a software suite designed to operate and control many critical aspects of rocket engine testing. The software suite combines real-time data visualization, data recording to a variety of formats, short-term and long-term acquisition system calibration capabilities, test stand configuration control, and a variety of data post-processing capabilities. Additionally, data stream conversion functions exist to translate test facility data streams to and from downstream systems, including engine customer systems. The primary design goals for NDAS are flexibility, extensibility, and modularity. Providing a common user interface for a variety of hardware platforms helps drive consistency and error reduction during testing. In addition, with an understanding that test facilities have different requirements and setups, the software is designed to be modular. One engine program may require real-time displays and data recording; others may require more complex data stream conversion, measurement filtering, or test stand configuration management. The NDAS suite allows test facilities to choose which components to use based on their specific needs. The NDAS code is primarily written in LabVIEW, a graphical, data-flow driven language. Although LabVIEW is a general-purpose programming language, large-scale software development in the language is relatively rare compared to more commonly used languages. The NDAS software suite also makes extensive use of a new, advanced development framework called the Actor Framework. The Actor Framework provides a level of code reuse and extensibility that has previously been difficult to achieve using LabVIEW. The

  18. The Early Warning System(EWS) as First Stage to Generate and Develop Shake Map for Bucharest to Deep Vrancea Earthquakes

    NASA Astrophysics Data System (ADS)

    Marmureanu, G.; Ionescu, C.; Marmureanu, A.; Grecu, B.; Cioflan, C.

    2007-12-01

    The EWS made by NIEP is the first European system for real-time early detection and warning of seismic waves in case of strong deep earthquakes. EWS uses the time interval (28-32 seconds) between the moment when an earthquake is detected by the borehole and surface local accelerometer network installed in the epicentral area (Vrancea) and the arrival time of the seismic waves in the protected area to deliver timely integrated information, enabling actions to be taken before the main destructive shaking takes place. Early warning is viewed as part of a real-time information system that provides rapid information about an impending earthquake hazard to the public and disaster relief organizations before (early warning) and after a strong earthquake (shake map). This product fits in with another new product under development at the National Institute for Earth Physics, namely the shake map, which is a representation of ground shaking produced by an event and will be generated automatically following large Vrancea earthquakes. Bucharest City is located in the central part of the Moesian platform (age: Precambrian and Paleozoic) in the Romanian Plain, about 140 km from the Vrancea area. Above a Cretaceous and a Miocene deposit (with the bottom at roughly 1,400 m depth), a Pliocene shallow-water deposit (~700 m thick) was settled. The surface geology consists mainly of Quaternary alluvial deposits. Later, loess covered these deposits, and the two rivers crossing the city (Dambovita and Colentina) carved the present landscape. During the last century, Bucharest suffered heavy damage and casualties due to the 1940 (Mw = 7.7) and 1977 (Mw = 7.4) Vrancea earthquakes. For example, 32 tall buildings collapsed and more than 1500 people died during the 1977 event. The innovation with respect to comparable or related systems worldwide is that NIEP will use the EWS to generate a virtual shake map for Bucharest (140 km away from the epicentre) immediately after the magnitude is estimated (3-4 seconds after detection in the epicentral area) and later make corrections using the real-time dataflow from each K2 accelerometer installed in the Bucharest area, including nonlinear effects. Thus, developing a near real-time shake map for the Bucharest urban area is of highest interest, providing valuable information to the civil defense, decision makers, and the general public on the areas where the ground motion is most severe. The EWS made by NIEP can be considered the first stage in generating and developing the shake map for Bucharest for deep Vrancea earthquakes.

  19. PRAIS: Distributed, real-time knowledge-based systems made easy

    NASA Technical Reports Server (NTRS)

    Goldstein, David G.

    1990-01-01

    This paper discusses an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS). PRAIS strives for transparently parallelizing production (rule-based) systems, even when under real-time constraints. PRAIS accomplishes these goals by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors.

  20. System Definition Document

    DOT National Transportation Integrated Search

    1996-06-12

    The Gary-Chicago-Milwaukee (GCM) Corridor Transportation Information Center (C-TIC) System Definition Document describes the C-TIC concept and defines the high level processes and dataflows. The Requirements Specification together with the Inte...

  1. Distributed simulation using a real-time shared memory network

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Mattern, Duane L.; Wong, Edmond; Musgrave, Jeffrey L.

    1993-01-01

    The Advanced Control Technology Branch of the NASA Lewis Research Center performs research in the area of advanced digital controls for aeronautic and space propulsion systems. This work requires the real-time implementation of both control software and complex dynamical models of the propulsion system. We are implementing these systems in a distributed, multi-vendor computer environment. Therefore, a need exists for real-time communication and synchronization between the distributed multi-vendor computers. A shared memory network is a potential solution which offers several advantages over other real-time communication approaches. A candidate shared memory network was tested for basic performance. The shared memory network was then used to implement a distributed simulation of a ramjet engine. The accuracy and execution time of the distributed simulation were measured and compared to the performance of the non-partitioned simulation. The ease of partitioning the simulation, the minimal time required to develop the communication between the processors, and the resulting execution time all indicate that the shared memory network is a real-time communication technique worthy of serious consideration.

  2. Proceedings of the International Conference on Parallel Architectures and Compilation Techniques Held 24-26 August 1994 in Montreal, Canada

    DTIC Science & Technology

    1994-08-26

    an Integrated Circuit Global Router. In Proc. of PPEARS 88, pages 138-145, 1988. [7] S. Sakai, Y. Yamaguchi, K. Hiraki, Y. Kodama, and T. Yuba. An...Computer Architecture, 1992. [5] S. Sakai, Y. Yamaguchi, K. Hiraki, Y. Kodama, and T. Yuba. An architecture of a data-flow single chip processor. In Int...EM-4 and sparing time for technical discussions. We also thank Prof. Kei Hiraki at the Univ. of Tokyo for his helpful comments. Hidehiko Masuhara's

  3. Software Tools for Formal Specification and Verification of Distributed Real-Time Systems.

    DTIC Science & Technology

    1997-09-30

    set of software tools for specification and verification of distributed real-time systems using formal methods. The task of this SBIR Phase II effort...to be used by designers of real-time systems for early detection of errors. The mathematical complexity of formal specification and verification has

  4. A Distributed Computing Network for Real-Time Systems

    DTIC Science & Technology

    1980-11-03

    NUSC Technical Document 5932, 3 November 1980. A Distributed Computing Network for Real-Time Systems. Gordon E. Morrison, Combat Control...megabit, 10 megabit, and 20 megabit networks. These values are well within the state-of-the-art and are typical for real-time systems similar to

  5. A Distributed Computing Network for Real-Time Systems.

    DTIC Science & Technology

    1980-11-03

    Naval Underwater Systems Center, Newport, RI. A Distributed Computing Network for Real-Time Systems, TD 5932, Gordon E. Morrison.

  6. Real-time modeling and simulation of distribution feeder and distributed resources

    NASA Astrophysics Data System (ADS)

    Singh, Pawan

    The analysis of the electrical system dates back to the days when analog network analyzers were used. With the advent of digital computers, many programs were written for power-flow and short-circuit analysis for the improvement of the electrical system. Real-time computer simulations can answer many what-if scenarios in the existing or the proposed power system. In this thesis, the standard IEEE 13-node distribution feeder is developed and validated on the real-time platform OPAL-RT. The concept and the challenges of real-time simulation are studied and addressed. Distributed energy resources, including some commonly used distributed generation and storage devices such as a diesel engine, a solar photovoltaic array, and a battery storage system, are modeled and simulated on the real-time platform. A microgrid encompasses a portion of an electric power distribution system which is located downstream of the distribution substation. Normally, the microgrid operates in parallel with the grid; however, scheduled or forced isolation can take place. In such conditions, the microgrid must have the ability to operate stably and autonomously. The microgrid can operate in grid-connected and islanded modes; both operating modes are studied in the last chapter. Towards the end, a simple microgrid controller, modeled and simulated on the real-time platform, is developed for energy management and protection of the microgrid.

  7. Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data

    USGS Publications Warehouse

    Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.

    2015-01-01

    We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
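
    The analytical piece of such an approach can be sketched as the closed-form Gaussian posterior for a linear forward model d = Gm + noise, which is what makes the inversion fast enough for real-time use; the matrices below are random placeholders, not actual Green's functions:

      # Sketch: closed-form Gaussian posterior for a linear slip inversion d = G m + e.
      import numpy as np

      def gaussian_posterior(G, d, data_cov, prior_cov):
          """Return posterior mean and covariance of m given Gaussian data and prior."""
          Cd_inv = np.linalg.inv(data_cov)
          Cm_inv = np.linalg.inv(prior_cov)
          post_cov = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
          post_mean = post_cov @ G.T @ Cd_inv @ d
          return post_mean, post_cov

      rng = np.random.default_rng(0)
      G = rng.standard_normal((50, 10))        # placeholder Green's functions (stations x patches)
      m_true = rng.standard_normal(10)         # "true" slip used to synthesize data
      d = G @ m_true + 0.01 * rng.standard_normal(50)
      mean, cov = gaussian_posterior(G, d, 1e-4 * np.eye(50), np.eye(10))
      print(np.allclose(mean, m_true, atol=0.05))   # slip recovered within the noise level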

  8. Real-time generation of the Wigner distribution of complex functions using phase conjugation in photorefractive materials.

    PubMed

    Sun, P C; Fainman, Y

    1990-09-01

    An optical processor for real-time generation of the Wigner distribution of complex amplitude functions is introduced. The phase conjugation of the input signal is accomplished by a highly efficient self-pumped phase conjugator based on a 45-degree-cut barium titanate photorefractive crystal. Experimental results on the real-time generation of Wigner distribution slices for complex amplitude two-dimensional optical functions are presented and discussed.
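
    For reference, the standard one-dimensional definition of the Wigner distribution of a complex amplitude f (the general textbook formula, not one quoted from the paper) is

      W_f(x, \nu) = \int_{-\infty}^{\infty} f\!\left(x + \tfrac{x'}{2}\right)\, f^{*}\!\left(x - \tfrac{x'}{2}\right)\, e^{-i 2\pi \nu x'}\, dx',

    where the complex-conjugate factor f* is precisely what the self-pumped phase conjugator supplies optically.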

  9. Energy Systems Integration News | Energy Systems Integration Facility |

    Science.gov Websites

    the electric grid. These control systems will enable real-time coordination between distributed energy ... with real-time voltage and frequency control at the level of the home or distributed energy resource ... least for electricity. A real-time connection to weather forecasts and energy prices would allow the ...

  10. Tactical AI in Real Time Strategy Games

    DTIC Science & Technology

    2015-03-26

    Master's thesis AFIT-ENG-MS-15-M-021, Tactical AI in Real Time Strategy Games, by Donald A. Gruber, Capt, USAF; Department of the Air Force, Air University, Air Force...presented to the Faculty, Department of Electrical...Distribution Statement A: approved for public release; distribution unlimited.

  11. Information-Systems Data-Flow Diagram

    NASA Technical Reports Server (NTRS)

    Blosiu, J. O.

    1983-01-01

    Single form presents clear picture of entire system. Form giving relational review of data flow well suited to information system planning, analysis, engineering, and management. Used to review data flow for developing system or one already in use.

  12. Real-Time Support on IEEE 802.11 Wireless Ad-Hoc Networks: Reality vs. Theory

    NASA Astrophysics Data System (ADS)

    Kang, Mikyung; Kang, Dong-In; Suh, Jinwoo

    The usable throughput of an IEEE 802.11 system for an application is much less than the raw bandwidth. Although 802.11b has a theoretical maximum of 11 Mbps, more than half of the bandwidth is consumed by overhead, leaving at most 5 Mbps of usable bandwidth. Considering this characteristic, this paper proposes and analyzes a real-time distributed scheduling scheme based on existing IEEE 802.11 wireless ad-hoc networks, using USC/ISI's Power Aware Sensing Tracking and Analysis (PASTA) hardware platform. We compared the distributed real-time scheduling scheme with a real-time polling scheme in terms of meeting deadlines, and compared the measured real bandwidth with the theoretical result. The theoretical and experimental results show that the distributed scheduling scheme can guarantee real-time traffic and improves performance by up to 74% compared with the polling scheme.

  13. Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.

    PubMed

    Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin

    2018-01-01

    We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.
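    A hedged sketch of the clustering idea described above: operation nodes are grouped by the slash-separated name scopes that frameworks such as TensorFlow annotate in the source code, and only edges between clusters are kept for the overview diagram. The node names, grouping depth, and helper function below are illustrative and are not the visualizer's internal API.

```python
from collections import defaultdict

def cluster_by_scope(edges, depth=1):
    """Collapse a dataflow graph into clusters keyed by name-scope prefix.

    edges : iterable of (src, dst) op names such as "layer1/conv/weights".
    depth : how many scope levels to keep when forming clusters.
    Returns a set of (src_cluster, dst_cluster) edges with self-loops removed.
    """
    def scope(name):
        return "/".join(name.split("/")[:depth]) or name

    clustered = set()
    for src, dst in edges:
        s, d = scope(src), scope(dst)
        if s != d:                      # hide edges internal to a cluster
            clustered.add((s, d))
    return clustered

# Example: three ops inside "layer1" feeding an op in "loss".
edges = [("input", "layer1/matmul"),
         ("layer1/weights", "layer1/matmul"),
         ("layer1/matmul", "layer1/relu"),
         ("layer1/relu", "loss/cross_entropy")]
print(sorted(cluster_by_scope(edges)))
# [('input', 'layer1'), ('layer1', 'loss')]
```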

  14. Designing Class Methods from Dataflow Diagrams

    NASA Astrophysics Data System (ADS)

    Shoval, Peretz; Kabeli-Shani, Judith

    A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of methods design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support user tasks. The components and the process logic of each transaction are described in detail, using pseudocode. Then, each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods and main transaction (control) methods. Each method is attached to an appropriate class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.

  15. Objects Architecture: A Comprehensive Design Approach for Real-Time, Distributed, Fault-Tolerant, Reactive Operating Systems.

    DTIC Science & Technology

    1987-09-01

    real-time operating system should be efficient from the real-time point...5,8]) system naming scheme. 3.2 Protecting Objects. Real-time embedded systems usually neglect protection mechanisms. However, a real-time operating system cannot...allocation mechanism should adhere to application constraints. This strong relationship between a real-time operating system and the application

  16. Portable data flow in UNIX

    NASA Astrophysics Data System (ADS)

    Fox, R.; Molen, A. Vander; Hannuschke, S.

    1994-02-01

    We describe the dataflow of a nuclear physics data acquisition system. The system features a high speed active routing subsystem which allows an arbitrary number of data producers to contribute data to the system. Data are then routed to an arbitrary number of data consumers. Low overhead route-by-reference mechanisms are used to allow high rate operations. The system has been ported to a variety of UNIX systems. Timings are given for the routing component of the system on several systems. Finally, we give an example of a set of programs which can be added to the system to produce a complete data acquisition system.
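    A hedged, toy illustration of the route-by-reference idea (not the acquisition system's actual API): producers hand the router a buffer once, and the router passes the same object, rather than a copy, to every registered consumer, which keeps per-event overhead low. All names below are hypothetical.

```python
class Router:
    """Toy route-by-reference hub: one producer buffer, many consumers."""
    def __init__(self):
        self.consumers = []

    def attach(self, callback):
        self.consumers.append(callback)

    def route(self, buffer):
        # The same buffer object is handed to each consumer; no copies are made.
        for deliver in self.consumers:
            deliver(buffer)

router = Router()
router.attach(lambda buf: print("histogrammer got", len(buf), "bytes"))
router.attach(lambda buf: print("event recorder got", len(buf), "bytes"))
router.route(bytearray(1024))   # a producer contributes one event buffer
```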

  17. A performance analysis method for distributed real-time robotic systems: A case study of remote teleoperation

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Sanderson, A. C.

    1994-01-01

    Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult, but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems and a performance analysis method, based on scheduling theory, that can handle concurrent hard real-time response specifications. Use of the method is illustrated by a case study of remote teleoperation that assesses the effect of communication delays and the allocation of robot control functions on control-system hardware requirements.
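    The scheduling-theory machinery such a method builds on can be sketched, under assumptions, with classic fixed-priority response-time analysis: a task's worst-case response time is its own execution time plus interference from higher-priority tasks, iterated to a fixed point and compared with its deadline. The task parameters below are invented for illustration and are not drawn from the case study.

```python
import math

def worst_case_response(tasks, i):
    """Fixed-point response-time analysis for task i under fixed priorities.

    tasks : list of (C, T) = (worst-case execution time, period), sorted by
            priority with index 0 highest; deadline assumed equal to period.
    """
    C_i, T_i = tasks[i]
    R = C_i
    while True:
        interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
        R_next = C_i + interference
        if R_next == R:
            return R            # converged
        if R_next > T_i:
            return None         # deadline (= period) missed
        R = R_next

# Example: three control tasks (C, T) in milliseconds.
tasks = [(5, 20), (10, 50), (20, 100)]
for i in range(len(tasks)):
    print(f"task {i}: worst-case response = {worst_case_response(tasks, i)} ms")
```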

  18. Distributed Systems: Interconnection and Fault Tolerance Studies

    DTIC Science & Technology

    1992-01-01

    real-time operating system, a number of new techniques have to be...problem is at the heart of a successful implementation of a real-time operating system in a distributed environment. Our studies of the issues...land, College Park MD 20742, January 1991. [11] Ólafur Gudmundsson, Daniel Mossé, Ashok K. Agrawala, and Satish K. Tripathi. MARUTI: a hard real-time operating system.

  19. A High-Speed, Real-Time Visualization and State Estimation Platform for Monitoring and Control of Electric Distribution Systems: Implementation and Field Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundstrom, Blake; Gotseff, Peter; Giraldez, Julieta

    Continued deployment of renewable and distributed energy resources is fundamentally changing the way that electric distribution systems are controlled and operated; more sophisticated active system control and greater situational awareness are needed. Real-time measurements and distribution system state estimation (DSSE) techniques enable more sophisticated system control and, when combined with visualization applications, greater situational awareness. This paper presents a novel demonstration of a high-speed, real-time DSSE platform and related control and visualization functionalities, implemented using existing open-source software and distribution system monitoring hardware. Live scrolling strip charts of meter data and intuitive annotated map visualizations of the entire state (obtained via DSSE) of a real-world distribution circuit are shown. The DSSE implementation is validated to demonstrate provision of accurate voltage data. This platform allows for enhanced control and situational awareness using only a minimum quantity of distribution system measurement units and modest data and software infrastructure.
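    A minimal sketch of the weighted-least-squares step at the heart of most DSSE formulations, assuming an already linearized measurement model (the platform's actual models and software are not reproduced here): given measurements z, a measurement matrix H, and weights derived from meter accuracy, the estimated state minimizes the weighted residual.

```python
import numpy as np

def wls_state_estimate(H, z, sigmas):
    """One linear weighted-least-squares update: x = (H^T W H)^-1 H^T W z."""
    W = np.diag(1.0 / np.asarray(sigmas) ** 2)   # weights from meter accuracy
    gain = H.T @ W @ H
    return np.linalg.solve(gain, H.T @ W @ z)

# Example: 4 measurements of a 2-element state vector (placeholder values).
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
x_true = np.array([1.02, 0.98])
sigmas = [0.01, 0.01, 0.02, 0.02]
z = H @ x_true + np.random.default_rng(1).normal(scale=sigmas)
print(np.round(wls_state_estimate(H, z, sigmas), 3))
```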

  20. Processor tradeoffs in distributed real-time systems

    NASA Technical Reports Server (NTRS)

    Krishna, C. M.; Shin, Kang G.; Bhandari, Inderpal S.

    1987-01-01

    The problem of the optimization of the design of real-time distributed systems is examined with reference to a class of computer architectures similar to the continuously reconfigurable multiprocessor flight control system structure, CM2FCS. Particular attention is given to the impact of processor replacement and the burn-in time on the probability of dynamic failure and mean cost. The solution is obtained numerically and interpreted in the context of real-time applications.

  1. An approach to a real-time distribution system

    NASA Technical Reports Server (NTRS)

    Kittle, Frank P., Jr.; Paddock, Eddie J.; Pocklington, Tony; Wang, Lui

    1990-01-01

    The requirements of a real-time data distribution system are to provide fast, reliable delivery of data from source to destination with little or no impact to the data source. In this particular case, the data sources are inside an operational environment, the Mission Control Center (MCC), and any workstation receiving data directly from the operational computer must conform to the software standards of the MCC. In order to supply data to development workstations outside of the MCC, it is necessary to use gateway computers that prevent unauthorized data transfer back to the operational computers. Many software programs produced on the development workstations are targeted for real-time operation. Therefore, these programs must migrate from the development workstation to the operational workstation. It is yet another requirement for the Data Distribution System to ensure smooth transition of the data interfaces for the application developers. A standard data interface model has already been set up for the operational environment, so the interface between the distribution system and the application software was developed to match that model as closely as possible. The system as a whole therefore allows the rapid development of real-time applications without impacting the data sources. In summary, this approach to a real-time data distribution system provides development users outside of the MCC with an interface to MCC real-time data sources. In addition, the data interface was developed with a flexible and portable software design. This design allows for the smooth transition of new real-time applications to the MCC operational environment.

  2. Real-time visualization and quantification of retrograde cardioplegia delivery using near infrared fluorescent imaging.

    PubMed

    Rangaraj, Aravind T; Ghanta, Ravi K; Umakanthan, Ramanan; Soltesz, Edward G; Laurence, Rita G; Fox, John; Cohn, Lawrence H; Bolman, R M; Frangioni, John V; Chen, Frederick Y

    2008-01-01

    Homogeneous delivery of cardioplegia is essential for myocardial protection during cardiac surgery. Presently, there exist no established methods to quantitatively assess cardioplegia distribution intraoperatively and determine when retrograde cardioplegia is required. In this study, we evaluate the feasibility of near infrared (NIR) imaging for real-time visualization of cardioplegia distribution in a porcine model. A portable, intraoperative, real-time NIR imaging system was utilized. NIR fluorescent cardioplegia solution was developed by incorporating indocyanine green (ICG) into crystalloid cardioplegia solution. Real-time NIR imaging was performed while the fluorescent cardioplegia solution was infused via the retrograde route in five ex vivo normal porcine hearts and in five ex vivo porcine hearts status post left anterior descending (LAD) coronary artery ligation. Horizontal cross-sections of the hearts were obtained at proximal, middle, and distal LAD levels. Videodensitometry was performed to quantify distribution of fluorophore content. The progressive distribution of cardioplegia was clearly visualized with NIR imaging. Complete visualization of retrograde distribution occurred within 4 minutes of infusion. Videodensitometry revealed that retrograde cardioplegia primarily distributed to the left ventricle (LV) and anterior septum. In hearts with LAD ligation, antegrade cardioplegia did not distribute to the anterior LV. This deficiency was compensated for with retrograde cardioplegia supplementation. Incorporation of ICG into cardioplegia allows real-time visualization of cardioplegia delivery via NIR imaging. This technology may prove useful in guiding intraoperative decisions pertaining to when retrograde cardioplegia is mandated.

  3. Applications of the Theory of Distributed and Real Time Systems to the Development of Large-Scale Timing Based Systems.

    DTIC Science & Technology

    1996-04-01

    Members of MIT's Theory of Distributed Systems group have continued their work on modelling, designing, verifying and analyzing distributed and real-time systems. The focus is on the study of 'building-blocks' for the construction of reliable and efficient systems. Our work falls into three...

  4. Monitoring Distributed Real-Time Systems: A Survey and Future Directions

    NASA Technical Reports Server (NTRS)

    Goodloe, Alwyn E.; Pike, Lee

    2010-01-01

    Runtime monitors have been proposed as a means to increase the reliability of safety-critical systems. In particular, this report addresses runtime monitors for distributed hard real-time systems. This class of systems has had little attention from the monitoring community. The need for monitors is shown by discussing examples of avionic systems failure. We survey related work in the field of runtime monitoring. Several potential monitoring architectures for distributed real-time systems are presented along with a discussion of how they might be used to monitor properties of interest.

  5. Real-Time CORBA

    DTIC Science & Technology

    2000-10-01

    control systems and prototyped the approach by porting the ILU ORB from Xerox to the Lynx real-time operating system. They then provided a distributed...compliant real-time operating system, a real-time ORB, and an ODMG-compliant real-time ODBMS [12]. The MITRE system is an infrastructure for...the server's local operating system can handle. For instance, on a node controlled by the VxWorks real-time operating system with 256 local

  6. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  7. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  8. US GEOLOGICAL SURVEY'S NATIONAL SYSTEM FOR PROCESSING AND DISTRIBUTION OF NEAR REAL-TIME HYDROLOGICAL DATA.

    USGS Publications Warehouse

    Shope, William G.

    1987-01-01

    The US Geological Survey is utilizing a national network of more than 1000 satellite data-collection stations, four satellite-relay direct-readout ground stations, and more than 50 computers linked together in a private telecommunications network to acquire, process, and distribute hydrological data in near real-time. The four Survey offices operating a satellite direct-readout ground station provide near real-time hydrological data to computers located in other Survey offices through the Survey's Distributed Information System. The computerized distribution system permits automated data processing and distribution to be carried out in a timely manner under the control and operation of the Survey office responsible for the data-collection stations and for the dissemination of hydrological information to the water-data users.

  9. Real-time modeling of heat distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas

    Techniques for real-time modeling temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.
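    A minimal sketch of the interpolation step as described, under the assumption of linear interpolation in height (the abstract does not specify the interpolation scheme); the grids, temperatures, and room dimensions are placeholders.

```python
import numpy as np

def temperature_volume(ceiling, floor, heights, room_height):
    """Linear interpolation in height between floor and ceiling temperature fields.

    ceiling, floor : 2-D arrays of temperatures on a common x-y grid
    heights        : 1-D array of heights (0 = floor, room_height = ceiling)
    Returns a 3-D array indexed as [height, y, x].
    """
    w = np.asarray(heights) / room_height          # 0 at floor, 1 at ceiling
    return w[:, None, None] * ceiling + (1.0 - w)[:, None, None] * floor

# Example: a 3 x 3 grid, sampled at three heights in a 3 m tall room.
ceiling = np.full((3, 3), 27.0)
floor = np.full((3, 3), 21.0)
volume = temperature_volume(ceiling, floor, heights=[0.0, 1.5, 3.0], room_height=3.0)
print(volume[:, 0, 0])   # [21. 24. 27.]
```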

  10. EOS: A project to investigate the design and construction of real-time distributed embedded operating systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Essick, R. B.; Grass, J.; Johnston, G.; Kenny, K.; Russo, V.

    1986-01-01

    The EOS project is investigating the design and construction of a family of real-time distributed embedded operating systems for reliable, distributed aerospace applications. Using the real-time programming techniques developed in co-operation with NASA in earlier research, the project staff is building a kernel for a multiple processor networked system. The first six months of the grant included a study of scheduling in an object-oriented system, the design philosophy of the kernel, and the architectural overview of the operating system. In this report, the operating system and kernel concepts are described. An environment for the experiments has been built and several of the key concepts of the system have been prototyped. The kernel and operating system is intended to support future experimental studies in multiprocessing, load-balancing, routing, software fault-tolerance, distributed data base design, and real-time processing.

  11. Real-time data flow and product generating for GNSS

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.; Caissy, Mark

    2004-01-01

    The last IGS workshop with the theme 'Towards Real-Time' resulted in the design of a prototype for real-time data and sharing within the IGS. A prototype real-time network is being established that will serve as a test bed for real-time activities within the IGS. We review the developments of the prototype and discuss some of the existing methods and related products of real-time GNSS systems. Recommendations are made concerning real-time data distribution and product generation.

  12. Smarter Grid Solutions Works with NREL to Enhance Grid-Hosting Capacity |

    Science.gov Websites

    autonomously manages, coordinates, and controls distributed energy resources in real time to maintain the coordination and real-time management of an entire distribution grid, subsuming the smart home and smart campus

  13. Stress Analysis and Fatigue Behaviour of PTFE-Bronze Layered Journal Bearing under Real-Time Dynamic Loading

    NASA Astrophysics Data System (ADS)

    Duman, M. S.; Kaplan, E.; Cuvalcı, O.

    2018-01-01

    The present paper is based on experimental studies and numerical simulations of the surface fatigue failure of PTFE-bronze layered journal bearings under real-time loading. 'Permaglide Plain Bearings P10' type journal bearings were experimentally tested under different real-time dynamic loadings using the real-time journal bearing test system in our laboratory. The journal bearing consists of a PTFE-bronze layer approximately 0.32 mm thick on a steel support layer 2.18 mm thick. Two different approaches were considered in the experiments: (i) real-time constant loading with varying bearing widths, and (ii) different real-time loadings at constant bearing width. The fatigue regions, micro-crack dispersion and stress distributions occurring in the journal bearing were investigated experimentally and theoretically. The relation between fatigue regions and pressure distributions was investigated by determining the circumferential pressure distribution under real-time dynamic loading at every 10° of crank angle. In the theoretical part, the stress and deformation distributions at the surface of the journal bearing were analysed using finite element methods to determine the relationship between stress and fatigue behaviour. As a result of this study, the maximum oil pressure and the fatigue cracks were observed in the most heavily loaded regions of the bearing surface. The experimental results show that the fatigue behaviour of PTFE-bronze layered journal bearings is better than that of bearings with a white-metal alloy layer.

  14. An Environment for Incremental Development of Distributed Extensible Asynchronous Real-time Systems

    NASA Technical Reports Server (NTRS)

    Ames, Charles K.; Burleigh, Scott; Briggs, Hugh C.; Auernheimer, Brent

    1996-01-01

    Incremental parallel development of distributed real-time systems is difficult. Architectural techniques and software tools developed at the Jet Propulsion Laboratory's (JPL's) Flight System Testbed make feasible the integration of complex systems in various stages of development.

  15. Real-time Visualization and Quantification of Retrograde Cardioplegia Delivery using Near Infrared Fluorescent Imaging

    PubMed Central

    Rangaraj, Aravind T.; Ghanta, Ravi K.; Umakanthan, Ramanan; Soltesz, Edward G.; Laurence, Rita G.; Fox, John; Cohn, Lawrence H.; Bolman, R. M.; Frangioni, John V.; Chen, Frederick Y.

    2009-01-01

    Background and Aim of the Study: Homogeneous delivery of cardioplegia is essential for myocardial protection during cardiac surgery. Presently, there exist no established methods to quantitatively assess cardioplegia distribution intraoperatively and determine when retrograde cardioplegia is required. In this study, we evaluate the feasibility of near infrared (NIR) imaging for real-time visualization of cardioplegia distribution in a porcine model. Methods: A portable, intraoperative, real-time NIR imaging system was utilized. NIR fluorescent cardioplegia solution was developed by incorporating indocyanine green (ICG) into crystalloid cardioplegia solution. Real-time NIR imaging was performed while the fluorescent cardioplegia solution was infused via the retrograde route in 5 ex-vivo normal porcine hearts and in 5 ex-vivo porcine hearts status post left anterior descending (LAD) coronary artery ligation. Horizontal cross-sections of the hearts were obtained at proximal, middle, and distal LAD levels. Videodensitometry was performed to quantify distribution of fluorophore content. Results: The progressive distribution of cardioplegia was clearly visualized with NIR imaging. Complete visualization of retrograde distribution occurred within 4 minutes of infusion. Videodensitometry revealed that retrograde cardioplegia primarily distributed to the left ventricle and anterior septum. In hearts with LAD ligation, antegrade cardioplegia did not distribute to the anterior left ventricle. This deficiency was compensated for with retrograde cardioplegia supplementation. Conclusions: Incorporation of ICG into cardioplegia allows real-time visualization of cardioplegia delivery via NIR imaging. This technology may prove useful in guiding intraoperative decisions pertaining to when retrograde cardioplegia is mandated. PMID:19016995

  16. A Distributed Operating System for BMD Applications.

    DTIC Science & Technology

    1982-01-01

    Defense) applications executing on distributed hardware with local and shared memories. The objective was to develop real-time operating system functions...make the Basic Real-Time Operating System, and the set of new EPL language primitives that provide BMD application processes with efficient mechanisms

  17. Real-time distributed multimedia systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahurkar, S.S.; Bourbakis, N.G.

    1996-12-31

    This paper presents a survey of distributed multimedia systems and discusses real-time issues. In particular, it reviews the different subsystems that bear on multimedia networking, networking for multimedia, networked multimedia systems, and the leading-edge research and development efforts and issues in networking.

  18. A High-Speed Design of Montgomery Multiplier

    NASA Astrophysics Data System (ADS)

    Fan, Yibo; Ikenaga, Takeshi; Goto, Satoshi

    With the increase of key lengths used in public-key cryptographic algorithms such as RSA and ECC, the speed of Montgomery multiplication becomes a bottleneck. This paper proposes a high-speed design of a Montgomery multiplier. Firstly, a modified scalable high-radix Montgomery algorithm is proposed to reduce the critical path. Secondly, a high-radix clock-saving dataflow is proposed to support high-radix operation with a one-clock-cycle delay in the dataflow. Finally, a hardware-reused architecture is proposed to reduce the hardware cost, and a parallel radix-16 design of the data path is proposed to accelerate the speed. Using the HHNEC 0.25 μm standard cell library, the implementation results show that the total cost of the Montgomery multiplier is 130 kGates, the clock frequency is 180 MHz, and the throughput of 1024-bit RSA encryption is 352 kbps. This design is suitable for high-speed RSA or ECC encryption/decryption. As a scalable design, it supports encryption/decryption of any key length up to the size of the on-chip memory.
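    For context, here is a hedged software sketch of the underlying Montgomery reduction (REDC) that such hardware accelerates; the paper's scalable high-radix, hardware-reused datapath is not reproduced, only the textbook algorithm it speeds up. The modulus and operands are arbitrary example values.

```python
def montgomery_multiply(a, b, n, n_bits):
    """Compute a*b*R^-1 mod n with R = 2**n_bits and n odd (textbook REDC)."""
    R = 1 << n_bits
    n_prime = -pow(n, -1, R) % R          # n' such that n*n' == -1 (mod R)
    t = a * b
    m = (t * n_prime) % R
    u = (t + m * n) >> n_bits             # exact division by R
    return u - n if u >= n else u

# Example: modular multiplication via the Montgomery domain.
n, bits = 2**61 - 1, 61                   # an odd modulus and R = 2**61
a, b = 123456789, 987654321
R2 = pow(1 << bits, 2, n)                 # R^2 mod n, used to enter the domain
a_bar = montgomery_multiply(a, R2, n, bits)          # a*R mod n
b_bar = montgomery_multiply(b, R2, n, bits)          # b*R mod n
prod_bar = montgomery_multiply(a_bar, b_bar, n, bits)
result = montgomery_multiply(prod_bar, 1, n, bits)   # leave the domain
assert result == (a * b) % n
print(result)
```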

  19. Visualization and Analysis for Near-Real-Time Decision Making in Distributed Workflows

    DOE PAGES

    Pugmire, David; Kress, James; Choi, Jong; ...

    2016-08-04

    Data-driven science is becoming increasingly common and complex, and it is placing tremendous stresses on visualization and analysis frameworks. Data sources producing 10 GB per second (and more) are becoming increasingly commonplace in simulation, sensor, and experimental sciences. These data sources, which are often distributed around the world, must be analyzed by teams of scientists that are also distributed. Enabling scientists to view, query and interact with such large volumes of data in near-real-time requires a rich fusion of visualization and analysis techniques, middleware and workflow systems. This paper discusses initial research into visualization and analysis of distributed data workflows that enables scientists to make near-real-time decisions about large volumes of time-varying data.

  20. The Real-Time ObjectAgent Software Architecture for Distributed Satellite Systems

    DTIC Science & Technology

    2001-01-01

    real-time operating system selection are also discussed. The fourth section describes a simple demonstration of real-time ObjectAgent. Finally, the...experience with C++. After selecting the programming language, it was necessary to select a target real-time operating system (RTOS) and embedded...ObjectAgent software to run on the OSE Real-Time Operating System. In addition, she is responsible for the integration of ObjectAgent

  1. The Application Research of Modern Intelligent Cold Chain Distribution System Based on Internet of Things Technology

    NASA Astrophysics Data System (ADS)

    Fan, Dehui; Gao, Shan

    This paper implements an intelligent cold-chain distribution system based on Internet of Things technology, taking a protoplasmic beer logistics transport system as an example. The system provides remote real-time monitoring of material status, records distribution information, dynamically adjusts distribution tasks, and performs other functions. By combining Internet of Things technology with a weighted filtering algorithm, it supports real-time queries of condition curves, emergency alarms, distribution data retrieval, intelligent distribution task arrangement, and related functions. Actual tests show that the system can optimize the inventory structure and improve the efficiency of cold-chain distribution.

  2. A framework for building real-time expert systems

    NASA Technical Reports Server (NTRS)

    Lee, S. Daniel

    1991-01-01

    The Space Station Freedom is an example of complex systems that require both traditional and artificial intelligence (AI) real-time methodologies. It was mandated that Ada should be used for all new software development projects. The station also requires distributed processing. Catastrophic failures on the station can cause the transmission system to malfunction for a long period of time, during which ground-based expert systems cannot provide any assistance to the crisis situation on the station. This is even more critical for other NASA projects that would have longer transmission delays (e.g., the lunar base, Mars missions, etc.). To address these issues, a distributed agent architecture (DAA) is proposed that can support a variety of paradigms based on both traditional real-time computing and AI. The proposed testbed for DAA is an autonomous power expert (APEX) which is a real-time monitoring and diagnosis expert system for the electrical power distribution system of the space station.

  3. A distributed agent architecture for real-time knowledge-based systems: Real-time expert systems project, phase 1

    NASA Technical Reports Server (NTRS)

    Lee, S. Daniel

    1990-01-01

    We propose a distributed agent architecture (DAA) that can support a variety of paradigms based on both traditional real-time computing and artificial intelligence. DAA consists of distributed agents that are classified into two categories: reactive and cognitive. Reactive agents can be implemented directly in Ada to meet hard real-time requirements and be deployed on on-board embedded processors. A traditional real-time computing methodology under consideration is the rate monotonic theory that can guarantee schedulability based on analytical methods. AI techniques under consideration for reactive agents are approximate or anytime reasoning that can be implemented using Bayesian belief networks as in Guardian. Cognitive agents are traditional expert systems that can be implemented in ART-Ada to meet soft real-time requirements. During the initial design of cognitive agents, it is critical to consider the migration path that would allow initial deployment on ground-based workstations with eventual deployment on on-board processors. ART-Ada technology enables this migration while Lisp-based technologies make it difficult if not impossible. In addition to reactive and cognitive agents, a meta-level agent would be needed to coordinate multiple agents and to provide meta-level control.
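    The analytical schedulability guarantee referred to above can be illustrated with a hedged sketch of the Liu and Layland utilization bound for rate-monotonic scheduling: n independent periodic tasks are guaranteed schedulable if their total utilization does not exceed n(2^(1/n) - 1). The task parameters below are illustrative only and are not drawn from the project.

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) rate-monotonic test of Liu and Layland.

    tasks : list of (execution_time, period) pairs for periodic tasks.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound, utilization, bound

# Example: two reactive-agent tasks on one embedded processor.
ok, u, bound = rm_schedulable([(10, 40), (15, 60)])
print(f"utilization={u:.3f}, bound={bound:.3f}, guaranteed={ok}")
```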

  4. Time-Frequency Distribution Analyses of Ku-Band Radar Doppler Echo Signals

    NASA Astrophysics Data System (ADS)

    Bujaković, Dimitrije; Andrić, Milenko; Bondžulić, Boban; Mitrović, Srđan; Simić, Slobodan

    2015-03-01

    Real radar echo signals of a pedestrian, a vehicle, and a group of helicopters are analyzed in order to maximize the signal energy around the central Doppler frequency in the time-frequency plane. An optimization preserving this concentration is suggested, based on three well-known concentration measures. Various window functions and time-frequency distributions serve as optimization inputs. Experiments conducted on one analytic and three real signals show that, for all three criteria, the energy concentration depends significantly on the time-frequency distribution and window function used.

  5. Real-time distributed fiber microphone based on phase-OTDR.

    PubMed

    Franciscangelis, Carolina; Margulis, Walter; Kjellberg, Leif; Soderquist, Ingemar; Fruett, Fabiano

    2016-12-26

    The use of an optical fiber as a real-time distributed microphone is demonstrated employing a phase-OTDR with direct detection. The method comprises a sample-and-hold circuit capable of both tuning the receiver to an arbitrary section of the fiber considered of interest and recovering the detected acoustic wave in real time. The system allows listening to the sound of a variable-frequency sinusoidal disturbance, music, and the human voice with ~60 cm of spatial resolution through a 300 m long optical fiber.

  6. Conference on Real-Time Computer Applications in Nuclear, Particle and Plasma Physics, 6th, Williamsburg, VA, May 15-19, 1989, Proceedings

    NASA Technical Reports Server (NTRS)

    Pordes, Ruth (Editor)

    1989-01-01

    Papers on real-time computer applications in nuclear, particle, and plasma physics are presented, covering topics such as expert-system tactics in testing FASTBUS segment interconnect modules, trigger control in a high-energy physics experiment, the FASTBUS read-out system for the Aleph time projection chamber, multiprocessor data acquisition systems, DAQ software architecture for Aleph, a VME multiprocessor system for plasma control at the JT-60 upgrade, and a multitasking, multisinked, multiprocessor data acquisition front end. Other topics include real-time data reduction using a microVAX processor, a transputer-based coprocessor for VEDAS, simulation of a macropipelined multi-CPU event processor for use in FASTBUS, a distributed VME control system for the LISA superconducting Linac, and a distributed system for laboratory process automation. Additional topics include a structured macro assembler for the event handler, a data acquisition and control system for Thomson scattering on ATF, remote procedure execution software for distributed systems, and a PC-based graphic display of real-time particle beam uniformity.

  7. Extensions to the Parallel Real-Time Artificial Intelligence System (PRAIS) for fault-tolerant heterogeneous cycle-stealing reasoning

    NASA Technical Reports Server (NTRS)

    Goldstein, David

    1991-01-01

    Extensions to an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS) are discussed. PRAIS strives for transparently parallelizing production (rule-based) systems, even under real-time constraints. PRAIS accomplished these goals (presented at the first annual C Language Integrated Production System (CLIPS) conference) by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors. Results using the original PRAIS architecture over a network of Sun 3's, Sun 4's and VAX's are presented. Mechanisms using the producer-consumer model to extend the architecture for fault-tolerance and distributed truth maintenance initiation are also discussed.

  8. Real time data acquisition for expert systems in Unix workstations at Space Shuttle Mission Control

    NASA Technical Reports Server (NTRS)

    Muratore, John F.; Heindel, Troy A.; Murphy, Terri B.; Rasmussen, Arthur N.; Gnabasik, Mark; Mcfarland, Robert Z.; Bailey, Samuel A.

    1990-01-01

    A distributed system of proprietary engineering-class workstations is incorporated into NASA's Space Shuttle Mission-Control Center to increase the automation of mission control. The Real-Time Data System (RTDS) allows the operator to utilize expert knowledge in the display program for system modeling and evaluation. RTDS applications are reviewed including: (1) telemetry-animated communications schematics; (2) workstation displays of systems such as the Space Shuttle remote manipulator; and (3) a workstation emulation of shuttle flight instrumentation. The hard and soft real-time constraints are described including computer data acquisition, and the support techniques for the real-time expert systems include major frame buffers for logging and distribution as well as noise filtering. The incorporation of the workstations allows smaller programming teams to implement real-time telemetry systems that can improve operations and flight testing.

  9. Real time quantitative phase microscopy based on single-shot transport of intensity equation (ssTIE) method

    NASA Astrophysics Data System (ADS)

    Yu, Wei; Tian, Xiaolin; He, Xiaoliang; Song, Xiaojun; Xue, Liang; Liu, Cheng; Wang, Shouyu

    2016-08-01

    Microscopy based on the transport of intensity equation provides quantitative phase distributions, which opens another perspective for cellular observations. However, it requires multi-focal image capture, and the mechanical and electrical scanning involved limits its real-time capability in sample detection. Here, in order to break through this restriction, real-time quantitative phase microscopy based on a single-shot transport of intensity equation method is proposed. A programmed phase mask is designed to realize simultaneous multi-focal image recording without any scanning; thus, phase distributions can be quantitatively retrieved in real time. It is believed the proposed method can potentially be applied in various biological and medical applications, especially live cell imaging.
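    For orientation, here is a hedged numerical sketch of the phase-retrieval step behind TIE-based quantitative phase imaging: assuming an approximately uniform in-focus intensity I0, the TIE reduces to a Poisson equation ∇²φ = -(k/I0) ∂I/∂z that can be inverted with FFTs from the axial intensity derivative. The single-shot acquisition with a programmed phase mask is not modeled; the images, wavelength, and pixel size below are placeholders.

```python
import numpy as np

def tie_phase(dI_dz, I0, wavelength, pixel_size, eps=1e-6):
    """Solve the uniform-intensity TIE, a Poisson equation, with an FFT solver."""
    k = 2 * np.pi / wavelength
    ny, nx = dI_dz.shape
    fy = np.fft.fftfreq(ny, d=pixel_size)
    fx = np.fft.fftfreq(nx, d=pixel_size)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fy)
    k_sq = kx**2 + ky**2
    rhs = -(k / I0) * dI_dz
    phi_hat = np.fft.fft2(rhs) / -(k_sq + eps)   # divide by -|k|^2 (regularized)
    phi_hat[0, 0] = 0.0                          # remove undefined mean phase
    return np.real(np.fft.ifft2(phi_hat))

# Example: axial intensity derivative estimated from two defocused images
# (placeholders), 0.5 um pixels, 633 nm light, unit background intensity.
I_plus = np.ones((64, 64)); I_minus = np.ones((64, 64))
dI_dz = (I_plus - I_minus) / (2 * 2e-6)          # central difference over +/- 2 um
phase = tie_phase(dI_dz, I0=1.0, wavelength=633e-9, pixel_size=0.5e-6)
print(phase.shape)
```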

  10. Dataflow Computation for the J-Machine

    DTIC Science & Technology

    1990-06-01


  11. Turtle Graphics Implementation Using a Graphical Dataflow Programming Approach

    DTIC Science & Technology

    1992-09-01

    this research. The intent of this section is not to teach how to program in LOGO with the use of Turtle Graphics, but simply to provide an... how to program in Prograph, but only to provide a basic understanding of the Prograph language and its programming environment. Several examples are

  12. Dataflow Integration and Simulation Techniques for DSP System Design Tools

    DTIC Science & Technology

    2007-01-01

    Lebak, M. Richards, and D. Campbell, “VSIPL: An object-based open standard API for vector, signal, and image processing,” in Proceedings of the...Inc., document Version 0.98a. [56] P. Marwedel and G. Goossens, Eds., Code Generation for Embedded Processors. Kluwer Academic Publishers, 1995. [57

  13. Change Semantic Constrained Online Data Cleaning Method for Real-Time Observational Data Stream

    NASA Astrophysics Data System (ADS)

    Ding, Yulin; Lin, Hui; Li, Rongrong

    2016-06-01

    Recent breakthroughs in sensor networks have made it possible to collect and assemble increasing amounts of real-time observational data by observing dynamic phenomena at previously impossible time and space scales. Real-time observational data streams present potentially profound opportunities for real-time applications in disaster mitigation and emergency response by providing accurate and timely estimates of the environment's status. However, the data are always subject to inevitable anomalies (including errors and anomalous changes/events) caused by various effects of the environment being monitored. These "big but dirty" real-time observational data streams can rarely achieve their full potential in subsequent real-time models or applications because of their low data quality. Therefore, timely and meaningful online data cleaning is a necessary prerequisite to ensure the quality, reliability, and timeliness of real-time observational data. In general, a straightforward streaming-data cleaning approach is to define various types of models/classifiers representing the normal behavior of sensor data streams and then classify data as normal or erroneous according to its deviation from this model. The effectiveness of these models is affected by dynamic changes in the deployed environments. Due to the changing nature of the complicated process being observed, real-time observational data are characterized by diversity and dynamics, showing typical big (geo) data characteristics. Dynamics and diversity are reflected not only in the data values but also in the complicated changing patterns of the data distributions. This means the pattern of the real-time observational data distribution is not stationary or static but changing and dynamic. After the data pattern changes, it is necessary to adapt the model over time to cope with the changing patterns of real-time data streams. Otherwise, the model will not fit the subsequent observational data streams, which may lead to large estimation errors. In order to achieve the best generalization error, it is an important challenge for a data cleaning methodology to be able to characterize the behavior of data stream distributions and adaptively update the model to include new information and remove old information. However, this complicated changing behavior invalidates traditional data cleaning methods, which rely on the assumption of a stationary data distribution, and it drives the need for more dynamic and adaptive online data cleaning methods. To overcome these shortcomings, this paper presents a change-semantics-constrained online filtering method for real-time observational data. Based on the principle that the filter parameter should vary in accordance with the data change patterns, the method embeds a semantic description that quantitatively depicts the change patterns in the data distribution in order to self-adapt the filter parameter automatically. Real-time observational water-level data streams from different precipitation scenarios are selected for testing. Experimental results show that, by means of this method, more accurate and reliable water-level information can be obtained, which is a prerequisite for prompt and scientific flood assessment and decision-making.
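    The adaptive-parameter principle (though not the paper's change-semantics method itself) can be illustrated with a hedged toy filter whose running mean and variance track the stream, so the notion of "anomalous" follows changing patterns such as a rising water level; all thresholds and data below are invented.

```python
def adaptive_clean(stream, alpha=0.05, k=4.0):
    """Flag points farther than k running-std-devs from a running mean.

    alpha : exponential forgetting factor; larger values adapt faster to change.
    k     : rejection threshold in standard deviations (the running mean and
            variance, not k itself, are the adaptive parameters here).
    """
    mean, var = None, 1.0
    cleaned = []
    for x in stream:
        if mean is None:
            mean = x
        if abs(x - mean) <= k * var ** 0.5:
            cleaned.append(x)                       # accept and adapt
            mean = (1 - alpha) * mean + alpha * x
            var = (1 - alpha) * var + alpha * (x - mean) ** 2
        else:
            cleaned.append(mean)                    # reject: hold last estimate
    return cleaned

# Example: a slowly rising water level with two spurious spikes.
levels = [2.0, 2.1, 2.2, 9.9, 2.4, 2.5, 2.6, -5.0, 2.8]
print(adaptive_clean(levels))
```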

  14. An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing

    DTIC Science & Technology

    2002-08-01

    simulation and actual execution. KEYWORDS: Model Continuity, Modeling, Simulation, Experimental Frame, Real Time Systems, Intelligent Systems...the methodology for a stand-alone real-time system. Then it will scale up to distributed real-time systems. For both systems, step-wise simulation...MODEL CONTINUITY: Intelligent real-time systems monitor, respond to, or control an external environment. This environment is connected to the digital

  15. PILOT: An intelligent distributed operations support system

    NASA Technical Reports Server (NTRS)

    Rasmussen, Arthur N.

    1993-01-01

    The Real-Time Data System (RTDS) project is exploring the application of advanced technologies to the real-time flight operations environment of the Mission Control Centers at NASA's Johnson Space Center. The system, based on a network of engineering workstations, provides services such as delivery of real time telemetry data to flight control applications. To automate the operation of this complex distributed environment, a facility called PILOT (Process Integrity Level and Operation Tracker) is being developed. PILOT comprises a set of distributed agents cooperating with a rule-based expert system; together they monitor process operation and data flows throughout the RTDS network. The goal of PILOT is to provide unattended management and automated operation under user control.

  16. A Custom Data Logger for Real-Time Remote Field Data Collections

    DTIC Science & Technology

    2017-03-01

    ERDC/CHL CHETN-VI-46, March 2017. Approved for public release; distribution is unlimited. A Custom Data Logger for Real-Time Remote Field Data...Field Research Facility (FRF), for remote real-time data collections. This custom data logger is compact and energy efficient but has the same...INTRODUCTION: Real-time data collections offer many advantages: 1. Instrument failures can be rapidly detected and repaired, thereby minimizing

  17. A Programmer’s Assistant for a Special-Purpose Dataflow Language.

    DTIC Science & Technology

    1985-12-01

    DeMarco, T., "Structured Analysis and System Specification," GUIDE 47 Proceedings, 1978. Reprinted in Classics in Software Engineering, edited by Edward...

  18. Coupling Visualization, Simulation, and Deep Learning for Ensemble Steering of Complex Energy Models: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potter, Kristin C; Brunhart-Lupo, Nicholas J; Bush, Brian W

    We have developed a framework for the exploration, design, and planning of energy systems that combines interactive visualization with machine-learning-based approximations of simulations through a general-purpose dataflow API. Our system provides a visual interface allowing users to explore an ensemble of energy simulations representing a subset of the complex input parameter space, and to spawn new simulations to 'fill in' input regions corresponding to new energy system scenarios. Unfortunately, many energy simulations are far too slow to provide interactive responses. To support interactive feedback, we are developing reduced-form models via machine learning techniques, which provide statistically sound estimates of the full simulations at a fraction of the computational cost and which are used as proxies for the full-form models. Fast computation and an agile dataflow enhance the engagement with energy simulations, and allow researchers to better allocate computational resources to capture informative relationships within the system and provide a low-cost method for validating and quality-checking large-scale modeling efforts.
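    A hedged sketch of the reduced-form-model idea: fit a cheap statistical approximation to completed simulation runs and use it as an interactive proxy, refreshing it as new runs fill in the input space. The stand-in "simulation", the quadratic feature set, and all parameters below are assumptions, not the framework's actual models or machine-learning method.

```python
import numpy as np

def expensive_simulation(x):
    """Stand-in for a slow energy-system simulation (placeholder physics)."""
    return 3.0 * x[0] - 0.5 * x[1] ** 2 + 0.1 * x[0] * x[1]

# Ensemble of completed runs sampled over the input space.
rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=(200, 2))
y = np.array([expensive_simulation(x) for x in X])

# Reduced-form model: quadratic polynomial features fit by least squares.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

def surrogate(x):
    """Fast proxy used for interactive exploration between real runs."""
    return features(np.atleast_2d(x)) @ coef

query = np.array([4.0, 7.0])
print(surrogate(query)[0], expensive_simulation(query))
```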

  19. Software Tool Integrating Data Flow Diagrams and Petri Nets

    NASA Technical Reports Server (NTRS)

    Thronesbery, Carroll; Tavana, Madjid

    2010-01-01

    Data Flow Diagram - Petri Net (DFPN) is a software tool for analyzing other software to be developed. The full name of this program reflects its design, which combines the benefit of data-flow diagrams (which are typically favored by software analysts) with the power and precision of Petri-net models, without requiring specialized Petri-net training. (A Petri net is a particular type of directed graph, a description of which would exceed the scope of this article.) DFPN assists a software analyst in drawing and specifying a data-flow diagram, then translates the diagram into a Petri net, then enables graphical tracing of execution paths through the Petri net for verification, by the end user, of the properties of the software to be developed. In comparison with prior means of verifying the properties of software to be developed, DFPN makes verification by the end user more nearly certain, thereby making it easier to identify and correct misconceptions earlier in the development process, when correction is less expensive. After the verification by the end user, DFPN generates a printable system specification in the form of descriptions of processes and data.
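    A hedged toy version of the translation idea (not DFPN's internal representation): each data-flow process becomes a Petri-net transition and each data flow or store becomes a place, after which execution paths can be traced by firing enabled transitions. The class and element names are illustrative.

```python
class PetriNet:
    """Minimal place/transition net for tracing a translated data-flow diagram."""
    def __init__(self):
        self.marking = {}                 # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)
        for p in inputs + outputs:
            self.marking.setdefault(p, 0)

    def enabled(self, name):
        ins, _ = self.transitions[name]
        return all(self.marking[p] > 0 for p in ins)

    def fire(self, name):
        ins, outs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] += 1

# DFD processes "validate_order" and "ship_order" joined by data flows.
net = PetriNet()
net.add_transition("validate_order", ["order_received"], ["valid_order"])
net.add_transition("ship_order", ["valid_order"], ["shipment"])
net.marking["order_received"] = 1
net.fire("validate_order")
net.fire("ship_order")
print(net.marking)   # {'order_received': 0, 'valid_order': 0, 'shipment': 1}
```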

  20. Continuous high speed coherent one-way quantum key distribution.

    PubMed

    Stucki, Damien; Barreiro, Claudio; Fasel, Sylvain; Gautier, Jean-Daniel; Gay, Olivier; Gisin, Nicolas; Thew, Rob; Thoma, Yann; Trinkler, Patrick; Vannel, Fabien; Zbinden, Hugo

    2009-08-03

    Quantum key distribution (QKD) is the first commercial quantum technology operating at the level of single quanta and is a leading light for quantum-enabled photonic technologies. However, controlling these quantum optical systems in real-world environments presents significant challenges. For the first time, we have brought together three key concepts for future QKD systems: a simple high-speed protocol; high-performance detection; and integration, both at the component level and for standard fibre network connectivity. The QKD system is capable of continuous and autonomous operation, generating secret keys in real time. Laboratory and field tests were performed and comparisons made with robust InGaAs avalanche photodiodes and superconducting detectors. We report the first real-world implementation of a fully functional QKD system over a 43 dB-loss (150 km) transmission line in the Swisscom fibre optic network, where we obtained average real-time key distribution rates of 2.5 bps over 3 hours.

  1. Generalized Ultrametric Semilattices of Linear Signals

    DTIC Science & Technology

    2014-01-23

    53–73, 1998. [8] John C. Eidson, Edward A. Lee, Slobodan Matic, Sanjit A. Seshia, and Jia Zou. Distributed real-time software for cyber-physical...Theoretical Computer Science, 16(1):5–24, 1981. [37] Yang Zhao, Jie Liu, and Edward A. Lee. A programming model for time-synchronized distributed real...

  2. Real-time measurements of jet aircraft engine exhaust.

    PubMed

    Rogers, Fred; Arnott, Pat; Zielinska, Barbara; Sagebiel, John; Kelly, Kerry E; Wagner, David; Lighty, JoAnn S; Sarofim, Adel F

    2005-05-01

    Particulate-phase exhaust properties from two different types of ground-based jet aircraft engines--high-thrust and turboshaft--were studied with real-time instruments on a portable pallet and additional time-integrated sampling devices. The real-time instruments successfully characterized rapidly changing particulate mass, light absorption, and polycyclic aromatic hydrocarbon (PAH) content. The integrated measurements included particulate-size distributions, PAH, and carbon concentrations for an entire test run (i.e., "run-integrated" measurements). In all cases, the particle-size distributions showed single modes peaking at 20-40 nm diameter. Measurements of exhaust from high-thrust F404 engines showed relatively low light absorption compared with exhaust from a turboshaft engine. Particulate-phase PAH measurements generally varied in phase with both net particulate mass and light-absorbing particulate concentrations. Unexplained response behavior sometimes occurred with the real-time PAH analyzer, although on average the real-time and integrated PAH methods agreed within the same order of magnitude found in earlier investigations.

  3. Real-time hierarchically distributed processing network interaction simulation

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.; Wu, C.

    1987-01-01

    The Telerobot Testbed is a hierarchically distributed processing system which is linked together through a standard, commercial Ethernet. Standard Ethernet systems are primarily designed to manage non-real-time information transfer. Therefore, collisions on the net (i.e., two or more sources attempting to send data at the same time) are managed by randomly rescheduling one of the sources to retransmit at a later time interval. Although acceptable for transmitting noncritical data such as mail, this particular feature is unacceptable for real-time hierarchical command and control systems such as the Telerobot. Data transfer and scheduling simulations, such as token ring, offer solutions to collision management, but do not appropriately characterize real-time data transfer/interactions for robotic systems. Therefore, models like these do not provide a viable simulation environment for understanding real-time network loading. A real-time network loading model is being developed which allows processor-to-processor interactions to be simulated, collisions (and respective probabilities) to be logged, collision-prone areas to be identified, and network control variable adjustments to be reentered as a means of examining and reducing collision-prone regimes that occur in the process of simulating a complete task sequence.
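    A hedged toy illustration of the kind of collision statistics such a loading model must reproduce: if several nodes independently attempt to transmit in a slot, any slot with two or more attempts is a collision, and the simulated rate can be checked against the analytical value 1 - (1-p)^n - np(1-p)^(n-1). The parameters are illustrative, not Telerobot Testbed values.

```python
import random

def collision_rate(n_nodes, p_tx, n_slots=100_000, seed=0):
    """Fraction of slots in which two or more nodes transmit simultaneously."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(n_slots):
        attempts = sum(rng.random() < p_tx for _ in range(n_nodes))
        if attempts >= 2:
            collisions += 1
    return collisions / n_slots

n, p = 5, 0.1
analytic = 1 - (1 - p) ** n - n * p * (1 - p) ** (n - 1)
print(f"simulated={collision_rate(n, p):.4f}  analytic={analytic:.4f}")
```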

  4. Distributed systems status and control

    NASA Technical Reports Server (NTRS)

    Kreidler, David; Vickers, David

    1990-01-01

    Concepts are investigated for an automated status and control system for a distributed processing environment. System characteristics, data requirements for health assessment, data acquisition methods, system diagnosis methods and control methods were investigated in an attempt to determine the high-level requirements for a system which can be used to assess the health of a distributed processing system and implement control procedures to maintain an accepted level of health for the system. A potential concept for automated status and control includes the use of expert system techniques to assess the health of the system, detect and diagnose faults, and initiate or recommend actions to correct the faults. Therefore, this research included the investigation of methods by which expert systems were developed for real-time environments and distributed systems. The focus is on the features required by real-time expert systems and the tools available to develop real-time expert systems.

  5. Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK

    PubMed Central

    2014-01-01

    Background: Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system’s set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This “code-based” approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution. Results: As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations. Conclusions: The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. Construction of the block-diagram does not require high-level programming skills, and the graphical interface lends itself to easy modification and use by non-experts. PMID:24725437

  6. Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK.

    PubMed

    Wang, Kaier; Steyn-Ross, Moira L; Steyn-Ross, D Alistair; Wilson, Marcus T; Sleigh, Jamie W; Shiraishi, Yoichi

    2014-04-11

    Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system's set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This "code-based" approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution. As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations. The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. Construction of the block-diagram does not require high-level programming skills, and the graphical interface lends itself to easy modification and use by non-experts.
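
    For readers who want to try the code-based route that the authors compare against Simulink, the van der Pol system can be integrated in a few lines. The sketch below uses SciPy's solve_ivp rather than Matlab; the damping parameter, initial condition, and time span are arbitrary choices, not values taken from the paper.

      from scipy.integrate import solve_ivp

      def van_der_pol(t, y, mu=1.0):
          """Van der Pol oscillator x'' - mu*(1 - x**2)*x' + x = 0,
          written as a first-order system in y = [x, x']."""
          x, v = y
          return [v, mu * (1.0 - x**2) * v - x]

      sol = solve_ivp(van_der_pol, t_span=(0.0, 30.0), y0=[2.0, 0.0],
                      args=(1.0,), max_step=0.05)

      # Print a few samples of the limit-cycle trajectory.
      for t, x in zip(sol.t[::100], sol.y[0][::100]):
          print(f"t={t:6.2f}  x={x:+.4f}")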

  7. Software Tools for Formal Specification and Verification of Distributed Real-Time Systems

    DTIC Science & Technology

    1994-07-29

    The goals of Phase 1 are to design in detail a toolkit environment based on formal methods for the specification and verification of distributed real-time systems and to evaluate the design. The evaluation of the design includes investigation of both the capability and potential usefulness of the toolkit environment and the feasibility of its implementation.

  8. PERTS: A Prototyping Environment for Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Liu, Jane W. S.; Lin, Kwei-Jay; Liu, C. L.

    1991-01-01

    We discuss an ongoing project to build a Prototyping Environment for Real-Time Systems, called PERTS. PERTS is a unique prototyping environment in that it has (1) tools and performance models for the analysis and evaluation of real-time prototype systems, (2) building blocks for flexible real-time programs and the support system software, (3) basic building blocks of distributed and intelligent real time applications, and (4) an execution environment. PERTS will make the recent and future theoretical advances in real-time system design and engineering readily usable to practitioners. In particular, it will provide an environment for the use and evaluation of new design approaches, for experimentation with alternative system building blocks and for the analysis and performance profiling of prototype real-time systems.

  9. Reducing lumber thickness variation using real-time statistical process control

    Treesearch

    Thomas M. Young; Brian H. Bond; Jan Wiedenbeck

    2002-01-01

    A technology feasibility study for reducing lumber thickness variation was conducted from April 2001 until March 2002 at two sawmills located in the southern U.S. A real-time statistical process control (SPC) system was developed that featured Wonderware human machine interface technology (HMI) with distributed real-time control charts for all sawing centers and...

  10. CORDIC-based digital signal processing (DSP) element for adaptive signal processing

    NASA Astrophysics Data System (ADS)

    Bolstad, Gregory D.; Neeld, Kenneth B.

    1995-04-01

    The High Performance Adaptive Weight Computation (HAWC) processing element is a CORDIC based application specific DSP element that, when connected in a linear array, can perform extremely high throughput (100s of GFLOPS) matrix arithmetic operations on linear systems of equations in real time. In particular, it very efficiently performs the numerically intense computation of optimal least squares solutions for large, over-determined linear systems. Most techniques for computing solutions to these types of problems have used either a hard-wired, non-programmable systolic array approach, or more commonly, programmable DSP or microprocessor approaches. The custom logic methods can be efficient, but are generally inflexible. Approaches using multiple programmable generic DSP devices are very flexible, but suffer from poor efficiency and high computation latencies, primarily due to the large number of DSP devices that must be utilized to achieve the necessary arithmetic throughput. The HAWC processor is implemented as a highly optimized systolic array, yet retains some of the flexibility of a programmable data-flow system, allowing efficient implementation of algorithm variations. This provides flexible matrix processing capabilities that are one to three orders of magnitude less expensive and more dense than the current state of the art, and more importantly, allows a realizable solution to matrix processing problems that were previously considered impractical to physically implement. HAWC has direct applications in RADAR, SONAR, communications, and image processing, as well as in many other types of systems.
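
    To make the CORDIC idea concrete, the following sketch implements a vectoring-mode CORDIC in floating-point Python (the HAWC element itself would use fixed-point hardware, and this example is not taken from it). The shift-and-add micro-rotations recover the magnitude and angle of a 2-vector, which is the primitive behind Givens-rotation QR updates used in least-squares solvers.

      import math

      def cordic_vectoring(x, y, iterations=32):
          """Rotate (x, y) onto the positive x-axis with CORDIC micro-rotations.
          Returns (magnitude, angle) of the input vector; assumes x > 0."""
          gain = 1.0
          z = 0.0
          for i in range(iterations):
              gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
              sigma = 1.0 if y < 0 else -1.0            # choose the rotation that drives y toward 0
              x, y = x - sigma * y * 2.0 ** -i, y + sigma * x * 2.0 ** -i
              z -= sigma * math.atan(2.0 ** -i)
          return x / gain, z                            # gain-corrected magnitude, accumulated angle

      if __name__ == "__main__":
          mag, ang = cordic_vectoring(3.0, 4.0)
          print(mag, math.hypot(3.0, 4.0))              # both ~5.0
          print(ang, math.atan2(4.0, 3.0))              # both ~0.9273 rad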

  11. The Impact of Programming Experience on Successfully Learning Systems Analysis and Design

    ERIC Educational Resources Information Center

    Wong, Wang-chan

    2015-01-01

    In this paper, the author reports the results of an empirical study on the relationship between a student's programming experience and their success in a traditional Systems Analysis and Design (SA&D) class where technical skills such as dataflow analysis and entity relationship data modeling are covered. While it is possible to teach these…

  12. Functional language and data flow architectures

    NASA Technical Reports Server (NTRS)

    Ercegovac, M. D.; Patel, D. R.; Lang, T.

    1983-01-01

    This is a tutorial article about language and architecture approaches for highly concurrent computer systems based on the functional style of programming. The discussion concentrates on the basic aspects of functional languages, and sequencing models such as data-flow, demand-driven and reduction which are essential at the machine organization level. Several examples of highly concurrent machines are described.
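
    As a toy illustration of two of the sequencing models the tutorial contrasts (the example is invented for this listing, not taken from the article): a data-driven node fires as soon as all of its operand tokens are present, while a demand-driven node builds the expression and defers computation until its result is actually requested.

      # Data-driven: an operator fires once every operand token has arrived.
      def fire_when_ready(operands, op):
          if all(v is not None for v in operands):
              return op(*operands)
          return None                                   # missing tokens; the node stays idle

      print(fire_when_ready([2, 3], lambda a, b: a + b))      # fires -> 5
      print(fire_when_ready([2, None], lambda a, b: a + b))   # waits -> None

      # Demand-driven (lazy): nothing is computed until the result is demanded.
      def lazy_add(a_thunk, b_thunk):
          return lambda: a_thunk() + b_thunk()

      expr = lazy_add(lambda: 2, lambda: 3)             # only builds the graph
      print(expr())                                     # the demand triggers evaluation -> 5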

  13. A Simple Example of an SADMT (SDI-Strategic Defense Initiative) Architecture Dataflow Modeling Technique) Architecture Specification. Version 1.5.

    DTIC Science & Technology

    1988-04-21


  14. Super-Resolution of Multi-Pixel and Sub-Pixel Images for the SDI

    DTIC Science & Technology

    1993-06-08

    where the phase of the transmitted signal is not needed. The Wigner-Ville distribution (WVD) of a real signal s(t), associated with the complex analytic signal z(t), is a time-frequency distribution defined as W(t,f) = ∫ z(t + τ/2) z*(t − τ/2) exp(−i2πfτ) dτ (45); see B. Boashash, O. P. Kenny and H. J. Whitehouse, "Radar imaging using the Wigner-Ville distribution", in Real-Time Signal Processing, J. P. Letellier ... Note that the WVD is the double Fourier ...

  15. The R-Shell approach - Using scheduling agents in complex distributed real-time systems

    NASA Technical Reports Server (NTRS)

    Natarajan, Swaminathan; Zhao, Wei; Goforth, Andre

    1993-01-01

    Large, complex real-time systems such as space and avionics systems are extremely demanding in their scheduling requirements. Current OS design approaches are quite limited in the capabilities they provide for task scheduling. Typically, they simply implement a particular uniprocessor scheduling strategy and do not provide any special support for network scheduling, overload handling, fault tolerance, distributed processing, etc. Our design of the R-Shell real-time environment facilitates the implementation of a variety of sophisticated but efficient scheduling strategies, including incorporation of all these capabilities. This is accomplished by the use of scheduling agents which reside in the application run-time environment and are responsible for coordinating the scheduling of the application.

  16. Precision Timed Infrastructure: Design Challenges

    DTIC Science & Technology

    2013-09-19

    Fig. 1 gives a conceptual overview of the translation steps between timing constructs, clock synchronization and communication, PRET machines, and other platforms. Cited works include A. Benveniste and G. Berry, "The Synchronous Approach to Reactive and Real-Time Systems," Proceedings of the IEEE, 79(9):1270-1282, 1991, and a programming model for time-synchronized distributed real-time systems presented at the 13th IEEE Real Time and Embedded Technology and Applications Symposium (RTAS'07), 2007.

  17. Coordinated scheduling for dynamic real-time systems

    NASA Technical Reports Server (NTRS)

    Natarajan, Swaminathan; Zhao, Wei

    1994-01-01

    In this project, we addressed issues in coordinated scheduling for dynamic real-time systems. In particular, we concentrated on design and implementation of a new distributed real-time system called R-Shell. The design objective of R-Shell is to provide computing support for space programs that have large, complex, fault-tolerant distributed real-time applications. In R-Shell, the approach is based on the concept of scheduling agents, which reside in the application run-time environment and are customized to provide just those resource management functions which are needed by the specific application. With this approach, we avoid the need for a sophisticated OS which provides a variety of generalized functionality, while still not burdening application programmers with heavy responsibility for resource management. In this report, we discuss the R-Shell approach, summarize the achievements of the project, and describe a preliminary prototype of the R-Shell system.
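
    As a loose sketch of the scheduling-agent idea, the example below defines a pluggable agent object that a run-time environment could consult for the next task to dispatch. The earliest-deadline-first policy, the task names, and the interface are all invented for illustration; none of this is R-Shell code.

      import heapq
      from dataclasses import dataclass, field

      @dataclass(order=True)
      class Task:
          deadline: float
          name: str = field(compare=False)
          work: float = field(compare=False)            # execution time required

      class EDFSchedulingAgent:
          """Illustrative agent: orders ready tasks by earliest deadline."""
          def __init__(self):
              self._ready = []
          def submit(self, task):
              heapq.heappush(self._ready, task)
          def next_task(self):
              return heapq.heappop(self._ready) if self._ready else None

      agent = EDFSchedulingAgent()
      agent.submit(Task(deadline=10.0, name="telemetry", work=2.0))
      agent.submit(Task(deadline=4.0, name="attitude-control", work=1.0))
      agent.submit(Task(deadline=7.0, name="health-check", work=0.5))

      clock = 0.0
      while (task := agent.next_task()) is not None:
          clock += task.work
          status = "met" if clock <= task.deadline else "MISSED"
          print(f"{task.name:16s} finishes at t={clock:4.1f}  deadline={task.deadline:4.1f}  {status}")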

  18. Injection moulded microneedle sensor for real-time wireless pH monitoring.

    PubMed

    Mirza, Khalid B; Zuliani, Claudio; Hou, Benjamin; Ng, Fu Siong; Peters, Nicholas S; Toumazou, Christofer

    2017-07-01

    This paper describes the development of an array of individually addressable pH sensitive microneedles using injection moulding and their integration within a portable device for real-time wireless recording of pH distributions in biological samples. The fabricated microneedles are subjected to gold patterning followed by electrodeposition of iridium oxide to sensitize them to 0.07 units of pH change. Miniaturised electronics suitable for the sensors readout, analog-to-digital conversion and wireless transmission of the potentiometric data are embodied within the device, enabling it to measure real-time pH of soft biological samples such as muscles. In this paper, real-time recording of the cardiac pH distribution, during ischemia followed by reperfusion cycles in cardiac muscles of male Wistar rats has been demonstrated by using the microneedle array.

  19. The StarLite Project

    DTIC Science & Technology

    1988-09-01

    The current prototyping tool also provides a multiversion data object control mechanism. In a real-time database system, synchronization protocols ... data in distributed real-time systems. The semantic information of read-only transactions is exploited for improved efficiency, and a multiversion ... are discussed. Index Terms: distributed system, replication, read-only transaction, consistency, multiversion.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  1. A computer assisted intelligent storm outage evaluator for power distribution systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balakrishnan, R.; Pahwa, A.

    1990-07-01

    The lower voltage part of the power distribution system (primary and secondary sub-systems) does not have the provision for real-time status feedback, and as a result evaluation of outages is an extremely difficult task, especially during system emergencies caused by tornadoes and ice-storms. In this paper, a knowledge based approach is proposed for evaluation of storm related outages in the distribution systems. At the outset, binary voltage sensors capable of transmitting the real-time voltage on/off symptoms are recommended to be installed at strategic locations in the distribution system.

  2. Commanding and Controlling Satellite Clusters (IEEE Intelligent Systems, November/December 2000)

    DTIC Science & Technology

    2000-01-01

    real-time operating system, a message-passing OS well suited for distributed ... [table legend listing ground and flight processors, ObjectAgent, RTOS (real-time operating system), SCL (space command language), RDMS (relational database management system), and TS-21] ... engineer with Princeton Satellite Systems. She is working with others to develop ObjectAgent software to run on the OSE Real-Time Operating System.

  3. Generation of real-time global ionospheric map based on the global GNSS stations with only a sparse distribution

    NASA Astrophysics Data System (ADS)

    Li, Zishen; Wang, Ningbo; Li, Min; Zhou, Kai; Yuan, Yunbin; Yuan, Hong

    2017-04-01

    The Earth's ionosphere is part of the atmosphere stretching from an altitude of about 50 km to more than 1000 km. When the Global Navigation Satellite System (GNSS) signal emitted from a satellite travels through the ionosphere before it reaches a receiver on or near the Earth's surface, the signal is significantly delayed by the ionosphere, and this delay has been considered one of the major errors in GNSS measurement. The real-time global ionospheric map calculated from the real-time data obtained by global stations is an essential means of mitigating the ionospheric delay for real-time positioning. The generation of an accurate global ionospheric map generally depends on a dense global distribution of stations; however, the number of global stations that can produce real-time data is very limited at present, which makes it very difficult to generate a highly accurate global ionospheric map using only the current stations with real-time data. In view of this, a new approach is proposed for calculating the real-time global ionospheric map based only on the current stations with real-time data. This new approach is developed on the basis of the post-processed and the one-day-predicted global ionospheric maps from our research group. The performance of the proposed approach is tested with the current global stations providing real-time data, and the test results are compared with the IGS-released final global ionospheric map products.

  4. Real-Time MENTAT programming language and architecture

    NASA Technical Reports Server (NTRS)

    Grimshaw, Andrew S.; Silberman, Ami; Liu, Jane W. S.

    1989-01-01

    Real-time MENTAT, a programming environment designed to simplify the task of programming real-time applications in distributed and parallel environments, is described. It is based on the same data-driven computation model and object-oriented programming paradigm as MENTAT. It provides an easy-to-use mechanism to exploit parallelism, language constructs for the expression and enforcement of timing constraints, and run-time support for scheduling and executing real-time programs. The real-time MENTAT programming language is an extended C++. The extensions are added to facilitate automatic detection of data flow and generation of data flow graphs, to express the timing constraints of individual granules of computation, and to provide scheduling directives for the runtime system. A high-level view of the real-time MENTAT system architecture and programming language constructs is provided.

  5. Virtual time and time warp on the JPL hypercube. [operating system implementation for distributed simulation

    NASA Technical Reports Server (NTRS)

    Jefferson, David; Beckman, Brian

    1986-01-01

    This paper describes the concept of virtual time and its implementation in the Time Warp Operating System at the Jet Propulsion Laboratory. Virtual time is a distributed synchronization paradigm that is appropriate for distributed simulation, database concurrency control, real time systems, and coordination of replicated processes. The Time Warp Operating System is targeted toward the distributed simulation application and runs on a 32-node JPL Mark II Hypercube.
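
    The essence of the Time Warp mechanism, optimistic execution with state saving and rollback when a "straggler" message arrives in a process's virtual past, can be shown with a toy sketch. The class, state variable, and message format below are invented for illustration and are not the JPL implementation.

      import bisect

      class TimeWarpProcess:
          """Toy optimistic process: saves state and rolls back on stragglers."""
          def __init__(self):
              self.state = 0
              self.lvt = 0                              # local virtual time
              self.inputs = []                          # every message received, sorted by timestamp
              self.snapshots = [(0, 0)]                 # (virtual_time, state) history

          def _apply(self, timestamp, value):
              self.state += value                       # the "simulation" step
              self.lvt = timestamp
              self.snapshots.append((timestamp, self.state))

          def receive(self, timestamp, value):
              bisect.insort(self.inputs, (timestamp, value))
              if timestamp >= self.lvt:                 # message in the future: process optimistically
                  self._apply(timestamp, value)
                  return
              # Straggler: restore the latest snapshot earlier than the straggler,
              # then re-execute every input message from that point onward.
              while len(self.snapshots) > 1 and self.snapshots[-1][0] >= timestamp:
                  self.snapshots.pop()
              self.lvt, self.state = self.snapshots[-1]
              replay = [m for m in self.inputs if m[0] >= timestamp]
              print(f"rollback to vt={self.lvt}, replaying {len(replay)} message(s)")
              for ts, v in replay:
                  self._apply(ts, v)

      p = TimeWarpProcess()
      p.receive(5, 10)                                  # processed optimistically
      p.receive(8, 1)                                   # processed optimistically
      p.receive(3, 100)                                 # straggler: rollback and replay
      print("final virtual time:", p.lvt, "state:", p.state)   # vt=8, state=111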

  6. Control and performance of the AGS and AGS Booster Main Magnet Power Supplies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reece, R.K.; Casella, R.; Culwick, B.

    1993-06-01

    Techniques for precision control of the main magnet power supplies for the AGS and AGS Booster synchrotron will be discussed. Both synchrotrons are designed to operate in a Pulse-to-Pulse Modulation (PPM) environment with a Supercycle Generator defining and distributing global timing events for the AGS Facility. Details of modelling, real-time feedback and feedforward systems, generation and distribution of real time field data, operational parameters and an overview of performance for both machines are included.

  7. Control and performance of the AGS and AGS Booster Main Magnet Power Supplies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reece, R.K.; Casella, R.; Culwick, B.

    1993-01-01

    Techniques for precision control of the main magnet power supplies for the AGS and AGS Booster synchrotron will be discussed. Both synchrotrons are designed to operate in a Pulse-to-Pulse Modulation (PPM) environment with a Supercycle Generator defining and distributing global timing events for the AGS Facility. Details of modelling, real-time feedback and feedforward systems, generation and distribution of real time field data, operational parameters and an overview of performance for both machines are included.

  8. D-MSR: a distributed network management scheme for real-time monitoring and process control applications in wireless industrial automation.

    PubMed

    Zand, Pouria; Dilo, Arta; Havinga, Paul

    2013-06-27

    Current wireless technologies for industrial applications, such as WirelessHART and ISA100.11a, use a centralized management approach where a central network manager handles the requirements of the static network. However, such a centralized approach has several drawbacks. For example, it cannot cope with dynamicity/disturbance in large-scale networks in a real-time manner and it incurs a high communication overhead and latency for exchanging management traffic. In this paper, we therefore propose a distributed network management scheme, D-MSR. It enables the network devices to join the network, schedule their communications, establish end-to-end connections by reserving the communication resources for addressing real-time requirements, and cope with network dynamicity (e.g., node/edge failures) in a distributed manner. According to our knowledge, this is the first distributed management scheme based on IEEE 802.15.4e standard, which guides the nodes in different phases from joining until publishing their sensor data in the network. We demonstrate via simulation that D-MSR can address real-time and reliable communication as well as the high throughput requirements of industrial automation wireless networks, while also achieving higher efficiency in network management than WirelessHART, in terms of delay and overhead.

  9. D-MSR: A Distributed Network Management Scheme for Real-Time Monitoring and Process Control Applications in Wireless Industrial Automation

    PubMed Central

    Zand, Pouria; Dilo, Arta; Havinga, Paul

    2013-01-01

    Current wireless technologies for industrial applications, such as WirelessHART and ISA100.11a, use a centralized management approach where a central network manager handles the requirements of the static network. However, such a centralized approach has several drawbacks. For example, it cannot cope with dynamicity/disturbance in large-scale networks in a real-time manner and it incurs a high communication overhead and latency for exchanging management traffic. In this paper, we therefore propose a distributed network management scheme, D-MSR. It enables the network devices to join the network, schedule their communications, establish end-to-end connections by reserving the communication resources for addressing real-time requirements, and cope with network dynamicity (e.g., node/edge failures) in a distributed manner. According to our knowledge, this is the first distributed management scheme based on IEEE 802.15.4e standard, which guides the nodes in different phases from joining until publishing their sensor data in the network. We demonstrate via simulation that D-MSR can address real-time and reliable communication as well as the high throughput requirements of industrial automation wireless networks, while also achieving higher efficiency in network management than WirelessHART, in terms of delay and overhead. PMID:23807687

  10. Resource Management for the Tagged Token Dataflow Architecture.

    DTIC Science & Technology

    1985-01-01

    completely rigorous, formulation of the U-interpreter. The graph schemata presented here differ slightly from those presented in the references ...

  11. Development and exemplification of a model for Teacher Assessment in Primary Science

    NASA Astrophysics Data System (ADS)

    Davies, D. J.; Earle, S.; McMahon, K.; Howe, A.; Collier, C.

    2017-09-01

    The Teacher Assessment in Primary Science project is funded by the Primary Science Teaching Trust and based at Bath Spa University. The study aims to develop a whole-school model of valid, reliable and manageable teacher assessment to inform practice and make a positive impact on primary-aged children's learning in science. The model is based on a data-flow 'pyramid' (analogous to the flow of energy through an ecosystem), whereby the rich formative assessment evidence gathered in the classroom is summarised for monitoring, reporting and evaluation purposes [Nuffield Foundation. (2012). Developing policy, principles and practice in primary school science assessment. London: Nuffield Foundation]. Using a design-based research (DBR) methodology, the authors worked in collaboration with teachers from project schools and other expert groups to refine, elaborate, validate and operationalise the data-flow 'pyramid' model, resulting in the development of a whole-school self-evaluation tool. In this paper, we argue that a DBR approach to theory-building and school improvement drawing upon teacher expertise has led to the identification, adaptation and successful scaling up of a promising approach to school self-evaluation in relation to assessment in science.

  12. Highlights of X-Stack ExM Deliverable Swift/T

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wozniak, Justin M.

    Swift/T is a key success from the ExM: System support for extreme-scale, many-task applications X-Stack project, which proposed to use concurrent dataflow as an innovative programming model to exploit extreme parallelism in exascale computers. The Swift/T component of the project reimplemented the Swift language from scratch so that applications which compose scientific modules together can be built and run on available petascale computers (Blue Gene, Cray). Swift/T does this via a new compiler and runtime that generate and execute the application as an MPI program. We assume that mission-critical emerging exascale applications will be composed as scalable applications using existing software components, connected by data dependencies. Developers wrap native code fragments using a higher-level language, then build composite applications to form a computational experiment. This exemplifies hierarchical concurrency: lower-level messaging libraries are used for fine-grained parallelism; high-level control is used for inter-task coordination. These patterns are best expressed with dataflow, but static DAGs (i.e., other workflow languages) limit the applications that can be built; they do not provide the expressiveness of Swift, such as conditional execution, iteration, and recursive functions.
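
    The compositional style described here, existing components wrapped as tasks and connected by data dependencies, with conditionals and iteration expressed as ordinary control flow rather than a static DAG, can be imitated in miniature with Python futures. This is only an analogue for illustration, not Swift/T itself, and the task functions are placeholders.

      from concurrent.futures import ThreadPoolExecutor

      def simulate(parameter):
          """Stand-in for a wrapped native simulation component."""
          return parameter * parameter

      def analyze(values):
          """Stand-in for a downstream analysis component."""
          return sum(values) / len(values)

      with ThreadPoolExecutor(max_workers=4) as pool:
          # Fan out: leaf tasks have no dependencies and run concurrently.
          sim_futures = [pool.submit(simulate, p) for p in range(8)]

          # Data dependence: the analysis task consumes the simulation results.
          analysis = pool.submit(analyze, [f.result() for f in sim_futures])

          # Conditional execution expressed as ordinary control flow over results.
          if analysis.result() > 10:
              refined = pool.submit(simulate, analysis.result())
              print("refined result:", refined.result())
          else:
              print("mean was small:", analysis.result())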

  13. Reflexive reasoning for distributed real-time systems

    NASA Technical Reports Server (NTRS)

    Goldstein, David

    1994-01-01

    This paper discusses the implementation and use of reflexive reasoning in real-time, distributed knowledge-based applications. Recently there has been a great deal of interest in agent-oriented systems. Implementing such systems implies a mechanism for sharing knowledge, goals and other state information among the agents. Our techniques facilitate an agent examining both state information about other agents and the parameters of the knowledge-based system shell implementing its reasoning algorithms. The shell implementing the reasoning is the Distributed Artificial Intelligence Toolkit, which is a derivative of CLIPS.

  14. Real-time flight test data distribution and display

    NASA Technical Reports Server (NTRS)

    Nesel, Michael C.; Hammons, Kevin R.

    1988-01-01

    Enhancements to the real-time processing and display systems of the NASA Western Aeronautical Test Range are described. Display processing has been moved out of the telemetry and radar acquisition processing systems super-minicomputers into user/client interactive graphic workstations. Real-time data is provided to the workstations by way of Ethernet. Future enhancement plans include use of fiber optic cable to replace the Ethernet.

  15. Venture Evaluation and Review Technique (VERT). Users’/Analysts’ Manual

    DTIC Science & Technology

    1979-10-01

    real world. Additionally, activity processing times could be entered as a normal, uniform or triangular distribution. Activity times can also be ... work or tasks, or if the unit activities are such abstractions of the real world that the estimation of the time, cost and performance parameters for ... utilized in that constraining capacity. 7444 The network being processed has passed all the previous error checks. It currently has a real time

  16. The embedded operating system project

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.

    1985-01-01

    The design and construction of embedded operating systems for real-time advanced aerospace applications was investigated. The applications require reliable operating system support that must accommodate computer networks. Problems that arise in the construction of such operating systems, reconfiguration, consistency and recovery in a distributed system, and the issues of real-time processing are reported. A thesis that provides theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based system is included. The following items are addressed: (1) atomic actions and fault-tolerance issues; (2) operating system structure; (3) program development; (4) a reliable compiler for path Pascal; and (5) mediators, a mechanism for scheduling distributed system processes.

  17. Computer program compatible with a laser nephelometer

    NASA Technical Reports Server (NTRS)

    Paroskie, R. M.; Blau, H. H., Jr.; Blinn, J. C., III

    1975-01-01

    The laser nephelometer data system was updated to provide magnetic tape recording of data, and real time or near real time processing of data to provide particle size distribution and liquid water content. Digital circuits were provided to interface the laser nephelometer to a Data General Nova 1200 minicomputer. Communications are via a teletypewriter. A dual Linc Magnetic Tape System is used for program storage and data recording. Operational programs utilize the Data General Real-Time Operating System (RTOS) and the ERT AIRMAP Real-Time Operating System (ARTS). The programs provide for acquiring data from the laser nephelometer, acquiring data from auxiliary sources, keeping time, performing real time calculations, recording data and communicating with the teletypewriter.

  18. Field deployment to quantify the value of real-time information by integrating driver routing decisions and route assignment strategies.

    DOT National Transportation Integrated Search

    2014-05-01

    Advanced Traveler Information Systems (ATIS) have been proposed as a mechanism to generate and distribute real-time travel information to drivers for the purpose of improving travel experience, represented by experienced travel time, and enhancing ...

  19. Canine spontaneous glioma: A translational model system for convection-enhanced delivery

    PubMed Central

    Dickinson, Peter J.; LeCouteur, Richard A.; Higgins, Robert J.; Bringas, John R.; Larson, Richard F.; Yamashita, Yoji; Krauze, Michal T.; Forsayeth, John; Noble, Charles O.; Drummond, Daryl C.; Kirpotin, Dmitri B.; Park, John W.; Berger, Mitchel S.; Bankiewicz, Krystof S.

    2010-01-01

    Canine spontaneous intracranial tumors bear striking similarities to their human tumor counterparts and have the potential to provide a large animal model system for more realistic validation of novel therapies typically developed in small rodent models. We used spontaneously occurring canine gliomas to investigate the use of convection-enhanced delivery (CED) of liposomal nanoparticles, containing topoisomerase inhibitor CPT-11. To facilitate visualization of intratumoral infusions by real-time magnetic resonance imaging (MRI), we included identically formulated liposomes loaded with Gadoteridol. Real-time MRI defined distribution of infusate within both tumor and normal brain tissues. The most important limiting factor for volume of distribution within tumor tissue was the leakage of infusate into ventricular or subarachnoid spaces. Decreased tumor volume, tumor necrosis, and modulation of tumor phenotype correlated with volume of distribution of infusate (Vd), infusion location, and leakage as determined by real-time MRI and histopathology. This study demonstrates the potential for canine spontaneous gliomas as a model system for the validation and development of novel therapeutic strategies for human brain tumors. Data obtained from infusions monitored in real time in a large, spontaneous tumor may provide information, allowing more accurate prediction and optimization of infusion parameters. Variability in Vd between tumors strongly suggests that real-time imaging should be an essential component of CED therapeutic trials to allow minimization of inappropriate infusions and accurate assessment of clinical outcomes. PMID:20488958

  20. Detection of infusate leakage in the brain using real-time imaging of convection-enhanced delivery.

    PubMed

    Varenika, Vanja; Dickinson, Peter; Bringas, John; LeCouteur, Richard; Higgins, Robert; Park, John; Fiandaca, Massimo; Berger, Mitchel; Sampson, John; Bankiewicz, Krystof

    2008-11-01

    The authors have shown that convection-enhanced delivery (CED) of gadoteridol-loaded liposomes (GDLs) into different regions of normal monkey brain results in predictable, widespread distribution of this tracking agent as detected by real-time MR imaging. They also have found that this tracking technique allows monitoring of the distribution of similar nanosized agents such as therapeutic liposomes and viral vectors. A limitation of this procedure is the unexpected leakage of liposomes out of targeted parenchyma or malignancies into sulci and ventricles. The aim of the present study was to evaluate the efficacy of CED after the onset of these types of leakage. The authors documented this phenomenon in a study of 5 nonhuman primates and 7 canines, comprising 54 CED infusion sessions. Approximately 20% of these infusions resulted in leakage into cerebral ventricles or sulci. All of the infusions and leakage events were monitored with real-time MR imaging. The authors created volume-distributed versus volume-infused graphs for each infusion session. These graphs revealed the rate of distribution of GDL over the course of each infusion and allowed the authors to evaluate the progress of CED before and after leakage. The distribution of therapeutics within the target structure ceased to increase or resulted in significant attenuation after the onset of leakage. An analysis of the cases in this study revealed that leakage undermines the efficacy of CED. These findings reiterate the importance of real-time MR imaging visualization during CED to ensure an accurate, robust distribution of therapeutic agents.

  1. A multiprocessing architecture for real-time monitoring

    NASA Technical Reports Server (NTRS)

    Schmidt, James L.; Kao, Simon M.; Read, Jackson Y.; Weitzenkamp, Scott M.; Laffey, Thomas J.

    1988-01-01

    A multitasking architecture for performing real-time monitoring and analysis using knowledge-based problem solving techniques is described. To handle asynchronous inputs and perform in real time, the system consists of three or more distributed processes which run concurrently and communicate via a message passing scheme. The Data Management Process acquires, compresses, and routes the incoming sensor data to other processes. The Inference Process consists of a high performance inference engine that performs a real-time analysis on the state and health of the physical system. The I/O Process receives sensor data from the Data Management Process and status messages and recommendations from the Inference Process, updates its graphical displays in real time, and acts as the interface to the console operator. The distributed architecture has been interfaced to an actual spacecraft (NASA's Hubble Space Telescope) and is able to process the incoming telemetry in real time (i.e., several hundred data changes per second). The system is being used in two locations for different purposes: (1) in Sunnyvale, California, at the Space Telescope Test Control Center, it is used in the preflight testing of the vehicle; and (2) in Greenbelt, Maryland, at NASA/Goddard, it is being used on an experimental basis in flight operations for health and safety monitoring.
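
    A minimal analogue of the three-process structure described above can be sketched with Python's multiprocessing queues. It is purely illustrative: the process names follow the abstract, but the synthetic samples and the trivial threshold test stand in for real telemetry acquisition and the knowledge-based inference engine.

      from multiprocessing import Process, Queue

      def data_management(raw_q):
          """Acquire (here: synthesize) sensor samples and route them onward."""
          for sample in [5.0, 7.5, 12.3, 6.1, None]:    # None = shutdown sentinel
              raw_q.put(sample)

      def inference(raw_q, status_q):
          """Assess each sample and emit a status message (toy threshold rule)."""
          while (sample := raw_q.get()) is not None:
              status_q.put("ALARM" if sample > 10.0 else "nominal")
          status_q.put(None)

      def io_display(status_q):
          """Console/display process: show status messages as they arrive."""
          while (msg := status_q.get()) is not None:
              print("status:", msg)

      if __name__ == "__main__":
          raw_q, status_q = Queue(), Queue()
          procs = [Process(target=data_management, args=(raw_q,)),
                   Process(target=inference, args=(raw_q, status_q)),
                   Process(target=io_display, args=(status_q,))]
          for p in procs:
              p.start()
          for p in procs:
              p.join()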

  2. Low-cost high performance distributed data storage for multi-channel observations

    NASA Astrophysics Data System (ADS)

    Liu, Ying-bo; Wang, Feng; Deng, Hui; Ji, Kai-fan; Dai, Wei; Wei, Shou-lin; Liang, Bo; Zhang, Xiao-li

    2015-10-01

    The New Vacuum Solar Telescope (NVST) is a 1-m solar telescope that aims to observe the fine structures in both the photosphere and the chromosphere of the Sun. The observational data acquired simultaneously from one channel for the chromosphere and two channels for the photosphere bring great challenges to the data storage of NVST. The multi-channel instruments of NVST, including scientific cameras and multi-band spectrometers, generate at least 3 terabytes of data per day and require high access performance while storing massive short-exposure images. It is worth studying and implementing a storage system for NVST which balances data availability, access performance, and development cost. In this paper, we build a distributed data storage system (DDSS) for NVST and then deeply evaluate the availability of real-time data storage in a distributed computing environment. The experimental results show that two factors, the number of concurrent reads/writes and the file size, are critically important for improving the performance of data access in a distributed environment. Referring to these two factors, three strategies for storing FITS files are presented and implemented to ensure the access performance of the DDSS under conditions of simultaneous multi-host writes and reads. The real applications of the DDSS prove that the system is capable of meeting the requirements of NVST real-time high-performance observational data storage. Our study of the DDSS is the first attempt for modern astronomical telescope systems to store real-time observational data on a low-cost distributed system. The research results and corresponding techniques of the DDSS provide a new option for designing real-time massive astronomical data storage systems and will be a reference for future astronomical data storage.
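
    The two factors the authors single out, the degree of read/write concurrency and the file size, are easy to probe with a small benchmark sketch such as the one below. It is illustrative only: it writes temporary files on the local filesystem rather than on the NVST distributed store, and the file sizes, counts, and thread counts are arbitrary.

      import os, tempfile, time
      from concurrent.futures import ThreadPoolExecutor

      def write_files(directory, n_files, file_size, n_threads):
          """Write n_files of file_size bytes with n_threads writers; return MB/s."""
          payload = os.urandom(file_size)
          def writer(i):
              with open(os.path.join(directory, f"frame_{i:04d}.fits"), "wb") as f:
                  f.write(payload)
          start = time.perf_counter()
          with ThreadPoolExecutor(max_workers=n_threads) as pool:
              list(pool.map(writer, range(n_files)))
          elapsed = time.perf_counter() - start
          return n_files * file_size / elapsed / 1e6

      if __name__ == "__main__":
          with tempfile.TemporaryDirectory() as d:
              for size in (256 * 1024, 2 * 1024 * 1024):        # hypothetical frame sizes
                  for threads in (1, 4, 16):
                      rate = write_files(d, 32, size, threads)
                      print(f"size={size // 1024:5d} KiB  threads={threads:2d}  {rate:8.1f} MB/s")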

  3. A curriculum for real-time computer and control systems engineering

    NASA Technical Reports Server (NTRS)

    Halang, Wolfgang A.

    1990-01-01

    An outline of a syllabus for the education of real-time-systems engineers is given. This comprises the treatment of basic concepts, real-time software engineering, and programming in high-level real-time languages, real-time operating systems with special emphasis on such topics as task scheduling, hardware architectures, and especially distributed automation structures, process interfacing, system reliability and fault-tolerance, and integrated project development support systems. Accompanying course material and laboratory work are outlined, and suggestions for establishing a laboratory with advanced, but low-cost, hardware and software are provided. How the curriculum can be extended into a second semester is discussed, and areas for possible graduate research are listed. The suitable selection of a high-level real-time language and supporting operating system for teaching purposes is considered.

  4. Real time testing of intelligent relays for synchronous distributed generation islanding detection

    NASA Astrophysics Data System (ADS)

    Zhuang, Davy

    As electric power systems continue to grow to meet ever-increasing energy demand, their security, reliability, and sustainability requirements also become more stringent. The deployment of distributed energy resources (DER), including generation and storage, in conventional passive distribution feeders gives rise to integration problems involving protection and unintentional islanding. Unintentional islands must be detected and the distributed generators shut down for safety reasons when they are disconnected or isolated from the main feeder, as distributed generator islanding may create hazards to utility and third-party personnel and possibly damage the distribution system infrastructure, including the distributed generators themselves. This thesis compares several key performance indicators of a newly developed intelligent islanding detection relay against islanding detection devices currently used by the industry. The intelligent relay employs multivariable analysis and data mining methods to arrive at decision trees that contain both the protection handles and the settings. A test methodology is developed to assess the performance of these intelligent relays in a real-time simulation environment using a generic model based on a real-life distribution feeder. The methodology demonstrates the applicability and potential advantages of the intelligent relay by running a large number of tests reflecting a multitude of system operating conditions. The testing indicates that the intelligent relay often outperforms the frequency, voltage, and rate-of-change-of-frequency relays currently used for islanding detection, while respecting the islanding detection time constraints imposed by standing distributed generator interconnection guidelines.

  5. A Distribution-class Locational Marginal Price (DLMP) Index for Enhanced Distribution Systems

    NASA Astrophysics Data System (ADS)

    Akinbode, Oluwaseyi Wemimo

    The smart grid initiative is the impetus behind changes that are expected to culminate into an enhanced distribution system with the communication and control infrastructure to support advanced distribution system applications and resources such as distributed generation, energy storage systems, and price responsive loads. This research proposes a distribution-class analog of the transmission LMP (DLMP) as an enabler of the advanced applications of the enhanced distribution system. The DLMP is envisioned as a control signal that can incentivize distribution system resources to behave optimally in a manner that benefits economic efficiency and system reliability and that can optimally couple the transmission and the distribution systems. The DLMP is calculated from a two-stage optimization problem; a transmission system OPF and a distribution system OPF. An iterative framework that ensures accurate representation of the distribution system's price sensitive resources for the transmission system problem and vice versa is developed and its convergence problem is discussed. As part of the DLMP calculation framework, a DCOPF formulation that endogenously captures the effect of real power losses is discussed. The formulation uses piecewise linear functions to approximate losses. This thesis explores, with theoretical proofs, the breakdown of the loss approximation technique when non-positive DLMPs/LMPs occur and discusses a mixed integer linear programming formulation that corrects the breakdown. The DLMP is numerically illustrated in traditional and enhanced distribution systems and its superiority to contemporary pricing mechanisms is demonstrated using price responsive loads. Results show that the impact of the inaccuracy of contemporary pricing schemes becomes significant as flexible resources increase. At high elasticity, aggregate load consumption deviated from the optimal consumption by up to about 45 percent when using a flat or time-of-use rate. Individual load consumption deviated by up to 25 percent when using a real-time price. The superiority of the DLMP is more pronounced when important distribution network conditions are not reflected by contemporary prices. The individual load consumption incentivized by the real-time price deviated by up to 90 percent from the optimal consumption in a congested distribution network. While the DLMP internalizes congestion management, the consumption incentivized by the real-time price caused overloads.
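
    As a minimal, self-contained illustration of the nodal-price idea behind the DLMP (a toy two-bus example with made-up costs and limits, not the thesis's DCOPF-with-losses formulation), the sketch below dispatches two generators with a linear program and recovers the price at the load bus as the change in total cost per additional unit of demand. Congestion on the single line is what makes the price at the load bus jump from the cheap unit's cost to the expensive unit's cost.

      from scipy.optimize import linprog

      def dispatch_cost(demand, line_limit, costs=(20.0, 50.0), capacities=(100.0, 100.0)):
          """Min-cost dispatch: cheap generator at bus 1, expensive generator at bus 2,
          all load at bus 2, so every MW from generator 1 flows over the single line."""
          A_eq = [[1.0, 1.0]]; b_eq = [demand]           # power balance
          A_ub = [[1.0, 0.0]]; b_ub = [line_limit]       # line flow limit on generator 1's output
          res = linprog(list(costs), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                        bounds=[(0, capacities[0]), (0, capacities[1])], method="highs")
          return res.fun

      def price_at_load_bus(demand, line_limit, eps=1e-3):
          """Nodal price = marginal cost of serving one more unit at the load bus."""
          return (dispatch_cost(demand + eps, line_limit) - dispatch_cost(demand, line_limit)) / eps

      for demand in (50.0, 90.0):                        # uncongested vs. congested cases
          print(f"demand={demand:5.1f}  price at load bus = {price_at_load_bus(demand, 60.0):6.2f} $/MWh")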

  6. Droplet digital polymerase chain reaction (PCR) outperforms real-time PCR in the detection of environmental DNA from an invasive fish species.

    PubMed

    Doi, Hideyuki; Takahara, Teruhiko; Minamoto, Toshifumi; Matsuhashi, Saeko; Uchii, Kimiko; Yamanaka, Hiroki

    2015-05-05

    Environmental DNA (eDNA) has been used to investigate species distributions in aquatic ecosystems. Most of these studies use real-time polymerase chain reaction (PCR) to detect eDNA in water; however, PCR amplification is often inhibited by the presence of organic and inorganic matter. In droplet digital PCR (ddPCR), the sample is partitioned into thousands of nanoliter droplets, and PCR inhibition may be reduced by the detection of the end-point of PCR amplification in each droplet, independent of the amplification efficiency. In addition, real-time PCR reagents can affect PCR amplification and consequently alter detection rates. We compared the effectiveness of ddPCR and real-time PCR using two different PCR reagents for the detection of the eDNA from invasive bluegill sunfish, Lepomis macrochirus, in ponds. We found that ddPCR had higher detection rates of bluegill eDNA in pond water than real-time PCR with either of the PCR reagents, especially at low DNA concentrations. Limits of DNA detection, which were tested by spiking the bluegill DNA to DNA extracts from the ponds containing natural inhibitors, found that ddPCR had higher detection rate than real-time PCR. Our results suggest that ddPCR is more resistant to the presence of PCR inhibitors in field samples than real-time PCR. Thus, ddPCR outperforms real-time PCR methods for detecting eDNA to document species distributions in natural habitats, especially in habitats with high concentrations of PCR inhibitors.
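
    For context on how droplet counts become a concentration, the standard Poisson correction used in ddPCR is a one-line calculation. The sketch below uses made-up droplet counts and a commonly cited ~0.85 nL droplet volume; it is textbook ddPCR arithmetic, not an analysis taken from this study.

      import math

      def ddpcr_concentration(positive, total, droplet_volume_nl=0.85):
          """Estimate target copies per microliter from droplet counts,
          assuming targets are Poisson-distributed across droplets."""
          p = positive / total
          lam = -math.log(1.0 - p)                      # mean copies per droplet
          return lam / droplet_volume_nl * 1000.0       # copies per microliter

      # Hypothetical run: 1,200 positive droplets out of 15,000 accepted droplets.
      print(f"{ddpcr_concentration(1200, 15000):.1f} copies/uL")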

  7. Can Subjects be Guided to Optimal Decisions The Use of a Real-Time Training Intervention Model

    DTIC Science & Technology

    2016-06-01

    execution of the task and may then be analyzed to determine if there is correlation between designated factors (scores, proportion of time in each ... state with their decision performance in real time could allow training systems to be designed to tailor training to the individual decision maker ...

  8. LHCb Online event processing and filtering

    NASA Astrophysics Data System (ADS)

    Alessio, F.; Barandela, C.; Brarda, L.; Frank, M.; Franek, B.; Galli, D.; Gaspar, C.; Herwijnen, E. v.; Jacobsson, R.; Jost, B.; Köstner, S.; Moine, G.; Neufeld, N.; Somogyi, P.; Stoica, R.; Suman, S.

    2008-07-01

    The first level trigger of LHCb accepts one million events per second. After preprocessing in custom FPGA-based boards these events are distributed to a large farm of PC-servers using a high-speed Gigabit Ethernet network. Synchronisation and event management is achieved by the Timing and Trigger system of LHCb. Due to the complex nature of the selection of B-events, which are the main interest of LHCb, a full event-readout is required. Event processing on the servers is parallelised on an event basis. The reduction factor is typically 1/500. The remaining events are forwarded to a formatting layer, where the raw data files are formed and temporarily stored. A small part of the events is also forwarded to a dedicated farm for calibration and monitoring. The files are subsequently shipped to the CERN Tier0 facility for permanent storage and from there to the various Tier1 sites for reconstruction. In parallel files are used by various monitoring and calibration processes running within the LHCb Online system. The entire data-flow is controlled and configured by means of a SCADA system and several databases. After an overview of the LHCb data acquisition and its design principles this paper will emphasize the LHCb event filter system, which is now implemented using the final hardware and will be ready for data-taking for the LHC startup. Control, configuration and security aspects will also be discussed.

  9. Theoretical Framework for Integrating Distributed Energy Resources into Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lian, Jianming; Wu, Di; Kalsi, Karanjit

    This paper focuses on developing a novel theoretical framework for effective coordination and control of a large number of distributed energy resources in distribution systems in order to more reliably manage the future U.S. electric power grid under the high penetration of renewable generation. The proposed framework provides a systematic view of the overall structure of the future distribution systems along with the underlying information flow, functional organization, and operational procedures. It is characterized by the features of being open, flexible and interoperable with the potential to support dynamic system configuration. Under the proposed framework, the energy consumption of various DERs is coordinated and controlled in a hierarchical way by using market-based approaches. The real-time voltage control is simultaneously considered to complement the real power control in order to keep nodal voltages stable within acceptable ranges during real time. In addition, computational challenges associated with the proposed framework are also discussed with recommended practices.

  10. Integrated modeling of storm drain and natural channel networks for real-time flash flood forecasting in large urban areas

    NASA Astrophysics Data System (ADS)

    Habibi, H.; Norouzi, A.; Habib, A.; Seo, D. J.

    2016-12-01

    To produce accurate predictions of flooding in urban areas, it is necessary to model both natural channel and storm drain networks. While there exist many urban hydraulic models of varying sophistication, most of them are not practical for real-time application for large urban areas. On the other hand, most distributed hydrologic models developed for real-time applications lack the ability to explicitly simulate storm drains. In this work, we develop a storm drain model that can be coupled with distributed hydrologic models such as the National Weather Service Hydrology Laboratory's Distributed Hydrologic Model, for real-time flash flood prediction in large urban areas to improve prediction and to advance the understanding of integrated response of natural channels and storm drains to rainfall events of varying magnitude and spatiotemporal extent in urban catchments of varying sizes. The initial study area is the Johnson Creek Catchment (40.1 km2) in the City of Arlington, TX. For observed rainfall, the high-resolution (500 m, 1 min) precipitation data from the Dallas-Fort Worth Demonstration Network of the Collaborative Adaptive Sensing of the Atmosphere radars is used.

  11. Pre-Results of the Real-Time ODIN Validation on MARTe Using Plasma Linearized Model in FTU Tokamak

    NASA Astrophysics Data System (ADS)

    Sadeghi, Yahya; Boncagni, Luca

    2012-06-01

    MARTe is a modular framework for real-time control aspects. At present, there are several MARTe systems under development at the Frascati Tokamak Upgrade (Boncagni et al. in First steps in the FTU migration towards a modular and distributed real time control architecture based on MARTe and RTNet, 2010), such as the LH power percentage system, the gas puffing control system, the real-time ODIN plasma equilibrium reconstruction system and the position/current feedback control system (in a design phase) (Boncagni et al. in J Fusion Eng Design). The real-time reconstruction of the magnetic flux in the FTU tokamak is an important task for estimating quantities that can be used to control the plasma. This paper addresses the validation of the real-time implementation of that task on MARTe.

  12. Integration of real-time mapping technology in disaster relief distribution.

    DOT National Transportation Integrated Search

    2013-02-01

    Vehicle routing for disaster relief distribution involves many challenges that distinguish this problem from those in commercial settings, given the time sensitive and resource constrained nature of relief activities. While operations research approa...

  13. An Internet Protocol-Based Software System for Real-Time, Closed-Loop, Multi-Spacecraft Mission Simulation Applications

    NASA Technical Reports Server (NTRS)

    Burns, Richard D.; Davis, George; Cary, Everett; Higinbotham, John; Hogie, Keith

    2003-01-01

    A mission simulation prototype for Distributed Space Systems has been constructed using existing developmental hardware and software testbeds at NASA s Goddard Space Flight Center. A locally distributed ensemble of testbeds, connected through the local area network, operates in real time and demonstrates the potential to assess the impact of subsystem level modifications on system level performance and, ultimately, on the quality and quantity of the end product science data.

  14. An Optimization Framework for Dynamic, Distributed Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara

    2003-01-01

    This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments, utility and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model for producing feasible, optimal resource allocations.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugmire, David; Kress, James; Choi, Jong

    Data driven science is becoming increasingly more common, complex, and is placing tremendous stresses on visualization and analysis frameworks. Data sources producing 10GB per second (and more) are becoming increasingly commonplace in both simulation, sensor and experimental sciences. These data sources, which are often distributed around the world, must be analyzed by teams of scientists that are also distributed. Enabling scientists to view, query and interact with such large volumes of data in near-real-time requires a rich fusion of visualization and analysis techniques, middleware and workflow systems. Here, this paper discusses initial research into visualization and analysis of distributed data workflows that enables scientists to make near-real-time decisions over large volumes of time-varying data.

  16. Developing a Near Real-time System for Earthquake Slip Distribution Inversion

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen

    2016-04-01

    Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiations from moderate and large earthquakes often exhibit strong finite-source directivity effect, which is critically important for accurate ground motion estimations and earthquake damage assessments. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for the purpose of solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the identified fault planes in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for 3D structural model with realistic surface topography. The SGT database enables rapid calculations of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that 3D velocity model provides better waveform fitting with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determinations of finite-source solutions for seismic hazard mitigation purposes.
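
    At its core the slip-distribution step is a regularized linear inversion: synthetics built from the SGT database give a matrix G mapping slip on each fault patch to the observed waveforms, and the slip model m is obtained by solving G m ≈ d. The sketch below shows damped, non-negative least squares on a synthetic example; the dimensions, damping weight, and data are invented, and the actual inversions use 3D-structure synthetics and richer constraints.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)
      n_data, n_patches = 200, 20

      G = rng.normal(size=(n_data, n_patches))                 # stand-in for SGT-based synthetics
      true_slip = np.clip(rng.normal(1.0, 0.5, n_patches), 0, None)
      d = G @ true_slip + 0.05 * rng.normal(size=n_data)       # "observed" waveforms plus noise

      # Damped non-negative least squares: minimize ||G m - d||^2 + alpha^2 ||m||^2 with m >= 0.
      alpha = 0.5
      G_aug = np.vstack([G, alpha * np.eye(n_patches)])
      d_aug = np.concatenate([d, np.zeros(n_patches)])
      slip, _ = nnls(G_aug, d_aug)

      print("rms slip error:", np.sqrt(np.mean((slip - true_slip) ** 2)))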

  17. [Real-time three-dimensional (4D) ultrasound-guided prostatic biopsies on a phantom. Comparative study versus 2D guidance].

    PubMed

    Long, Jean-Alexandre; Daanen, Vincent; Moreau-Gaudry, Alexandre; Troccaz, Jocelyne; Rambeaud, Jean-Jacques; Descotes, Jean-Luc

    2007-11-01

    The objective of this study was to determine the added value of real-time three-dimensional (4D) ultrasound guidance of prostatic biopsies on a prostate phantom, in terms of the precision of guidance and the distribution of biopsies. A prostate phantom was constructed. A real-time 3D ultrasonograph connected to a transrectal 5.9 MHz volumetric transducer was used. Fourteen operators performed 336 biopsies with 2D guidance and then with 4D guidance, according to a 12-biopsy protocol. Biopsy tracts were modelled by segmentation in a 3D ultrasound volume. Specific software allowed visualization of biopsy tracts in the reference prostate and evaluated the zone biopsied. A comparative study was performed to determine the added value of 4D guidance compared to 2D guidance by evaluating the precision of entry points and target points. The distribution was evaluated by measuring the volume investigated and by a redundancy ratio of the biopsy points. The precision of the biopsy protocol was significantly improved by 4D guidance (p = 0.037). No increase of the biopsy volume and no improvement of the distribution of biopsies were observed with 4D compared to 2D guidance. The real-time 3D ultrasound-guided prostate biopsy technique on a phantom model appears to improve the precision and reproducibility of a biopsy protocol, but the distribution of biopsies does not appear to be improved.

  18. Focused Logistics and Support for Force Projection in Force XXI and Beyond

    DTIC Science & Technology

    1999-12-09

    business system linking trading partners with point-of-sale demand and real-time manufacturing for clothing items. Quick Response achieved $1.7...be able to determine the real-time status and supply requirements of units. With "distributed logistics system software model hosts" and active...location, quantity, condition, and movement of assets. The system is designed to be fully automated, operate in near-real time with an open-architecture

  19. Operational Data Quality Assessment of the Combined PBO, TLALOCNet and COCONet Real-Time GNSS Networks

    NASA Astrophysics Data System (ADS)

    Hodgkinson, K. M.; Mencin, D.; Fox, O.; Walls, C. P.; Mann, D.; Blume, F.; Berglund, H. T.; Phillips, D.; Meertens, C. M.; Mattioli, G. S.

    2015-12-01

    The GAGE facility, managed by UNAVCO, currently operates a network of ~460 real-time, high-rate GNSS stations (RT-GNSS). The majority of these RT stations are part of the EarthScope PBO network, which spans the western US Pacific-North America plate boundary. Approximately 50 are distributed throughout Mexico and the Caribbean region, funded by the TLALOCNet and COCONet projects. The entire network is processed in real time at UNAVCO using Precise Point Positioning (PPP). The real-time streams are freely available to all, and user demand has grown almost exponentially since 2010. Data usage is multidisciplinary, including tectonic and volcanic deformation studies, meteorological applications, and atmospheric science research, in addition to use by national, state and commercial entities. Twenty-one RT-GNSS sites in California now include 200-sps accelerometers for the development of Earthquake Early Warning systems. All categories of users of real-time streams have similar requirements: reliable, low-latency, high-rate, and complete data sets. To meet these requirements, UNAVCO tracks the latency and completeness of the incoming raw observations and is also developing tools to monitor the quality of the processed data streams. UNAVCO is currently assessing the precision, accuracy and latency of solutions from various PPP software packages. Also under review are the data formats UNAVCO distributes; for example, the PPP solutions are currently distributed in NMEA format, but other formats such as SEED or GeoJSON may be preferred by different user groups to achieve specific mission objectives. In this presentation we will share our experiences of the challenges involved in the data operations of a continental-scale, multi-project, real-time GNSS network, summarize the network's performance in terms of latency and completeness, and present comparisons of PPP solutions using different PPP processing techniques.
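
    As a small illustration of the format question raised above, a single PPP position epoch can be wrapped as a GeoJSON Point feature in a few lines; the property names here are placeholders, not a UNAVCO product definition.

      import json

      def ppp_to_geojson(station, epoch_iso, lon_deg, lat_deg, height_m, sigmas_m):
          """Wrap one PPP position epoch as a GeoJSON Feature (illustrative fields)."""
          return {
              "type": "Feature",
              "geometry": {"type": "Point",
                           "coordinates": [lon_deg, lat_deg, height_m]},
              "properties": {"station": station, "epoch": epoch_iso,
                             "sigma_east_m": sigmas_m[0],
                             "sigma_north_m": sigmas_m[1],
                             "sigma_up_m": sigmas_m[2]},
          }

      print(json.dumps(ppp_to_geojson("P123", "2015-08-01T00:00:01Z",
                                      -120.5, 36.2, 312.4, (0.02, 0.02, 0.05)),
                       indent=2))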

  20. VERSE - Virtual Equivalent Real-time Simulation

    NASA Technical Reports Server (NTRS)

    Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel

    2005-01-01

    Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher fidelity modeling and more comprehensive debugging capabilities - combined with a limited amount of computational resources - calls for a non-real-time simulation environment that mimics the real-time environment. By creating a non-real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: the Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event-driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting RTAI's many capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint together with use of the same API allows users to easily run the same application in both real-time and virtual-time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.
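
    The phrase "event-driven simulator that runs in virtual real time" describes a familiar discrete-event pattern: rather than sleeping until the next timer expires, the simulator jumps its clock to the next scheduled event. The generic sketch below shows that pattern only; it is not VERSE or RTAI code.

      import heapq

      # Event-driven virtual clock: handlers are queued at virtual timestamps and
      # the loop advances the clock to each event instead of waiting in real time.
      class VirtualScheduler:
          def __init__(self):
              self.now = 0.0
              self._queue = []          # (time, sequence, handler)
              self._seq = 0

          def schedule(self, delay, handler):
              heapq.heappush(self._queue, (self.now + delay, self._seq, handler))
              self._seq += 1

          def run(self, until):
              while self._queue and self._queue[0][0] <= until:
                  self.now, _, handler = heapq.heappop(self._queue)
                  handler(self)

      def periodic_task(period, name):
          def handler(sched):
              print(f"t={sched.now:6.3f}s  {name} fired")
              sched.schedule(period, handler)   # re-arm, as a periodic RT task would
          return handler

      sched = VirtualScheduler()
      sched.schedule(0.0, periodic_task(0.010, "attitude-control"))
      sched.schedule(0.0, periodic_task(0.025, "telemetry"))
      sched.run(until=0.05)   # simulates 50 ms of task activity instantly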

  1. On-Line Water Quality Parameters as Indicators of Distribution System Contamination

    EPA Science Inventory

    At a time when the safety and security of services we have typically taken for granted are under question, a real-time or near real-time method of monitoring changes in water quality parameters could provide a critical line of defense in protecting public health. This study was u...

  2. New consumer load prototype for electricity theft monitoring

    NASA Astrophysics Data System (ADS)

    Abdullateef, A. I.; Salami, M. J. E.; Musse, M. A.; Onasanya, M. A.; Alebiosu, M. I.

    2013-12-01

    Illegal connection, that is, direct connection to the distribution feeder, and tampering with energy meters have been identified as the major means by which nefarious consumers steal electricity on low-voltage distribution systems. This has contributed enormously to the revenue losses incurred by power and energy providers. A Consumer Load Prototype (CLP) is constructed and proposed in this study in order to understand the patterns through which the stealing process occurs in real-life power consumption. The construction of the consumer load prototype facilitates real-time simulation and data collection for the monitoring and detection of electricity theft on low-voltage distribution systems. The prototype involves the electrical design and construction of consumer loads in line with the standard regulations of the Institution of Engineering and Technology (IET), formerly known as the Institution of Electrical Engineers (IEE). The LabVIEW platform was used for data acquisition, and the data give a good representation of the connected loads. The prototype will assist researchers and power utilities, who currently face challenges in obtaining real-time data for the study and monitoring of electricity theft. The simulation of electricity theft in real time is one of the contributions of this prototype. Similarly, the power and energy community, including students, will appreciate the practical approach the prototype provides for obtaining real-time information, rather than the software-only simulation that has hitherto been used in the study of electricity theft.

  3. GENESIS SciFlo: Choreographing Interoperable Web Services on the Grid using a Semantically-Enabled Dataflow Execution Environment

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.

    2007-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. SciFlo also publishes its own SOAP services for space/time query and subsetting of Earth Science datasets, and automated access to large datasets via lists of (FTP, HTTP, or DAP) URLs which point to on-line HDF or netCDF files. Typical distributed workflows obtain datasets by calling standard WMS/WCS servers or discovering and fetching data granules from ftp sites; invoke remote analysis operators available as SOAP services (interface described by a WSDL document); and merge results into binary containers (netCDF or HDF files) for further analysis using local executable operators. Naming conventions (HDFEOS and CF-1.0 for netCDF) are exploited to automatically understand and read on-line datasets. More interoperable conventions, and broader adoption of existing conventions, are vital if we are to "scale up" automated choreography of Web Services beyond toy applications. Recently, the ESIP Federation sponsored a collaborative activity in which several ESIP members developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine.
We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, the benefits of doing collaborative science analysis at the "touch of a button" once services are connected, and further collaborations that are being pursued.
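
    The execution model described above, a tree of operators whose outputs feed their parents, can be illustrated with a minimal local evaluator. The operator names and data here are invented; a real SciFlo flow would bind nodes to SOAP, WMS/WCS, or OpenDAP calls instead of in-memory functions.

      # Minimal dataflow-tree evaluator: each node names an operator and its child
      # nodes; children are evaluated first and their results become arguments.
      OPERATORS = {
          "fetch_granule": lambda url: list(range(5)),     # stand-in for an FTP/DAP read
          "subset": lambda data, n: data[:n],
          "mean": lambda data: sum(data) / len(data),
      }

      def evaluate(node):
          op, *args = node
          resolved = [evaluate(a) if isinstance(a, tuple) else a for a in args]
          return OPERATORS[op](*resolved)

      flow = ("mean", ("subset", ("fetch_granule", "ftp://example/granule.hdf"), 3))
      print(evaluate(flow))   # -> 1.0 (mean of [0, 1, 2])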

  4. Geographically distributed hybrid testing & collaboration between geotechnical centrifuge and structures laboratories

    NASA Astrophysics Data System (ADS)

    Ojaghi, Mobin; Martínez, Ignacio Lamata; Dietz, Matt S.; Williams, Martin S.; Blakeborough, Anthony; Crewe, Adam J.; Taylor, Colin A.; Madabhushi, S. P. Gopal; Haigh, Stuart K.

    2018-01-01

    Distributed Hybrid Testing (DHT) is an experimental technique designed to capitalise on advances in modern networking infrastructure to overcome traditional laboratory capacity limitations. By coupling the heterogeneous test apparatus and computational resources of geographically distributed laboratories, DHT provides the means to take on complex, multi-disciplinary challenges with new forms of communication and collaboration. To introduce the opportunity and practicability afforded by DHT, here an exemplar multi-site test is addressed in which a dedicated fibre network and suite of custom software is used to connect the geotechnical centrifuge at the University of Cambridge with a variety of structural dynamics loading apparatus at the University of Oxford and the University of Bristol. While centrifuge time-scaling prevents real-time rates of loading in this test, such experiments may be used to gain valuable insights into physical phenomena, test procedure and accuracy. These and other related experiments have led to the development of the real-time DHT technique and the creation of a flexible framework that aims to facilitate future distributed tests within the UK and beyond. As a further example, a real-time DHT experiment between structural labs using this framework for testing across the Internet is also presented.

  5. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  6. Real-time three-dimensional color Doppler echocardiography for characterizing the spatial velocity distribution and quantifying the peak flow rate in the left ventricular outflow tract

    NASA Technical Reports Server (NTRS)

    Tsujino, H.; Jones, M.; Shiota, T.; Qin, J. X.; Greenberg, N. L.; Cardon, L. A.; Morehead, A. J.; Zetts, A. D.; Travaglini, A.; Bauer, F.; hide

    2001-01-01

    Quantification of flow with pulsed-wave Doppler assumes a "flat" velocity profile in the left ventricular outflow tract (LVOT), an assumption that observation refutes. The recent development of real-time, three-dimensional (3-D) color Doppler allows one to obtain an entire cross-sectional velocity distribution of the LVOT, which is not possible using conventional 2-D echo. In an animal experiment, cross-sectional color Doppler images of the LVOT at peak systole were derived and digitally transferred to a computer to visualize and quantify spatial velocity distributions and peak flow rates. Markedly skewed profiles, with higher velocities toward the septum, were consistently observed. Reference peak flow rates from an electromagnetic flow meter correlated well with 3-D peak flow rates (r = 0.94), but with an anticipated underestimation. Real-time 3-D color Doppler echocardiography was capable of determining cross-sectional velocity distributions and peak flow rates, demonstrating the utility of this new method for better understanding and quantifying blood flow phenomena.
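
    The quantity being estimated is essentially the integral of velocity over the LVOT cross-section. The toy sketch below sums velocity times pixel area over a synthetic, skewed profile; all numbers are illustrative and unrelated to the study's data.

      import numpy as np

      # Flow rate = sum of (velocity x pixel area) across the cross-section,
      # which makes no flat-profile assumption, unlike single-sample PW Doppler.
      def peak_flow_rate(velocity_cm_s, pixel_area_cm2):
          return float(np.nansum(velocity_cm_s) * pixel_area_cm2)   # cm^3/s

      # Synthetic skewed profile: faster flow toward one side of the lumen.
      y, x = np.mgrid[-1:1:50j, -1:1:50j]
      lumen = x**2 + y**2 <= 1.0
      velocity = np.where(lumen, 80.0 + 40.0 * x, np.nan)   # cm/s, skewed in x
      pixel_area = 0.04 ** 2                                 # 0.4 mm pixels, in cm^2

      print(f"peak flow rate ~ {peak_flow_rate(velocity, pixel_area):.1f} cm^3/s")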

  7. Real Time Text Analysis

    NASA Astrophysics Data System (ADS)

    Senthilkumar, K.; Ruchika Mehra Vijayan, E.

    2017-11-01

    This paper aims to illustrate real-time analysis of large-scale data. As a practical implementation, we perform sentiment analysis on live Twitter feeds for each individual tweet. To analyze sentiment we train our data model on SentiWordNet, a polarity-assigned WordNet sample by Princeton University. Our main objective is to efficiently analyze large-scale data on the fly using distributed computation. The Apache Spark and Apache Hadoop ecosystem is used as the distributed computation platform, with Java as the development language.
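
    A minimal sketch of the approach described, assuming a tiny in-memory polarity table in place of SentiWordNet and PySpark in place of the authors' Java implementation: each tweet is scored independently, so the work distributes as a simple map.

      # Sketch only: scoring tweet sentiment with a word-polarity lookup,
      # distributed with PySpark.  The polarity table is a hypothetical stand-in.
      from pyspark.sql import SparkSession

      POLARITY = {"good": 0.8, "great": 0.9, "bad": -0.7, "terrible": -0.9}

      def tweet_score(text):
          """Average polarity of known words; 0.0 if none are known."""
          words = [w.strip(".,!?").lower() for w in text.split()]
          scores = [POLARITY[w] for w in words if w in POLARITY]
          return sum(scores) / len(scores) if scores else 0.0

      if __name__ == "__main__":
          spark = SparkSession.builder.appName("tweet-sentiment-sketch").getOrCreate()
          tweets = spark.sparkContext.parallelize([
              "Great service, really good experience",
              "Terrible delays today",
          ])
          # Each tweet is scored independently, so the map distributes cleanly.
          for text, score in tweets.map(lambda t: (t, tweet_score(t))).collect():
              print(f"{score:+.2f}  {text}")
          spark.stop()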

  8. Intercommunications in Real Time, Redundant, Distributed Computer System

    NASA Technical Reports Server (NTRS)

    Zanger, H.

    1980-01-01

    An investigation into the applicability of fiber optic communication techniques to real-time avionic control systems, in particular the total automatic flight control system used for a VSTOL aircraft, is presented. The system consists of spatially distributed microprocessors. The overall control function is partitioned to yield a unidirectional data flow between the processing elements (PE). System reliability is enhanced by the use of triple redundancy. Some general overall system specifications are listed here to provide the necessary background for the requirements of the communications system.
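
    Triple redundancy of this kind typically reduces, at each consumer, to a majority vote over the three copies of a message. The voter below is a generic sketch, not the flight system's logic.

      from collections import Counter

      def vote(copies):
          """Majority vote over redundant message copies; None if no majority."""
          value, count = Counter(copies).most_common(1)[0]
          return value if count >= (len(copies) // 2 + 1) else None

      print(vote([0x3A, 0x3A, 0x3A]))   # healthy channels -> 0x3A
      print(vote([0x3A, 0x7F, 0x3A]))   # one corrupted channel is outvoted -> 0x3A
      print(vote([0x3A, 0x7F, 0x00]))   # no majority -> None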

  9. A new generation of real-time DOS technology for mission-oriented system integration and operation

    NASA Technical Reports Server (NTRS)

    Jensen, E. Douglas

    1988-01-01

    Information is given on system integration and operation (SIO) requirements and a new generation of technical approaches for SIO. Real-time, distribution, survivability, and adaptability requirements and technical approaches are covered. An Alpha operating system program management overview is outlined.

  10. Principal Investigator Microgravity Services Role in ISS Acceleration Data Distribution

    NASA Technical Reports Server (NTRS)

    McPherson, Kevin

    1999-01-01

    Measurement of the microgravity acceleration environment on the International Space Station will be accomplished by two accelerometer systems. The Microgravity Acceleration Measurement System will record the quasi-steady microgravity environment, including the influences of aerodynamic drag, vehicle rotation, and venting effects. Measurement of the vibratory/transient regime, comprising vehicle, crew, and equipment disturbances, will be accomplished by the Space Acceleration Measurement System-II. Due to the dynamic nature of the microgravity environment and its potential to influence sensitive experiments, Principal Investigators require distribution of microgravity acceleration data in a timely and straightforward fashion. In addition to this timely distribution of the data, long-term access to International Space Station microgravity environment acceleration data is required. The NASA Glenn Research Center's Principal Investigator Microgravity Services project will provide the means for real-time and post-experiment distribution of microgravity acceleration data to microgravity science Principal Investigators. Real-time distribution of microgravity environment acceleration data will be accomplished via the World Wide Web. Data packets from the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System-II will be routed from onboard the International Space Station to the NASA Glenn Research Center's Telescience Support Center. Principal Investigator Microgravity Services' ground support equipment located at the Telescience Support Center will be capable of generating a standard suite of acceleration data displays, including various time domain and frequency domain options. These data displays will be updated in real time and will periodically update images available via the Principal Investigator Microgravity Services web page.
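
    A frequency-domain display of the kind mentioned is essentially a power spectral density of the acceleration stream. The sketch below computes an averaged periodogram from synthetic data; it is illustrative only and not PIMS software.

      import numpy as np

      # Welch-style averaged periodogram of an acceleration record, the basic
      # ingredient of a frequency-domain microgravity display.
      def psd(accel_g, fs_hz, seg_len=1024):
          window = np.hanning(seg_len)
          scale = fs_hz * np.sum(window ** 2)
          segs = [accel_g[i:i + seg_len] * window
                  for i in range(0, len(accel_g) - seg_len + 1, seg_len // 2)]
          spectra = [np.abs(np.fft.rfft(s)) ** 2 / scale for s in segs]
          freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs_hz)
          return freqs, np.mean(spectra, axis=0)

      fs = 500.0                                   # samples per second
      t = np.arange(0, 60, 1.0 / fs)
      accel = 1e-4 * np.sin(2 * np.pi * 17.0 * t)  # a 17 Hz structural vibration
      accel += 1e-5 * np.random.default_rng(1).normal(size=t.size)
      freqs, pxx = psd(accel, fs)
      print(f"dominant disturbance at ~{freqs[np.argmax(pxx)]:.1f} Hz")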

  11. Building Software Agents for Planning, Monitoring, and Optimizing Travel

    DTIC Science & Technology

    2004-01-01

    defined as plans in the Theseus Agent Execution language (Barish et al. 2002). In the Web environment, sources can be quite slow and the latencies of...executor is based on a dataflow paradigm, actions are executed as soon as the data becomes available. Second, Theseus performs the actions in a...while Theseus provides an expressive language for defining information gathering and monitoring plans. The Theseus language supports capabilities

  12. High Temperature Tribometer. Phase 1

    DTIC Science & Technology

    1989-06-01

    Figure 2.3.2: Setpoint and Gain Windows in FW.EXE; Figure 2.4.1: Data-Flow Diagram for Data-Acquisition Module. ...mounted in a friction force measuring device. Optimally, material testing results should not be test-machine sensitive; but due to equipment variables...fixed. The friction force due to sliding should be continuously measured. This is optimally done in conjunction with the normal force measurement via

  13. Software for the EVLA

    NASA Astrophysics Data System (ADS)

    Butler, Bryan J.; van Moorsel, Gustaaf; Tody, Doug

    2004-09-01

    The Expanded Very Large Array (EVLA) project is the next generation instrument for high resolution long-millimeter to short-meter wavelength radio astronomy. It is currently funded by NSF, with completion scheduled for 2012. The EVLA will upgrade the VLA with new feeds, receivers, data transmission hardware, correlator, and a new software system to enable the instrument to achieve its full potential. This software includes both that required for controlling and monitoring the instrument and that involved with the scientific dataflow. We concentrate here on a portion of the dataflow software, including: proposal preparation, submission, and handling; observation preparation, scheduling, and remote monitoring; data archiving; and data post-processing, including both automated (pipeline) and manual processing. The primary goals of the software are: to maximize the scientific return of the EVLA; provide ease of use for both novices and experts; and exploit commonality amongst all NRAO telescopes where possible. This last point is both a bane and a blessing: we are not at liberty to do whatever we want in the software, but on the other hand we may borrow from other projects (notably ALMA and GBT) where appropriate. The software design methodology includes detailed initial use-cases and requirements from the scientists, intimate interaction between the scientists and the programmers during design and implementation, and a thorough testing and acceptance plan.

  14. Power management and frequency regulation for microgrid and smart grid: A real-time demand response approach

    NASA Astrophysics Data System (ADS)

    Pourmousavi Kani, Seyyed Ali

    Future power systems (the smart grid) will experience a high penetration level of variable distributed energy resources to bring abundant, affordable, clean, efficient, and reliable electric power to all consumers. However, the grid might suffer from the uncertain and variable nature of this generation in terms of reliability and, especially, the provision of required balancing reserves. In the current power system structure, balancing reserves are usually provided by spinning and non-spinning units at conventional fossil-fueled power plants. Such power plants are not the favored option for the smart grid because of, among other factors, their low efficiency, high emissions, and the expensive capital investment in transmission and distribution facilities they require. Providing regulation services in the presence of variable distributed energy resources would be even more difficult for islanded microgrids. The impact and effectiveness of demand response are still not clear at the distribution and transmission levels; in other words, no solid research has been reported in the literature evaluating the impact of demand response (DR) on power system dynamic performance. To address these issues, a real-time demand response approach together with real-time power management (specifically for microgrids) is proposed in this research. The real-time demand response solution is applied at the transmission level (through a load-frequency control model) and at the distribution level (in both islanded and grid-tied modes) to provide effective and fast regulation services for the stable operation of the power system. Multiple real-time power management algorithms for grid-tied and islanded microgrids are then proposed to operate microgrids economically and effectively. Extensive dynamic modeling of generation, storage, and load, as well as different controller designs, are developed throughout this research to provide appropriate models and a simulation environment for evaluating the effectiveness of the proposed methodologies. Simulation results reveal the effectiveness of the proposed methods in providing balancing reserves and in the economic and stable operation of microgrids. The proposed tools and approaches can significantly enhance the application of microgrids and demand response in the smart grid era, and will also help to increase the penetration level of variable distributed generation resources in the smart grid.
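
    One simple form of the frequency-responsive demand idea is to shed or restore slices of responsive load in proportion to the measured frequency deviation outside a deadband. The controller below is a toy sketch; apart from the nominal frequency, every constant is invented.

      # Proportional frequency-responsive demand: when frequency sags below the
      # deadband, curtail responsive load in proportion to the deviation, and
      # restore it symmetrically when frequency runs high.  Illustrative values.
      NOMINAL_HZ = 60.0
      DEADBAND_HZ = 0.02
      KW_PER_HZ = 5000.0      # responsive load curtailed per Hz of under-frequency

      def demand_response_kw(freq_hz, responsive_kw):
          dev = freq_hz - NOMINAL_HZ
          if abs(dev) <= DEADBAND_HZ:
              return 0.0                                  # inside deadband: no action
          adjust = -KW_PER_HZ * (dev + DEADBAND_HZ * (1 if dev < 0 else -1))
          # positive = shed load (under-frequency), negative = restore load
          return max(-responsive_kw, min(responsive_kw, adjust))

      for f in (60.00, 59.95, 59.80, 60.10):
          print(f"{f:.2f} Hz -> adjust {demand_response_kw(f, 2000.0):+.0f} kW")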

  15. Building a generalized distributed system model

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.

    1993-01-01

    The key elements in the 1992-93 period of the project were the following: (1) extensive use of the simulator to implement and test concurrency control algorithms, an interactive user interface, and replica control algorithms; and (2) investigations into the applicability of data and process replication in real-time systems. In the 1993-94 period of the project, we intend to accomplish the following: (1) investigate the effects of data and process replication on hard and soft real-time systems, concentrating in particular on the impact of semantic-based consistency control schemes on a distributed real-time system in terms of improved reliability, improved availability, better resource utilization, and reduced missed task deadlines; and (2) use the prototype to verify the theoretically predicted performance of locking protocols, etc.

  16. An Infrastructure for UML-Based Code Generation Tools

    NASA Astrophysics Data System (ADS)

    Wehrmeister, Marco A.; Freitas, Edison P.; Pereira, Carlos E.

    The use of Model-Driven Engineering (MDE) techniques in the domain of distributed embedded real-time systems is gaining importance as a way to cope with the increasing design complexity of such systems. This paper discusses an infrastructure created to build GenERTiCA, a flexible tool that supports an MDE approach and uses aspect-oriented concepts to handle non-functional requirements from the embedded and real-time systems domain. GenERTiCA generates source code from UML models, and also performs weaving of aspects that have been specified within the UML model. Additionally, this paper discusses the Distributed Embedded Real-Time Compact Specification (DERCS), a PIM created to support UML-based code generation tools. Some heuristics to transform UML models into DERCS, which have been implemented in GenERTiCA, are also discussed.

  17. A distributed scheduling algorithm for heterogeneous real-time systems

    NASA Technical Reports Server (NTRS)

    Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi

    1991-01-01

    Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other, more complex load allocation policies. Here, the effects of heterogeneity on scheduling algorithms for hard real-time systems are examined. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While a random task allocation is very sensitive to heterogeneities, the proposed algorithm is shown to be robust to such non-uniformities in system components and load.

  18. Quantitative analysis of diet structure by real-time PCR, reveals different feeding patterns by two dominant grasshopper species

    PubMed Central

    Huang, Xunbing; Wu, Huihui; McNeill, Mark Richard; Qin, Xinghu; Ma, Jingchuan; Tu, Xiongbing; Cao, Guangchun; Wang, Guangjun; Nong, Xiangqun; Zhang, Zehua

    2016-01-01

    Studies on grasshopper diets have historically employed a range of methodologies, each with certain advantages and disadvantages. For example, some methodologies are qualitative instead of quantitative. Others require long experimental periods or examine population-level effects only. In this study, we used real-time PCR to examine the diets of individual grasshoppers. The method has the advantage of being both fast and quantitative. Using two grasshopper species, Oedaleus asiaticus and Dasyhippus barbipes, we designed ITS primer sequences for their three main host plants, Stipa krylovii, Leymus chinensis and Cleistogenes squarrosa, and used real-time PCR to test diet structure both qualitatively and quantitatively. The lowest detection efficiency among the three grass species was ~80%, with a strong correlation between actual and PCR-measured food intake. We found that Oedaleus asiaticus maintained an unchanged diet structure across grasslands with different grass communities. By comparison, Dasyhippus barbipes changed its diet structure. These results reveal why the distribution of O. asiaticus is mainly confined to Stipa-dominated grassland while D. barbipes is more widely distributed across Inner Mongolia. Overall, real-time PCR was shown to be a useful tool for investigating grasshopper diets, which in turn offers some insight into grasshopper distributions and improved pest management. PMID:27562455

  19. Real-time sensor validation and fusion for distributed autonomous sensors

    NASA Astrophysics Data System (ADS)

    Yuan, Xiaojing; Li, Xiangshang; Buckles, Bill P.

    2004-04-01

    Multi-sensor data fusion has found widespread application in industrial and research sectors. The purpose of real-time multi-sensor data fusion is to dynamically estimate an improved system model from a set of different data sources, i.e., sensors. This paper presents a systematic and unified real-time sensor validation and fusion framework (RTSVFF) for distributed autonomous sensors. The RTSVFF is an open architecture consisting of four layers: the transaction layer, the process fusion layer, the control layer, and the planning layer. This paradigm facilitates the distribution of intelligence to the sensor level and the sharing of information among sensors, controllers, and other devices in the system. The openness of the architecture also provides a platform to test different sensor validation and fusion algorithms, and thus facilitates the selection of near-optimal algorithms for a specific sensor fusion application. In the version of the model presented in this paper, confidence-weighted averaging is employed to address the dynamic system state. The state is computed using an adaptive estimator and a dynamic validation curve for numeric data fusion, and a robust diagnostic map for decision-level qualitative fusion. The framework is then applied to automatic monitoring of a gas-turbine engine, including a performance comparison of the proposed real-time sensor fusion algorithms with a traditional numerical weighted average.
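
    Confidence-weighted averaging, the fusion step named above, has a simple canonical form: weight each sensor reading by a confidence score and take the weighted mean. The sketch below derives confidence from distance to a prior estimate; it is a generic illustration, not the RTSVFF implementation.

      def confidence_weights(readings, prior, scale):
          """Confidence falls off with distance from the prior estimate."""
          return [1.0 / (1.0 + ((r - prior) / scale) ** 2) for r in readings]

      def fuse(readings, prior, scale=5.0):
          w = confidence_weights(readings, prior, scale)
          return sum(wi * ri for wi, ri in zip(w, readings)) / sum(w)

      # Three temperature sensors, one of them drifting badly.
      readings = [412.0, 413.5, 470.0]
      print(f"fused estimate: {fuse(readings, prior=412.5):.1f}")  # outlier is downweighted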

  20. Nowcast model for hazardous material spill prevention and response, San Francisco Bay, California

    USGS Publications Warehouse

    Cheng, Ralph T.; Wilmot, Wayne L.; Galt, Jerry A.

    1997-01-01

    The National Oceanic and Atmospheric Administration (NOAA) installed the Physical Oceanographic Real-time System (PORTS) in San Francisco Bay, California, to provide real-time observations of tides, tidal currents, and meteorological conditions in order to, among other purposes, guide hazardous material spill prevention and response. Integrated with nowcast modeling techniques and with dissemination of the real-time data and nowcast results over the World Wide Web, the emerging technologies used in PORTS for real-time data collection form a nowcast modeling system. Users can download tide and tidal current distributions in San Francisco Bay for their specific applications and/or for further analysis.

  1. Real-Time Network Management

    DTIC Science & Technology

    1998-07-01

    Report No. WH97JR00-A002, Real-Time Network Management, Final Technical Report, July 1998, Synectics Corporation; sponsored by the Defense Advanced...Approved for public release; distribution unlimited. ...2.1.2.1 WAN-class Networks; 2.1.2.2 IEEE 802.3-class Networks; 2.2 Task 2 - Object Modeling for Architecture; 2.2.1 Managed Objects; 2.2.2

  2. Real-time UNIX in HEP data acquisition

    NASA Astrophysics Data System (ADS)

    Buono, S.; Gaponenko, I.; Jones, R.; Mapelli, L.; Mornacchi, G.; Prigent, D.; Sanchez-Corral, E.; Skiadelli, M.; Toppers, A.; Duval, P. Y.; Ferrato, D.; Le Van Suu, A.; Qian, Z.; Rondot, C.; Ambrosini, G.; Fumagalli, G.; Aguer, M.; Huet, M.

    1994-12-01

    Today's experimentation in high energy physics is characterized by an increasing need for sensitivity to rare phenomena and complex physics signatures, which require the use of huge and sophisticated detectors and consequently high-performance readout and data acquisition. Multi-level triggering, hierarchical data collection and an ever-increasing amount of processing power, distributed throughout the data acquisition layers, will impose a number of features on the software environment, especially the need for a high level of standardization. Real-time UNIX seems, today, the best solution for the platform independence, operating system interface standards and real-time features necessary for data acquisition in HEP experiments. We present the results of the evaluation, in a realistic application environment, of a real-time UNIX operating system: the EP/LX real-time UNIX system.

  3. Parallel-distributed mobile robot simulator

    NASA Astrophysics Data System (ADS)

    Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo

    1996-06-01

    The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place and how the jobs are executed. This article describes a parallel distributed mobile robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the mobile robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world. The system then improves the virtual environment to match the real-world result more closely. In this way the system learns and grows. It is very important that such a simulation is time-realistic. The parallel distributed mobile robot simulator was developed to simulate the space of a mobile robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual mobile robot and the virtual mobile robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.

  4. Advanced visualization platform for surgical operating room coordination: distributed video board system.

    PubMed

    Hu, Peter F; Xiao, Yan; Ho, Danny; Mackenzie, Colin F; Hu, Hao; Voigt, Roger; Martz, Douglas

    2006-06-01

    One of the major challenges for day-of-surgery operating room coordination is accurate and timely situation awareness. Distributed and secure real-time status information is key to addressing these challenges. This article reports on the design and implementation of a passive status monitoring system in a 19-room surgical suite of a major academic medical center. Key design requirements considered included integrated real-time operating room status display, access control, security, and network impact. The system used live operating room video images and patient vital signs obtained through monitors to automatically update events and operating room status. Images were presented on a "need-to-know" basis, and access was controlled by identification badge authorization. The system delivered reliable real-time operating room images and status with acceptable network impact. Operating room status was visualized at 4 separate locations and was used continuously by clinicians and operating room service providers to coordinate operating room activities.

  5. Investigation of contact pressure and influence function model for soft wheel polishing.

    PubMed

    Rao, Zhimin; Guo, Bing; Zhao, Qingliang

    2015-09-20

    The tool influence function (TIF) is critical for calculating the dwell-time map to improve form accuracy. We present the TIF for the process of computer-controlled polishing with a soft polishing wheel. In this paper, the static TIF was developed based on the Preston equation. The pressure distribution was verified against the real removal spot section profiles. According to the experimental measurements, the pressure distribution simulated by Hertz contact theory was much larger than the real contact pressure, whereas the pressure distribution modeled with the Winkler elastic foundation for a soft polishing wheel matched the real contact pressure. A series of experiments was conducted to obtain the statistical properties of the removal spots, to validate the relationship between material removal and processing time, contact pressure, and relative velocity, and to calculate the fitted parameters needed to establish the TIF. The developed TIF predicted the removal character for the studied soft wheel polishing process.
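
    For reference, the Preston relation underlying the static TIF states that removal depth per unit time scales with the product of local pressure and relative velocity. The sketch below integrates that relation over a dwell time using made-up pressure, speed, and Preston-coefficient values.

      import numpy as np

      # Preston equation: dz/dt = k_p * P(x) * V(x).  Integrating over the dwell
      # time gives a removal depth profile, i.e. a tool influence function.
      def removal_depth(pressure_pa, velocity_m_s, dwell_s, k_preston):
          return k_preston * pressure_pa * velocity_m_s * dwell_s   # metres

      x = np.linspace(-2e-3, 2e-3, 201)                            # 4 mm wide contact
      pressure = 2.0e5 * np.clip(1 - (x / 2e-3) ** 2, 0, None)     # parabolic-ish contact
      velocity = 3.0                                               # m/s wheel surface speed
      depth = removal_depth(pressure, velocity, dwell_s=10.0, k_preston=1e-13)
      print(f"peak removal ~ {depth.max() * 1e9:.1f} nm at spot centre")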

  6. Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS)

    NASA Astrophysics Data System (ADS)

    Daniels, M. D.; Graves, S. J.; Vernon, F.; Kerkez, B.; Chandra, C. V.; Keiser, K.; Martin, C.

    2014-12-01

    Access, utilization and management of real-time data continue to be challenging for decision makers as well as researchers in several scientific fields. This presentation will highlight infrastructure aimed at addressing some of the gaps in handling real-time data, particularly in increasing the accessibility of these data to the scientific community through cloud services. The Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS) system addresses the ever-increasing importance of real-time scientific data, particularly in mission-critical scenarios where informed decisions must be made rapidly. Advances in the distribution of real-time data are allowing many new transient phenomena in space-time to be observed; however, real-time decision-making is infeasible in many cases that require streaming scientific data, as these data are locked down and sent only to proprietary in-house tools or displays. This lack of accessibility to the broader scientific community prohibits algorithm development and workflows initiated by these data streams. As part of NSF's EarthCube initiative, CHORDS proposes to make real-time data available to the academic community via cloud services. The CHORDS infrastructure will enhance the role of real-time data within the geosciences, specifically expanding the potential of streaming data sources in enabling adaptive experimentation and real-time hypothesis testing. Adherence to community data and metadata standards will promote the integration of CHORDS real-time data with existing standards-compliant analysis, visualization and modeling tools.

  7. Online Monitoring System of Air Distribution in Pulverized Coal-Fired Boiler Based on Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Żymełka, Piotr; Nabagło, Daniel; Janda, Tomasz; Madejski, Paweł

    2017-12-01

    Balanced distribution of air in a coal-fired boiler is one of the most important factors in the combustion process and is strongly connected to overall system efficiency. Reliable and continuous information about combustion airflow and fuel rate is essential for achieving an optimal stoichiometric ratio as well as efficient and safe operation of a boiler. Imbalances in air distribution result in reduced boiler efficiency, increased gas pollutant emissions and operating problems such as corrosion, slagging or fouling. Monitoring of air flow trends in a boiler is an effective basis for further analysis and can help to identify important dependences and initiate optimization actions. Accurate real-time monitoring of the air distribution in a boiler can bring economic, environmental and operational benefits. The paper presents a novel concept for an online monitoring system of air distribution in a coal-fired boiler based on real-time numerical calculations. The proposed mathematical model allows for identification of the mass flow rates of secondary air to individual burners and to overfire air (OFA) nozzles. Numerical models of the air and flue gas system were developed using software for power plant simulation. The correctness of the developed model was verified and validated against reference measurement values. The presented numerical model for real-time monitoring of air distribution is capable of giving a continuous determination of the complete air flows based on available digital communication system (DCS) data.

  8. Scalable and Accurate SMT-based Model Checking of Data Flow Systems

    DTIC Science & Technology

    2013-10-30

    guided by the semantics of the description language. In this project we developed instead a complementary and novel approach based on a somewhat brute...believe that our approach could help considerably in expanding the reach of abstract interpretation techniques to a variety of target languages, as...project. We worked on developing a framework for compositional verification that capitalizes on the fact that data-flow languages, such as Lustre, have

  9. Communication-Driven Codesign for Multiprocessor Systems

    DTIC Science & Technology

    2004-01-01

    processors, FPGA or ASIC subsystems, microprocessors, and microcontrollers. When a processor is embedded within a SLOT architecture, one or more...Broderson, Low-power CMOS digital design, IEEE Journal of Solid-State Circuits 27 (1992), no. 4, 473–484. [25] L. Chao and E. Sha, Scheduling data-flow...1997), 239–256. [82] P. K. Murthy, E. G. Cohen, and S. Rowland, System Canvas: A new design environment for embedded DSP and telecommunications

  10. Design of Arithmetic Circuits for Complex Binary Number System

    NASA Astrophysics Data System (ADS)

    Jamil, Tariq

    2011-08-01

    Complex numbers play an important role in various engineering applications. To represent these numbers efficiently for storage and manipulation, a (-1+j)-base complex binary number system (CBNS) has been proposed in the literature. In this paper, designs of nibble-size arithmetic circuits (adder, subtractor, multiplier, divider) have been presented. These circuits can be incorporated within von Neumann and associative dataflow processors to achieve higher performance in both sequential and parallel computing paradigms.
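
    For orientation, a digit string in the (-1+j) base is evaluated like any positional system, using powers of the complex radix. The evaluator below is a small illustration, not the paper's circuit designs.

      # Evaluate a (-1+j)-base digit string (most significant digit first).
      # Digits are restricted to 0 and 1, as in CBNS.
      def cbns_value(digits):
          base = complex(-1, 1)
          value = 0 + 0j
          for d in digits:
              if d not in "01":
                  raise ValueError("CBNS digits are 0 or 1")
              value = value * base + int(d)
          return value

      # Successive powers of the radix: 1, -1+j, -2j, 2+2j, -4, ...
      for k, s in enumerate(["1", "10", "100", "1000", "10000"]):
          print(f"(-1+j)^{k} = {cbns_value(s)}")
      print("value of 1101 =", cbns_value("1101"))   # (-1+j)^3 + (-1+j)^2 + 1 = 3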

  11. Topological Patterns for Scalable Representation and Analysis of Dataflow Graphs

    DTIC Science & Technology

    2011-11-01

    dimensional mesh structure. Such a structure is of particular use to model DSP architectures in which data flows across a network of processing elements...ACSSC.1998.751616 3. Andrews, J.G., Ghosh, A., Muhamed, R.: Fundamentals of WiMAX: understanding broadband wireless networking. Prentice Hall (2007...

  12. A strategy for automatically generating programs in the lucid programming language

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1987-01-01

    A strategy for automatically generating and verifying simple computer programs is described. The programs are specified by a precondition and a postcondition in predicate calculus. The programs generated are in the Lucid programming language, a high-level, data-flow language known for its attractive mathematical properties and ease of program verification. The Lucid programming language is described, and the automatic program generation strategy is presented and applied to several example problems.

  13. Addressing Modeling Challenges in Cyber-Physical Systems

    DTIC Science & Technology

    2011-03-04

    A. Lee and Eleftherios Matsikoudis. The semantics of dataflow with firing. In Gérard Huet, Gordon Plotkin, Jean-Jacques Lévy, and Yves Bertot...Computer-Aided Design of Integrated Circuits and Systems, 20(3), 2001. [12] Luca P. Carloni, Roberto Passerone, Alessandro Pinto, and Alberto Sangiovanni...gst/fullpage.html?res= 9504EFDA1738F933A2575AC0A9679C8B63. [15] Abhijit Davare, Douglas Densmore, Trevor Meyerowitz, Alessandro Pinto, Alberto

  14. Mapping Parameterized Dataflow Graphs onto FPGA Platforms (Preprint)

    DTIC Science & Technology

    2014-02-01

    Shen, Nimish Sane, William Plishker, Shuvra S. Bhattacharyya (University of Maryland), Hojin Kee (National Instruments)...Rodyushkin, A. Kuranov, and V. Eruhimov. Computer vision workload analysis: Case study of video surveillance systems. Intel Technology Journal, 9, 2005...Prototyping, pages 1–7, Fairfax, Virginia, June 2010. [56] H. Wu, C. Shen, S. S. Bhattacharyya, K. Compton, M. Schulte, M. Wolf, and T. Zhang. Design and

  15. A framework for real-time distributed expert systems: On-orbit spacecraft fault diagnosis, monitoring and control

    NASA Technical Reports Server (NTRS)

    Mullikin, Richard L.

    1987-01-01

    Control of on-orbit operation of a spacecraft requires retention and application of special purpose, often unique, knowledge of equipment and procedures. Real-time distributed expert systems (RTDES) permit a modular approach to a complex application such as on-orbit spacecraft support. One aspect of a human-machine system that lends itself to the application of RTDES is the function of satellite/mission controllers - the next logical step toward the creation of truly autonomous spacecraft systems. This system application is described.

  16. Remote observatory access via the Advanced Communications Technology Satellite

    NASA Technical Reports Server (NTRS)

    Horan, Stephen; Anderson, Kurt; Georghiou, Georghios

    1992-01-01

    An investigation of the potential for using the ACTS to provide the data distribution network for a distributed set of users of an astronomical observatory has been conducted. The investigation consisted of gathering the data and interface standards for the ACTS network and the observatory instrumentation and telecommunications devices. A simulation based on COMNET was then developed to test data transport configurations for real-time suitability. The investigation showed that the ACTS network should support the real-time requirements and allow for growth in the observatory needs for data transport.

  17. Prototype space station automation system delivered and demonstrated at NASA

    NASA Technical Reports Server (NTRS)

    Block, Roger F.

    1987-01-01

    The Automated Subsystem Control for Life Support System (ASCLSS) program has successfully developed and demonstrated a generic approach to the automation and control of Space Station subsystems. The hierarchical and distributed real-time control system places the required control authority at every level of the automation system architecture. As a demonstration of the automation technique, the ASCLSS system automated the Air Revitalization Group (ARG) of the Space Station regenerative Environmental Control and Life Support System (ECLSS) using real-time, high-fidelity simulators of the ARG processes. This automation system represents an early flight prototype and an important test bed for evaluating Space Station controls technology, including the future application of Ada software in real-time control and the development and demonstration of embedded artificial intelligence and expert systems (AI/ES) in distributed automation and control systems.

  18. Distributed On-line Monitoring System Based on Modem and Public Phone Net

    NASA Astrophysics Data System (ADS)

    Chen, Dandan; Zhang, Qiushi; Li, Guiru

    In order to solve the monitoring problem of urban sewage disposal, a distributed on-line monitoring system is proposed. By introducing dial-up communication technology based on modems, the serial communication program can rationally solve the information transmission problem between the master station and the slave stations. The serial communication program is realized with the MSComm control of C++ Builder 6.0. The software includes a real-time data operation part and a history data handling part, using Microsoft SQL Server 2000 for the database and C++ Builder 6.0 for the user interface. The monitoring center displays a user interface with alarm information for data exceeding standards and real-time curves. Practical application shows that the system has successfully accomplished real-time data acquisition from the data-gathering stations and stored the data in the terminal database.

  19. ADA and multi-microprocessor real-time simulation

    NASA Technical Reports Server (NTRS)

    Feyock, S.; Collins, W. R.

    1983-01-01

    The selection of a high-order programming language for a real-time distributed network simulation is described. The additional problem of implementing a language on a possibly changing network is addressed. The recently designed language ADA (trademarked by DoD) was chosen since it provides the best model of the underlying application to be simulated.

  20. Hardware design and implementation of fast DOA estimation method based on multicore DSP

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-10-01

    In this paper, we present a high-speed real-time signal processing hardware platform based on a multicore digital signal processor (DSP). The platform shows several excellent characteristics, including high-performance computing, low power consumption, large-capacity data storage and high-speed data transmission, which enable it to meet the constraints of real-time direction-of-arrival (DOA) estimation. To reduce the high computational complexity of the DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is measured. Based on these timing statistics, we present a new parallel processing strategy that distributes the task of DOA estimation across the different cores of the real-time signal processing hardware platform. Experimental results demonstrate that the processing capability of the signal processing platform meets the constraints of real-time DOA estimation.

  1. Diffusive real-time dynamics of a particle with Berry curvature

    NASA Astrophysics Data System (ADS)

    Misaki, Kou; Miyashita, Seiji; Nagaosa, Naoto

    2018-02-01

    We study theoretically the influence of the Berry phase on the real-time dynamics of a single particle, focusing on the diffusive dynamics, i.e., the time dependence of the distribution function. Our model can be applied to the real-time dynamics of intraband relaxation and diffusion of optically excited excitons, trions, or particle-hole pairs. We find that the dynamics at the early stage is deeply influenced by the Berry curvature in real space (B), in momentum space (Ω), and in the crossed space between these two (C). For example, Ω induces a rotation of the wave packet and causes the mean square displacement of the particle to grow linearly in time t at the initial stage, which is qualitatively different from the t^3 dependence in the absence of the Berry curvature. It is also found that Ω and C modify the characteristic time scale of the thermal equilibration of the momentum distribution. Moreover, the dynamics under various combinations of B, Ω, and C shows singular behaviors such as critical slowing down or speeding up of the momentum equilibration and reversals of the direction of rotation. The relevance of our model for time-resolved experiments in transition metal dichalcogenides is also discussed.

  2. Evaluation of Uranium-235 Measurement Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaspar, Tiffany C.; Lavender, Curt A.; Dibert, Mark W.

    2017-05-23

    Monolithic U-Mo fuel plates are rolled to final fuel element form from the original cast ingot, and thus any inhomogeneities in 235U distribution present in the cast ingot are maintained, and potentially exaggerated, in the final fuel foil. The tolerance for inhomogeneities in the 235U concentration in the final fuel element foil is very low. A near-real-time, nondestructive technique to evaluate the 235U distribution in the cast ingot is required in order to provide feedback to the casting process. Based on the technical analysis herein, gamma spectroscopy has been recommended to provide a near-real-time measure of the 235U distribution in U-Mo cast plates.

  3. Multimission Telemetry Visualization (MTV) system: A mission applications project from JPL's Multimedia Communications Laboratory

    NASA Technical Reports Server (NTRS)

    Koeberlein, Ernest, III; Pender, Shaw Exum

    1994-01-01

    This paper describes the Multimission Telemetry Visualization (MTV) data acquisition/distribution system. MTV was developed by JPL's Multimedia Communications Laboratory (MCL) and designed to process and display digital, real-time, science and engineering data from JPL's Mission Control Center. The MTV system can be accessed using UNIX workstations and PCs over common datacom and telecom networks from worldwide locations. It is designed to lower data distribution costs while increasing data analysis functionality by integrating low-cost, off-the-shelf desktop hardware and software. MTV is expected to significantly lower the cost of real-time data display, processing, and distribution, and to allow for greater spacecraft safety and mission data access.

  4. Network protocols for real-time applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1987-01-01

    The Fiber Distributed Data Interface (FDDI) and the SAE AE-9B High Speed Ring Bus (HSRB) are emerging standards for high-performance token ring local area networks. FDDI was designed to be a general-purpose high-performance network. HSRB was designed specifically for military real-time applications. A workshop was conducted at NASA Ames Research Center in January, 1987 to compare and contrast these protocols with respect to their ability to support real-time applications. This report summarizes workshop presentations and includes an independent comparison of the two protocols. A conclusion reached at the workshop was that current protocols for the upper layers of the Open Systems Interconnection (OSI) network model are inadequate for real-time applications.

  5. Integration of Geographical Information Systems and Geophysical Applications with Distributed Computing Technologies.

    NASA Astrophysics Data System (ADS)

    Pierce, M. E.; Aktas, M. S.; Aydin, G.; Fox, G. C.; Gadgil, H.; Sayar, A.

    2005-12-01

    We examine the application of Web Service Architectures and Grid-based distributed computing technologies to geophysics and geo-informatics. We are particularly interested in the integration of Geographical Information System (GIS) services with distributed data mining applications. GIS services provide the general purpose framework for building archival data services, real time streaming data services, and map-based visualization services that may be integrated with data mining and other applications through the use of distributed messaging systems and Web Service orchestration tools. Building upon our previous work in these areas, we present our current research efforts. These include fundamental investigations into increasing XML-based Web service performance, supporting real time data streams, and integrating GIS mapping tools with audio/video collaboration systems for shared display and annotation.

  6. Dynamic Singularity Spectrum Distribution of Sea Clutter

    NASA Astrophysics Data System (ADS)

    Xiong, Gang; Yu, Wenxian; Zhang, Shuning

    2015-12-01

    The fractal and multifractal theory have provided new approaches for radar signal processing and target-detecting under the background of ocean. However, the related research mainly focuses on fractal dimension or multifractal spectrum (MFS) of sea clutter. In this paper, a new dynamic singularity analysis method of sea clutter using MFS distribution is developed, based on moving detrending analysis (DMA-MFSD). Theoretically, we introduce the time information by using cyclic auto-correlation of sea clutter. For transient correlation series, the instantaneous singularity spectrum based on multifractal detrending moving analysis (MF-DMA) algorithm is calculated, and the dynamic singularity spectrum distribution of sea clutter is acquired. In addition, we analyze the time-varying singularity exponent ranges and maximum position function in DMA-MFSD of sea clutter. For the real sea clutter data, we analyze the dynamic singularity spectrum distribution of real sea clutter in level III sea state, and conclude that the radar sea clutter has the non-stationary and time-varying scale characteristic and represents the time-varying singularity spectrum distribution based on the proposed DMA-MFSD method. The DMA-MFSD will also provide reference for nonlinear dynamics and multifractal signal processing.

  7. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    PubMed

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
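
    As an illustration of the modeling idea summarized above (zero-mean Gaussian EMG whose variance follows an inverse gamma distribution, with parameters recovered from the rectified-and-smoothed signal), the following Python sketch simulates such a signal and fits the variance distribution. The segment length, smoothing window, and the use of scipy.stats.invgamma.fit are illustrative assumptions, not the authors' exact marginal-likelihood estimator.

```python
import numpy as np
from scipy.stats import invgamma

rng = np.random.default_rng(0)

# Synthetic "EMG": zero-mean Gaussian noise whose variance is itself a random
# variable following an inverse gamma distribution (shape a, scale b).
a_true, b_true = 4.0, 3.0
n_segments, seg_len = 2000, 50
variances = invgamma.rvs(a_true, scale=b_true, size=n_segments, random_state=rng)
emg = rng.normal(0.0, np.sqrt(np.repeat(variances, seg_len)))

# Proxy for the latent variance: rectify and smooth the signal (moving average
# of the squared samples), as the record suggests for a low-cost approximation.
window = seg_len
smoothed_power = np.convolve(emg**2, np.ones(window) / window, mode="valid")

# Fit an inverse gamma distribution to the smoothed-power series.
a_hat, loc_hat, b_hat = invgamma.fit(smoothed_power, floc=0.0)
print(f"true (a, b) = ({a_true}, {b_true}), estimated ~ ({a_hat:.2f}, {b_hat:.2f})")
```

    The simple fit above stands in for the paper's marginal-likelihood maximization; it is only meant to show the shape of the pipeline (simulate, rectify and smooth, fit a distribution).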

  8. EPICS as a MARTe Configuration Environment

    NASA Astrophysics Data System (ADS)

    Valcarcel, Daniel F.; Barbalace, Antonio; Neto, André; Duarte, André S.; Alves, Diogo; Carvalho, Bernardo B.; Carvalho, Pedro J.; Sousa, Jorge; Fernandes, Horácio; Goncalves, Bruno; Sartori, Filippo; Manduchi, Gabriele

    2011-08-01

    The Multithreaded Application Real-Time executor (MARTe) software provides an environment for the hard real-time execution of codes while leveraging a standardized algorithm development process. The Experimental Physics and Industrial Control System (EPICS) software allows the deployment and remote monitoring of networked control systems. Channel Access (CA) is the protocol that enables the communication between EPICS distributed components. It allows to set and monitor process variables across the network belonging to different systems. The COntrol and Data Acquisition and Communication (CODAC) system for the ITER Tokamak will be EPICS based and will be used to monitor and live configure the plant controllers. The reconfiguration capability in a hard real-time system requires strict latencies from the request to the actuation and it is a key element in the design of the distributed control algorithm. Presently, MARTe and its objects are configured using a well-defined structured language. After each configuration, all objects are destroyed and the system rebuilt, following the strong hard real-time rule that a real-time system in online mode must behave in a strictly deterministic fashion. This paper presents the design and considerations to use MARTe as a plant controller and enable it to be EPICS monitorable and configurable without disturbing the execution at any time, in particular during a plasma discharge. The solutions designed for this will be presented and discussed.

  9. Locational Marginal Pricing in the Campus Power System at the Power Distribution Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Jun; Gu, Yi; Zhang, Yingchen

    2016-11-14

    In the development of smart grid at distribution level, the realization of real-time nodal pricing is one of the key challenges. The research work in this paper implements and studies the methodology of locational marginal pricing at distribution level based on a real-world distribution power system. The pricing mechanism utilizes optimal power flow to calculate the corresponding distributional nodal prices. Both Direct Current Optimal Power Flow and Alternating Current Optimal Power Flow are utilized to calculate and analyze the nodal prices. The University of Denver campus power grid is used as the power distribution system test bed to demonstrate the pricing methodology.
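
    To make the notion of distribution-level locational marginal prices concrete, the toy sketch below computes nodal prices on an invented three-bus radial feeder by re-solving a DC optimal dispatch with a 1 MW load perturbation at each bus (a finite-difference stand-in for the dual variables). The topology, costs, and limits are made up for illustration and are unrelated to the University of Denver feeder or the paper's AC formulation.

```python
from scipy.optimize import linprog

def dispatch_cost(load):
    # load = [L1, L2, L3] at buses 1..3; generators at bus 1 (cheap) and bus 2.
    c = [20.0, 50.0]                                   # $/MWh marginal costs g1, g2
    A_eq, b_eq = [[1.0, 1.0]], [sum(load)]             # power balance: g1 + g2 = total load
    # Radial lines: line 1-3 carries g1 - L1 (limit 80 MW), line 2-3 carries g2 - L2 (200 MW).
    A_ub = [[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
    b_ub = [80.0 + load[0], 80.0 - load[0], 200.0 + load[1], 200.0 - load[1]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 150), (0, 150)], method="highs")
    return res.fun

base = [0.0, 0.0, 100.0]                               # 100 MW load at bus 3
lmps = []
for bus in range(3):
    bumped = list(base)
    bumped[bus] += 1.0                                 # add 1 MW at this bus
    lmps.append(dispatch_cost(bumped) - dispatch_cost(base))
print("approximate nodal prices ($/MWh):", lmps)       # congestion makes them differ
```

    Because the cheap generator's export line is congested, the marginal MW at buses 2 and 3 is served by the expensive unit while bus 1 can be served locally, so the finite-difference prices separate across the feeder.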

  10. Improved-resolution real-time skin-dose mapping for interventional fluoroscopic procedures

    NASA Astrophysics Data System (ADS)

    Rana, Vijay K.; Rudin, Stephen; Bednarek, Daniel R.

    2014-03-01

    We have developed a dose-tracking system (DTS) that provides a real-time display of the skin-dose distribution on a 3D patient graphic during fluoroscopic procedures. Radiation dose to individual points on the skin is calculated using exposure and geometry parameters from the digital bus on a Toshiba C-arm unit. To accurately define the distribution of dose, it is necessary to use a high-resolution patient graphic consisting of a large number of elements. In the original DTS version, the patient graphics were obtained from a library of population body scans which consisted of larger-sized triangular elements resulting in poor congruence between the graphic points and the x-ray beam boundary. To improve the resolution without impacting real-time performance, the number of calculations must be reduced and so we created software-designed human models and modified the DTS to read the graphic as a list of vertices of the triangular elements such that common vertices of adjacent triangles are listed once. Dose is calculated for each vertex point once instead of the number of times that a given vertex appears in multiple triangles. By reformatting the graphic file, we were able to subdivide the triangular elements by a factor of 64 times with an increase in the file size of only 1.3 times. This allows a much greater number of smaller triangular elements and improves resolution of the patient graphic without compromising the real-time performance of the DTS and also gives a smoother graphic display for better visualization of the dose distribution.
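
    The key implementation point in this record is storing each vertex once and computing dose per unique vertex, then looking doses up per triangle. A minimal sketch of that indexing scheme follows; the geometry, source position, and inverse-square dose model are invented for illustration and are not the DTS calculation.

```python
import numpy as np

def dose_per_vertex(vertices, source_pos, dose_rate_at_1m):
    """Toy inverse-square dose estimate for every unique vertex."""
    d = np.linalg.norm(vertices - source_pos, axis=1)      # distance in metres
    return dose_rate_at_1m / np.maximum(d, 1e-6) ** 2      # illustrative model only

# Vertex-indexed graphic: shared vertices are listed once, triangles reference indices.
vertices = np.array([[0.0, 0.0, 1.0],
                     [0.1, 0.0, 1.0],
                     [0.0, 0.1, 1.0],
                     [0.1, 0.1, 1.0]])
triangles = np.array([[0, 1, 2], [1, 3, 2]])               # vertices 1 and 2 are shared

vertex_dose = dose_per_vertex(vertices, np.array([0.0, 0.0, 0.0]), 1.0)
# A triangle's display value is a lookup over its three vertex indices, so refining
# the mesh multiplies triangles without repeating the per-point dose calculation.
triangle_dose = vertex_dose[triangles].mean(axis=1)
print(vertex_dose, triangle_dose)
```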

  11. Practical performance of real-time shot-noise measurement in continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Huang, Peng; Zhou, Yingming; Liu, Weiqi; Zeng, Guihua

    2018-01-01

    In a practical continuous-variable quantum key distribution (CVQKD) system, real-time shot-noise measurement (RTSNM) is an essential procedure for preventing an eavesdropper from exploiting practical security loopholes. However, the performance of this procedure itself has not been analyzed under real-world conditions. Therefore, we characterize the practical performance of RTSNM and investigate its effects on the CVQKD system. In particular, due to the finite-size effect, the shot-noise measurement at the receiver's side may decrease the precision of parameter estimation and consequently result in a tight security bound. To mitigate that, we optimize the block size for RTSNM under the ensemble size limitation to maximize the secure key rate. Moreover, the effect of the finite dynamics of the amplitude modulator in this scheme is studied and a mitigation method is also proposed. Our work characterizes the practical performance of RTSNM and provides the real secret key rate under it.

  12. The Carnegie Mellon University Insert Project

    DTIC Science & Technology

    1997-02-01

    Real-Time Systems (INSERT) project under the DARPA Evolutionary Design for Complex Software (EDCS) Program. The INSERT team has completed an initial API definition and ported the existing real-time publication subscription group communication software to LynxOS 2.4, a POSIX.1b compliant OS. The distributed real-time publisher/subscriber communication model is now supported by a processor membership protocol which allows a node in the system to fail, or to rejoin the system later. When a node fails, all the publishers and subscribers on that node have to be

  13. Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagarajan, Adarsh; Nelson, Austin; Prabakar, Kumaraguru

    As advanced grid-support functions (AGF) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time systems and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a Monte Carlo method that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced order models in OpenDSS, and subsequently implemented in the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the chosen feeders could be analyzed.

  14. A Distributed Simulation Facility to Support Human Factors Research in Advanced Air Transportation Technology

    NASA Technical Reports Server (NTRS)

    Amonlirdviman, Keith; Farley, Todd C.; Hansman, R. John, Jr.; Ladik, John F.; Sherer, Dana Z.

    1998-01-01

    A distributed real-time simulation of the civil air traffic environment developed to support human factors research in advanced air transportation technology is presented. The distributed environment is based on a custom simulation architecture designed for simplicity and flexibility in human experiments. Standard Internet protocols are used to create the distributed environment, linking an advanced cockpit simulator, an Air Traffic Control simulator, and a pseudo-aircraft control and simulation management station. The pseudo-aircraft control station also functions as a scenario design tool for coordinating human factors experiments. This station incorporates a pseudo-pilot interface designed to reduce workload for human operators piloting multiple aircraft simultaneously in real time. The application of this distributed simulation facility to support a study of the effect of shared information (via air-ground datalink) on pilot/controller shared situation awareness and re-route negotiation is also presented.

  15. Real-time Optimization of Distributed Energy Storage System Operation Strategy Based on Peak Load Shifting

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Lu, Guangqi; Li, Xiaoyu; Zhang, Yichi; Yun, Zejian; Bian, Di

    2018-01-01

    To take full advantage of the energy storage system (ESS), factors such as the service life of the distributed energy storage system (DESS) and the load should be considered when establishing the optimization model. To reduce the complexity of the DESS load shifting in the solution procedure, a loss coefficient and an equal-capacity-ratio distribution principle were adopted in this paper. Firstly, the model was established considering the constraint conditions on the cycles, depth, and power of the charge-discharge of the ESS, as well as the typical daily load curves. Then, a dynamic programming method was used to solve the model in real time, in which the power difference Δs, the real-time revised energy storage capacity Sk, and the permitted error of the depth of charge-discharge were introduced to optimize the solution process. The simulation results show that the optimized result was achieved when load shifting based on load variance was not considered, i.e., the charge-discharge of the energy storage system was not executed; meanwhile, the service life of the ESS would increase.

  16. Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant.

    PubMed

    Moreno-Garcia, Isabel M; Palacios-Garcia, Emilio J; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J; Varo-Martinez, Marta; Real-Calvo, Rafael J

    2016-05-26

    There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performances were analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid.

  17. Infusion-line pressure as a real-time monitor of convection-enhanced delivery in pre-clinical models.

    PubMed

    Lam, Miu Fei; Foo, Stacy W L; Thomas, Meghan G; Lind, Christopher R P

    2014-01-15

    Acute convection-enhanced delivery (CED) is a neurosurgical delivery technique that allows for precise and uniform distribution of an infusate to a brain structure. It remains experimental due to difficulties in ensuring successful delivery. Real-time monitoring is able to provide immediate feedback on cannula placement, infusate distribution, and if the infusion is proceeding as planned or is failing due to reflux or catheter obstruction. Pressure gradient is the driving force behind CED, with the infusion pressure being directly proportional to the flow-rate. The aim of this study was to assess the feasibility of using infusion-line pressure profiling to distinguish in real-time between succeeding and failing CED infusions. To do so we delivered cresyl violet dye at 0.5, 1.0 and 2.0 μl/min via CED in vitro using 0.6% agarose gel and in vivo to the rat striatum. Infusions that failed in agarose gel models could only be differentiated late during the procedures. In the rat in vivo model, the infusion-line profiles of obstructed infusions were not distinctive from those of successful infusions. Intraoperative magnetic resonance imaging (MRI) is used for real-time visualisation of cannula placement and infusate distribution. Particularly for animal pre-clinical work, it would be advantageous to supplement MRI with a cheap, accessible technique to monitor infusions and provide a real-time measure of infusion success or failure. Infusion-line pressure monitoring was of limited value in identifying successful CED with small volume infusions, whilst its utility for large volume infusion remains unknown. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.

  18. Power Hardware-in-the-Loop Testing of a Smart Distribution System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendoza Carrillo, Ismael; Breaden, Craig; Medley, Paige

    This paper presents the results of the third and final phase of the National Renewable Energy Lab (NREL) INTEGRATE demonstration: Smart Distribution. For this demonstration, high penetrations of solar PV and wind energy systems were simulated in a power hardware-in-the-loop set-up using a smart distribution test feeder. Simulated and real DERs were controlled by a real-time control platform, which manages grid constraints under high clean energy deployment levels. The power HIL testing, conducted at NREL's ESIF smart power lab, demonstrated how dynamically managing DER increases the grid's hosting capacity by leveraging active network management's (ANM) safe and reliable control framework. Results are presented for how ANM's real-time monitoring, automation, and control can be used to manage multiple DERs and multiple constraints associated with high penetrations of DER on a distribution grid. The project also successfully demonstrated the importance of escalating control actions given how ANM enables operation of grid equipment closer to their actual physical limit in the presence of very high levels of intermittent DER.

  19. From MetroII to Metronomy, Designing Contract-based Function-Architecture Co-simulation Framework for Timing Verification of Cyber-Physical Systems

    DTIC Science & Technology

    2015-03-13

    A. Lee. “A Programming Model for Time-Synchronized Distributed Real-Time Systems”. In: Proceedings of the Real Time and Embedded Technology and Applications Symposium. 2007, pp. 259–268. ... From MetroII to Metronomy, Designing Contract-based Function-Architecture Co-simulation Framework for Timing Verification of Cyber-Physical Systems ...

  20. Applying Dataflow Architecture and Visualization Tools to In Vitro Pharmacology Data Automation.

    PubMed

    Pechter, David; Xu, Serena; Kurtz, Marc; Williams, Steven; Sonatore, Lisa; Villafania, Artjohn; Agrawal, Sony

    2016-12-01

    The pace and complexity of modern drug discovery places ever-increasing demands on scientists for data analysis and interpretation. Data flow programming and modern visualization tools address these demands directly. Three different requirements (one for allosteric modulator analysis, one for a specialized clotting analysis, and one for enzyme global progress curve analysis) are reviewed, and their execution in a combined data flow/visualization environment is outlined. © 2016 Society for Laboratory Automation and Screening.

  1. Real-time distributed video coding for 1K-pixel visual sensor networks

    NASA Astrophysics Data System (ADS)

    Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian

    2016-07-01

    Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.

  2. Time scale defined by the fractal structure of the price fluctuations in foreign exchange markets

    NASA Astrophysics Data System (ADS)

    Kumagai, Yoshiaki

    2010-04-01

    In this contribution, a new time scale named C-fluctuation time is defined by price fluctuations observed at a given resolution. The intraday fractal structures and the relations among the three time scales (real time (physical time), tick time, and C-fluctuation time) in foreign exchange markets are analyzed. The data set used consists of trading prices of foreign exchange rates: US dollar (USD)/Japanese yen (JPY), USD/Euro (EUR), and EUR/JPY. The accuracy of the data is one minute, and data within a minute are recorded in order of transaction. The series of instantaneous velocities of C-fluctuation time flow are exponentially distributed for small C when measured by real time and for tiny C when measured by tick time. When the market is volatile, the series of instantaneous velocities are exponentially distributed for larger C.
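
    One plausible reading of the C-fluctuation clock described above is that it ticks whenever the cumulative absolute price change since the previous tick reaches the resolution C. The sketch below implements that reading on a synthetic random-walk rate series; the exact construction used in the paper may differ, and the numbers are invented.

```python
import numpy as np

def c_fluctuation_time(prices, C):
    """Number of C-fluctuation ticks elapsed at each observation.

    A tick is registered whenever the cumulative absolute price change since the
    previous tick reaches the resolution C (one plausible reading of the record).
    """
    ticks = np.zeros(len(prices), dtype=int)
    accumulated, count = 0.0, 0
    for i in range(1, len(prices)):
        accumulated += abs(prices[i] - prices[i - 1])
        while accumulated >= C:            # a large jump may trigger several ticks
            accumulated -= C
            count += 1
        ticks[i] = count
    return ticks

# Toy random-walk "exchange rate" sampled once per transaction (tick time).
rng = np.random.default_rng(1)
prices = 100.0 + np.cumsum(rng.normal(0.0, 0.01, size=10_000))
tau = c_fluctuation_time(prices, C=0.05)
velocity = np.diff(tau)                    # "instantaneous velocity" of the new clock
print(tau[-1], velocity.mean())
```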

  3. From Provenance Standards and Tools to Queries and Actionable Provenance

    NASA Astrophysics Data System (ADS)

    Ludaescher, B.

    2017-12-01

    The W3C PROV standard provides a minimal core for sharing retrospective provenance information for scientific workflows and scripts. PROV extensions such as DataONE's ProvONE model are necessary for linking runtime observables in retrospective provenance records with conceptual-level prospective provenance information, i.e., workflow (or dataflow) graphs. Runtime provenance recorders, such as DataONE's RunManager for R, or noWorkflow for Python capture retrospective provenance automatically. YesWorkflow (YW) is a toolkit that allows researchers to declare high-level prospective provenance models of scripts via simple inline comments (YW-annotations), revealing the computational modules and dataflow dependencies in the script. By combining and linking both forms of provenance, important queries and use cases can be supported that neither provenance model can afford on its own. We present existing and emerging provenance tools developed for the DataONE and SKOPE (Synthesizing Knowledge of Past Environments) projects. We show how the different tools can be used individually and in combination to model, capture, share, query, and visualize provenance information. We also present challenges and opportunities for making provenance information more immediately actionable for the researchers who create it in the first place. We argue that such a shift towards "provenance-for-self" is necessary to accelerate the creation, sharing, and use of provenance in support of transparent, reproducible computational and data science.
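
    To make the YesWorkflow idea concrete, the sketch below shows a small Python script annotated with YW-style structured comments (@begin/@in/@out/@end) declaring its prospective dataflow. The tag spelling follows the general YW pattern; the authoritative syntax and options should be checked against the YesWorkflow documentation, and the script itself is an invented example.

```python
# A small script annotated in the YesWorkflow comment style described above.

# @begin clean_and_summarize
# @in raw_csv @as raw_observations
# @out summary_csv @as daily_means
import pandas as pd

def clean_and_summarize(raw_csv: str, summary_csv: str) -> None:
    # @begin load_data
    # @in raw_csv
    # @out df
    df = pd.read_csv(raw_csv, parse_dates=["timestamp"])
    # @end load_data

    # @begin aggregate
    # @in df
    # @out summary_csv
    daily = df.set_index("timestamp").resample("D").mean(numeric_only=True)
    daily.to_csv(summary_csv)
    # @end aggregate
# @end clean_and_summarize
```

    Running a YW extractor over such comments yields the prospective dataflow graph, which can then be linked to retrospective records captured by tools like the RunManager or noWorkflow, as the record above describes.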

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, W.

    Building something which could be called "virtual reality" (VR) is something of a challenge, particularly when nobody really seems to agree on a definition of VR. The author wanted to combine scientific visualization with VR, resulting in an environment useful for assisting scientific research. He demonstrates the combination of VR and scientific visualization in a prototype application. The VR application constructed consists of a dataflow based system for performing scientific visualization (AVS), extensions to the system to support VR input devices and a numerical simulation ported into the dataflow environment. The VR system includes two inexpensive, off-the-shelf VR devices and some custom code. A working system was assembled with about two man-months of effort. The system allows the user to specify parameters for a chemical flooding simulation as well as some viewing parameters using VR input devices, as well as view the output using VR output devices. In chemical flooding, there is a subsurface region that contains chemicals which are to be removed. Secondary oil recovery and environmental remediation are typical applications of chemical flooding. The process assumes one or more injection wells, and one or more production wells. Chemicals or water are pumped into the ground, mobilizing and displacing hydrocarbons or contaminants. The placement of the production and injection wells, and other parameters of the wells, are the most important variables in the simulation.

  5. Real-time monitoring of single-photon detectors against eavesdropping in quantum key distribution systems.

    PubMed

    da Silva, Thiago Ferreira; Xavier, Guilherme B; Temporão, Guilherme P; von der Weid, Jean Pierre

    2012-08-13

    By employing real-time monitoring of single-photon avalanche photodiodes we demonstrate how two types of practical eavesdropping strategies, the after-gate and time-shift attacks, may be detected. Both attacks are identified with the detectors operating without any special modifications, making this proposal well suited for real-world applications. The monitoring system is based on accumulating statistics of the times between consecutive detection events, and extracting the afterpulse and overall efficiency of the detectors in real-time using mathematical models fit to the measured data. We are able to directly observe changes in the afterpulse probabilities generated from the after-gate and faint after-gate attacks, as well as different timing signatures in the time-shift attack. We also discuss the applicability of our scheme to other general blinding attacks.
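
    The monitoring idea above rests on statistics of the times between consecutive detections. The toy sketch below illustrates the general principle only: it fits the memoryless (Poissonian) tail of an inter-detection-time record and reports the short-delay excess as a crude afterpulse indicator. The numbers and the simple tail fit are invented and are not the mathematical models used by the authors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy detection record: Poissonian "true" gaps plus a small afterpulse
# population that follows each detection at short delays.
true_gaps = rng.exponential(scale=1_000.0, size=50_000)            # ns
is_afterpulse = rng.random(true_gaps.size) < 0.03                   # 3% afterpulses
gaps = np.where(is_afterpulse,
                rng.exponential(scale=50.0, size=true_gaps.size),
                true_gaps)

# Estimate the Poissonian rate from the long-delay tail, where afterpulses are
# negligible, then measure the short-delay excess relative to that model.
cut = 500.0                                                          # ns, invented
tail = gaps[gaps > cut]
rate = 1.0 / (tail.mean() - cut)                                     # memoryless tail fit
expected_short = (1.0 - np.exp(-rate * cut)) * gaps.size
observed_short = np.sum(gaps <= cut)
excess = (observed_short - expected_short) / gaps.size
print(f"short-delay excess (afterpulse indicator) ~ {excess:.3f}")
```

    In a real system this statistic would be tracked continuously; a sudden change in the afterpulse-like excess or in the fitted rate would flag the kind of detector manipulation described above.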

  6. The Explosive Universe with Gaia

    NASA Astrophysics Data System (ADS)

    Wyrzykowski, Łukasz; Hodgkin, Simon T.; Blagorodnova, Nadejda; Belokurov, Vasily

    2014-01-01

    The Gaia mission will observe the entire sky for 5 years providing ultra-precise astrometric, photometric and spectroscopic measurements for a billion stars in the Galaxy. Hence, naturally, Gaia becomes an all-sky multi-epoch photometric survey, which will monitor and detect variability with millimag precision as well as new transient sources such as supernovae, novae, microlensing events, tidal disruption events, asteroids, among others. Gaia data-flow allows for quick detections of anomalies within 24-48 h after the observation. Such a near-real-time survey will be able to detect about 6000 supernovae brighter than 19 mag up to redshifts of z ~ 0.15. The on-board low-resolution (R ~ 100) spectrograph will allow for early and robust classification of transients and minimise the false-alert rate, even providing the estimates on redshift for supernovae. Gaia will also offer a unique possibility for detecting astrometric shifts in microlensing events, which, combined with Gaia's and ground-based photometry, will provide unique mass measurements of lenses, constraints on the dark matter content in the Milky Way and possible detections of free-floating black holes. Alerts from Gaia will be publicly available soon after the detection is verified and tested. First alerts are expected early in 2014 and those will be used for ground-based verification. All facilities are invited to join the verification and the follow-up effort. Alerts will be published on a web page, via Skyalert.org and via an emailing list. Each alert will contain coordinates, Gaia light curve and low-resolution spectra, classification and cross-matching results. More information on the Gaia Science Alerts can be found here: http://www.ast.cam.ac.uk/ioa/wikis/gsawgwiki/ The full version of the poster is available here: http://www.ast.cam.ac.uk/ioa/wikis/gsawgwiki/images/1/13/GaiaAlertsPosterIAUS298.pdf

  7. A Cloud Architecture for Teleradiology-as-a-Service.

    PubMed

    Melício Monteiro, Eriksson J; Costa, Carlos; Oliveira, José L

    2016-05-17

    Telemedicine has been promoted by healthcare professionals as an efficient way to obtain remote assistance from specialised centres, to get a second opinion about complex diagnosis or even to share knowledge among practitioners. The current economic restrictions in many countries are increasing the demand for these solutions even more, in order to optimize processes and reduce costs. However, despite some technological solutions already in place, their adoption has been hindered by the lack of usability, especially in the set-up process. In this article we propose a telemedicine platform that relies on a cloud computing infrastructure and social media principles to simplify the creation of dynamic user-based groups, opening up opportunities for the establishment of teleradiology trust domains. The collaborative platform is provided as a Software-as-a-Service solution, supporting real time and asynchronous collaboration between users. To evaluate the solution, we have deployed the platform in a private cloud infrastructure. The system is made up of three main components - the collaborative framework, the Medical Management Information System (MMIS) and the HTML5 (Hyper Text Markup Language) Web client application - connected by a message-oriented middleware. The solution allows physicians to create easily dynamic network groups for synchronous or asynchronous cooperation. The network created improves dataflow between colleagues and also knowledge sharing and cooperation through social media tools. The platform was implemented and it has already been used in two distinct scenarios: teaching of radiology and tele-reporting. Collaborative systems can simplify the establishment of telemedicine expert groups with tools that enable physicians to improve their clinical practice. Streamlining the usage of this kind of systems through the adoption of Web technologies that are common in social media will increase the quality of current solutions, facilitating the sharing of clinical information, medical imaging studies and patient diagnostics among collaborators.

  8. First-principles electron dynamics control simulation of diamond under femtosecond laser pulse train irradiation.

    PubMed

    Wang, Cong; Jiang, Lan; Wang, Feng; Li, Xin; Yuan, Yanping; Xiao, Hai; Tsai, Hai-Lung; Lu, Yongfeng

    2012-07-11

    A real-time and real-space time-dependent density functional is applied to simulate the nonlinear electron-photon interactions during shaped femtosecond laser pulse train ablation of diamond. Effects of the key pulse train parameters such as the pulse separation, spatial/temporal pulse energy distribution and pulse number per train on the electron excitation and energy absorption are discussed. The calculations show that photon-electron interactions and transient localized electron dynamics can be controlled including photon absorption, electron excitation, electron density, and free electron distribution by the ultrafast laser pulse train.

  9. A Real-Time Imaging System for Stereo Atomic Microscopy at SPring-8's BL25SU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsushita, Tomohiro; Guo, Fang Zhun; Muro, Takayuki

    2007-01-19

    We have developed a real-time photoelectron angular distribution (PEAD) and Auger-electron angular distribution (AEAD) imaging system at SPring-8 BL25SU, Japan. In addition, a real-time imaging system for circular dichroism (CD) studies of PEAD/AEAD has been newly developed. Two PEAD images recorded with left- and right-circularly polarized light can be regarded as a stereo image of the atomic arrangement. A two-dimensional display type mirror analyzer (DIANA) has been installed at the beamline, making it possible to record PEAD/AEAD patterns with an acceptance angle of ±60 deg. in real-time. The twin-helical undulators at BL25SU enable helicity switching of the circularly polarized light at 10 Hz, 1 Hz or 0.1 Hz. In order to realize real-time measurements of the CD of the PEAD/AEAD, the CCD camera must be synchronized to the switching frequency. The VME computer that controls the ID is connected to the measurement computer with two BNC cables, and the helicity information is sent using TTL signals. For maximum flexibility, rather than using a hardware shutter synchronizing with the TTL signal we have developed software to synchronize the CCD shutter with the TTL signal. We have succeeded in synchronizing the CCD camera in both the 1 Hz and 0.1 Hz modes.

  10. Impact of scatterometer wind (ASCAT-A/B) data assimilation on semi real-time forecast system at KIAPS

    NASA Astrophysics Data System (ADS)

    Han, H. J.; Kang, J. H.

    2016-12-01

    Since Jul. 2015, KIAPS (Korea Institute of Atmospheric Prediction Systems) has been running a semi real-time forecast system to assess the performance of its forecast system as an NWP model. KPOP (KIAPS Protocol for Observation Processing) is a part of the KIAPS data assimilation system and has been performing well in the KIAPS semi real-time forecast system. In this study, since KPOP is now able to handle scatterometer wind data, we analyze the effect of scatterometer wind (ASCAT-A/B) data on the KIAPS semi real-time forecast system. The O-B global distribution and statistics of the scatterometer wind give us two pieces of information: the difference between the background field and the observations is not too large, and KPOP processed the scatterometer wind data well. The changes in the analysis increment due to the O-B global distribution appear most markedly at the bottom of the atmospheric field. The results also show that scatterometer wind data cover wide ocean areas where data would otherwise be scarce. The performance of the scatterometer wind data can be checked through the vertical error reduction against IFS between the background and analysis fields and the vertical statistics of O-A. From these analysis results, we conclude that scatterometer wind data have a positive effect on the lower-level performance of the semi real-time forecast system at KIAPS. Subsequently, long-term results on the effect of scatterometer wind data will be analyzed.

  11. Queueing analysis of a canonical model of real-time multiprocessors

    NASA Technical Reports Server (NTRS)

    Krishna, C. M.; Shin, K. G.

    1983-01-01

    A logical classification of multiprocessor structures from the point of view of control applications is presented. A computation of the response time distribution for a canonical model of a real time multiprocessor is presented. The multiprocessor is approximated by a blocking model. Two separate models are derived: one created from the system's point of view, and the other from the point of view of an incoming task.

  12. AEGIS: a robust and scalable real-time public health surveillance system.

    PubMed

    Reis, Ben Y; Kirby, Chaim; Hadden, Lucy E; Olson, Karen; McMurry, Andrew J; Daniel, James B; Mandl, Kenneth D

    2007-01-01

    In this report, we describe the Automated Epidemiological Geotemporal Integrated Surveillance system (AEGIS), developed for real-time population health monitoring in the state of Massachusetts. AEGIS provides public health personnel with automated near-real-time situational awareness of utilization patterns at participating healthcare institutions, supporting surveillance of bioterrorism and naturally occurring outbreaks. As real-time public health surveillance systems become integrated into regional and national surveillance initiatives, the challenges of scalability, robustness, and data security become increasingly prominent. A modular and fault tolerant design helps AEGIS achieve scalability and robustness, while a distributed storage model with local autonomy helps to minimize risk of unauthorized disclosure. The report includes a description of the evolution of the design over time in response to the challenges of a regional and national integration environment.

  13. Optical chaos and hybrid WDM/TDM based large capacity quasi-distributed sensing network with real-time fiber fault monitoring.

    PubMed

    Luo, Yiyang; Xia, Li; Xu, Zhilin; Yu, Can; Sun, Qizhen; Li, Wei; Huang, Di; Liu, Deming

    2015-02-09

    An optical chaos and hybrid wavelength division multiplexing/time division multiplexing (WDM/TDM) based large capacity quasi-distributed sensing network with real-time fiber fault monitoring is proposed. Chirped fiber Bragg grating (CFBG) intensity demodulation is adopted to improve the dynamic range of the measurements. Compared with the traditional sensing interrogation methods in time, radio frequency and optical wavelength domains, the measurand sensing and the precise locating of the proposed sensing network can be simultaneously interrogated by the relative amplitude change (RAC) and the time delay of the correlation peak in the cross-correlation spectrum. Assisted with the WDM/TDM technology, hundreds of sensing units could be potentially multiplexed in the multiple sensing fiber lines. Based on the proof-of-concept experiment for axial strain measurement with three sensing fiber lines, the strain sensitivity up to 0.14% RAC/με and the precise locating of the sensors are achieved. Significantly, real-time fiber fault monitoring in the three sensing fiber lines is also implemented with a spatial resolution of 2.8 cm.

  14. INO340 telescope control system: middleware requirements, design, and evaluation

    NASA Astrophysics Data System (ADS)

    Shalchian, Hengameh; Ravanmehr, Reza

    2016-07-01

    The INO340 Control System (INOCS) is being designed in terms of a distributed real-time architecture. The real-time (soft and firm) nature of many processes inside INOCS causes the communication paradigm between its different components to be time-critical and sensitive. For this purpose, we have chosen the Data Distribution Service (DDS) standard as the communications middleware which is itself based on the publish-subscribe paradigm. In this paper, we review and compare the main middleware types, and then we illustrate the middleware architecture of INOCS and its specific requirements. Finally, we present the experimental results, performed to evaluate our middleware in order to ensure that it meets our requirements.

  15. A distributed approach for optimizing cascaded classifier topologies in real-time stream mining systems.

    PubMed

    Foo, Brian; van der Schaar, Mihaela

    2010-11-01

    In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.

  16. Steam distribution and energy delivery optimization using wireless sensors

    NASA Astrophysics Data System (ADS)

    Olama, Mohammed M.; Allgood, Glenn O.; Kuruganti, Teja P.; Sukumar, Sreenivas R.; Djouadi, Seddik M.; Lake, Joe E.

    2011-05-01

    The Extreme Measurement Communications Center at Oak Ridge National Laboratory (ORNL) explores the deployment of a wireless sensor system with a real-time measurement-based energy efficiency optimization framework in the ORNL campus. With particular focus on the 12-mile long steam distribution network in our campus, we propose an integrated system-level approach to optimize the energy delivery within the steam distribution system. We address the goal of achieving significant energy-saving in steam lines by monitoring and acting on leaking steam valves/traps. Our approach leverages an integrated wireless sensor and real-time monitoring capabilities. We make assessments on the real-time status of the distribution system by mounting acoustic sensors on the steam pipes/traps/valves and observe the state measurements of these sensors. Our assessments are based on analysis of the wireless sensor measurements. We describe Fourier-spectrum based algorithms that interpret acoustic vibration sensor data to characterize flows and classify the steam system status. We are able to present the sensor readings, steam flow, steam trap status and the assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. We believe our demonstration serves as an instantiation of a platform that extends implementation to include newer modalities to manage water flow, sewage and energy consumption.
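
    A minimal sketch of the kind of Fourier-spectrum feature such a system might use is shown below: the fraction of acoustic energy in a high-frequency band flags a persistent hiss consistent with a leaking trap. The sampling rate, band, and threshold are invented for illustration and are not ORNL's actual classification algorithm.

```python
import numpy as np

def band_energy_fraction(signal, fs, band=(2_000.0, 8_000.0)):
    """Fraction of spectral energy inside `band` (Hz) for one acoustic record."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum()

def classify_trap(signal, fs, threshold=0.35):
    """Toy status rule: persistent high-frequency hiss suggests a leaking trap."""
    return "leaking" if band_energy_fraction(signal, fs) > threshold else "normal"

# Synthetic examples (invented): broadband hiss vs. low-frequency mechanical rumble.
fs = 20_000.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(3)
leaking = rng.normal(0, 1, t.size)                          # broadband -> high band energy
normal = np.sin(2 * np.pi * 120 * t) + 0.05 * rng.normal(0, 1, t.size)
print(classify_trap(leaking, fs), classify_trap(normal, fs))
```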

  17. Research on calibration method of downhole optical fiber temperature measurement and its application in SAGD well

    NASA Astrophysics Data System (ADS)

    Lu, Zhiwei; Han, Li; Hu, Chengjun; Pan, Yong; Duan, Shengnan; Wang, Ningbo; Li, Shijian; Nuer, Maimaiti

    2017-10-01

    With the development of oil and gas fields, the accuracy and quantity requirements of real-time dynamic monitoring data needed for well dynamic analysis and regulation are increasing. Permanent, distributed downhole optical fiber temperature and pressure monitoring and other online real-time continuous data monitoring have become an important data acquisition and transmission technology in digital oil field and intelligent oil field construction. Considering the requirement for dynamic analysis of the steam chamber development state in SAGD horizontal wells in the F oil reservoir in the Xinjiang oilfield, it is necessary to carry out real-time and continuous temperature monitoring in the horizontal section. Based on the study of the principle of optical fiber temperature measurement, the factors that cause the deviation of optical fiber temperature sensing are analyzed, and a method of fiber temperature calibration is proposed to solve the problem of temperature deviation. Field application in three wells showed that accurate measurement of downhole temperature could be attained through temperature correction. The real-time and continuous downhole distributed fiber temperature sensing technology has high application value in the reservoir management of SAGD horizontal wells. It also provides a reference for similar dynamic monitoring in reservoir production.

  18. Real time monitoring of water distribution in an operando fuel cell during transient states

    NASA Astrophysics Data System (ADS)

    Martinez, N.; Peng, Z.; Morin, A.; Porcar, L.; Gebel, G.; Lyonnard, S.

    2017-10-01

    The water distribution of an operating proton exchange membrane fuel cell (PEMFC) was monitored in real time by using Small Angle Neutron Scattering (SANS). The formation of liquid water was obtained simultaneously with the evolution of the water content inside the membrane. Measurements were performed when changing the current with a time resolution of 10 s, providing insights into the kinetics of water management prior to the stationary phase. We confirmed that water distribution is strongly heterogeneous at the scale of the whole Membrane Electrode Assembly. As already reported, at the local scale there is no straightforward link between the amounts of water present inside and outside the membrane. However, we show that the temporal evolutions of these two parameters are strongly correlated. In particular, the local membrane water content is nearly instantaneously correlated to the total liquid water content, whether it is located at the anode or cathode side. These results can help in optimizing 3D stationary diphasic models used to predict PEMFC water distribution.

  19. A high performance load balance strategy for real-time multicore systems.

    PubMed

    Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing

    2014-01-01

    Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm simultaneously considers multiple criteria, a novel factor, and task deadline, and is called power and deadline-aware multicore scheduling (PDAMS). Experiment results show that the proposed algorithm can greatly reduce energy consumption by up to 54.2% and the deadline times missed, as compared to the other scheduling algorithms outlined in this paper.
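
    The record does not spell out the PDAMS algorithm, so the sketch below shows only a generic deadline-aware, load- and power-conscious assignment of tasks to cores, to illustrate the kind of multi-criteria trade-off described; it is not the published PDAMS method, and the task set, speeds, and tie-breaking rule are invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    wcet: float       # worst-case execution time at full speed (ms)
    deadline: float   # relative deadline (ms)

@dataclass
class Core:
    speed: float                    # normalized frequency (1.0 = full speed)
    queue: List[Task] = field(default_factory=list)

    def load(self) -> float:
        return sum(t.wcet / self.speed for t in self.queue)

def assign(tasks: List[Task], cores: List[Core]) -> None:
    """Deadline-aware, least-loaded assignment (illustrative, not PDAMS itself)."""
    for task in sorted(tasks, key=lambda t: t.deadline):        # earliest deadline first
        feasible = [c for c in cores
                    if c.load() + task.wcet / c.speed <= task.deadline]
        # Place the task where it finishes earliest; ties go to the slower
        # (lower-power) core, mimicking a power/deadline trade-off.
        target = min(feasible or cores,
                     key=lambda c: (c.load() + task.wcet / c.speed, c.speed))
        target.queue.append(task)

cores = [Core(speed=0.6), Core(speed=1.0)]
assign([Task("sensor", 2, 10), Task("control", 4, 8), Task("log", 6, 40)], cores)
for i, c in enumerate(cores):
    print(f"core {i} (speed {c.speed}): {[t.name for t in c.queue]}")
```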

  20. Decision support system for outage management and automated crew dispatch

    DOEpatents

    Kang, Ning; Mousavi, Mirrasoul

    2018-01-23

    A decision support system is provided for utility operations to assist with crew dispatch and restoration activities following the occurrence of a disturbance in a multiphase power distribution network, by providing a real-time visualization of possible location(s). The system covers faults that occur on fuse-protected laterals. The system uses real-time data from intelligent electronics devices coupled with other data sources such as static feeder maps to provide a complete picture of the disturbance event, guiding the utility crew to the most probable location(s). This information is provided in real-time, reducing restoration time and avoiding more costly and laborious fault location finding practices.

  1. A High Performance Load Balance Strategy for Real-Time Multicore Systems

    PubMed Central

    Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing

    2014-01-01

    Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm simultaneously considers multiple criteria, a novel factor, and task deadline, and is called power and deadline-aware multicore scheduling (PDAMS). Experiment results show that the proposed algorithm can greatly reduce energy consumption by up to 54.2% and the deadline times missed, as compared to the other scheduling algorithms outlined in this paper. PMID:24955382

  2. Effect of Temperature Variations on Molecular Weight Distributions - Batch, Chain Addition Polymerizations

    DTIC Science & Technology

    those that might be formed by temperature variations in real reactors. Under most conditions, temperature variations appear to have a much greater effect on MWD than residence time distributions and micromixing.

  3. RTDS implementation of an improved sliding mode based inverter controller for PV system.

    PubMed

    Islam, Gazi; Muyeen, S M; Al-Durra, Ahmed; Hasanien, Hany M

    2016-05-01

    This paper proposes a novel approach for testing dynamics and control aspects of a large-scale photovoltaic (PV) system in real time, along with resolving design hindrances of controller parameters, using a Real Time Digital Simulator (RTDS). In general, the harmonic profile of a fast controller has wide distribution due to the large bandwidth of the controller. The major contribution of this paper is that the proposed control strategy gives an improved voltage harmonic profile and distributes it more around the switching frequency, along with fast transient response; filter design thus becomes easier. The implementation of a control strategy with high bandwidth in the small time steps of a Real Time Digital Simulator (RTDS) is not straightforward. This paper shows a good methodology for practitioners to implement such a control scheme in RTDS. As a part of the industrial process, the controller parameters are optimized using a particle swarm optimization (PSO) technique to improve the low voltage ride through (LVRT) performance under network disturbance. The response surface methodology (RSM) is well adapted to build analytical models for recovery time (Rt), maximum percentage overshoot (MPOS), settling time (Ts), and steady state error (Ess) of the voltage profile immediately after the inverter under disturbance. A systematic approach to controller parameter optimization is detailed. The transient performance of the PSO based optimization method applied to the proposed sliding mode controlled PV inverter is compared with the results from a genetic algorithm (GA) based optimization technique. The reported real time implementation challenges and controller optimization procedure are applicable to other control applications in the field of renewable and distributed generation systems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Real-Time Multiprocessor Programming Language (RTMPL) user's manual

    NASA Technical Reports Server (NTRS)

    Arpasi, D. J.

    1985-01-01

    A real-time multiprocessor programming language (RTMPL) has been developed to provide for high-order programming of real-time simulations on systems of distributed computers. RTMPL is a structured, engineering-oriented language. The RTMPL utility supports a variety of multiprocessor configurations and types by generating assembly language programs according to user-specified targeting information. Many programming functions are assumed by the utility (e.g., data transfer and scaling) to reduce the programming chore. This manual describes RTMPL from a user's viewpoint. Source generation, applications, utility operation, and utility output are detailed. An example simulation is generated to illustrate many RTMPL features.

  5. Real-time control systems: feedback, scheduling and robustness

    NASA Astrophysics Data System (ADS)

    Simon, Daniel; Seuret, Alexandre; Sename, Olivier

    2017-08-01

    The efficient control of real-time distributed systems, where continuous components are governed through digital devices and communication networks, needs a careful examination of the constraints arising from the different involved domains inside co-design approaches. Thanks to the robustness of feedback control, both new control methodologies and slackened real-time scheduling schemes are proposed beyond the frontiers between these traditionally separated fields. A methodology to design robust aperiodic controllers is provided, where the sampling interval is considered as a control variable of the system. Promising experimental results are provided to show the feasibility and robustness of the approach.

  6. Frequency Based Real-time Pricing for Residential Prosumers

    NASA Astrophysics Data System (ADS)

    Hambridge, Sarah Mabel

    This work is the first to explore frequency-based pricing for secondary frequency control as a price-reactive control mechanism for residential prosumers. A frequency-based real-time electricity rate is designed as an autonomous market control mechanism for residential prosumers to provide frequency support as an ancillary service. In addition, prosumers are empowered to participate in dynamic energy transactions, thereby integrating Distributed Energy Resources (DERs) and increasing distributed energy storage on the distribution grid. As the grid transitions towards DERs, a new market-based control system will take the place of the legacy distribution system and possibly the legacy bulk power system. DERs provide many benefits such as energy independence, clean generation, efficiency, and reliability to prosumers during blackouts. However, the variable nature of renewable energy and the current lack of installed energy storage on the grid will create imbalances in supply and demand as uptake increases, affecting the grid frequency and system operation. Through a frequency-based electricity rate, prosumers will be encouraged to purchase energy storage systems (ESS) to offset their neighbors' distributed generation (DG) such as solar. Chapter 1 explains the deregulation of the power system and the move towards Distribution System Operators (DSOs), as prosumers become owners of microgrids and energy cells connected to the distribution system. Dynamic pricing has been proposed as a benefit to prosumers, giving them the ability to make decisions in the energy market, while also providing a way to influence and control their behavior. Frequency-based real-time pricing is a type of dynamic pricing which falls between price-reactive control and transactive control. Prosumer-to-prosumer transactions may take the place of prosumer-to-utility transactions, building The Energy Internet. Frequency-based pricing could be a mechanism for determining prosumer prices and supporting stability in a free, competitive market. Frequency-based pricing is applied to secondary frequency control in this work, providing support at one- to five-minute intervals. In Chapter 2, a frequency-based pricing curve is designed as a preliminary study and the response of the prosumer is optimized for economic dispatch. In Chapter 3, a day-ahead schedule and real-time adjustment energy management framework is presented for the prosumer, creating a market structure similar to the existing energy market supervised by Independent System Operators (ISOs). Enabling technology, such as the solid-state transformer (SST), is described for prosumer energy transactions, controlling power flow from the prosumer's energy cell to the grid or a neighboring prosumer as an energy router. Experimental results are shown to demonstrate this capability. Additionally, the SST is capable of measuring the grid frequency. Lastly, a frequency-based real-time hybrid electricity rate is presented in Chapter 4 and Chapter 5. Chapter 4 focuses on a single-direction rate while Chapter 5 presents a bi-directional rate. A time-of-use (TOU) rate is combined with the real-time frequency-based price to lower energy bills for a residential prosumer with ESS, in agreement with the proposed day-ahead and real-time energy management framework. The cost to the ESS is also considered in this section. Linear programming and strategic rule-based methods are utilized to find the lowest energy bill.
As a result, prosumers can use ESS to balance the grid, reducing their bill as much per kWh as PV or DG under a TOU net-metering price scheme, while providing distributed frequency support to the grid authority. The variability of the frequency-based rate is similar to that of the stock market, which gives a sense of how prosumers will interact with variable prices in a system supported by The Energy Internet.
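
    To make the linear-programming idea above concrete, the sketch below minimizes a prosumer's energy bill over a short horizon by choosing a battery charge/discharge schedule against a time-varying price; all prices, loads, PV values, and battery limits are illustrative assumptions, not figures from the dissertation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 4-step hourly horizon; every number here is an illustrative assumption.
price = np.array([0.10, 0.30, 0.30, 0.10])   # $/kWh (TOU plus frequency-based adder)
load  = np.array([1.0, 2.0, 2.0, 1.0])       # household demand per step, kWh
pv    = np.array([0.0, 1.5, 1.5, 0.0])       # rooftop PV generation per step, kWh
T = len(price)
cap, eff, soc0 = 5.0, 0.95, 2.5              # battery capacity, one-way efficiency, initial SOC

# Decision variables x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}], all >= 0.
# Grid import at step t is load - pv + charge - discharge; the bill is price . import,
# so only the charge/discharge terms matter in the objective (the rest is constant).
obj = np.concatenate([price, -price])

# Keep the state of charge within [0, cap]: soc0 + cumsum(eff*charge - discharge/eff).
L = np.tril(np.ones((T, T)))
A_soc = np.hstack([eff * L, -L / eff])
A_ub = np.vstack([A_soc, -A_soc])
b_ub = np.concatenate([np.full(T, cap - soc0), np.full(T, soc0)])

bounds = [(0.0, 2.0)] * (2 * T)              # per-step charge/discharge limit, kWh
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
charge, discharge = res.x[:T], res.x[T:]
bill = price @ (load - pv + charge - discharge)
print(f"charge={charge.round(2)}, discharge={discharge.round(2)}, bill=${bill:.2f}")
```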

  7. Real-time distributed scheduling algorithm for supporting QoS over WDM networks

    NASA Astrophysics Data System (ADS)

    Kam, Anthony C.; Siu, Kai-Yeung

    1998-10-01

    Most existing or proposed WDM networks employ circuit switching, typically with one session having exclusive use of one entire wavelength. Consequently, they are not suitable for data applications involving bursty traffic patterns. The MIT AON Consortium has developed an all-optical LAN/MAN testbed which provides time-slotted WDM service and employs fast-tunable transceivers in each optical terminal. In this paper, we explore extensions of this service to achieve fine-grained statistical multiplexing with different virtual circuits time-sharing the wavelengths in a fair manner. In particular, we develop a real-time distributed protocol for best-effort traffic over this time-slotted WDM service with near-optimal fairness and throughput characteristics. As an additional design feature, our protocol supports the allocation of guaranteed bandwidths to selected connections. This feature acts as a first step towards supporting integrated services and quality-of-service guarantees over WDM networks. To achieve high throughput, our approach is based on scheduling transmissions, as opposed to collision-based schemes. Our distributed protocol involves one MAN scheduler and several LAN schedulers (one per LAN) in a master-slave arrangement. Because of propagation delays and limits on control channel capacities, all schedulers are designed to work with partial, delayed traffic information. Our distributed protocol is of the 'greedy' type to ensure fast execution in real time in response to dynamic traffic changes. It employs a hybrid form of rate and credit control for resource allocation. We have performed extensive simulations, which show that our protocol allocates resources (transmitters, receivers, wavelengths) fairly with high throughput, and supports bandwidth guarantees.
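
    As a rough illustration of the scheduling-rather-than-collision idea (not the authors' actual protocol), the toy sketch below assigns queued virtual-circuit transmissions to wavelengths for one time slot, greedily favouring the longest backlogs while respecting the one-transmission-per-transmitter, per-receiver and per-wavelength constraints; the node names and queue lengths are made up.

```python
def schedule_slot(backlog, n_wavelengths):
    """Greedily pick transmissions for one slot.

    backlog maps (source, destination) virtual circuits to queued cell counts;
    each transmitter, receiver and wavelength carries at most one transmission per slot.
    """
    busy_tx, busy_rx, assignments = set(), set(), []
    for (src, dst), cells in sorted(backlog.items(), key=lambda kv: -kv[1]):
        if cells <= 0 or src in busy_tx or dst in busy_rx:
            continue
        if len(assignments) >= n_wavelengths:
            break                              # every wavelength is already in use this slot
        assignments.append((src, dst, len(assignments)))   # wavelength index
        busy_tx.add(src)
        busy_rx.add(dst)
    return assignments

# Illustrative backlog: station A has cells queued for C and D, B for C, E for F.
backlog = {("A", "C"): 7, ("B", "C"): 3, ("A", "D"): 5, ("E", "F"): 2}
print(schedule_slot(backlog, n_wavelengths=2))   # -> [('A', 'C', 0), ('E', 'F', 1)]
```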

  8. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

  9. Development of visual programming techniques to integrate theoretical modeling into the scientific planning and instrument operations environment of ISTP

    NASA Technical Reports Server (NTRS)

    Goodrich, Charles C.

    1993-01-01

    The goal of this project is to investigate the use of visualization software based on the visual programming and data-flow paradigms to meet the needs of the SPOF and, through it, the International Solar Terrestrial Physics (ISTP) science community. Specific needs we address include science planning, data interpretation, comparisons of data with simulation and model results, and data acquisition. Our accomplishments during the twelve-month grant period are discussed below.

  10. Macro-actor execution on multilevel data-driven architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Najjar, W.

    1988-12-31

    The data-flow model of computation brings high programmability to multiprocessors at the expense of increased overhead. Applying the model at a higher level leads to better performance but also introduces a loss of parallelism. We demonstrate here syntax-directed program decomposition methods for the creation of large macro-actors in numerical algorithms. In order to alleviate some of the problems introduced by the lower-resolution interpretation, we describe a multi-level resolution scheme and analyze the requirements for its actual hardware and software integration.

  11. Deterministic Execution of Ptides Programs

    DTIC Science & Technology

    2013-05-15

    at a time no later than 30+1+5 = 36. Assume the maximum clock synchronization error is . Therefore, the AddSubtract adder must delay processing the...the synchronization of the platform real-time clock to its peers in other system platforms. The portions of PtidyOS code that implement access to the...interesting opportunities for future research. References [1] Y. Zhao, E. A. Lee, and J. Liu, “A programming model for time-synchronized distributed real

  12. FAWKES Information Management for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Spetka, S.; Ramseyer, G.; Tucker, S.

    2010-09-01

    Current space situational awareness assets can be fully utilized by managing their inputs and outputs in real time. Ideally, sensors are tasked to perform specific functions to maximize their effectiveness. Many sensors are capable of collecting more data than is needed for a particular purpose, leading to the potential to enhance a sensor’s utilization by allowing it to be re-tasked in real time when it is determined that sufficient data has been acquired to meet the first task’s requirements. In addition, understanding a situation involving fast-traveling objects in space may require inputs from more than one sensor, leading to a need for information sharing in real time. Observations that are not processed in real time may be archived to support forensic analysis for accidents and for long-term studies. Space Situational Awareness (SSA) requires an extremely robust distributed software platform to appropriately manage the collection and distribution for both real-time decision-making as well as for analysis. FAWKES is being developed as a Joint Space Operations Center (JSPOC) Mission System (JMS) compliant implementation of the AFRL Phoenix information management architecture. It implements a pub/sub/archive/query (PSAQ) approach to communications designed for high performance applications. FAWKES provides an easy to use, reliable interface for structuring parallel processing, and is particularly well suited to the requirements of SSA. In addition to supporting point-to-point communications, it offers an elegant and robust implementation of collective communications, to scatter, gather and reduce values. A query capability is also supported that enhances reliability. Archived messages can be queried to re-create a computation or to selectively retrieve previous publications. PSAQ processes express their role in a computation by subscribing to their inputs and by publishing their results. Sensors on the edge can subscribe to inputs by appropriately authorized users, allowing dynamic tasking capabilities. Previously, the publication of sensor data collected by mobile systems was demonstrated. Thumbnails of infrared imagery that were imaged in real time by an aircraft [1] were published over a grid. This airborne system subscribed to requests for and then published the requested detailed images. In another experiment a system employing video subscriptions [2] drove the analysis of live video streams, resulting in a published stream of processed video output. We are currently implementing an SSA system that uses FAWKES to deliver imagery from telescopes through a pipeline of processing steps that are performed on high performance computers. PSAQ facilitates the decomposition of a problem into components that can be distributed across processing assets from the smallest sensors in space to the largest high performance computing (HPC) centers, as well as the integration and distribution of the results, all in real time. FAWKES supports the real-time latency requirements demanded by all of these applications. It also enhances reliability by easily supporting redundant computation. This study shows how FAWKES/PSAQ is utilized in SSA applications, and presents performance results for latency and throughput that meet these needs.

  13. A Proton Beam Therapy System Dedicated to Spot-Scanning Increases Accuracy with Moving Tumors by Real-Time Imaging and Gating and Reduces Equipment Size

    PubMed Central

    Shimizu, Shinichi; Miyamoto, Naoki; Matsuura, Taeko; Fujii, Yusuke; Umezawa, Masumi; Umegaki, Kikuo; Hiramoto, Kazuo; Shirato, Hiroki

    2014-01-01

    Purpose A proton beam therapy (PBT) system has been designed that is dedicated to spot-scanning and has a gating function employing fluoroscopy-based real-time imaging of internal fiducial markers near tumors. The dose distribution and treatment time of the newly designed real-time-image gated, spot-scanning proton beam therapy (RGPT) were compared with free-breathing spot-scanning proton beam therapy (FBPT) in a simulation. Materials and Methods In-house simulation tools and the treatment planning system VQA (Hitachi, Ltd., Japan) were used to estimate the dose distribution and treatment time. Simulations were performed for 48 motion parameters (8 respiratory patterns and 6 initial breathing timings) on CT data from two patients, A and B, with hepatocellular carcinoma and with clinical target volumes of 14.6 cc and 63.1 cc. The respiratory patterns were derived from the actual trajectories of internal fiducial markers recorded during X-ray real-time tumor-tracking radiotherapy (RTRT). Results With FBPT, 9/48 motion parameters achieved the criteria of successful delivery for patient A and 0/48 for B. With RGPT, 48/48 and 42/48 achieved the criteria. Compared with FBPT, the mean liver dose was smaller with RGPT with statistical significance (p<0.001); it decreased from 27% to 13% and from 28% to 23% of the prescribed dose for patients A and B, respectively. The relative lengthening of treatment time to administer 3 Gy (RBE) was estimated to be 1.22 (RGPT/FBPT: 138 s/113 s) and 1.72 (207 s/120 s) for patients A and B, respectively. Conclusions This simulation study demonstrated that RGPT was able to improve the dose distribution markedly for moving tumors without a very large extension of treatment time. The proton beam therapy system dedicated to spot-scanning with a gating function for real-time imaging increases accuracy with moving tumors and reduces the physical size, and subsequently the cost, of the equipment as well as of the building housing it. PMID:24747601

  14. Tight real-time synchronization of a microwave clock to an optical clock across a turbulent air path

    PubMed Central

    Bergeron, Hugo; Sinclair, Laura C.; Swann, William C.; Nelson, Craig W.; Deschênes, Jean-Daniel; Baumann, Esther; Giorgetta, Fabrizio R.; Coddington, Ian; Newbury, Nathan R.

    2018-01-01

    The ability to distribute the precise time and frequency from an optical clock to remote platforms could enable future precise navigation and sensing systems. Here we demonstrate tight, real-time synchronization of a remote microwave clock to a master optical clock over a turbulent 4-km open air path via optical two-way time-frequency transfer. Once synchronized, the 10-GHz frequency signals generated at each site agree to 10−14 at one second and below 10−17 at 1000 seconds. In addition, the two clock times are synchronized to ±13 fs over an 8-hour period. The ability to phase-synchronize 10-GHz signals across platforms supports future distributed coherent sensing, while the ability to time-synchronize multiple microwave-based clocks to a high-performance master optical clock supports future precision navigation/timing systems. PMID:29607352

  15. Tight real-time synchronization of a microwave clock to an optical clock across a turbulent air path.

    PubMed

    Bergeron, Hugo; Sinclair, Laura C; Swann, William C; Nelson, Craig W; Deschênes, Jean-Daniel; Baumann, Esther; Giorgetta, Fabrizio R; Coddington, Ian; Newbury, Nathan R

    2016-04-01

    The ability to distribute the precise time and frequency from an optical clock to remote platforms could enable future precise navigation and sensing systems. Here we demonstrate tight, real-time synchronization of a remote microwave clock to a master optical clock over a turbulent 4-km open air path via optical two-way time-frequency transfer. Once synchronized, the 10-GHz frequency signals generated at each site agree to 10−14 at one second and below 10−17 at 1000 seconds. In addition, the two clock times are synchronized to ±13 fs over an 8-hour period. The ability to phase-synchronize 10-GHz signals across platforms supports future distributed coherent sensing, while the ability to time-synchronize multiple microwave-based clocks to a high-performance master optical clock supports future precision navigation/timing systems.

  16. Quantitation of Marek's disease and chicken anemia viruses in organs of experimentally infected chickens and commercial chickens by multiplex real-time PCR.

    PubMed

    Davidson, Irit; Raibshtein, I; Al-Touri, A

    2013-06-01

    The worldwide distribution of chicken anemia virus (CAV) and Marek's disease virus (MDV) is well documented. In addition to their economic significance in single- or dual-virus infections, the two viruses can often accompany various other pathogens and affect poultry health either directly, by causing tumors, anemia, and delayed growth, or indirectly, by aggravating other diseases, as a result of their immunosuppressive effects. After a decade of employing the molecular diagnosis of those viruses, which replaced conventional virus isolation, we present the development of a real-time multiplex PCR for the simultaneous detection of both viruses. The real-time PCRs for MDV and for CAV alone are more sensitive than the respective end-point PCRs. In addition, the multiplex real-time PCR shows similar sensitivity to the single real-time PCR for each virus. The newly developed real-time multiplex PCR is important for the diagnosis and detection of low copy numbers of each virus, MDV and CAV, in single- and multiple-virus infections, and its applicability will be further evaluated.

  17. Real-time networked control of an industrial robot manipulator via discrete-time second-order sliding modes

    NASA Astrophysics Data System (ADS)

    Massimiliano Capisani, Luca; Facchinetti, Tullio; Ferrara, Antonella

    2010-08-01

    This article presents the networked control of an anthropomorphic robotic manipulator based on a second-order sliding mode technique, where the control objective is to track a desired trajectory for the manipulator. The adopted control scheme allows an easy and effective distribution of the control algorithm over two networked machines. While predictable execution of real-time tasks is achieved by the Soft Hard Real-Time Kernel (S.Ha.R.K.) real-time operating system, the communication is established via a standard Ethernet network. The performance of the control system is evaluated under different experimental configurations using a COMAU SMART3-S2 industrial robot, and the results are analysed to highlight the robustness of the proposed approach against possible network delays, packet losses and unmodelled effects.

  18. Design and Analysis of Scheduling Policies for Real-Time Computer Systems

    DTIC Science & Technology

    1992-01-01

    C. M. Krishna, "The Impact of Workload on the Reliability of Real-Time Processor Triads," to appear in Micro. Rel. [17] J.F. Kurose, "Performance... Processor Triads", to appear in Micro. Rel. "* J.F. Kurose. "Performance Analysis of Minimum Laxity Scheduling in Discrete Time Queueing Systems", to...exponentially distributed service times and deadlines. A similar model was developed for the ED policy for a single processor system under identical

  19. Design of a Data Distribution Core Model for Seafloor Observatories in East China Sea

    NASA Astrophysics Data System (ADS)

    Chen, H.; Qin, R.; Xu, H.

    2017-12-01

    High loadings of nutrients and pollutants from agriculture, industries and city waste waters are carried by the Changjiang (Yangtze) River and transferred into the food web within the river's freshwater plume. Understanding these transport and transformation processes is essential for ecosystem protection, fisheries resources management, seafood safety and human health. With the Xiaoqushan and Zhujiajian Seafloor Observatories built in the East China Sea, there is an opportunity and a new way to study the Changjiang River plume. Data collected by a seafloor observatory should be conveniently accessible to end users in real time or near real time, so that the observatory can play its full role. Therefore, data distribution is one of the major issues for a seafloor observatory characterized by long-term, real-time, high-resolution and continuous observation. This study describes a Data Distribution core Model for Seafloor Observatories in East China Sea (ESDDM) containing a Data Acquisition Module (DAM), Data Interpretation Module (DIM), Data Transmission Module (DTM) and Data Storage Module (DSM), which enables acquiring, interpreting, transmitting and storing various types of data in real time. A Data Distribution Model Markup Language (DDML) based on XML is designed to enhance the expansibility and flexibility of the system implemented by ESDDM. In the DAM, a network sniffer is used to acquire data by IP address and port number, relieving the operating load on the junction boxes. The DIM consists of a data interface, core data-processing plugins and common libraries, allowing it to interpret data in a hot-swappable way. The DTM is an external module in ESDDM that transmits designated raw data packets to the Secondary Receiver Terminal. The database connection pool technology used in the DSM improves the efficiency of storing large volumes of continuous data. In a successful deployment at the Zhujiajian Seafloor Observatory, the prototype system based on ESDDM has run for up to 1500 hours, providing a reference for the data distribution services of other seafloor observatories.
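
    The acquire-by-address idea in the DAM can be pictured with the minimal sketch below, which tags incoming datagrams by their source IP address and port and hands them to a per-instrument parser; the UDP endpoint, addresses and parser mapping are purely illustrative and simplify the ESDDM's actual sniffer-based acquisition.

```python
import socket

# Illustrative mapping from (source IP, source port) to a per-instrument parser.
PARSERS = {
    ("10.0.0.21", 4001): lambda raw: {"instrument": "CTD", "text": raw.decode(errors="replace")},
    ("10.0.0.22", 4002): lambda raw: {"instrument": "ADCP", "n_bytes": len(raw)},
}

def acquire(bind_addr=("0.0.0.0", 9000)):
    """Receive datagrams, key them by source address, and hand them to a parser."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(bind_addr)
    while True:
        raw, (src_ip, src_port) = sock.recvfrom(65535)
        parser = PARSERS.get((src_ip, src_port))
        record = parser(raw) if parser else {"unknown_source": (src_ip, src_port)}
        print(record)      # a real system would queue this for transmission and storage

if __name__ == "__main__":
    acquire()
```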

  20. Real-time high speed generator system emulation with hardware-in-the-loop application

    NASA Astrophysics Data System (ADS)

    Stroupe, Nicholas

    The emerging emphasis on and benefits of distributed generation in smaller-scale networks have prompted much attention and research in this field. Much of the growth in distributed generation research has also stimulated the development of simulation software and techniques. Testing and verification of these distributed power networks is a complex task and real hardware testing is often desired. This is where simulation methods such as hardware-in-the-loop become important, in which an actual hardware unit can be interfaced with a software-simulated environment to verify proper functionality. In this thesis, this simulation technique is taken one step further by utilizing a hardware-in-the-loop approach to emulate the output voltage of a generator system interfaced to a scaled hardware distributed power system for testing. The purpose of this thesis is to demonstrate a new method of testing a virtually simulated generation system supplying a scaled distributed power system in hardware. This task is performed using the Non-Linear Loads Test Bed developed by the Energy Conversion and Integration Thrust at the Center for Advanced Power Systems. This test bed consists of a series of real hardware converters consistent with the Navy's proposed All-Electric-Ship power system, used to perform various tests on controls and stability under the expected non-linear load environment of the Navy's weaponry. The test bed can also explore other distributed power system research topics and serves as a flexible hardware unit for a variety of tests; here it is utilized to perform and validate the newly developed method of generator system emulation. The dynamics of a high-speed permanent magnet generator directly coupled with a microturbine are virtually simulated on an FPGA in real time. The calculated output stator voltage then serves as a reference for a controllable three-phase inverter at the input of the test bed that emulates and reproduces these voltages on real hardware. The output of the inverter is then connected with the rest of the test bed, which can consist of a variety of distributed system topologies for many testing scenarios. The idea is that the distributed power system under test in hardware can also integrate real generator system dynamics without physically involving an actual generator system. The benefits of successful generator system emulation are vast and lead to much more detailed system studies without the drawbacks of needing physical generator units. Some of these advantages are safety, reduced costs, and the ability to scale while still preserving the appropriate system dynamics. This thesis introduces the ideas behind generator emulation and explains the process and necessary steps to obtain such an objective. It also demonstrates real results and verification of numerical values in real time. The final goal of this thesis is to introduce this new idea and show that it is in fact obtainable and can prove to be a highly useful tool in the simulation and verification of distributed power systems.

  1. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    PubMed Central

    Lam, William H. K.; Li, Qingquan

    2017-01-01

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks. PMID:29210978
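
    Dempster-Shafer combination, the fusion step named above, can be sketched as follows: two mass functions over a common frame are multiplied pairwise, conflicting mass is discarded, and the remainder is renormalised. The frame and mass values below are illustrative, not taken from the study.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb               # mass assigned to the empty set is discarded
    if conflict >= 1.0:
        raise ValueError("total conflict: the two sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Illustrative frame: is a path's travel time 'short' or 'long'?
m_point    = {frozenset({"short"}): 0.6, frozenset({"short", "long"}): 0.4}   # point-detector evidence
m_interval = {frozenset({"short"}): 0.5, frozenset({"long"}): 0.3,
              frozenset({"short", "long"}): 0.2}                              # interval-detector evidence
print(dempster_combine(m_point, m_interval))
```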

  2. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    PubMed

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.

  3. Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant

    PubMed Central

    Moreno-Garcia, Isabel M.; Palacios-Garcia, Emilio J.; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J.; Varo-Martinez, Marta; Real-Calvo, Rafael J.

    2016-01-01

    There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performance was analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant’s components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid. PMID:27240365

  4. Meeting the Challenge of Distributed Real-Time & Embedded (DRE) Systems

    DTIC Science & Technology

    2012-05-10

    Briefing slide fragments: layered DRE stack (Hardware & Networks; Operating Sys & Protocols / IP, RTOS; Middleware; Middleware Services; DRE Applications) ... COTS & standards-based middleware, language, OS, network, & hardware platforms • Real-time CORBA (TAO) middleware • ADAPTIVE Communication ... software product lines (SPLs): F-15, AV-8B, F/A-18, and UCAV product variants over shared Hardware (CPU, Memory, I/O) and OS

  5. Rotavirus genotype shifts among Swedish children and adults-Application of a real-time PCR genotyping.

    PubMed

    Andersson, Maria; Lindh, Magnus

    2017-11-01

    It is well known that human rotavirus group A is the most important cause of severe diarrhoea in infants and young children. Less is known about rotavirus infections in other age groups, and about how rotavirus genotypes change over time in different age groups. The objective was to develop a real-time PCR to easily genotype rotavirus strains in order to monitor the pattern of circulating genotypes. In this study, rotavirus strains in clinical samples from children and adults in Western Sweden during 2010-2014 were retrospectively genotyped by using specific amplification of VP4 and VP7 genes with a newly developed real-time PCR. A genotype was identified in 97% of 775 rotavirus strains. G1P[8] was the most common genotype representing 34.9%, followed by G2P[4] (28.3%), G9P[8] (11.5%), G3P[8] (8.1%), and G4P[8] (7.9%). The genotype distribution changed over time, from predominance of G1P[8] in 2010-2012 to predominance of G2P[4] in 2013-2014. There were also age-related differences, with G1P[8] being the most common genotype in children under 2 years (47.6%), and G2P[4] the most common in those over 70 years of age (46.1%). The shift to G2P[4] in 2013-2014 was associated with a change in the age distribution, with a greater number of rotavirus-positive cases in the elderly than in children. By using a new real-time PCR method for genotyping we found that genotype distribution was age-related and changed over time with a decreasing proportion of G1P[8]. Copyright © 2017. Published by Elsevier B.V.

  6. From Ship-To-Shore In Real Time: Data Transmission, Distribution, Management, Processing, And Archiving Using Telepresence Technologies And The Inner Space Center

    NASA Astrophysics Data System (ADS)

    Coleman, D. F.

    2012-12-01

    Most research vessels are equipped with satellite Internet services with bandwidths capable of being upgraded to support telepresence technologies and live shore-based participation. This capability can be used for real-time data transmission to shore, where it can be distributed, managed, processed, and archived. The University of Rhode Island Inner Space Center utilizes telepresence technologies and a growing network of command centers on Internet2 to participate live with a variety of research vessels and their ocean observing and sampling systems. High-bandwidth video streaming, voice-over-IP telecommunications, and real-time data feeds and file transfers enable users on shore to take part in the oceanographic expeditions as if they were present on the ship, working in the lab. Telepresence-enabled systematic ocean exploration and similar programs represent a significant and growing paradigm shift that can change the future of seagoing ocean observations using research vessels. The required platform is the ship itself, and users of the technology rely on the ship-based technical teams, but remote and distributed shore-based science users, students, educators, and the general public can now take part by being aboard virtually.

  7. A real-time diagnostic and performance monitor for UNIX. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dong, Hongchao

    1992-01-01

    There are now over one million UNIX sites and the pace at which new installations are added is steadily increasing. Along with this increase comes a need to develop simple, efficient, effective and adaptable ways of simultaneously collecting real-time diagnostic and performance data. This need exists because distributed systems can give rise to complex failure situations that are often unidentifiable with single-machine diagnostic software. The simultaneous collection of error and performance data is also important for research in failure prediction and error/performance studies. This paper introduces a portable method to concurrently collect real-time diagnostic and performance data on a distributed UNIX system. The combined diagnostic/performance data collection is implemented on a distributed multi-computer system using SUN4s as servers. The approach uses existing UNIX system facilities to gather system dependability information such as error and crash reports. In addition, performance data such as CPU utilization, disk usage, I/O transfer rate and network contention is also collected. In the future, the collected data will be used to identify dependability bottlenecks and to analyze the impact of failures on system performance.
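
    A minimal sketch of the kind of periodic performance sampling described above is shown below using the cross-platform psutil library; the thesis itself relies on native UNIX facilities (system logs, accounting tools) rather than this library, so the code only approximates the idea.

```python
import time
import psutil

def sample():
    """One snapshot of basic performance indicators."""
    return {
        "cpu_percent": psutil.cpu_percent(interval=None),
        "disk_used_percent": psutil.disk_usage("/").percent,
        "disk_io": psutil.disk_io_counters()._asdict(),
        "net_io": psutil.net_io_counters()._asdict(),
    }

if __name__ == "__main__":
    for _ in range(3):
        print(time.strftime("%H:%M:%S"), sample())
        time.sleep(5)                         # periodic collection interval
```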

  8. Pacific Northwest GridWise™ Testbed Demonstration Projects; Part I. Olympic Peninsula Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammerstrom, Donald J.; Ambrosio, Ron; Carlon, Teresa A.

    2008-01-09

    This report describes the implementation and results of a field demonstration wherein residential electric water heaters and thermostats, commercial building space conditioning, municipal water pump loads, and several distributed generators were coordinated to manage constrained feeder electrical distribution through the two-way communication of load status and electric price signals. The field demonstration took place in Washington and Oregon and was paid for by the U.S. Department of Energy and several northwest utilities. Price is found to be an effective control signal for managing transmission or distribution congestion. Real-time signals at 5-minute intervals are shown to shift controlled load in time. The behaviors of customers and their responses under fixed, time-of-use, and real-time price contracts are compared. Peak loads are effectively reduced on the experimental feeder. A novel application of portfolio theory is applied to the selection of an optimal mix of customer contract types.

  9. Real time thermal imaging for analysis and control of crystal growth by the Czochralski technique

    NASA Technical Reports Server (NTRS)

    Wargo, M. J.; Witt, A. F.

    1992-01-01

    A real time thermal imaging system with temperature resolution better than +/- 0.5 C and spatial resolution of better than 0.5 mm has been developed. It has been applied to the analysis of melt surface thermal field distributions in both Czochralski and liquid encapsulated Czochralski growth configurations. The sensor can provide single/multiple point thermal information; a multi-pixel averaging algorithm has been developed which permits localized, low noise sensing and display of optical intensity variations at any location in the hot zone as a function of time. Temperature distributions are measured by extraction of data along a user selectable linear pixel array and are simultaneously displayed, as a graphic overlay, on the thermal image.

  10. Real-time identification of indoor pollutant source positions based on neural network locator of contaminant sources and optimized sensor networks.

    PubMed

    Vukovic, Vladimir; Tabares-Velasco, Paulo Cesar; Srebric, Jelena

    2010-09-01

    A growing interest in security and occupant exposure to contaminants revealed a need for fast and reliable identification of contaminant sources during incidental situations. To determine potential contaminant source positions in outdoor environments, current state-of-the-art modeling methods use computational fluid dynamics simulations on parallel processors. In indoor environments, current tools match accidental contaminant distributions with cases from precomputed databases of possible concentration distributions. These methods require intensive computations in pre- and postprocessing. On the other hand, neural networks have emerged as a tool for rapid concentration forecasting of outdoor environmental contaminants such as nitrogen oxides or sulfur dioxide. All of these modeling methods depend on the type of sensors used for real-time measurements of contaminant concentrations. A review of the existing sensor technologies revealed that no perfect sensor exists, but the intensity of work in this area suggests promising results in the near future. The main goal of the presented research study was to extend neural network modeling from the outdoor to the indoor identification of source positions, making this technology applicable to building indoor environments. The developed neural network Locator of Contaminant Sources was also used to optimize the number and allocation of contaminant concentration sensors for real-time prediction of indoor contaminant source positions. Such prediction should take place within seconds after receiving real-time contaminant concentration sensor data. For the purpose of neural network training, a multizone program provided distributions of contaminant concentrations for known source positions throughout a test building. Trained networks had an output indicating contaminant source positions based on measured concentrations in different building zones. A validation case based on a real building layout and experimental data demonstrated the ability of this method to identify contaminant source positions. Future research intentions are focused on integration with real sensor networks and model improvements for much more complicated contamination scenarios.
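
    The locator idea can be sketched as a small classifier that maps a vector of zone concentrations to the index of the source zone. In the sketch below the training patterns are synthetic stand-ins for the multizone-model output used in the study, and the network size and zone count are arbitrary assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_ZONES = 8                                   # hypothetical building: one sensor per zone

def simulated_concentrations(source_zone, n_samples=50):
    """Stand-in for multizone-model output: the source zone reads highest."""
    base = rng.random(N_ZONES)
    base[source_zone] += 2.0
    return base + 0.1 * rng.standard_normal((n_samples, N_ZONES))

X = np.vstack([simulated_concentrations(z) for z in range(N_ZONES)])
y = np.repeat(np.arange(N_ZONES), 50)

locator = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
locator.fit(X, y)

# Real-time use: a fresh vector of sensor readings yields a predicted source zone quickly.
reading = simulated_concentrations(source_zone=3, n_samples=1)
print("predicted source zone:", locator.predict(reading)[0])
```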

  11. Development of Virtual Airspace Simulation Technology - Real-Time (VAST-RT) Capability 2 and Experimental Plans

    NASA Technical Reports Server (NTRS)

    Lehmer, R.; Ingram, C.; Jovic, S.; Alderete, J.; Brown, D.; Carpenter, D.; LaForce, S.; Panda, R.; Walker, J.; Chaplin, P.

    2006-01-01

    The Virtual Airspace Simulation Technology - Real-Time (VAST-RT) Project, an element of NASA's Virtual Airspace Modeling and Simulation (VAMS) Project, has been developing a distributed simulation capability that supports an extensible and expandable real-time, human-in-the-loop airspace simulation environment. The VAST-RT system architecture is based on DoD High Level Architecture (HLA) and the VAST-RT HLA Toolbox, a common interface implementation that incorporates a number of novel design features. The scope of the initial VAST-RT integration activity (Capability 1) included the high-fidelity human-in-the-loop simulation facilities located at NASA/Ames Research Center and medium-fidelity pseudo-piloted target generators, such as the Airspace Traffic Generator (ATG) being developed as part of VAST-RT, as well as other real-time tools. This capability has been demonstrated in a gate-to-gate simulation. VAST-RT Capability 2A has recently been completed, and this paper will discuss the improved integration of the real-time assets into VAST-RT, including the development of tools to integrate data collected across the simulation environment into a single data set for the researcher. Current plans for the completion of the VAST-RT distributed simulation environment (Capability 2B) and its use to evaluate future airspace capacity-enhancing concepts being developed by VAMS will be discussed. Additionally, the simulation environment's application to other airspace and airport research projects is addressed.

  12. Overview of EPA Research on Drinking Water Distribution System Nitrification

    EPA Science Inventory

    Results from USEPA research investigating drinking water distribution system nitrification will be presented. The two research areas include: (1) monochloramine disinfection kinetics of Nitrosomonas europaea using Propidium Monoazide Quantitative Real-time PCR (PMA-qPCR) and (2...

  13. Business Activity Monitoring: Real-Time Group Goals and Feedback Using an Overhead Scoreboard in a Distribution Center

    ERIC Educational Resources Information Center

    Goomas, David T.; Smith, Stuart M.; Ludwig, Timothy D.

    2011-01-01

    Companies operating large industrial settings often find delivering timely and accurate feedback to employees to be one of the toughest challenges they face in implementing performance management programs. In this report, an overhead scoreboard at a retailer's distribution center informed teams of order selectors as to how many tasks were…

  14. Adaptive Management of Computing and Network Resources for Spacecraft Systems

    NASA Technical Reports Server (NTRS)

    Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)

    2000-01-01

    It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.

  15. Radar Imaging Using The Wigner-Ville Distribution

    NASA Astrophysics Data System (ADS)

    Boashash, Boualem; Kenny, Owen P.; Whitehouse, Harper J.

    1989-12-01

    The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. This paper first discusses the radar equation in terms of the time-frequency representation of the signal received from a radar system. It then presents a method of tomographic reconstruction for time-frequency images to estimate the scattering function of the aircraft. An optical architecture is then discussed for the real-time implementation of the analysis method based on the WVD.
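
    A minimal discrete pseudo-Wigner-Ville computation is sketched below for illustration: at each time sample the symmetric lag product of the analytic signal is Fourier transformed over the lag variable. Conventions for scaling and the frequency axis vary between texts, so this is one plausible form rather than the formulation used in the paper.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete pseudo-Wigner-Ville distribution (rows: time, cols: frequency bins).

    With this convention, frequency bin k corresponds roughly to k * fs / (2 * N).
    """
    z = hilbert(x)                        # analytic signal suppresses negative-frequency terms
    N = len(z)
    W = np.zeros((N, N))
    for n in range(N):
        lag_max = min(n, N - 1 - n)       # largest symmetric lag available at time n
        r = np.zeros(N, dtype=complex)
        for m in range(-lag_max, lag_max + 1):
            r[m % N] = z[n + m] * np.conj(z[n - m])
        W[n, :] = np.real(np.fft.fft(r))  # FFT over the lag variable
    return W

# A linear chirp appears as a tilted ridge in the time-frequency plane.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.cos(2 * np.pi * (20.0 * t + 40.0 * t ** 2))
tfd = wigner_ville(signal)
print(tfd.shape)
```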

  16. Real-time determination of the efficacy of residual disinfection to limit wastewater contamination in a water distribution system using filtration-based luminescence.

    PubMed

    Lee, Jiyoung; Deininger, Rolf A

    2010-05-01

    Water distribution systems can be vulnerable to microbial contamination through cross-connections, wastewater backflow, the intrusion of soiled water after a loss of pressure resulting from an electricity blackout, natural disaster, or intentional contamination of the system in a bioterrorism event. The most urgent matter a water treatment utility would face in this situation is detecting the presence and extent of a contamination event in real time, so that immediate action can be taken to mitigate the problem. The current approved microbiological detection methods are culture-based plate count methods, which require incubation time (1 to 7 days). This long period of time would not be useful for the protection of public health. This study was designed to simulate wastewater intrusion in a water distribution system. The objectives were 2-fold: (1) real-time detection of water contamination, and (2) investigation of the sustainability of drinking water systems to suppress the contamination with secondary disinfectant residuals (chlorine and chloramine). The events of drinking water contamination resulting from a wastewater addition were determined by a filtration-based luminescence assay. The water contamination was detected by the luminescence method within 5 minutes. The signal amplification attributed to wastewater contamination was clear: a 102-fold signal increase. After 1 hour, chlorinated water could inactivate 98.8% of the bacterial contaminant, while chloraminated water reduced it by 77.2%.

  17. Application of ideal pressure distribution in development process of automobile seats.

    PubMed

    Kilincsoy, U; Wagner, A; Vink, P; Bubb, H

    2016-07-19

    In designing a car seat, the ideal pressure distribution is important, as the seat is the largest contact surface between the human and the car. Because of obstacles hindering a more general application of the ideal pressure distribution in seating design, multidimensional measuring techniques are necessary together with extensive user tests. The objective of this study is to apply and integrate the knowledge about the ideal pressure distribution in the seat design process for a car manufacturer in an efficient way. Ideal pressure distribution was combined with pressure measurement, in this case pressure mats. In order to integrate this theoretical knowledge of seating comfort in the seat development process for a car manufacturer, a special user interface was defined and developed. Mapping the measured pressure distribution in real time, accurately scaled to actual seats during test setups, directly led to design implications for seat design even during the test situation. Detailed analysis of the subject's feedback was correlated with objective measurements of the subject's pressure distribution in real time. Therefore, existing seating characteristics were taken into account as well. A user interface can incorporate theoretical and validated 'state of the art' models of comfort. Consequently, this information can reduce extensive testing and lead to more detailed results in a shorter time period.

  18. Arranging computer architectures to create higher-performance controllers

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1988-01-01

    Techniques for integrating microprocessors, array processors, and other intelligent devices in control systems are reviewed, with an emphasis on the (re)arrangement of components to form distributed or parallel processing systems. Consideration is given to the selection of the host microprocessor, increasing the power and/or memory capacity of the host, multitasking software for the host, array processors to reduce computation time, the allocation of real-time and non-real-time events to different computer subsystems, intelligent devices to share the computational burden for real-time events, and intelligent interfaces to increase communication speeds. The case of a helicopter vibration-suppression and stabilization controller is analyzed as an example, and significant improvements in computation and throughput rates are demonstrated.

  19. Real-time Retrieving Atmospheric Parameters from Multi-GNSS Constellations

    NASA Astrophysics Data System (ADS)

    Li, X.; Zus, F.; Lu, C.; Dick, G.; Ge, M.; Wickert, J.; Schuh, H.

    2016-12-01

    Multi-constellation GNSS (e.g. GPS, GLONASS, Galileo, and BeiDou) brings great opportunities and challenges for real-time retrieval of atmospheric parameters to support numerical weather prediction (NWP) nowcasting or severe weather event monitoring. In this study, observations from the different GNSS are combined for atmospheric parameter retrieval based on the real-time precise point positioning technique. The atmospheric parameters retrieved from multi-GNSS observations, including zenith total delay (ZTD), integrated water vapor (IWV), horizontal gradient (especially high-resolution gradient estimates) and slant total delay (STD), are carefully analyzed and evaluated using VLBI, radiosonde, water vapor radiometer and numerical weather model data to independently validate the performance of individual GNSS and also demonstrate the benefits of multi-constellation GNSS for real-time atmospheric monitoring. The results show that the multi-GNSS processing can provide real-time atmospheric products with higher accuracy, stronger reliability and better distribution, which would be beneficial for atmospheric sounding systems, especially for nowcasting of extreme weather.

  20. Real-time prediction of hand trajectory by ensembles of cortical neurons in primates

    NASA Astrophysics Data System (ADS)

    Wessberg, Johan; Stambaugh, Christopher R.; Kralik, Jerald D.; Beck, Pamela D.; Laubach, Mark; Chapin, John K.; Kim, Jung; Biggs, S. James; Srinivasan, Mandayam A.; Nicolelis, Miguel A. L.

    2000-11-01

    Signals derived from the rat motor cortex can be used for controlling one-dimensional movements of a robot arm. It remains unknown, however, whether real-time processing of cortical signals can be employed to reproduce, in a robotic device, the kind of complex arm movements used by primates to reach objects in space. Here we recorded the simultaneous activity of large populations of neurons, distributed in the premotor, primary motor and posterior parietal cortical areas, as non-human primates performed two distinct motor tasks. Accurate real-time predictions of one- and three-dimensional arm movement trajectories were obtained by applying both linear and nonlinear algorithms to cortical neuronal ensemble activity recorded from each animal. In addition, cortically derived signals were successfully used for real-time control of robotic devices, both locally and through the Internet. These results suggest that long-term control of complex prosthetic robot arm movements can be achieved by simple real-time transformations of neuronal population signals derived from multiple cortical areas in primates.
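
    The linear side of the decoding described above can be pictured as ordinary least squares from binned firing rates to hand position. The sketch below uses synthetic data in place of the recorded ensembles; bin counts, neuron counts and noise levels are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bins, n_neurons = 5000, 100                 # synthetic stand-ins for recorded data

rates = rng.poisson(5, size=(n_bins, n_neurons)).astype(float)       # binned firing rates
true_weights = rng.standard_normal((n_neurons, 3))
hand_xyz = rates @ true_weights + rng.standard_normal((n_bins, 3))   # 3-D hand position

# Least-squares linear decoder (with an intercept) from firing rates to position.
X = np.hstack([rates, np.ones((n_bins, 1))])
W, *_ = np.linalg.lstsq(X, hand_xyz, rcond=None)

# Real-time use: each new vector of binned rates maps to a predicted 3-D position.
new_rates = rng.poisson(5, size=(1, n_neurons)).astype(float)
prediction = np.hstack([new_rates, [[1.0]]]) @ W
print("predicted hand position (x, y, z):", prediction.ravel())
```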

  1. SU-G-JeP3-10: Update On a Real-Time Treatment Guidance System Using An IR Navigation System for Pleural PDT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, M; Penjweini, R; Zhu, T

    Purpose: Photodynamic therapy (PDT) is used in conjunction with surgical debulking of tumorous tissue during treatment for pleural mesothelioma. One of the key components of effective PDT is uniform light distribution. Currently, light is monitored with 8 isotropic light detectors that are placed at specific locations inside the pleural cavity. A tracking system with real-time feedback software can be utilized to improve the uniformity of light in addition to the existing detectors. Methods: An infrared (IR) tracking camera is used to monitor the movement of the light source. The same system determines the pleural geometry of the treatment area. Software upgrades allow visualization of the pleural cavity as a two-dimensional volume. The treatment delivery wand was upgraded for ease of light delivery while incorporating the IR system. Isotropic detector locations are also displayed. Data from the tracking system is used to calculate the light fluence rate delivered. This data is also compared with in vivo data collected via the isotropic detectors. Furthermore, treatment volume information will be used to form light dose volume histograms of the pleural cavity. Results: In a phantom study, the light distribution was improved by using real-time guidance compared to the distribution when using detectors without guidance. With the tracking system, 2D data can be collected regarding light fluence rather than just the 8 discrete locations inside the pleural cavity. Light fluence distribution on the entire cavity can be calculated at every time in the treatment. Conclusion: The IR camera has been used successfully during pleural PDT patient treatment to track the motion of the light source and provide real-time display of 2D light fluence. It is possible to use the feedback system to deliver a more uniform dose of light throughout the pleural cavity.

  2. Digital Image Support in the ROADNet Real-time Monitoring Platform

    NASA Astrophysics Data System (ADS)

    Lindquist, K. G.; Hansen, T. S.; Newman, R. L.; Vernon, F. L.; Nayak, A.; Foley, S.; Fricke, T.; Orcutt, J.; Rajasekar, A.

    2004-12-01

    The ROADNet real-time monitoring infrastructure has allowed researchers to integrate geophysical monitoring data from a wide variety of signal domains. Antelope-based data transport, relational-database buffering and archiving, backup/replication/archiving through the Storage Resource Broker, and a variety of web-based distribution tools create a powerful monitoring platform. In this work we discuss our use of the ROADNet system for the collection and processing of digital image data. Remote cameras have been deployed at approximately 32 locations as of September 2004, including the SDSU Santa Margarita Ecological Reserve, the Imperial Beach pier, and the Pinon Flats geophysical observatory. Fire monitoring imagery has been obtained through a connection to the HPWREN project. Near-real-time images obtained from the R/V Roger Revelle include records of seafloor operations by the JASON submersible, as part of a maintenance mission for the H2O underwater seismic observatory. We discuss acquisition mechanisms and the packet architecture for image transport via Antelope orbservers, including multi-packet support for arbitrarily large images. Relational database storage supports archiving of timestamped images, image-processing operations, grouping of related images and cameras, support for motion-detect triggers, thumbnail images, pre-computed video frames, support for time-lapse movie generation and storage of time-lapse movies. Available ROADNet monitoring tools include both orbserver-based display of incoming real-time images and web-accessible searching and distribution of images and movies driven by the relational database (http://mercali.ucsd.edu/rtapps/rtimbank.php). An extension to the Kepler Scientific Workflow System also allows real-time image display via the Ptolemy project. Custom time-lapse movies may be made from the ROADNet web pages.

  3. Embedded real-time operating system micro kernel design

    NASA Astrophysics Data System (ADS)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require real-time operation. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has a quick response while operating in an application system.
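
    The importance-and-urgency scheduling idea can be illustrated with the toy ready-queue sketch below; the real kernel targets an 8051 and is not written in Python, and the task names, priorities and deadlines here are invented for illustration.

```python
import heapq

class ToyKernel:
    """Ready queue ordered by (priority, deadline); lower numbers run first."""

    def __init__(self):
        self._ready = []
        self._seq = 0                         # tie-breaker so heap entries always compare

    def create_task(self, name, priority, deadline, entry):
        heapq.heappush(self._ready, (priority, deadline, self._seq, name, entry))
        self._seq += 1

    def run(self):
        while self._ready:
            priority, deadline, _, name, entry = heapq.heappop(self._ready)
            print(f"dispatch {name} (priority={priority}, deadline={deadline})")
            entry()                           # run to completion; a real kernel would preempt

k = ToyKernel()
k.create_task("uart_rx", priority=0, deadline=5,  entry=lambda: print("  handle UART byte"))
k.create_task("logger",  priority=2, deadline=50, entry=lambda: print("  flush log buffer"))
k.create_task("sensor",  priority=1, deadline=10, entry=lambda: print("  sample sensor"))
k.run()
```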

  4. Distribution Locational Real-Time Pricing Based Smart Building Control and Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Jun; Dai, Xiaoxiao; Zhang, Yingchen

    This paper proposes a real-virtual parallel computing scheme for smart building operations aiming at augmenting overall social welfare. The University of Denver's campus power grid and Ritchie fitness center are used to demonstrate the proposed approach. An artificial virtual system is built in parallel to the real physical system to evaluate the overall social cost of the building operation based on a social-science-based working productivity model, a numerical-experiment-based building energy consumption model and a power-system-based real-time pricing mechanism. Through interactive feedback exchanged between the real and virtual system, enlarged social welfare, including monetary cost reduction and energy saving, as well as working productivity improvements, can be achieved.

  5. Integration of domain and resource-based reasoning for real-time control in dynamic environments

    NASA Technical Reports Server (NTRS)

    Morgan, Keith; Whitebread, Kenneth R.; Kendus, Michael; Cromarty, Andrew S.

    1993-01-01

    A real-time software controller that successfully integrates domain-based and resource-based control reasoning to perform task execution in a dynamically changing environment is described. The design of the controller is based on the concept of partitioning the process to be controlled into a set of tasks, each of which achieves some process goal. It is assumed that, in general, there are multiple ways (tasks) to achieve a goal. The controller dynamically determines current goals and their current criticality, choosing and scheduling tasks to achieve those goals in the time available. It incorporates rule-based goal reasoning, a TMS-based criticality propagation mechanism, and a real-time scheduler. The controller has been used to build a knowledge-based situation assessment system that formed a major component of a real-time, distributed, cooperative problem solving system built under DARPA contract. It is also being employed in other applications now in progress.

  6. Research on classified real-time flood forecasting framework based on K-means cluster and rough set.

    PubMed

    Xu, Wei; Peng, Yong

    2015-01-01

    This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by K-means clustering according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity and other hydrological factors. Based on the classification results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of different categories within the conceptual hydrological model are calibrated using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on Guanyinge Reservoir and compares the framework with the traditional flood forecasting method. It finds that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework can be considered in a catchment with fewer historical floods.
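
    The classify-then-select step can be sketched as follows: historical flood features are clustered with K-means, each cluster is paired with its own calibrated parameter set, and an incoming event is assigned to a cluster to pick the parameters used for forecasting. The feature values and parameter names below are placeholders, not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical features per historical flood: total rainfall (mm), peak intensity (mm/h),
# storm duration (h) and an antecedent-wetness index; all values are synthetic placeholders.
features = rng.random((60, 4)) * np.array([300.0, 50.0, 48.0, 1.0])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(features)

# One calibrated parameter set per flood category (placeholders for parameters a
# conceptual hydrological model would obtain, e.g. from a genetic algorithm).
params_by_category = {0: {"cn": 65, "lag_h": 6.0},
                      1: {"cn": 80, "lag_h": 3.5},
                      2: {"cn": 90, "lag_h": 2.0}}

# Real-time use: classify the incoming event, then forecast with that category's parameters.
incoming_event = np.array([[180.0, 35.0, 12.0, 0.7]])
category = int(kmeans.predict(incoming_event)[0])
print("flood category:", category, "-> parameters:", params_by_category[category])
```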

  7. Issues in visual support to real-time space system simulation solved in the Systems Engineering Simulator

    NASA Technical Reports Server (NTRS)

    Yuen, Vincent K.

    1989-01-01

    The Systems Engineering Simulator has addressed the major issues in providing visual data to its real-time man-in-the-loop simulations. Out-the-window views and CCTV views are provided by three scene systems to give the astronauts their real-world views. To expand the window coverage for the Space Station Freedom workstation a rotating optics system is used to provide the widest field of view possible. To provide video signals to as many viewpoints as possible, windows and CCTVs, with a limited amount of hardware, a video distribution system has been developed to time-share the video channels among viewpoints at the selection of the simulation users. These solutions have provided the visual simulation facility for real-time man-in-the-loop simulations for the NASA space program.

  8. Performance related issues in distributed database systems

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    The key elements of the research performed during the year-long effort of this project are: investigating the effects of heterogeneity in distributed real-time systems; studying the requirements to TRAC towards building a heterogeneous database system; studying the effects of performance modeling on distributed database performance; and experimenting with an ORACLE-based heterogeneous system.

  9. Function Allocation in a Robust Distributed Real-Time Environment

    DTIC Science & Technology

    1991-12-01

    A fundamental characteristic of a distributed system is its ability to map individual logical functions of an application program onto many physical nodes... how much of a node's processor time is scheduled for function processing. IMC is the function-to-function communication required to facilitate... indicator of how much excess processor time a node has. The reconfiguration algorithms use these variables to determine the most appropriate node(s) to

  10. Real-time contingency handling in MAESTRO

    NASA Technical Reports Server (NTRS)

    Britt, Daniel L.; Geoffroy, Amy L.

    1992-01-01

    A scheduling and resource management system named MAESTRO was interfaced with a Space Station Module Power Management and Distribution (SSM/PMAD) breadboard at MSFC. The combined system serves to illustrate the integration of planning, scheduling, and control in a realistic, complex domain. This paper briefly describes the functional elements of the combined system, including normal and contingency operational scenarios, and then focuses on the method used by the scheduler to handle real-time contingencies.

  11. Real-Time Data Filtering and Compression in Wide Area Simulation Networks

    DTIC Science & Technology

    1992-10-02

    Area Simulation Networks Achieving the real-time linkage among multiple, geographically distant local area networks that support distributed...November 1989, pp. 52-61. [IEEE85] IEEE/ANSI Standard 8802/3 "Carrier sense multiple access with collision detection (CSMA/CD) access method and...decoding/encoding of multiple bits. The hardware is programmable, easily adaptable and yields a high compression rate. A prototype 2-micron VLSI chip

  12. High-Throughput and Low-Latency Network Communication with NetIO

    NASA Astrophysics Data System (ADS)

    Schumacher, Jörn; Plessl, Christian; Vandelli, Wainer

    2017-10-01

    HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well-specified number of systems. Unfortunately, traditional network communication APIs for HPC clusters like MPI or PGAS exclusively target the HPC community and are not well suited for DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs, but doing so requires a non-negligible effort and expert knowledge. At the same time, message services like ZeroMQ have gained popularity in the HEP community. They make it possible to build distributed applications with a high-level approach and provide good performance. Unfortunately, their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on top of Infiniband and OmniPath, this approach may not be very efficient compared with direct use of the native APIs. NetIO is a simple, novel asynchronous message service that can operate on Ethernet, Infiniband and similar network fabrics. In this paper the design and implementation of NetIO are presented, and its use is evaluated in comparison to other approaches. NetIO supports different high-level programming models and typical workloads of HEP applications. The ATLAS FELIX project [1] successfully uses NetIO as its central communication platform. The architecture of NetIO is described, including the user-level API and the internal data-flow design. The paper includes a performance evaluation of NetIO with throughput and latency measurements, compared against the state-of-the-art ZeroMQ message service. Measurements are performed in a lab environment with Ethernet and FDR Infiniband networks.
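
    NetIO's own API is not reproduced here; as a point of reference, the ZeroMQ baseline it is benchmarked against can be exercised in a few lines of pyzmq (socket type, endpoint and payload below are arbitrary).

        import zmq

        ctx = zmq.Context()

        # Producer side: push messages to a TCP endpoint.
        push = ctx.socket(zmq.PUSH)
        push.bind("tcp://127.0.0.1:5555")

        # Consumer side: pull messages from the same endpoint.
        pull = ctx.socket(zmq.PULL)
        pull.connect("tcp://127.0.0.1:5555")

        push.send(b"event-fragment-0001")   # asynchronous send
        print(pull.recv())                   # blocking receive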

  13. Distributed cerebellar plasticity implements generalized multiple-scale memory components in real-robot sensorimotor tasks.

    PubMed

    Casellato, Claudia; Antonietti, Alberto; Garrido, Jesus A; Ferrigno, Giancarlo; D'Angelo, Egidio; Pedrocchi, Alessandra

    2015-01-01

    The cerebellum plays a crucial role in motor learning and it acts as a predictive controller. Modeling it and embedding it into sensorimotor tasks allows us to create functional links between plasticity mechanisms, neural circuits and behavioral learning. Moreover, if applied to real-time control of a neurorobot, the cerebellar model has to deal with a real, noisy and changing environment, thus showing its robustness and effectiveness in learning. A biologically inspired cerebellar model with distributed plasticity, both at cortical and nuclear sites, has been used. Two cerebellum-mediated paradigms have been designed: an associative Pavlovian task and a vestibulo-ocular reflex, with multiple sessions of acquisition and extinction and with different stimuli and perturbation patterns. The cerebellar controller succeeded in generating conditioned responses and finely tuned eye movement compensation, thus reproducing human-like behaviors. Through a productive plasticity transfer from cortical to nuclear sites, the distributed cerebellar controller showed in both tasks the capability to optimize learning on multiple time-scales, to store motor memory and to effectively adapt to dynamic ranges of stimuli.

  14. Implicit Multibody Penalty-Based Distributed Contact.

    PubMed

    Xu, Hongyi; Zhao, Yili; Barbic, Jernej

    2014-09-01

    The penalty method is a simple and popular approach to resolving contact in computer graphics and robotics. Penalty-based contact, however, suffers from stability problems due to the highly variable and unpredictable net stiffness, and this is particularly pronounced in simulations with time-varying, distributed, geometrically complex contact. We employ semi-implicit integration, exact analytical contact gradients, symbolic Gaussian elimination and an SVD solver to simulate stable penalty-based frictional contact with large, time-varying contact areas, involving many rigid objects and articulated rigid objects in complex conforming contact and self-contact. We also derive implicit proportional-derivative control forces for real-time control of articulated structures with loops. We present challenging contact scenarios such as screwing a hexbolt into a hole, bowls stacked in perfectly conforming configurations, and manipulating many objects using actively controlled articulated mechanisms in real time.
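
    At the core of any penalty formulation is a contact force proportional to penetration depth; the damped variant below is a generic textbook sketch, not the authors' semi-implicit scheme, and the stiffness and damping constants are invented.

        import numpy as np

        def penalty_contact_force(penetration, normal, rel_velocity, k=1.0e4, c=50.0):
            """Generic penalty contact force: stiffness times penetration depth,
            plus damping along the contact normal."""
            if penetration <= 0.0:                 # no interpenetration, no force
                return np.zeros(3)
            vn = np.dot(rel_velocity, normal)      # normal relative velocity
            magnitude = k * penetration - c * vn   # spring plus damper
            return max(magnitude, 0.0) * normal    # only ever push the bodies apart

        # Example: 2 mm penetration along +z, bodies approaching at 0.1 m/s.
        f = penalty_contact_force(0.002, np.array([0.0, 0.0, 1.0]),
                                  np.array([0.0, 0.0, -0.1]))
        print(f)   # roughly [0, 0, 25]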

  15. Real-time Transmission and Distribution of NOAA Tail Doppler Radar Data and Other Data Products

    NASA Astrophysics Data System (ADS)

    Carswell, J.; Chang, P.; Robinson, D.; Gamache, J.; Hill, J.

    2011-12-01

    The NOAA WP-3D and G-IV aircraft have conducted and continue to conduct numerous research and operational measurement missions. However, typically only a fraction of the data collected aboard each flight is transmitted to the ground in near real time, utilizing low-bandwidth satellite data links. Advancements in aircraft satellite phones have increased available bandwidth and reliability to a point where these systems can be utilized for near real-time data flow in support of decision making. A robust and flexible data delivery system has been developed by Remote Sensing Solutions with support from NOAA's National Environmental Satellite, Data and Information Service (NESDIS), Aircraft Operations Center (AOC) and Hurricane Forecast Improvement Project (HFIP). X-band Doppler/reflectivity measurements of tropical storms and cyclones collected from the NOAA WP-3D aircraft have been the most recent focus. Doppler measurements from volume backscatter precipitation profiles can provide critical observations of the horizontal winds as the precipitation advects with these winds. The data delivery system captures these profiles and sends the radial Doppler profile observations to the National Weather Service in near real time over a satellite communication data link. The design of this transmission system included features to enhance the reliability and robustness of the data flow from the P-3 aircraft to the end user. Routine real-time transmission of the full-resolution Tail Doppler Radar profile data to the ground using this system, and its distribution to NOAA's Hurricane Research Division for analysis and processing in support of initializing the operational HWRF model, is planned. The end objective is to provide these Doppler profiles in a routine fashion to the NWS and others in the forecasting community for operational use in support of hurricane forecasting and warning. Other data sources that are collected and transmitted to the ground with this system for near real-time distribution include, but are not limited to, the NOAA Lower Fuselage Radar reflectivity profiles, SFMR retrievals, flight-level data, AXBT profiles and Imaging Wind and Rain Airborne Profiler data. The transmission and distribution of these data have a latency of only several seconds from initial acquisition on the aircraft to end users accessing the data through the Internet, giving end users a virtual seat on the aircraft and enabling quick dissemination of critical observations to the hurricane research, forecasting and modeling communities. In this presentation, the system capabilities and architecture will be described. Examples of the data products and data visualization tools (client applications) will be shown.

  16. Reengineering Real-Time Software Systems

    DTIC Science & Technology

    1993-09-09

    reengineering existing large-scale (or real-time) systems; systems designed prior to or during the advent of applied SE (Parnas 1979, Freeman 1980). Is... Advisor: Yutaka Kanayama. Approved for public release; distribution is unlimited.

  17. Maintaining Balance: The Increasing Role of Energy Storage for Renewable Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stenclik, Derek; Denholm, Paul; Chalamala, Babu

    For nearly a century, global power systems have focused on three key functions: generating, transmitting, and distributing electricity as a real-time commodity. Physics requires that electricity generation always be in real-time balance with load, despite variability in load on time scales ranging from subsecond disturbances to multiyear trends. With the increasing role of variable generation from wind and solar, the retirement of fossil-fuel-based generation, and a changing consumer demand profile, grid operators are using new methods to maintain this balance.

  18. The Accuracy of GBM GRB Localizations

    NASA Astrophysics Data System (ADS)

    Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.

    2010-03-01

    We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the GBM Flight Software on board GBM, those produced automatically with ground software in near real time, and localizations produced with human guidance. The two types of automatic locations are distributed in near real time via GCN Notices; the human-guided locations are distributed on timescales of many minutes or hours using GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and with the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors and use the Bayesian analysis to constrain the systematic errors.

  19. Real-time dose calculation and visualization for the proton therapy of ocular tumours

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Karsten; Bendl, Rolf

    2001-03-01

    A new real-time dose calculation and visualization was developed as part of the new 3D treatment planning tool OCTOPUS for proton therapy of ocular tumours within a national research project together with the Hahn-Meitner Institut Berlin. The implementation resolves the common separation between parameter definition, dose calculation and evaluation and allows a direct examination of the expected dose distribution while adjusting the treatment parameters. The new tool allows the therapist to move the desired dose distribution under visual control in 3D to the appropriate place. The visualization of the resulting dose distribution as a 3D surface model, on any 2D slice or on the surface of specified ocular structures is done automatically when adapting parameters during the planning process. In addition, approximate dose volume histograms may be calculated with little extra time. The dose distribution is calculated and visualized in 200 ms with an accuracy of 6% for the 3D isodose surfaces and 8% for other objects. This paper discusses the advantages and limitations of this new approach.

  20. Near real-time estimation of ionosphere vertical total electron content from GNSS satellites using B-splines in a Kalman filter

    NASA Astrophysics Data System (ADS)

    Erdogan, Eren; Schmidt, Michael; Seitz, Florian; Durmaz, Murat

    2017-02-01

    Although the number of terrestrial global navigation satellite system (GNSS) receivers supported by the International GNSS Service (IGS) is rapidly growing, the rather inhomogeneously distributed observation sites worldwide do not allow the generation of high-resolution global ionosphere products. Conversely, with the regionally enormous increase in highly precise GNSS data, the demands on (near) real-time ionosphere products, necessary in many applications such as navigation, are growing very fast. Consequently, many analysis centers have accepted the responsibility of generating such products. In this regard, the primary objective of our work is to develop a near real-time processing framework for the estimation of the vertical total electron content (VTEC) of the ionosphere using proper models that are capable of a global representation adapted to the real data distribution. The global VTEC representation developed in this work is based on a series expansion in terms of compactly supported B-spline functions, which allow for an appropriate handling of the heterogeneous data distribution, including data gaps. The corresponding series coefficients and additional parameters such as differential code biases of the GNSS satellites and receivers constitute the set of unknown parameters. The Kalman filter (KF), as a popular recursive estimator, allows processing of the data immediately after acquisition and paves the way for sequential (near) real-time estimation of the unknown parameters. To exploit the advantages of the chosen data representation and the estimation procedure, the B-spline model is incorporated into the KF under the consideration of necessary constraints. Based on a preprocessing strategy, the developed approach utilizes hourly batches of GPS and GLONASS observations provided by the IGS data centers with a latency of 1 h in its current realization. Two methods for validation of the results are performed, namely a self-consistency analysis and a comparison with Jason-2 altimetry data. The highly promising validation results allow the conclusion that under the investigated conditions our derived near real-time product is of the same accuracy level as the so-called final post-processed products provided by the IGS with a latency of several days or even weeks.
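
    A heavily simplified sketch of the sequential estimation step: the B-spline coefficients form the state, a random-walk process model is assumed, and each VTEC observation is treated as a linear combination of the basis functions evaluated at the pierce point. Dimensions, noise levels, and the basis values below are placeholders.

        import numpy as np

        n = 8                                  # number of B-spline coefficients (toy size)
        x = np.zeros(n)                        # state vector: series coefficients
        P = np.eye(n) * 10.0                   # state covariance
        Q = np.eye(n) * 0.01                   # random-walk process noise (assumed)
        r = 1.0                                # observation noise variance (assumed)

        def kalman_step(x, P, basis, vtec_obs):
            """One predict/update cycle for a single VTEC observation; `basis` holds
            the B-spline basis functions evaluated at the ionospheric pierce point."""
            P_pred = P + Q                          # predict: random walk keeps x, inflates P
            H = basis.reshape(1, -1)                # linear observation: vtec = H @ x
            S = (H @ P_pred @ H.T).item() + r       # innovation variance
            K = (P_pred @ H.T) / S                  # Kalman gain, shape (n, 1)
            innov = vtec_obs - (H @ x).item()
            x_new = x + K.ravel() * innov
            P_new = (np.eye(n) - K @ H) @ P_pred
            return x_new, P_new

        basis = np.random.rand(n); basis /= basis.sum()   # stand-in for evaluated B-splines
        x, P = kalman_step(x, P, basis, vtec_obs=25.0)    # 25 TECU, arbitrary value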

  1. Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS)

    NASA Astrophysics Data System (ADS)

    Daniels, M. D.; Graves, S. J.; Kerkez, B.; Chandrasekar, V.; Vernon, F.; Martin, C. L.; Maskey, M.; Keiser, K.; Dye, M. J.

    2015-12-01

    The Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS) project, funded as part of NSF's EarthCube initiative, addresses the ever-increasing importance of real-time scientific data, particularly in mission-critical scenarios where informed decisions must be made rapidly. Advances in the distribution of real-time data are enabling many new transient phenomena in space and time to be observed; however, real-time decision-making is infeasible in many cases as these streaming data are either completely inaccessible or only available to proprietary in-house tools or displays. This lack of accessibility prohibits advanced algorithm and workflow development that could be initiated or enhanced by these data streams. Small research teams do not have the resources to develop tools for the broad dissemination of their valuable real-time data and could benefit from an easy-to-use, scalable, cloud-based solution to facilitate access. CHORDS proposes to make a very diverse suite of real-time data available to the broader geosciences community in order to allow innovative new science in these areas to thrive. This presentation will highlight recently developed CHORDS portal tools and processing systems aimed at addressing some of the gaps in handling real-time data, particularly in the provisioning of data from the "long-tail" scientific community through a simple interface deployed in the cloud. The CHORDS system will connect these real-time streams via standard services from the Open Geospatial Consortium (OGC) and does so in a way that is simple and transparent to the data provider. Broad use of the CHORDS framework will expand the role of real-time data within the geosciences, and enhance the potential of streaming data sources to enable adaptive experimentation and real-time hypothesis testing. Adherence to community data and metadata standards will promote the integration of CHORDS real-time data with existing standards-compliant analysis, visualization and modeling tools.

  2. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    NASA Astrophysics Data System (ADS)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5000 degrees2 every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ˜175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real-time, and design a distributed shared-nothing database to manage the massive amount of archived data which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS of MonetDB, taking advantage of MonetDB’s high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantic) and inside (RANGE-JOIN implementation), as well as in its strategy of building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture is able to achieve both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC in real-time data processing and management of massive data.
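
    The source-association step amounts to a positional cross-match between each new exposure's detections and the running catalogue; the naive NumPy version below is illustrative only and is not the MonetDB RANGE-JOIN implementation described in the paper.

        import numpy as np

        def associate(new_ra, new_dec, cat_ra, cat_dec, radius_deg=2.0 / 3600.0):
            """For each new detection return the index of the nearest catalogue source
            within `radius_deg`, or -1 if none matches (small-angle approximation)."""
            matches = np.full(len(new_ra), -1, dtype=int)
            for i, (ra, dec) in enumerate(zip(new_ra, new_dec)):
                d_ra = (cat_ra - ra) * np.cos(np.radians(dec))
                d_dec = cat_dec - dec
                dist = np.hypot(d_ra, d_dec)
                j = int(np.argmin(dist))
                if dist[j] <= radius_deg:
                    matches[i] = j
            return matches

        # Toy catalogue and one new detection.
        cat_ra, cat_dec = np.array([10.000, 10.010]), np.array([20.000, 20.002])
        new_ra, new_dec = np.array([10.0001]), np.array([20.0002])
        print(associate(new_ra, new_dec, cat_ra, cat_dec))   # -> [0]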

  3. Real-Time Aircraft Engine-Life Monitoring

    NASA Technical Reports Server (NTRS)

    Klein, Richard

    2014-01-01

    This project developed an in-service life-monitoring system capable of predicting the remaining component and system life of aircraft engines. The embedded system provides real-time, in-flight monitoring of the engine's thrust, exhaust gas temperature, efficiency, and the speed and time of operation. Based upon this data, the life-estimation algorithm calculates the remaining life of the engine components and uses this data to predict the remaining life of the engine. The calculations are based on the statistical life distribution of the engine components and their relationship to load, speed, temperature, and time.

  4. A General theory of Signal Integration for Fault-Tolerant Dynamic Distributed Sensor Networks

    DTIC Science & Technology

    1993-10-01

    related to a) the architecture and fault-tolerance of the distributed sensor network, b) the proper synchronisation of sensor signals, c) the... Computational complexities of the problem of distributed detection. 5) Issues related to recording of events and synchronization in distributed sensor... Intervals for Synchronization in Real Time Distributed Systems", Submitted to Electronic Encyclopedia. 3. V. G. Hegde and S. S. Iyengar "Efficient

  5. Reduction technique of drop voltage and power losses to improve power quality using ETAP Power Station simulation model

    NASA Astrophysics Data System (ADS)

    Satrio, Reza Indra; Subiyanto

    2018-03-01

    The growth of electric loads has a direct impact on power distribution systems, in which voltage drop and power losses are key concerns. This paper presents a modelling approach used to restructure the electrical network configuration, reduce voltage drop, reduce power losses and add a new distribution transformer to enhance the reliability of the distribution system. Restructuring the electrical network was aimed at analysing and investigating the electric loads of a distribution transformer. Measurements of real voltage and real current were performed twice for each consumer, once in the morning and once at night during peak load. Design and simulation were conducted using the ETAP Power Station software. Based on the simulation results and real measurements, the percentages of voltage drop and total power losses did not comply with SPLN (Standard PLN) 72:1987. After adding a new distribution transformer and restructuring the electricity network configuration, the simulation showed that the voltage drop could be reduced from 1.3 % - 31.3 % to 8.1 % - 9.6 % and the power losses from 646.7 W to 233.29 W. The results show that restructuring the electricity network configuration and adding a new distribution transformer can be applied as an effective method to reduce voltage drop and power losses.
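
    The two quantities being reduced can be checked with textbook formulas for a single radial feeder segment; the current, impedance and power-factor values below are placeholders, not the case-study data.

        import math

        def segment_drop_and_loss(I, R, X, V_ll=400.0, pf=0.85):
            """Approximate three-phase voltage drop (%) and copper loss (W) for one
            feeder segment carrying current I (A) through impedance R + jX (ohm)."""
            phi = math.acos(pf)
            v_drop = math.sqrt(3) * I * (R * math.cos(phi) + X * math.sin(phi))  # volts, line-to-line
            p_loss = 3 * I ** 2 * R                                              # all three phases
            return 100.0 * v_drop / V_ll, p_loss

        drop_pct, loss_w = segment_drop_and_loss(I=60.0, R=0.08, X=0.06)
        print(f"drop = {drop_pct:.1f} %, loss = {loss_w:.0f} W")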

  6. The SSM/PMAD automated test bed project

    NASA Technical Reports Server (NTRS)

    Lollar, Louis F.

    1991-01-01

    The Space Station Module/Power Management and Distribution (SSM/PMAD) autonomous subsystem project was initiated in 1984. The project's goal has been to design and develop an autonomous, user-supportive PMAD test bed simulating the SSF Hab/Lab module(s). An eighteen kilowatt SSM/PMAD test bed model with a high degree of automated operation has been developed. This advanced automation test bed contains three expert/knowledge based systems that interact with one another and with other more conventional software residing in up to eight distributed 386-based microcomputers to perform the necessary tasks of real-time and near real-time load scheduling, dynamic load prioritizing, and fault detection, isolation, and recovery (FDIR).

  7. Skin contamination dosimeter

    DOEpatents

    Hamby, David M [Corvallis, OR; Farsoni, Abdollah T [Corvallis, OR; Cazalas, Edward [Corvallis, OR

    2011-06-21

    A technique and device provide absolute skin dosimetry in real time at multiple tissue depths simultaneously. The device uses a phoswich detector which has multiple scintillators embedded at different depths within a non-scintillating material. A digital pulse processor connected to the phoswich detector measures a differential distribution (dN/dH) of count rate N as a function of pulse height H for signals from each of the multiple scintillators. A digital processor computes in real time, from the differential count-rate distribution for each of the multiple scintillators, an estimate of the ionizing radiation dose delivered to each of the multiple depths of skin tissue corresponding to the scintillators embedded at those depths within the non-scintillating material.

  8. Communication and control in an integrated manufacturing system

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Throne, Robert D.; Muthuswamy, Yogesh K.

    1987-01-01

    Typically, components in a manufacturing system are all centrally controlled. Due to possible communication bottlenecking, unreliability, and inflexibility caused by using a centralized controller, a new concept of system integration called an Integrated Multi-Robot System (IMRS) was developed. The IMRS can be viewed as a distributed real time system. Some of the current research issues being examined to extend the framework of the IMRS to meet its performance goals are presented. These issues include the use of communication coprocessors to enhance performance, the distribution of tasks and the methods of providing fault tolerance in the IMRS. An application example of real time collision detection, as it relates to the IMRS concept, is also presented and discussed.

  9. Real-time EEG-based detection of fatigue driving danger for accident prediction.

    PubMed

    Wang, Hong; Zhang, Chi; Shi, Tianwei; Wang, Fuwang; Ma, Shujun

    2015-03-01

    This paper proposes a real-time electroencephalogram (EEG)-based method for detecting potential danger during fatigue driving. To determine driver fatigue in real time, wavelet entropy with a sliding window and a pulse coupled neural network (PCNN) were used to process the EEG signals in the visual area (the main information input route). To detect fatigue danger, the neural mechanism of driver fatigue was analyzed. Functional brain networks were employed to track the impact of fatigue on the processing capacity of the brain. The results show that the overall functional connectivity of the subjects is weakened after long driving tasks; this regularity is summarized as the fatigue convergence phenomenon. Based on the fatigue convergence phenomenon, we combined the input and global synchronizations of the brain to calculate the residual information processing capacity of the brain and obtain the dangerous points in real time. Finally, the danger detection system for driver fatigue based on this neural mechanism was validated using accident EEG. The time distributions of the danger points output by the system agree well with those of the real accident points.
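
    Wavelet entropy over a sliding window can be sketched with PyWavelets: decompose each window, turn the per-level energies into a probability distribution, and take its Shannon entropy. The wavelet family, window length and decomposition depth below are arbitrary choices, not those of the paper.

        import numpy as np
        import pywt

        def wavelet_entropy(window, wavelet="db4", level=4):
            """Shannon entropy of the relative wavelet energies of one EEG window."""
            coeffs = pywt.wavedec(window, wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            p = energies / energies.sum()
            p = p[p > 0]
            return float(-np.sum(p * np.log(p)))

        def sliding_wavelet_entropy(signal, win=256, step=64):
            """Wavelet entropy computed over overlapping windows of the signal."""
            return [wavelet_entropy(signal[i:i + win])
                    for i in range(0, len(signal) - win + 1, step)]

        eeg = np.random.randn(2048)          # stand-in for a visual-area EEG channel
        print(sliding_wavelet_entropy(eeg)[:5])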

  10. Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagarajan, Adarsh; Nelson, Austin A; Prabakar, Kumaraguru

    As advanced grid-support functions (AGF) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time simulators and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a ruin-and-reconstruct methodology that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced-order models in OpenDSS and subsequently implemented in the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model, with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the feeders could be analyzed.

  11. Real-time 3-D ultrafast ultrasound quasi-static elastography in vivo

    PubMed Central

    Papadacci, Clement; Bunting, Ethan A.; Konofagou, Elisa E.

    2017-01-01

    Ultrasound elastography, a technique used to assess the mechanical properties of soft tissue, is of major interest in the detection of breast cancer, since tumors are stiffer than the surrounding tissue. Techniques such as ultrasound quasi-static elastography have been developed to assess the strain distribution in soft tissues in two dimensions using a quasi-static compression. However, tumors can exhibit very heterogeneous shapes, so a three-dimensional approach is necessary to measure the tumor volume accurately and to remove operator dependency. Several 3-D quasi-static elastographic approaches have been proposed to address this issue. However, all of these approaches suffer from long acquisition times, which preclude real-time operation and create artifacts. The long acquisition time comes both from the use of focused ultrasound emissions and from the fact that the volume is built from a stack of two-dimensional images acquired by mechanically translating an ultrasonic array. Acquiring volumes at high volume rates is thus crucial for real-time operation with simple freehand compression and for avoiding signal decorrelation caused by hand motion or natural motion such as respiration. In this study we developed, for the first time, a 3-D ultrafast ultrasound quasi-static elastography method to estimate the 3-D axial strain distribution in vivo in real time. Acquisitions were performed with a 2-D matrix array probe of 256 elements (16-by-16 elements). 100 plane waves were emitted at a volume rate of 100 volumes/sec during a continuous motorized compression. 3-D B-mode volumes and 3-D cumulative axial strain volumes were estimated in a two-layer gelatin phantom with different stiffnesses, in a stiff inclusion embedded in a soft gelatin phantom, in a soft inclusion embedded in a stiff gelatin phantom, and in an ex vivo canine liver before and after high-intensity focused ultrasound (HIFU) ablation. In each case, we were able to image the axial strain distribution in real time and in entire volumes, and to detect the differences between stiff and soft structures with good sensitivity. In addition, we were able to detect the stiff lesion in the ex vivo canine liver after HIFU ablation. Finally, we demonstrated the in vivo feasibility of the method using freehand compression on the calf of a human volunteer and were able to retrieve the 3-D axial strain volume in real time, depicting the differences in stiffness of the two muscles that compose the calf. The 3-D ultrafast ultrasound quasi-static elastography method could have a major clinical impact for the real-time, three-dimensional detection of breast cancer in patients using simple freehand scanning. PMID:27483021

  12. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging.

    PubMed

    Tremsin, Anton S; Perrodin, Didier; Losko, Adrian S; Vogel, Sven C; Bourke, Mark A M; Bizarri, Gregory A; Bourret, Edith D

    2017-04-20

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  13. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    NASA Astrophysics Data System (ADS)

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; Vogel, Sven C.; Bourke, Mark A. M.; Bizarri, Gregory A.; Bourret, Edith D.

    2017-04-01

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of “blind” processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  14. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    PubMed Central

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; Vogel, Sven C.; Bourke, Mark A.M.; Bizarri, Gregory A.; Bourret, Edith D.

    2017-01-01

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of “blind” processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes. PMID:28425461

  15. Adequacy of TRMM satellite rainfall data in driving the SWAT modeling of Tiaoxi catchment (Taihu lake basin, China)

    NASA Astrophysics Data System (ADS)

    Li, Dan; Christakos, George; Ding, Xinxin; Wu, Jiaping

    2018-01-01

    Spatial rainfall data is an essential input to Distributed Hydrological Models (DHM), and a significant contributor to hydrological model uncertainty. Model uncertainty is higher when rain gauges are sparse, as is often the case in practice. Satellite-based precipitation products increasingly provide an alternative to ground-based rainfall estimates, in which case a rigorous product assessment is required before implementation. Accordingly, the twofold objective of this work was the real-world assessment of (a) the Tropical Rainfall Measuring Mission (TRMM) rainfall product using gauge data, and (b) the TRMM product's role as forcing data for hydrologic simulations in the Tiaoxi catchment (Taihu lake basin, China). The TRMM rainfall products used in this study are the Version-7 real-time 3B42RT and the post-real-time 3B42. It was found that the TRMM rainfall data showed superior performance at the monthly and annual scales, fitting well with surface observation-based rainfall frequency distributions. The Nash-Sutcliffe Coefficient of Efficiency (NSCE) and the relative bias ratio (BIAS) were used to evaluate hydrologic model performance. The satisfactory performance of the monthly runoff simulations in the Tiaoxi study supports the view that the implementation of real-time 3B42RT allows considerable room for improvement. At the same time, post-real-time 3B42 can be a valuable tool for hydrologic modeling, water balance analysis, and basin water resource management, especially in developing countries or at remote locations where rainfall gauges are scarce.
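
    For reference, the two scores are commonly defined as below; the paper's exact sign and normalization conventions may differ slightly.

        \mathrm{NSCE} = 1 - \frac{\sum_{i=1}^{n}\left(Q_i^{\mathrm{obs}} - Q_i^{\mathrm{sim}}\right)^{2}}{\sum_{i=1}^{n}\left(Q_i^{\mathrm{obs}} - \bar{Q}^{\mathrm{obs}}\right)^{2}},
        \qquad
        \mathrm{BIAS} = \frac{\sum_{i=1}^{n}\left(Q_i^{\mathrm{sim}} - Q_i^{\mathrm{obs}}\right)}{\sum_{i=1}^{n} Q_i^{\mathrm{obs}}} \times 100\,\%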

  16. Integrating Distributed Interactive Simulations With the Project Darkstar Open-Source Massively Multiplayer Online Game (MMOG) Middleware

    DTIC Science & Technology

    2009-09-01

    be complete MMOG solutions such as Multiverse are not within the scope of this thesis, though it is recommended that readers compare this type of...software to the middleware described here (Multiverse, 2009). 1. University of Munster: Real-Time Framework The Real-Time Framework (RTF) project is...10, 2009, from http://wiki.secondlife.com/wiki/MMOX Multiverse. (2009). Multiverse platform architecture. Retrieved September 9, 2009, from http

  17. C-130 Automated Digital Data System (CADDS)

    NASA Technical Reports Server (NTRS)

    Scofield, C. P.; Nguyen, Chien

    1991-01-01

    Real time airborne data acquisition, archiving and distribution on the NASA/Ames Research Center (ARC) C-130 has been improved over the past three years due to the implementation of the C-130 Automated Digital Data System (CADDS). CADDS is a real time, multitasking, multiprocessing ROM-based system. CADDS acquires data from both avionics and environmental sensors inflight for all C-130 data lines. The system also displays the data on video monitors throughout the aircraft.

  18. Lazy evaluation of FP programs: A data-flow approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Y.H.; Gaudiot, J.L.

    1988-12-31

    This paper presents a lazy evaluation system for the list-based functional language, Backus' FP, in a data-driven environment. A superset language of FP, called DFP (Demand-driven FP), is introduced. FP eager programs are transformed into DFP lazy programs which contain the notion of demands. The data-driven execution of DFP programs has the same effect as lazy evaluation. DFP lazy programs have the property of always evaluating a sufficient and necessary result. The infinite sequence generator is used to demonstrate the eager-lazy program transformation and the execution of the lazy programs.
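
    The demand-driven behaviour of the transformation can be mimicked with an ordinary Python generator: the infinite sequence below is evaluated only as far as the consumer demands, which is the effect the DFP transformation achieves for eager FP programs (Python stands in for FP/DFP purely for illustration).

        def naturals(start=0):
            """Infinite sequence generator: values are produced only on demand."""
            n = start
            while True:
                yield n
                n += 1

        def take(k, seq):
            """Demand exactly k elements from a (possibly infinite) sequence."""
            return [next(seq) for _ in range(k)]

        print(take(5, naturals()))   # [0, 1, 2, 3, 4]; the rest is never computed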

  19. General-Purpose Electronic System Tests Aircraft

    NASA Technical Reports Server (NTRS)

    Glover, Richard D.

    1989-01-01

    Versatile digital equipment supports research, development, and maintenance. Extended aircraft interrogation and display system is general-purpose assembly of digital electronic equipment on ground for testing of digital electronic systems on advanced aircraft. Many advanced features, including multiple 16-bit microprocessors, pipeline data-flow architecture, advanced operating system, and resident software-development tools. Basic collection of software includes program for handling many types of data and for displays in various formats. User easily extends basic software library. Hardware and software interfaces to subsystems provided by user designed for flexibility in configuration to meet user's requirements.

  20. Eager protocol on a cache pipeline dataflow

    DOEpatents

    Ohmacht, Martin; Sugavanam, Krishnan

    2012-11-13

    A master device sends a request to communicate with a slave device to a switch. The master device waits for a period of cycles the switch takes to decide whether the master device can communicate with the slave device, and the master device sends data associated with the request to communicate at least after the period of cycles has passed since the master device sent the request to communicate to the switch without waiting to receive an acknowledgment from the switch that the master device can communicate with the slave device.

  1. Parallel Processing with Digital Signal Processing Hardware and Software

    NASA Technical Reports Server (NTRS)

    Swenson, Cory V.

    1995-01-01

    The assembling and testing of a parallel processing system is described which will allow a user to move a Digital Signal Processing (DSP) application from the design stage to the execution/analysis stage through the use of several software tools and hardware devices. The system will be used to demonstrate the feasibility of the Algorithm To Architecture Mapping Model (ATAMM) dataflow paradigm for static multiprocessor solutions of DSP applications. The individual components comprising the system are described followed by the installation procedure, research topics, and initial program development.

  2. Simulation of economic agents interaction in a trade chain

    NASA Astrophysics Data System (ADS)

    Gimanova, I. A.; Dulesov, A. S.; Litvin, N. V.

    2017-01-01

    A mathematical model of the interaction of economic agents is offered in this work. It allows considering the dynamics of price and sales volumes in the process of purchase and sale in the single-product market of a trade and intermediary network. The description of data-flow processes is based on the use of a continuous dynamic market model. The application of ordinary differential equations in the simulation allows one to determine the ranges of coefficients (the characteristics of agents) and to investigate the stability of their interaction in a chain.
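
    A continuous dynamic market model of the kind described can be written as a pair of coupled ordinary differential equations and integrated with SciPy; the adjustment law and coefficients below are illustrative assumptions, not the authors' model.

        import numpy as np
        from scipy.integrate import odeint

        def market(state, t, k_p=0.5, k_q=0.3, a=10.0, b=1.0, c=2.0):
            """Toy single-product dynamics: price p follows excess demand,
            sales volume q relaxes toward demand at the current price."""
            p, q = state
            demand = a - b * p        # linear demand curve (assumed)
            supply = c * p            # linear supply curve (assumed)
            dp = k_p * (demand - supply)
            dq = k_q * (demand - q)
            return [dp, dq]

        t = np.linspace(0.0, 20.0, 200)
        trajectory = odeint(market, [1.0, 0.0], t)
        print(trajectory[-1])         # settles near p = a/(b + c), q = demand(p)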

  3. Making real-time reactive systems reliable

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith; Wood, Mark

    1990-01-01

    A reactive system is characterized by a control program that interacts with an environment (or controlled program). The control program monitors the environment and reacts to significant events by sending commands to the environment. This structure is quite general: not only are most embedded real-time systems reactive systems, but so are monitoring and debugging systems and distributed application management systems. Since reactive systems are usually long running and may control physical equipment, fault tolerance is vital. This research tries to understand the principal issues of fault tolerance in real-time reactive systems and to build tools that allow a programmer to design reliable, real-time reactive systems. In order to make real-time reactive systems reliable, several issues must be addressed: (1) How can a control program be built to tolerate failures of sensors and actuators? To achieve this, a methodology was developed for transforming a control program that references physical values into one that tolerates sensors that can fail and can return inaccurate values. (2) How can the real-time reactive system be built to tolerate failures of the control program? Towards this goal, it is investigated whether the techniques presented can be extended to real-time reactive systems. (3) How can the environment be specified in a way that is useful for writing a control program? Towards this goal, it is also investigated whether a system with real-time constraints can be expressed as an equivalent system without such constraints.

  4. Platform for real-time simulation of dynamic systems and hardware-in-the-loop for control algorithms.

    PubMed

    de Souza, Isaac D T; Silva, Sergio N; Teles, Rafael M; Fernandes, Marcelo A C

    2014-10-15

    The development of new embedded algorithms for automation and control of industrial equipment usually requires the use of real-time testing. However, the equipment required is often expensive, which means that such tests are often not viable. The objective of this work was therefore to develop an embedded platform for the distributed real-time simulation of dynamic systems. This platform, called the Real-Time Simulator for Dynamic Systems (RTSDS), could be applied in both industrial and academic environments. In industrial applications, the RTSDS could be used to optimize embedded control algorithms. In the academic sphere, it could be used to support research into new embedded solutions for automation and control and could also be used as a tool to assist in undergraduate and postgraduate teaching related to the development of projects concerning on-board control systems.

  5. Platform for Real-Time Simulation of Dynamic Systems and Hardware-in-the-Loop for Control Algorithms

    PubMed Central

    de Souza, Isaac D. T.; Silva, Sergio N.; Teles, Rafael M.; Fernandes, Marcelo A. C.

    2014-01-01

    The development of new embedded algorithms for automation and control of industrial equipment usually requires the use of real-time testing. However, the equipment required is often expensive, which means that such tests are often not viable. The objective of this work was therefore to develop an embedded platform for the distributed real-time simulation of dynamic systems. This platform, called the Real-Time Simulator for Dynamic Systems (RTSDS), could be applied in both industrial and academic environments. In industrial applications, the RTSDS could be used to optimize embedded control algorithms. In the academic sphere, it could be used to support research into new embedded solutions for automation and control and could also be used as a tool to assist in undergraduate and postgraduate teaching related to the development of projects concerning on-board control systems. PMID:25320906

  6. Real-Time Hardware-in-the-Loop Simulation of Ares I Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Tobbe, Patrick; Matras, Alex; Walker, David; Wilson, Heath; Fulton, Chris; Alday, Nathan; Betts, Kevin; Hughes, Ryan; Turbe, Michael

    2009-01-01

    The Ares Real-Time Environment for Modeling, Integration, and Simulation (ARTEMIS) has been developed for use by the Ares I launch vehicle System Integration Laboratory at the Marshall Space Flight Center. The primary purpose of the Ares System Integration Laboratory is to test the vehicle avionics hardware and software in a hardware-in-the-loop environment to certify that the integrated system is prepared for flight. ARTEMIS has been designed to be the real-time simulation backbone that stimulates all required Ares components for verification testing. ARTEMIS provides high-fidelity dynamics, actuator, and sensor models to simulate an accurate flight trajectory in order to ensure realistic test conditions. ARTEMIS has been designed to take advantage of the advances in underlying computational power now available to support hardware-in-the-loop testing, achieving real-time simulation with unprecedented model fidelity. A modular real-time design relying on a fully distributed computing architecture has been implemented.

  7. Dynamic quality of service model for improving performance of multimedia real-time transmission in industrial networks.

    PubMed

    Gopalakrishnan, Ravichandran C; Karunakaran, Manivannan

    2014-01-01

    Nowadays, quality of service (QoS) is very popular in various research areas like distributed systems, multimedia real-time applications and networking. The requirements of these systems are to satisfy reliability, uptime, security constraints and throughput, as well as application-specific requirements. Real-time multimedia applications are commonly distributed over the network and must meet various time constraints across networks without interfering with control flows. In particular, video compressors produce variable-bit-rate streams that mismatch the constant-bit-rate channels typically provided by classical real-time protocols, severely reducing the efficiency of network utilization. Thus, it is necessary to enlarge the communication bandwidth to transfer the compressed multimedia streams using the Flexible Time-Triggered Enhanced Switched Ethernet (FTT-ESE) protocol. FTT-ESE provides automation to calculate the compression level and change the bandwidth of the stream. This paper focuses on low-latency multimedia transmission over Ethernet with dynamic quality-of-service (QoS) management. The proposed framework provides dynamic QoS for multimedia transmission over Ethernet with the FTT-ESE protocol. The paper also presents distinct QoS metrics based on both image quality and network features. Experiments with recorded and live video streams show the advantages of the proposed framework. To validate the solution, we designed and implemented a simulator based on Matlab/Simulink, which is a tool to evaluate different network architectures using Simulink blocks.

  8. AMON: Transition to real-time operations

    NASA Astrophysics Data System (ADS)

    Cowen, D. F.; Keivani, A.; Tešić, G.

    2016-04-01

    The Astrophysical Multimessenger Observatory Network (AMON) will link the world's leading high-energy neutrino, cosmic-ray, gamma-ray and gravitational wave observatories by performing real-time coincidence searches for multimessenger sources from the observatories' subthreshold data streams. The resulting coincidences will be distributed to interested parties in the form of electronic alerts for real-time follow-up observation. We will present the science case, design elements, current and projected partner observatories, status of the AMON project, and an initial AMON-enabled analysis. The prototype of the AMON server has been online since August 2014 and has been processing archival data. Currently, we are deploying new high-uptime servers and will be ready to start issuing alerts as early as winter 2015/16.

  9. A DICOM Based Collaborative Platform for Real-Time Medical Teleconsultation on Medical Images.

    PubMed

    Maglogiannis, Ilias; Andrikos, Christos; Rassias, Georgios; Tsanakas, Panayiotis

    2017-01-01

    The paper deals with the design of a Web-based platform for real-time medical teleconsultation on medical images. The proposed platform combines the principles of heterogeneous Workflow Management Systems (WfMSs), the peer-to-peer networking architecture and the SPA (Single-Page Application) concept, to facilitate medical collaboration among geographically distributed healthcare professionals. The presented work leverages state-of-the-art features of the web to support peer-to-peer communication using the WebRTC (Web Real Time Communication) protocol and client-side data processing for creating an integrated collaboration environment. The paper discusses the technical details of the implementation and presents the operation of the platform in practice, along with some initial results.

  10. Monitoring and Identifying in Real time Critical Patients Events.

    PubMed

    Chavez Mora, Emma

    2014-01-01

    Nowadays, pervasive health care monitoring environments, as well as business activity monitoring environments, gather information from a variety of data sources. However, these environments introduce new challenges because of the use of body and wireless sensors and nontraditional operational and transactional sources, which makes the health data more difficult to monitor. Decision making in this environment is typically complex and unstructured, as clinical work is essentially interpretative, multitasking, collaborative, distributed and reactive. Thus, the health care arena requires real-time data management in areas such as patient monitoring, detection of adverse events and adaptive responses to operational failures. This research presents a new architecture that enables real-time patient data management through the use of intelligent data sources.

  11. FRIEND Engine Framework: a real time neurofeedback client-server system for neuroimaging studies

    PubMed Central

    Basilio, Rodrigo; Garrido, Griselda J.; Sato, João R.; Hoefle, Sebastian; Melo, Bruno R. P.; Pamplona, Fabricio A.; Zahn, Roland; Moll, Jorge

    2015-01-01

    In this methods article, we present a new implementation of a recently reported FSL-integrated neurofeedback tool, the standalone version of “Functional Real-time Interactive Endogenous Neuromodulation and Decoding” (FRIEND). We will refer to this new implementation as the FRIEND Engine Framework. The framework comprises a client-server cross-platform solution for real time fMRI and fMRI/EEG neurofeedback studies, enabling flexible customization or integration of graphical interfaces, devices, and data processing. This implementation allows a fast setup of novel plug-ins and frontends, which can be shared with the user community at large. The FRIEND Engine Framework is freely distributed for non-commercial, research purposes. PMID:25688193

  12. Real-time feedback control of the plasma density profile on ASDEX Upgrade

    NASA Astrophysics Data System (ADS)

    Mlynek, A.; Reich, M.; Giannone, L.; Treutterer, W.; Behler, K.; Blank, H.; Buhler, A.; Cole, R.; Eixenberger, H.; Fischer, R.; Lohs, A.; Lüddecke, K.; Merkel, R.; Neu, G.; Ryter, F.; Zasche, D.; ASDEX Upgrade Team

    2011-04-01

    The spatial distribution of density in a fusion experiment is of significant importance as it enters in numerous analyses and contributes to the fusion performance. The reconstruction of the density profile is therefore commonly done in offline data analysis. In this paper, we present an algorithm which allows for density profile reconstruction from the data of the submillimetre interferometer and the magnetic equilibrium in real-time. We compare the obtained results to the profiles yielded by a numerically more complex offline algorithm. Furthermore, we present recent ASDEX Upgrade experiments in which we used the real-time density profile for active feedback control of the shape of the density profile.

  13. A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study

    NASA Astrophysics Data System (ADS)

    Onut, S.; Kamber, M. R.; Altay, G.

    2014-03-01

    The Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In VRPs, routes from a distribution point to geographically distributed points are designed with minimum cost while considering customer demands. Every point must be visited exactly once and by one vehicle on one route, and the total demand on a route must not exceed the capacity of the vehicle assigned to that route. VRPs vary according to real-life constraints related to vehicle types, number of depots, transportation conditions, time periods, etc. The heterogeneous fleet vehicle routing problem is a kind of VRP in which vehicles have different capacities and costs. There are two types of vehicles in our problem. This study uses real-world data obtained from a company that operates in the LPG sector in Turkey. An optimization model is established for planning daily routes and assigning vehicles. The model is solved with GAMS and an optimal solution is found in a reasonable time.
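
    As an illustration of the capacity constraint at the heart of such models (not the GAMS formulation itself), a naive greedy split of customers over a two-type fleet might look as follows; demands and capacities are invented.

        # Toy greedy assignment of customers to a heterogeneous fleet (illustrative only).
        customers = {"c1": 4, "c2": 7, "c3": 3, "c4": 9, "c5": 2}   # demand per stop
        fleet = [("small", 10), ("small", 10), ("large", 20)]        # (type, capacity)

        routes = [{"type": t, "capacity": cap, "load": 0, "stops": []} for t, cap in fleet]

        # Assign the largest demands first to the first route that still has room.
        for name, demand in sorted(customers.items(), key=lambda kv: -kv[1]):
            for r in routes:
                if r["load"] + demand <= r["capacity"]:
                    r["stops"].append(name)
                    r["load"] += demand
                    break

        for r in routes:
            print(r["type"], r["stops"], f'{r["load"]}/{r["capacity"]}')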

  14. SimBOX: a scalable architecture for aggregate distributed command and control of spaceport and service constellation

    NASA Astrophysics Data System (ADS)

    Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj

    2004-08-01

    In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for a distributed control of a cluster of on-board health monitoring and software enabled control systems called SimBOX that will use some of the real-time infrastructure (RTI) functionality from the current military real-time simulation architecture. The uniqueness of the approach is to provide a "plug and play environment" for various system components that run at various data rates (Hz) and the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is possible by providing a communication bus called "Distributed Shared Data Bus" and a distributed computing environment used to scale the control needs by providing a self-contained computing, data logging and control function module that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is very much needed to meet the needs of future aerospace command and control functions.

  15. SimBox: a simulation-based scalable architecture for distributed command and control of spaceport and service constellations

    NASA Astrophysics Data System (ADS)

    Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj

    2004-09-01

    In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for a distributed control of a cluster of on-board health monitoring and software enabled control systems called SimBOX that will use some of the real-time infrastructure (RTI) functionality from the current military real-time simulation architecture. The uniqueness of the approach is to provide a "plug and play environment" for various system components that run at various data rates (Hz) and the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is possible by providing a communication bus called "Distributed Shared Data Bus" and a distributed computing environment used to scale the control needs by providing a self-contained computing, data logging and control function module that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is very much needed to meet the needs of future aerospace command and control functions.

  16. A Distributed Computing Framework for Real-Time Detection of Stress and of Its Propagation in a Team.

    PubMed

    Pandey, Parul; Lee, Eun Kyung; Pompili, Dario

    2016-11-01

    Stress is one of the key factors that impact the quality of our daily life: from productivity and efficiency in production processes to the ability of (civilian and military) individuals to make rational decisions. Stress can also propagate from one individual to others working in close proximity or toward a common goal, e.g., in a military operation or workforce. Real-time assessment of the stress of individuals alone is, however, not sufficient, as understanding its source and the direction in which it propagates in a group of people is equally, if not more, important. A continuous, near real-time, in situ personal stress monitoring system is envisioned to quantify the stress level of individuals and its direction of propagation in a team. However, stress monitoring of an individual via his/her mobile device may not always be possible for extended periods of time due to the limited battery capacity of these devices. To overcome this challenge, a novel distributed mobile computing framework is proposed that organizes the resources in the vicinity into a mobile device cloud, enabling offloading of computation tasks in the stress-detection algorithm from resource-constrained devices (low residual battery, limited CPU cycles) to resource-rich devices. The framework also supports computation parallelization and workflows that define how data and tasks are divided and assigned among the entities of the framework. The direction of propagation and the magnitude of influence of stress in a group of individuals are studied by applying real-time, in situ analysis of Granger causality. Tangible benefits (in terms of energy expenditure and execution time) of the proposed framework in comparison to a centralized framework are presented via thorough simulations and real experiments.
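
    A minimal sketch of the kind of Granger-causality test mentioned above, assuming two synchronized stress time series and an ordinary-least-squares F-test; this is a generic illustration in Python, not the authors' framework or its offloading machinery.

      # Lag-p Granger-causality F-test between two (synthetic) stress time series.
      import numpy as np

      def granger_f(x, y, p=2):
          """F-statistic for 'x Granger-causes y' using OLS AR models of order p."""
          n = len(y)
          Y = y[p:]
          # restricted model: y regressed on its own lags only
          Xr = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)] + [np.ones(n - p)])
          # unrestricted model: add lags of x
          Xu = np.column_stack([Xr[:, :-1]] + [x[p - k:n - k] for k in range(1, p + 1)] + [np.ones(n - p)])
          rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
          rss_r, rss_u = rss(Xr), rss(Xu)
          dof = len(Y) - Xu.shape[1]
          return ((rss_r - rss_u) / p) / (rss_u / dof)

      rng = np.random.default_rng(0)
      x = rng.standard_normal(500)
      y = np.convolve(x, [0.0, 0.6, 0.3], mode="full")[:500] + 0.5 * rng.standard_normal(500)
      print(granger_f(x, y))   # large F suggests x helps predict y (stress propagating x -> y)
      print(granger_f(y, x))   # typically much smaller in this synthetic example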

  17. Interactive reconstructions of cranial 3D implants under MeVisLab as an alternative to commercial planning software.

    PubMed

    Egger, Jan; Gall, Markus; Tax, Alois; Ücal, Muammer; Zefferer, Ulrike; Li, Xing; von Campe, Gord; Schäfer, Ute; Schmalstieg, Dieter; Chen, Xiaojun

    2017-01-01

    In this publication, the interactive planning and reconstruction of cranial 3D implants under the medical prototyping platform MeVisLab is introduced as an alternative to commercial planning software. A MeVisLab prototype consisting of a customized data-flow network and a custom C++ module was set up. The resulting Computer-Aided Design (CAD) software prototype guides a user through the whole workflow of generating an implant. The workflow begins with loading and mirroring the patient's head for an initial curvature of the implant. The user can then perform an additional Laplacian smoothing, followed by a Delaunay triangulation. The result is an aesthetically pleasing and well-fitting 3D implant, which can be stored in a CAD file format, e.g. STereoLithography (STL), for 3D printing. The 3D printed implant can finally be used for an in-depth pre-surgical evaluation or even as a real implant for the patient. In a nutshell, our research and development shows that a customized MeVisLab software prototype can be used as an alternative to complex commercial planning software, which may not be available in every clinic, and encourages looking beyond the available commercial tools for options that might improve the workflow.

  18. Interactive reconstructions of cranial 3D implants under MeVisLab as an alternative to commercial planning software

    PubMed Central

    Egger, Jan; Gall, Markus; Tax, Alois; Ücal, Muammer; Zefferer, Ulrike; Li, Xing; von Campe, Gord; Schäfer, Ute; Schmalstieg, Dieter; Chen, Xiaojun

    2017-01-01

    In this publication, the interactive planning and reconstruction of cranial 3D implants under the medical prototyping platform MeVisLab is introduced as an alternative to commercial planning software. A MeVisLab prototype consisting of a customized data-flow network and a custom C++ module was set up. The resulting Computer-Aided Design (CAD) software prototype guides a user through the whole workflow of generating an implant. The workflow begins with loading and mirroring the patient's head for an initial curvature of the implant. The user can then perform an additional Laplacian smoothing, followed by a Delaunay triangulation. The result is an aesthetically pleasing and well-fitting 3D implant, which can be stored in a CAD file format, e.g. STereoLithography (STL), for 3D printing. The 3D printed implant can finally be used for an in-depth pre-surgical evaluation or even as a real implant for the patient. In a nutshell, our research and development shows that a customized MeVisLab software prototype can be used as an alternative to complex commercial planning software, which may not be available in every clinic, and encourages looking beyond the available commercial tools for options that might improve the workflow. PMID:28264062

  19. Real-time monitoring of Lévy flights in a single quantum system

    NASA Astrophysics Data System (ADS)

    Issler, M.; Höller, J.; Imamoǧlu, A.

    2016-02-01

    Lévy flights are random walks where the dynamics is dominated by rare events. Even though they have been studied in vastly different physical systems, their observation in a single quantum system has remained elusive. Here we analyze a periodically driven open central spin system and demonstrate theoretically that the dynamics of the spin environment exhibits Lévy flights. For the particular realization in a single-electron charged quantum dot driven by periodic resonant laser pulses, we use Monte Carlo simulations to confirm that the long waiting times between successive nuclear spin-flip events are governed by a power-law distribution; the corresponding exponent η = -3/2 can be directly measured in real time by observing the waiting time distribution of successive photon emission events. Remarkably, the dominant intrinsic limitation of the scheme arising from nuclear quadrupole coupling can be minimized by adjusting the magnetic field or by implementing spin echo.

  20. Formation Algorithms and Simulation Testbed

    NASA Technical Reports Server (NTRS)

    Wette, Matthew; Sohl, Garett; Scharf, Daniel; Benowitz, Edward

    2004-01-01

    Formation flying for spacecraft is a rapidly developing field that will enable a new era of space science. For one of its missions, the Terrestrial Planet Finder (TPF) project has selected a formation flying interferometer design to detect earth-like planets orbiting distant stars. In order to advance technology needed for the TPF formation flying interferometer, the TPF project has been developing a distributed real-time testbed to demonstrate end-to-end operation of formation flying with TPF-like functionality and precision. This is the Formation Algorithms and Simulation Testbed (FAST). FAST was conceived to bring out issues in timing, data fusion, inter-spacecraft communication, inter-spacecraft sensing and system-wide formation robustness. In this paper we describe the FAST and show results from a two-spacecraft formation scenario. The two-spacecraft simulation is the first time that precision end-to-end formation flying operation has been demonstrated in a distributed real-time simulation environment.

  1. Global distributions of ionospheric electric potentials for variable IMF conditions: climatology and near-real time specification

    NASA Astrophysics Data System (ADS)

    Kartalev, M. D.; Papitashvili, V. O.; Keremidarska, V. I.; Grigorov, K. G.; Romanov, D. K.

    2002-03-01

    We report a study of global climatology in the ionospheric electric potentials obtained from combining two algorithms used for mapping of high- and middle/low latitude ionospheric electrodynamics: the LiMIE (http://www.sprl.umich.edu/mist/limie.html) and IMEH (http://geospace.nat.bg) models, respectively. In this combination, the latter model utilizes high-latitude field-aligned current distributions provided by LiMIE for various IMF conditions and different seasons (summer, winter, equinox). For testing purposes, we developed a Web-based interface which provides global distributions of the ionospheric electric potential in near-real time utilizing solar wind observations made on board NASA's ACE spacecraft upstream at L1. We discuss the electric potential global modeling over both the northern and southern hemispheres and consider some implications for solar cycle studies and space weather forecasting.

  2. Dynamic ADMM for Real-Time Optimal Power Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhang, Yijian; Hong, Mingyi

    This paper considers distribution networks featuring distributed energy resources (DERs), and develops a dynamic optimization method to maximize given operational objectives in real time while adhering to relevant network constraints. The design of the dynamic algorithm is based on suitable linearization of the AC power flow equations, and it leverages the so-called alternating direction method of multipliers (ADMM). The steps of the ADMM, however, are suitably modified to accommodate appropriate measurements from the distribution network and the DERs. With the aid of these measurements, the resultant algorithm can enforce given operational constraints in spite of inaccuracies in the representation of the AC power flows, and it avoids ubiquitous metering to gather the state of noncontrollable resources. Optimality and convergence of the proposed algorithm are established in terms of tracking of the solution of a convex surrogate of the AC optimal power flow problem.
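
    The ADMM machinery itself can be illustrated with a generic quadratic consensus problem; the sketch below (plain Python/NumPy, with made-up cost matrices standing in for DER costs and network penalties) shows the x-, z-, and dual updates, but not the measurement-based modifications that are the contribution of this paper.

      # Generic scaled-form ADMM for min_x f(x) + g(z) s.t. x = z, with quadratic f and g.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 5
      A = np.diag(rng.uniform(1.0, 3.0, n))   # f(x) = 0.5 x'Ax - b'x  (stand-in for DER costs)
      b = rng.uniform(0.0, 2.0, n)
      B = np.diag(rng.uniform(0.5, 2.0, n))   # g(z) = 0.5 z'Bz        (stand-in for a network penalty)
      rho = 1.0

      x = z = u = np.zeros(n)
      for k in range(500):
          x = np.linalg.solve(A + rho * np.eye(n), b + rho * (z - u))   # x-update
          z = np.linalg.solve(B + rho * np.eye(n), rho * (x + u))       # z-update
          u = u + x - z                                                 # dual update
      print(np.round(x, 4))
      # x converges to the minimizer of 0.5 x'(A+B)x - b'x, i.e. (A+B)^{-1} b
      print(np.round(np.linalg.solve(A + B, b), 4))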

  3. Dynamic ADMM for Real-Time Optimal Power Flow: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhang, Yijian; Hong, Mingyi

    This paper considers distribution networks featuring distributed energy resources (DERs), and develops a dynamic optimization method to maximize given operational objectives in real time while adhering to relevant network constraints. The design of the dynamic algorithm is based on suitable linearizations of the AC power flow equations, and it leverages the so-called alternating direction method of multipliers (ADMM). The steps of the ADMM, however, are suitably modified to accommodate appropriate measurements from the distribution network and the DERs. With the aid of these measurements, the resultant algorithm can enforce given operational constraints in spite of inaccuracies in the representation of the AC power flows, and it avoids ubiquitous metering to gather the state of non-controllable resources. Optimality and convergence of the proposed algorithm are established in terms of tracking of the solution of a convex surrogate of the AC optimal power flow problem.

  4. Hierarchical control framework for integrated coordination between distributed energy resources and demand response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Di; Lian, Jianming; Sun, Yannan

    Demand response represents a significant but largely untapped resource that can greatly enhance the flexibility and reliability of power systems. In this paper, a hierarchical control framework is proposed to facilitate the integrated coordination between distributed energy resources and demand response. The proposed framework consists of coordination and device layers. In the coordination layer, various resource aggregations are optimally coordinated in a distributed manner to achieve the system-level objectives. In the device layer, individual resources are controlled in real time to follow the optimal power generation or consumption dispatched from the coordination layer. For the purpose of practical applications, a method is presented to determine the utility functions of controllable loads by taking into account the real-time load dynamics and the preferences of individual customers. The effectiveness of the proposed framework is validated by detailed simulation studies.

  5. Safety of real-time convection-enhanced delivery of liposomes to primate brain: a long-term retrospective.

    PubMed

    Krauze, Michal T; Vandenberg, Scott R; Yamashita, Yoji; Saito, Ryuta; Forsayeth, John; Noble, Charles; Park, John; Bankiewicz, Krystof S

    2008-04-01

    Convection-enhanced delivery (CED) is gaining popularity in direct brain infusions. Our group has pioneered the use of liposomes loaded with the MRI contrast reagent as a means to track and quantitate CED in the primate brain through real-time MRI. When co-infused with therapeutic nanoparticles, these tracking liposomes provide us with unprecedented precision in the management of infusions into discrete brain regions. In order to translate real-time CED into clinical application, several important parameters must be defined. In this study, we have analyzed all our cumulative animal data to answer a number of questions as to whether real-time CED in primates depends on concentration of infusate, is reproducible, allows prediction of distribution in a given anatomic structure, and whether it has long term pathological consequences. Our retrospective analysis indicates that real-time CED is highly predictable; repeated procedures yielded identical results, and no long-term brain pathologies were found. We conclude that introduction of our technique to clinical application would enhance accuracy and patient safety when compared to current non-monitored delivery trials.

  6. Development of real-time PCR technique for the estimation of population density of Pythium intermedium in forest soils.

    PubMed

    Li, Mingzhu; Senda, Masako; Komatsu, Tsutomu; Suga, Haruhisa; Kageyama, Koji

    2010-10-20

    Pythium intermedium is known to play an important role in the carbon cycling of cool-temperate forest soils. In this study, a fast, precise and effective real-time PCR technique for estimating the population density of P. intermedium in soils was developed using species-specific primers. Specificity was confirmed with both conventional PCR and real-time PCR. The detection limit (sensitivity) was determined, and amplification standard curves were generated using SYBR Green II fluorescent dye. A rapid and accurate assay for quantification of P. intermedium in Takayama forest soils of Japan was developed by combining a new DNA extraction method with the primers developed for real-time PCR. The distribution of P. intermedium in forest soil was then investigated with both the soil plating method and the developed real-time PCR technique. This new technique will be a useful tool that can be applied in practice for studying the role of Pythium species in forest and agricultural ecosystems. Copyright © 2009 Elsevier GmbH. All rights reserved.

  7. An IP-Based Software System for Real-time, Closed Loop, Multi-Spacecraft Mission Simulations

    NASA Technical Reports Server (NTRS)

    Cary, Everett; Davis, George; Higinbotham, John; Burns, Richard; Hogie, Keith; Hallahan, Francis

    2003-01-01

    This viewgraph presentation provides information on the architecture of a computerized testbed for simulating Distributed Space Systems (DSS) for controlling spacecraft flying in formation. The presentation also discusses and diagrams the Distributed Synthesis Environment (DSE) for simulating and planning DSS missions.

  8. On-Site Determination and Monitoring of Real-Time Fluence Delivery for an Operating UV Reactor Based on a True Fluence Rate Detector.

    PubMed

    Li, Mengkai; Li, Wentao; Qiang, Zhimin; Blatchley, Ernest R

    2017-07-18

    At present, on-site fluence (distribution) determination and monitoring of an operating UV system represent a considerable challenge. The recently developed microfluorescent silica detector (MFSD) is able to measure the approximate true fluence rate (FR) at a fixed position in a UV reactor that can be compared with a FR model directly. Hence it has provided a connection between model calculation and real-time fluence determination. In this study, an on-site determination and monitoring method of fluence delivery for an operating UV reactor was developed. True FR detectors, a UV transmittance (UVT) meter, and a flow rate meter were used for fundamental measurements. The fluence distribution, as well as reduction equivalent fluence (REF), 10th percentile dose in the UV fluence distribution (F10), minimum fluence (Fmin), and mean fluence (Fmean) of a test reactor, was calculated in advance by the combined use of computational fluid dynamics and FR field modeling. A field test was carried out on the test reactor for disinfection of a secondary water supply. The estimated real-time REF, F10, Fmin, and Fmean decreased 73.6%, 71.4%, 69.6%, and 72.9%, respectively, during a 6-month period, which was attributable to lamp output attenuation and sleeve fouling. The results were analyzed with synchronous data from a previously developed triparameter UV monitoring system and water temperature sensor. This study allowed demonstration of an accurate method for on-site, real-time fluence determination which could be used to enhance the security and public confidence of UV-based water treatment processes.
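
    For illustration only, the summary metrics named above (Fmean, Fmin, F10) can be computed from any sampled fluence distribution; the sketch below uses a synthetic lognormal distribution rather than the paper's CFD/fluence-rate model.

      # Summary statistics of a simulated fluence distribution (values are illustrative).
      import numpy as np

      rng = np.random.default_rng(42)
      fluence = rng.lognormal(mean=3.6, sigma=0.35, size=10000)  # mJ/cm^2 per simulated particle track

      F_mean = fluence.mean()
      F_min = fluence.min()
      F_10 = np.percentile(fluence, 10)   # 10th percentile dose of the fluence distribution
      print(f"F_mean = {F_mean:.1f} mJ/cm^2, F_min = {F_min:.1f}, F_10 = {F_10:.1f}")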

  9. Reconfiguration in Robust Distributed Real-Time Systems Based on Global Checkpoints

    DTIC Science & Technology

    1991-12-01

    ... achieved by utilizing distributed systems in which a single application program executes on multiple processors connected to a network. The distributed nature of such systems makes it possible to ... resident at every node. However, the responsibility for execution of a particular function is assigned to only one node in this framework. This function ...

  10. Enabling Next-Generation Multicore Platforms in Embedded Applications

    DTIC Science & Technology

    2014-04-01

    ... mapping to sets 129-256) to the second page in memory, color 2 (sets 257-384) to the third page, and so on. Then, after the 32nd page, all 212 sets ... the Real-Time Nested Locking Protocol (RNLP) [56], a recently developed multiprocessor real-time locking protocol that optimally supports ... In general, the problems of optimally assigning tasks to processors and colors to tasks are both NP-hard ...

  11. Real-time monitoring and control of the plasma hearth process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Power, M.A.; Carney, K.P.; Peters, G.G.

    1996-05-01

    A distributed monitoring and control system is proposed for a plasma hearth, which will be used to decompose hazardous organic materials, encapsulate actinide waste in an obsidian-like slag, and reduce storage volume of actinide waste. The plasma hearth will be installed at ANL-West with the assistance of SAIC. Real-time monitoring of the off-gas system is accomplished using a Sun Workstation and embedded PCs. LabWindows/CVI software serves as the graphical user interface.

  12. Tracking Multiple People Online and in Real Time

    DTIC Science & Technology

    2015-12-21

    We cast the problem of tracking several people as a graph partitioning problem that takes the form of an NP-hard binary ...

  13. Early clinical experience utilizing scintillator with optical fiber (SOF) detector in clinical boron neutron capture therapy: its issues and solutions.

    PubMed

    Ishikawa, Masayori; Yamamoto, Tetsuya; Matsumura, Akira; Hiratsuka, Junichi; Miyatake, Shin-Ichi; Kato, Itsuro; Sakurai, Yoshinori; Kumada, Hiroaki; Shrestha, Shubhechha J; Ono, Koji

    2016-08-09

    Real-time measurement of thermal neutrons in the tumor region is essential for proper evaluation of the absorbed dose in boron neutron capture therapy (BNCT) treatment. The gold wire activation method has been routinely used to measure the neutron flux distribution in BNCT irradiation, but a real-time measurement using gold wire is not possible. To overcome this issue, the scintillator with optical fiber (SOF) detector has been developed. The purpose of this study is to demonstrate the feasibility of the SOF detector as a real-time thermal neutron monitor in clinical BNCT treatment and also to report issues in the use of SOF detectors in clinical practice and their solutions. Clinical measurements using the SOF detector were carried out in 16 BNCT clinical trial patients from December 2002 until end of 2006 at the Japanese Atomic Energy Agency (JAEA) and Kyoto University Research Reactor Institute (KURRI). The SOF detector worked effectively as a real-time thermal neutron monitor. The neutron fluence obtained by the gold wire activation method was found to differ from that obtained by the SOF detector. The neutron fluence obtained by the SOF detector was in better agreement with the expected fluence than with gold wire activation. The estimation error for the SOF detector was small in comparison to the gold wire measurement. In addition, real-time monitoring suggested that the neutron flux distribution and intensity at the region of interest (ROI) may vary due to the reactor condition, patient motion and dislocation of the SOF detector. Clinical measurements using the SOF detector to measure thermal neutron flux during BNCT confirmed that SOF detectors are effective as a real-time thermal neutron monitor. To minimize the estimation error due to the displacement of the SOF probe during treatment, a loop-type SOF probe was developed.

  14. Probing amplitude, phase, and polarization of microwave field distributions in real time

    NASA Astrophysics Data System (ADS)

    King, R. J.; Yen, Y. H.

    1981-11-01

    A coherent (homodyne) detection system is used to map field distributions in real time. A key feature is the use of an electrically modulated (10-kHz) dipole scatterer which is also mechanically spun (150 Hz) to create an amplitude- and phase-modulated backscattered field. The system is monostatic. The backscattered field is coherently detected by mixing with the CW reference. A phase-insensitive detector is used, comprised of two balanced mixers which are fed in quadrature phase by one of the RF inputs followed by a phase quadrature combiner. The resulting amplitude and phase of the 10-kHz output are proportional to the square of the RF field component along the instantaneous axis of the spinning dipole. Both are measured simultaneously and independently in real time. From these, the polarization properties can also be found, so the field is uniquely described. The system's application to scanning the E-field transmitted through lossy, nonhomogeneous and anisotropic media (e.g., wood) is demonstrated. Other applications besides nondestructive testing are microwave vector holography, near-field antenna measurements, and inverse scattering.
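
    A small sketch of the quadrature (I/Q) recovery step implied above: with in-phase and quadrature mixer outputs, amplitude and phase follow from a rectangular-to-polar conversion. Values are synthetic; in the actual system the detected 10-kHz signal is proportional to the square of the field component along the spinning dipole.

      # Recover amplitude and phase from in-phase (I) and quadrature (Q) mixer outputs.
      import numpy as np

      phi_true = np.deg2rad(35.0)
      amplitude_true = 2.0
      I = amplitude_true * np.cos(phi_true)   # in-phase mixer output
      Q = amplitude_true * np.sin(phi_true)   # quadrature mixer output

      amplitude = np.hypot(I, Q)
      phase_deg = np.degrees(np.arctan2(Q, I))
      print(amplitude, phase_deg)   # 2.0, 35.0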

  15. Performance of near real-time Global Satellite Mapping of Precipitation estimates during heavy precipitation events over northern China

    NASA Astrophysics Data System (ADS)

    Chen, Sheng; Hu, Junjun; Zhang, Asi; Min, Chao; Huang, Chaoying; Liang, Zhenqing

    2018-02-01

    This study assesses the performance of near real-time Global Satellite Mapping of Precipitation (GSMaP_NRT) estimates over northern China, including Beijing and its adjacent regions, during three heavy precipitation events from 21 July 2012 to 2 August 2012. Two additional near real-time satellite-based products, the Climate Prediction Center morphing method (CMORPH) and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS), were used for parallel comparison with GSMaP_NRT. Gridded gauge observations were used as reference for a performance evaluation with respect to spatiotemporal variability, probability distribution of precipitation rate and volume, and contingency scores. Overall, GSMaP_NRT generally captures the spatiotemporal variability of precipitation and shows promising potential in near real-time mapping applications. GSMaP_NRT misplaced storm centers in all three storms. GSMaP_NRT demonstrated higher skill scores in the first high-impact storm event on 21 July 2012. GSMaP_NRT passive-microwave-only precipitation can generally capture the pattern of heavy precipitation distributions over flat areas but failed to capture the intensive rain belt over complicated mountainous terrain. The results of this study can be useful to both algorithm developers and scientific end users, providing a better understanding of strengths and weaknesses to hydrologists using satellite precipitation products.
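
    The contingency scores mentioned above are standard categorical verification measures; as a reference sketch (with synthetic gauge and satellite fields, not the GSMaP/CMORPH/PERSIANN data), the probability of detection, false alarm ratio, and critical success index can be computed as follows.

      # Standard contingency scores for satellite vs. gauge rainfall at a rain/no-rain threshold.
      import numpy as np

      rng = np.random.default_rng(7)
      gauge = rng.gamma(shape=0.8, scale=4.0, size=(50, 50))      # "observed" rain field, mm/h
      satellite = gauge * rng.normal(1.0, 0.4, size=gauge.shape)  # "estimated" field with noise

      threshold = 0.5  # mm/h rain/no-rain threshold
      obs = gauge >= threshold
      est = satellite >= threshold

      hits = np.sum(obs & est)
      misses = np.sum(obs & ~est)
      false_alarms = np.sum(~obs & est)

      POD = hits / (hits + misses)                  # probability of detection
      FAR = false_alarms / (hits + false_alarms)    # false alarm ratio
      CSI = hits / (hits + misses + false_alarms)   # critical success index
      print(f"POD={POD:.2f}  FAR={FAR:.2f}  CSI={CSI:.2f}")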

  16. Automatic Realistic Real Time Stimulation/Recording in Weakly Electric Fish: Long Time Behavior Characterization in Freely Swimming Fish and Stimuli Discrimination

    PubMed Central

    Forlim, Caroline G.; Pinto, Reynaldo D.

    2014-01-01

    Weakly electric fish are unique model systems in neuroethology that allow experimentalists to non-invasively access spatio-temporal electric pulse patterns generated by the central nervous system, with roles in at least two complex and incompletely understood abilities: electrocommunication and electrolocation. Pulse-type electric fish alter their inter-pulse intervals (IPIs) according to different behavioral contexts such as aggression, hiding and mating. Nevertheless, only a few behavioral studies comparing the influence of different stimulus IPIs on the fish's electric response have been conducted. We developed an apparatus that allows real-time, automatic, realistic stimulation and simultaneous recording of electric pulses in freely moving Gymnotus carapo for several days, detecting and recording pulse timestamps independently of the fish's position. A stimulus fish was mimicked by a dipole electrode that reproduced the voltage time series of a real conspecific according to previously recorded timestamp sequences. We characterized fish behavior and electrocommunication under two conditions: stimulation by IPIs pre-recorded from other fish and by random IPIs. All stimulus pulses had the exact Gymnotus carapo waveform. All fish presented a surprisingly long transient exploratory behavior (more than 8 h) when exposed to a new environment in the absence of electrical stimuli. Further, we show that fish are able to discriminate between real and random stimulus distributions by changing several characteristics of their IPI distribution. PMID:24400122

  17. Solid-State Multi-Sensor Array System for Real Time Imaging of Magnetic Fields and Ferrous Objects

    NASA Astrophysics Data System (ADS)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2008-02-01

    In this paper the development of a solid-state sensor based system for real-time imaging of magnetic fields and ferrous objects is described. The system comprises 1089 magneto-inductive solid-state sensors arranged in a 2D matrix of 33×33 rows and columns, equally spaced in order to cover an approximate area of 300 by 300 mm. The sensor array is located within a large current-carrying coil. Data are sampled from the sensors by several DSP control units and finally streamed to a host computer via a USB 2.0 interface, and the image is generated and displayed at a rate of 20 frames per minute. The development of the instrumentation has been complemented by extensive numerical modeling of field distribution patterns using boundary element methods. The system was originally intended for deployment in the non-destructive evaluation (NDE) of reinforced concrete. Nevertheless, the system is not only capable of producing real-time, live video images of a metal target embedded within any opaque medium, it also allows the real-time visualization and determination of the magnetic field distribution emitted by either permanent magnets or current-carrying geometries. Although this system was initially developed for the NDE arena, it could also have potential applications in many other fields, including medicine, security, manufacturing, quality assurance and design involving magnetic fields.

  18. Renewal processes based on generalized Mittag-Leffler waiting times

    NASA Astrophysics Data System (ADS)

    Cahoy, Dexter O.; Polito, Federico

    2013-03-01

    The fractional Poisson process has recently attracted experts from several fields of study. Its natural generalization of the ordinary Poisson process made the model more appealing for real-world applications. In this paper, we generalized the standard and fractional Poisson processes through the waiting time distribution, and showed their relations to an integral operator with a generalized Mittag-Leffler function in the kernel. The waiting times of the proposed renewal processes have the generalized Mittag-Leffler and stretched-squashed Mittag-Leffler distributions. Note that the generalizations naturally provide greater flexibility in modeling real-life renewal processes. Algorithms to simulate sample paths and to estimate the model parameters are derived. Note also that these procedures are necessary to make these models more usable in practice. State probabilities and other qualitative or quantitative features of the models are also discussed.
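
    As a hedged illustration of simulating such waiting times (not necessarily the authors' own algorithm), the transformation popularized for fractional-Poisson Monte Carlo work generates Mittag-Leffler variates from two uniform random numbers; beta = 1 recovers exponential waiting times of the ordinary Poisson process.

      # Mittag-Leffler waiting times for a fractional Poisson-type renewal process
      # (Kozubowski-Rachev-type transformation; illustrative sketch only).
      import numpy as np

      def mittag_leffler_waiting_times(n, beta=0.8, scale=1.0, rng=None):
          rng = rng or np.random.default_rng()
          u, v = rng.uniform(size=n), rng.uniform(size=n)
          factor = (np.sin(beta * np.pi) / np.tan(beta * np.pi * v) - np.cos(beta * np.pi)) ** (1.0 / beta)
          return -scale * np.log(u) * factor

      w = mittag_leffler_waiting_times(100000, beta=0.8)
      arrival_times = np.cumsum(w)       # sample path of the renewal process
      print(np.median(w), np.mean(w))    # heavy tail: the sample mean keeps growing (theoretical mean diverges for beta < 1)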

  19. The use of social media and mobile device applications to disseminate natural hazard information by Natural Resources Canada

    NASA Astrophysics Data System (ADS)

    Bird, A. L.; Ulmi, M.; Majewski, C.; Hayek, K.; Edwards, W.; McCormack, D. A.; Cole, R. T.; de Paor, D. R.

    2011-12-01

    Public expectation of near-instant and reliable information is constantly rising. Such expectation puts increasing demands on organizations charged with providing the public with information on hazard events in near-real-time, while ensuring quality and accuracy of content. Natural Resources Canada (NRCan) has responded by augmenting existing methods for earthquake information distribution with new and varied methods for relaying natural hazards information. We profile tools now employed operationally by NRCan to distribute earthquake information to emergency measures organizations, news media and the public. Also presented will be an example of a smartphone application which includes several tools for natural hazard preparedness and response, supplemented with automated real-time alerts.

  20. Reprint of “Performance analysis of a model-sized superconducting DC transmission system based VSC-HVDC transmission technologies using RTDS”

    NASA Astrophysics Data System (ADS)

    Dinh, Minh-Chau; Ju, Chang-Hyeon; Kim, Sung-Kyu; Kim, Jin-Geun; Park, Minwon; Yu, In-Keun

    2013-01-01

    The combination of a high temperature superconducting DC power cable and a voltage source converter based HVDC (VSC-HVDC) creates a new option for transmitting power with multiple collection and distribution points for long-distance and bulk power transmission. It offers greater advantages compared with HVAC or conventional HVDC transmission systems, and it is well suited for the grid integration of renewable energy sources in existing distribution or transmission systems. For this reason, a superconducting DC transmission system based on HVDC transmission technologies is planned to be set up in the Jeju power system, Korea. Before applying this system to a real power system on Jeju Island, a system analysis should be performed through a real-time test. In this paper, a model-sized superconducting VSC-HVDC system, which consists of a small model-sized VSC-HVDC connected to a 2 m YBCO HTS DC model cable, is implemented. The authors performed a real-time simulation that incorporates the model-sized superconducting VSC-HVDC system into the simulated Jeju power system using a Real Time Digital Simulator (RTDS). The performance of the superconducting VSC-HVDC system was verified on the proposed test platform, and the results are discussed in detail.

  1. Performance analysis of a model-sized superconducting DC transmission system based VSC-HVDC transmission technologies using RTDS

    NASA Astrophysics Data System (ADS)

    Dinh, Minh-Chau; Ju, Chang-Hyeon; Kim, Sung-Kyu; Kim, Jin-Geun; Park, Minwon; Yu, In-Keun

    2012-08-01

    The combination of a high temperature superconducting DC power cable and a voltage source converter based HVDC (VSC-HVDC) creates a new option for transmitting power with multiple collection and distribution points for long-distance and bulk power transmission. It offers greater advantages compared with HVAC or conventional HVDC transmission systems, and it is well suited for the grid integration of renewable energy sources in existing distribution or transmission systems. For this reason, a superconducting DC transmission system based on HVDC transmission technologies is planned to be set up in the Jeju power system, Korea. Before applying this system to a real power system on Jeju Island, a system analysis should be performed through a real-time test. In this paper, a model-sized superconducting VSC-HVDC system, which consists of a small model-sized VSC-HVDC connected to a 2 m YBCO HTS DC model cable, is implemented. The authors performed a real-time simulation that incorporates the model-sized superconducting VSC-HVDC system into the simulated Jeju power system using a Real Time Digital Simulator (RTDS). The performance of the superconducting VSC-HVDC system was verified on the proposed test platform, and the results are discussed in detail.

  2. High Resolution Sensing and Control of Urban Water Networks

    NASA Astrophysics Data System (ADS)

    Bartos, M. D.; Wong, B. P.; Kerkez, B.

    2016-12-01

    We present a framework to enable high-resolution sensing, modeling, and control of urban watersheds using (i) a distributed sensor network based on low-cost cellular-enabled motes, (ii) hydraulic models powered by a cloud computing infrastructure, and (iii) automated actuation valves that allow infrastructure to be controlled in real time. This platform initiates two major advances. First, we achieve a high density of measurements in urban environments, with an anticipated 40+ sensors over each urban area of interest. In addition to new measurements, we also illustrate the design and evaluation of a "smart" control system for real-world hydraulic networks. This control system improves water quality and mitigates flooding by using real-time hydraulic models to adaptively control releases from retention basins. We evaluate the potential of this platform through two ongoing deployments: (i) a flood monitoring network in the Dallas-Fort Worth metropolitan area that detects and anticipates floods at the level of individual roadways, and (ii) a real-time hydraulic control system in the city of Ann Arbor, MI—soon to be one of the most densely instrumented urban watersheds in the United States. Through these applications, we demonstrate that distributed sensing and control of water infrastructure can improve flash flood predictions, emergency response, and stormwater contaminant mitigation.

  3. Sparsity-based image monitoring of crystal size distribution during crystallization

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Huo, Yan; Ma, Cai Y.; Wang, Xue Z.

    2017-07-01

    To facilitate monitoring crystal size distribution (CSD) during a crystallization process by using an in-situ imaging system, a sparsity-based image analysis method is proposed for real-time implementation. To cope with image degradation arising from in-situ measurement subject to particle motion, solution turbulence, and uneven illumination background in the crystallizer, sparse representation of a real-time captured crystal image is developed based on using an in-situ image dictionary established in advance, such that the noise components in the captured image can be efficiently removed. Subsequently, the edges of a crystal shape in a captured image are determined in terms of the salience information defined from the denoised crystal images. These edges are used to derive a blur kernel for reconstruction of a denoised image. A non-blind deconvolution algorithm is given for the real-time reconstruction. Consequently, image segmentation can be easily performed for evaluation of CSD. The crystal image dictionary and blur kernels are timely updated in terms of the imaging conditions to improve the restoration efficiency. An experimental study on the cooling crystallization of α-type L-glutamic acid (LGA) is shown to demonstrate the effectiveness and merit of the proposed method.

  4. Real-Time X-Ray Transmission Microscopy of Solidifying Al-In Alloys

    NASA Technical Reports Server (NTRS)

    Curreri, Peter A.; Kaukler, William F.

    1997-01-01

    Real-time observations of transparent analog materials have provided insight, yet the results of these observations are not necessarily representative of opaque metallic systems. In order to study the detailed dynamics of the solidification process, we develop the technologies needed for real-time X-ray microscopy of solidifying metallic systems, which has not previously been feasible with the necessary resolution, speed, and contrast. In initial studies of Al-In monotectic alloys unidirectionally solidified in an X-ray transparent furnace, in situ records of the evolution of interface morphologies, interfacial solute accumulation, and formation of the monotectic droplets were obtained for the first time. A radiomicrograph of Al-30In grown during aircraft parabolic maneuvers is presented, showing the volumetric phase distribution in this specimen. The benefits of using X-ray microscopy for postsolidification metallography include ease of specimen preparation, increased sensitivity, and three-dimensional analysis of phase distribution. Imaging of the solute boundary layer revealed that the isoconcentration lines are not parallel (as is often assumed) to the growth interface. Striations in the solidified crystal did not accurately decorate the interface position and shape. The monotectic composition alloy under some conditions grew in an uncoupled manner.

  5. Comprehensive seismic monitoring of the Cascadia megathrust with real-time GPS

    NASA Astrophysics Data System (ADS)

    Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C. W.; Webb, F.

    2013-12-01

    We have developed a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone based on 1- and 5-second point position estimates computed within the ITRF08 reference frame. A Kalman filter stream editor that uses a geometry-free combination of phase and range observables to speed convergence while also producing independent estimation of carrier phase biases and ionosphere delay pre-cleans raw satellite measurements. These are then analyzed with GIPSY-OASIS using satellite clock and orbit corrections streamed continuously from the International GNSS Service (IGS) and the German Aerospace Center (DLR). The resulting RMS position scatter is less than 3 cm, and typical latencies are under 2 seconds. Currently 31 coastal Washington, Oregon, and northern California stations from the combined PANGA and PBO networks are analyzed. We are now ramping up to include all of the remaining 400+ stations currently operating throughout the Cascadia subduction zone, all of which are high-rate and telemetered in real-time to CWU. These receivers span the M9 megathrust, M7 crustal faults beneath population centers, several active Cascades volcanoes, and a host of other hazard sources. To use the point position streams for seismic monitoring, we have developed an inter-process client communication package that captures, buffers and re-broadcasts real-time positions and covariances to a variety of seismic estimation routines running on distributed hardware. An aggregator ingests, re-streams and can rebroadcast up to 24 hours of point-positions and resultant seismic estimates derived from the point positions to application clients distributed across the web. A suite of seismic monitoring applications has also been written, which includes position time series analysis, instantaneous displacement vectors, and peak ground displacement contouring and mapping. We have also implemented a continuous estimation of finite-fault slip along the Cascadia megathrust using a NIF-type approach. This currently operates on the terrestrial GPS data streams, but could readily be expanded to use real-time offshore geodetic measurements as well. The continuous slip distributions are used in turn to compute tsunami excitation and, when convolved with pre-computed hydrodynamic Green functions calculated using the COMCOT tsunami modeling software, run-up estimates for the entire Cascadia coastal margin. Finally, a suite of data visualization tools has been written to allow interaction with the real-time position streams and seismic estimates based on them, including time series plotting, instantaneous offset vectors, peak ground deformation contouring, finite-fault inversions, and tsunami run-up. This suite is currently bundled within a single client written in JAVA, called 'GPS Cockpit,' which is available for download.
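
    One of the simpler downstream products mentioned above, peak ground displacement, can be sketched directly from a position stream; the example below uses synthetic 1 Hz east-north-up positions and a hypothetical pre-event window, not the PANGA/PBO data.

      # Peak ground displacement (PGD) from a synthetic real-time GPS position stream.
      import numpy as np

      rng = np.random.default_rng(3)
      t = np.arange(600)                                   # seconds, 1 Hz positions
      noise = rng.normal(0, 0.02, size=(600, 3))           # ~2 cm position scatter
      signal = np.zeros((600, 3))
      signal[300:, 0] = 0.35 * (1 - np.exp(-(t[300:] - 300) / 30.0))   # 35 cm coseismic east offset
      enu = noise + signal

      pre_event = enu[:300].mean(axis=0)                   # reference position before the event
      disp = enu - pre_event
      pgd = np.max(np.linalg.norm(disp, axis=1))
      print(f"PGD = {pgd:.3f} m")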

  6. Optimization of active distribution networks: Design and analysis of significative case studies for enabling control actions of real infrastructure

    NASA Astrophysics Data System (ADS)

    Moneta, Diana; Mora, Paolo; Viganò, Giacomo; Alimonti, Gianluca

    2014-12-01

    The diffusion of Distributed Generation (DG) based on Renewable Energy Sources (RES) requires new strategies to ensure reliable and economic operation of distribution networks and to support the diffusion of DG itself. An advanced algorithm (DISCoVER - DIStribution Company VoltagE Regulator) is being developed to optimize the operation of active networks by means of advanced voltage control based on several regulations. Starting from forecasted load and generation, real on-field measurements, technical constraints, and costs for each resource, the algorithm generates for each time period a set of commands for controllable resources that guarantees achievement of the technical goals while minimizing the overall cost. Before the controller is integrated into the telecontrol system of real networks, and in order to validate the proper behaviour of the algorithm and to identify possible critical conditions, a complete simulation phase has started. The first step concerns the definition of a wide range of "case studies", i.e., combinations of network topology, technical constraints and targets, load and generation profiles, and "costs" of resources that define a valid context in which to test the algorithm, with particular focus on battery and RES management. First results from simulation activity on test networks (based on real MV grids) and actual battery characteristics are given, together with prospective performance in real-case applications.

  7. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  8. A real-time data-acquisition and analysis system with distributed UNIX workstations

    NASA Astrophysics Data System (ADS)

    Yamashita, H.; Miyamoto, K.; Maruyama, K.; Hirosawa, H.; Nakayoshi, K.; Emura, T.; Sumi, Y.

    1996-02-01

    A compact data-acquisition system using three RISC/UNIX™ workstations (SUN™/SPARCstation™) with real-time capabilities of monitoring and analysis has been developed for the study of photonuclear reactions with the large-acceptance spectrometer TAGX. One workstation acquires data from memory modules in the front-end electronics (CAMAC and TKO) with a maximum speed of 300 Kbytes/s, where data size times instantaneous rate is 1 Kbyte × 300 Hz. Another workstation, which has real-time capability for run monitoring, gets the data with a buffer manager called NOVA. The third workstation analyzes the data and reconstructs the event. In addition to a general hardware and software description, priority settings and run control by shell scripts are described. This system has recently been used successfully in a two month long experiment.

  9. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    DOE PAGES

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; ...

    2017-04-20

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  10. Region Templates: Data Representation and Management for High-Throughput Image Analysis

    PubMed Central

    Pan, Tony; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Klasky, Scott; Saltz, Joel

    2015-01-01

    We introduce a region template abstraction and framework for the efficient storage, management and processing of common data types in analysis of large datasets of high resolution images on clusters of hybrid computing nodes. The region template abstraction provides a generic container template for common data structures, such as points, arrays, regions, and object sets, within a spatial and temporal bounding box. It allows for different data management strategies and I/O implementations, while providing a homogeneous, unified interface to applications for data storage and retrieval. A region template application is represented as a hierarchical dataflow in which each computing stage may be represented as another dataflow of finer-grain tasks. The execution of the application is coordinated by a runtime system that implements optimizations for hybrid machines, including performance-aware scheduling for maximizing the utilization of computing devices and techniques to reduce the impact of data transfers between CPUs and GPUs. An experimental evaluation on a state-of-the-art hybrid cluster using a microscopy imaging application shows that the abstraction adds negligible overhead (about 3%) and achieves good scalability and high data transfer rates. Optimizations in a high speed disk based storage implementation of the abstraction to support asynchronous data transfers and computation result in an application performance gain of about 1.13×. Finally, a processing rate of 11,730 4K×4K tiles per minute was achieved for the microscopy imaging application on a cluster with 100 nodes (300 GPUs and 1,200 CPU cores). This computation rate enables studies with very large datasets. PMID:26139953
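
    A toy sketch of the container idea described above: a bounding box plus named collections of data elements. The class and field names are hypothetical Python stand-ins, not the authors' C++/runtime API.

      # Hypothetical "region template"-style container: typed data elements within a bounding box.
      from dataclasses import dataclass, field
      from typing import Any, Dict, List, Tuple

      @dataclass
      class BoundingBox:
          x: Tuple[int, int]
          y: Tuple[int, int]
          t: Tuple[int, int] = (0, 0)

      @dataclass
      class RegionTemplate:
          bbox: BoundingBox
          elements: Dict[str, List[Any]] = field(default_factory=dict)  # e.g. "points", "arrays", "objects"

          def insert(self, kind: str, item: Any) -> None:
              self.elements.setdefault(kind, []).append(item)

          def get(self, kind: str) -> List[Any]:
              return self.elements.get(kind, [])

      rt = RegionTemplate(BoundingBox(x=(0, 4096), y=(0, 4096)))
      rt.insert("points", (120, 233))
      rt.insert("arrays", [[0, 1], [1, 0]])
      print(rt.bbox, rt.get("points"))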

  11. Near real-time finite fault source inversion for moderate-large earthquakes in Taiwan using teleseismic P waveform

    NASA Astrophysics Data System (ADS)

    Wong, T. P.; Lee, S. J.; Gung, Y.

    2017-12-01

    Taiwan is located at one of the most active tectonic regions in the world. Rapid estimation of the spatial slip distribution of moderate-large earthquake (Mw6.0) is important for emergency response. It is necessary to have a real-time system to provide the report immediately after earthquake happen. The earthquake activities in the vicinity of Taiwan can be monitored by Real-Time Moment Tensor Monitoring System (RMT) which provides the rapid focal mechanism and source parameters. In this study, we follow up the RMT system to develop a near real-time finite fault source inversion system for the moderate-large earthquakes occurred in Taiwan. The system will be triggered by the RMT System when an Mw6.0 is detected. According to RMT report, our system automatically determines the fault dimension, record length, and rise time. We adopted one segment fault plane with variable rake angle. The generalized ray theory was applied to calculate the Green's function for each subfault. The primary objective of the system is to provide the first order image of coseismic slip pattern and identify the centroid location on the fault plane. The performance of this system had been demonstrated by 23 big earthquakes occurred in Taiwan successfully. The results show excellent data fits and consistent with the solutions from other studies. The preliminary spatial slip distribution will be provided within 25 minutes after an earthquake occurred.

  12. A Comparison Between Publish-and-Subscribe and Client-Server Models in Distributed Control System Networks

    NASA Technical Reports Server (NTRS)

    Boulanger, Richard P., Jr.; Kwauk, Xian-Min; Stagnaro, Mike; Kliss, Mark (Technical Monitor)

    1998-01-01

    The BIO-Plex control system requires real-time, flexible, and reliable data delivery. There is no simple "off-the-shelf" solution. However, several commercial packages will be evaluated using a testbed at ARC for publish-and-subscribe and client-server communication architectures. A point-to-point communication architecture is not suitable for the real-time BIO-Plex control system. A client-server architecture provides more flexible data delivery, but it does not provide direct communication among nodes on the network. A publish-and-subscribe implementation allows direct information exchange among nodes on the net, providing the best time-critical communication. In this work, Network Data Delivery Service (NDDS) from Real-Time Innovations, Inc. will be used to implement the publish-and-subscribe architecture. It offers update guarantees and deadlines for real-time data delivery. BridgeVIEW, a data acquisition and control software package from National Instruments, will be tested for the client-server arrangement. A microwave incinerator located at ARC will be instrumented with a fieldbus network of control devices. BridgeVIEW will be used to implement an enterprise server. An enterprise network consisting of several nodes at ARC and a WAN connecting ARC and RISC will then be set up to evaluate the proposed control system architectures. Several network configurations will be evaluated for fault tolerance, quality of service, reliability and efficiency. Data acquired from these network evaluation tests will then be used to determine preliminary design criteria for the BIO-Plex distributed control system.
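
    To make the publish-and-subscribe pattern concrete, the sketch below implements a minimal in-process broker in Python; it illustrates only the interaction pattern being compared, not NDDS, BridgeVIEW, or any real-time delivery guarantees.

      # Minimal in-process publish-and-subscribe broker (illustrative only).
      from collections import defaultdict
      from typing import Callable, Dict, List

      class Broker:
          def __init__(self) -> None:
              self._subs: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

          def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
              self._subs[topic].append(callback)

          def publish(self, topic: str, message: dict) -> None:
              for cb in self._subs[topic]:   # every subscriber receives the update directly
                  cb(message)

      bus = Broker()
      bus.subscribe("incinerator/temperature", lambda m: print("controller saw", m))
      bus.subscribe("incinerator/temperature", lambda m: print("logger saw", m))
      bus.publish("incinerator/temperature", {"value_degC": 412.5, "t": 0.0})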

  13. Development of quantitative real-time PCR for detection and enumeration of Enterobacteriaceae.

    PubMed

    Takahashi, Hajime; Saito, Rumi; Miya, Satoko; Tanaka, Yuichiro; Miyamura, Natsumi; Kuda, Takashi; Kimura, Bon

    2017-04-04

    The family Enterobacteriaceae, members of which are widely distributed in the environment, includes many important human pathogens. In this study, a rapid real-time PCR method targeting rplP, coding for the L16 protein, a component of the ribosome large subunit, was developed for enumerating Enterobacteriaceae strains, and its efficiency was evaluated using naturally contaminated food products. The rplP-targeted real-time PCR amplified Enterobacteriaceae species with Ct values of 14.0-22.8, whereas the Ct values for non-Enterobacteriaceae species were >30, indicating the specificity of this method for the Enterobacteriaceae. Using a calibration curve of Ct = -3.025 (log CFU/g) + 37.35, which was calculated from individual plots of the cell numbers at different concentrations of 5 Enterobacteriaceae species, the rplP-targeted real-time PCR was applied to 51 food samples. A difference of <1 log between the real-time PCR and culture methods was obtained in a majority of the food samples (81.8%), with good correlation (r² = 0.8285). This study demonstrated that the rplP-targeted real-time PCR method could detect and enumerate Enterobacteriaceae species in foods rapidly and accurately, and therefore, it can be used for the microbiological risk analysis of foods. Copyright © 2017 Elsevier B.V. All rights reserved.
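
    Applying the reported calibration curve is a one-line inversion; the sketch below converts hypothetical Ct values to estimated cell densities using the published slope and intercept.

      # Invert the calibration curve Ct = -3.025*log10(CFU/g) + 37.35 to estimate cell density.
      def cfu_per_gram(ct, slope=-3.025, intercept=37.35):
          log_cfu = (ct - intercept) / slope
          return 10 ** log_cfu

      for ct in (18.0, 22.8, 30.0):   # hypothetical measured Ct values
          print(ct, f"{cfu_per_gram(ct):.2e} CFU/g")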

  14. Fully distributed monitoring architecture supporting multiple trackees and trackers in indoor mobile asset management application.

    PubMed

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-03-21

    A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes an architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform in real time. In order to verify the suggested platform, scalability performance according to increases in the number of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture was also evaluated.

  15. TESTING AND VERIFICATION OF REAL-TIME WATER QUALITY MONITORING SENSORS IN A DISTRIBUTION SYSTEM AGAINST INTRODUCED CONTAMINATION

    EPA Science Inventory

    Drinking water distribution systems reach the majority of American homes, business and civic areas, and are therefore an attractive target for terrorist attack via direct contamination, or backflow events. Instrumental monitoring of such systems may be used to signal the prese...

  16. Second-Order Chlorine Decay and Trihalomethanes Formation in a Pilot-Scale Water Distribution Systems

    EPA Science Inventory

    It is well known that model-building of chlorine decay in real water distribution systems is difficult because chlorine decay is influenced by many factors (e.g., bulk water demand, pipe-wall demand, piping material, flow velocity, and residence time). In this paper, experiments ...

  17. The modelling of carbon-based supercapacitors: Distributions of time constants and Pascal Equivalent Circuits

    NASA Astrophysics Data System (ADS)

    Fletcher, Stephen; Kirkpatrick, Iain; Dring, Roderick; Puttock, Robert; Thring, Rob; Howroyd, Simon

    2017-03-01

    Supercapacitors are an emerging technology with applications in pulse power, motive power, and energy storage. However, their carbon electrodes show a variety of non-ideal behaviours that have so far eluded explanation. These include Voltage Decay after charging, Voltage Rebound after discharging, and Dispersed Kinetics at long times. In the present work, we establish that a vertical ladder network of RC components can reproduce all these puzzling phenomena. Both software and hardware realizations of the network are described. In general, porous carbon electrodes contain random distributions of resistance R and capacitance C, with a wider spread of log R values than log C values. To understand what this implies, a simplified model is developed in which log R is treated as a Gaussian random variable while log C is treated as a constant. From this model, a new family of equivalent circuits is developed in which the continuous distribution of log R values is replaced by a discrete set of log R values drawn from a geometric series. We call these Pascal Equivalent Circuits. Their behaviour is shown to resemble closely that of real supercapacitors. The results confirm that distributions of RC time constants dominate the behaviour of real supercapacitors.
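
    One common reading of such a vertical ladder is a set of series R-C branches connected in parallel; the sketch below (Python/NumPy) builds branch resistances from a geometric series and evaluates the resulting complex impedance. Component values are illustrative and not fitted to any device described above.

    import numpy as np

    def ladder_impedance(freq_hz, r_values, c_each):
        """Complex impedance of parallel branches, each a resistor in series with a capacitor."""
        omega = 2.0 * np.pi * freq_hz[:, None]
        z_branch = r_values[None, :] + 1.0 / (1j * omega * c_each)
        return 1.0 / np.sum(1.0 / z_branch, axis=1)

    if __name__ == "__main__":
        r_values = 0.1 * 2.0 ** np.arange(8)   # geometric series of resistances (ohms)
        c_each = 10.0                           # identical capacitance per branch (farads)
        freqs = np.logspace(-4, 1, 6)           # hertz
        for f, z in zip(freqs, ladder_impedance(freqs, r_values, c_each)):
            print(f"f={f:9.4f} Hz  |Z|={abs(z):8.4f} ohm  phase={np.degrees(np.angle(z)):6.1f} deg")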

  18. Evaluation of hybrid inverse planning and optimization (HIPO) algorithm for optimization in real-time, high-dose-rate (HDR) brachytherapy for prostate.

    PubMed

    Pokharel, Shyam; Rana, Suresh; Blikenstaff, Joseph; Sadeghi, Amir; Prestidge, Bradley

    2013-07-08

    The purpose of this study is to investigate the effectiveness of the HIPO planning and optimization algorithm for real-time prostate HDR brachytherapy. This study consists of 20 patients who underwent ultrasound-based real-time HDR brachytherapy of the prostate using the treatment planning system called Oncentra Prostate (SWIFT version 3.0). The treatment plans for all patients were optimized using inverse dose-volume histogram-based optimization followed by graphical optimization (GRO) in real time. GRO is manual manipulation of isodose lines slice by slice, and the quality of the plan depends heavily on planner expertise and experience. The data for all patients were retrieved later, and treatment plans were created and optimized using the HIPO algorithm with the same set of dose constraints, number of catheters, and set of contours as in the real-time optimization. The HIPO algorithm is a hybrid because it combines both stochastic and deterministic algorithms. The stochastic algorithm, called simulated annealing, searches the optimal catheter distributions for a given set of dose objectives. The deterministic algorithm, called dose-volume histogram-based optimization (DVHO), optimizes the three-dimensional dose distribution quickly by moving straight downhill once it is in the advantageous region of the search space given by the stochastic algorithm. The PTV receiving 100% of the prescription dose (V100) was 97.56% and 95.38% with GRO and HIPO, respectively. The mean dose (D(mean)) and minimum dose to 10% volume (D10) for the urethra, rectum, and bladder were all statistically lower with HIPO compared to GRO using Student's paired t-test at the 5% significance level. HIPO can provide treatment plans with comparable target coverage to that of GRO with a reduction in dose to the critical structures.
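
    The hybrid stochastic/deterministic strategy can be conveyed with a toy sketch (Python): a simulated-annealing search followed by a deterministic downhill refinement. The quadratic objective and step sizes are placeholders, not the actual HIPO dose-volume objectives.

    import math
    import random

    def toy_objective(x):
        # Placeholder objective; a real planner would score a dose-volume histogram.
        return sum((xi - 1.0) ** 2 for xi in x)

    def simulated_annealing(x0, steps=2000, t0=1.0, seed=0):
        rng = random.Random(seed)
        x, best = list(x0), list(x0)
        for k in range(steps):
            temp = t0 * (1.0 - k / steps) + 1e-6
            cand = [xi + rng.gauss(0.0, 0.3) for xi in x]
            delta = toy_objective(cand) - toy_objective(x)
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                x = cand
                if toy_objective(x) < toy_objective(best):
                    best = list(x)
        return best

    def downhill_refine(x, step=0.01, iters=200):
        # Deterministic coordinate descent once the stochastic stage has found a good region.
        x = list(x)
        for _ in range(iters):
            for i in range(len(x)):
                for d in (+step, -step):
                    if toy_objective(x[:i] + [x[i] + d] + x[i + 1:]) < toy_objective(x):
                        x[i] += d
        return x

    if __name__ == "__main__":
        coarse = simulated_annealing([5.0, -3.0, 0.0])
        refined = downhill_refine(coarse)
        print("coarse:", [round(v, 3) for v in coarse], "refined:", [round(v, 3) for v in refined])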

  19. A GPS-based Real-time Road Traffic Monitoring System

    NASA Astrophysics Data System (ADS)

    Tanti, Kamal Kumar

    In recent years, monitoring systems have become increasingly automatic, reliably interconnected, distributed, and autonomous in operation. Specifically, the measurement, logging, data processing, and interpretation activities may be carried out by separate units at different locations in near real time. The recent evolution of mobile communication devices and communication technologies has fostered a growing interest in GIS- and GPS-based location-aware systems and services. This paper describes a real-time road traffic monitoring system based on integrated mobile field devices (GPS/GSM/IOs) working in tandem with advanced GIS-based application software, providing on-the-fly authentication for real-time monitoring and security enhancement. The described system is developed as a fully automated, continuous, real-time monitoring system that employs GPS sensors; Ethernet and/or serial-port communication is used to transfer data between GPS receivers at target points and a central processing computer. The data can be processed locally or remotely, depending on client requirements. Due to the modular architecture of the system, other sensor types may be supported with minimal effort. Data and measurements from the distributed network are transmitted via cellular SIM cards to a Control Unit, which provides for post-processing and network management. The Control Unit may be remotely accessed via an Internet connection. The new system will not only provide more consistent data about road traffic conditions but will also provide methods for integrating with other Intelligent Transportation Systems (ITS). GSM technology is used for communication between the mobile devices and the central monitoring service. The resulting system is characterized by autonomy, reliability, and a high degree of automation.

  20. Real-time, in situ monitoring of nanoporation using electric field-induced acoustic signal

    NASA Astrophysics Data System (ADS)

    Zarafshani, Ali; Faiz, Rowzat; Samant, Pratik; Zheng, Bin; Xiang, Liangzhong

    2018-02-01

    The use of nanoporation in reversible or irreversible electroporation, e.g. cancer ablation, is rapidly growing. This technique uses an ultra-short and intense electric pulse to increase the membrane permeability, allowing non-permeant drugs and genes access to the cytosol via nanopores in the plasma membrane. It is vital to create a real-time in situ monitoring technique to characterize this process and support successful electroporation procedures for cancer treatment. All currently suggested monitoring techniques for electroporation address pre- and post-stimulation exposure, with no real-time monitoring during electric field exposure. This study was aimed at developing an innovative technology for real-time in situ monitoring of electroporation based on the acoustic emissions induced by the cell exposure. The acoustic signals are the result of the electric field, which itself can be used in real time to characterize the process of electroporation. We varied the electric field distribution by varying the electric pulse duration from 1 μs to 100 ns and the voltage from 0 to 1.2 kV to energize two electrodes in a bi-polar set-up. An ultrasound transducer was used for collecting acoustic signals around the subject under test. We determined the relative location of the acoustic signals by varying the position of the electrodes relative to the transducer and varying the electric field distribution between the electrodes to capture a variety of acoustic signals. Therefore, the electric field that is utilized in the nanoporation technique also produces a series of corresponding acoustic signals. This offers a novel imaging technique for the real-time in situ monitoring of electroporation that may directly improve treatment efficiency.

  1. Transactive control of fast-acting demand response based on thermostatic loads in real-time retail electricity markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behboodi, Sahand; Chassin, David P.; Djilali, Ned

    Coordinated operation of distributed thermostatic loads such as heat pumps and air conditioners can reduce energy costs and prevent grid congestion, while maintaining room temperatures in the comfort range set by consumers. This paper furthers efforts towards enabling thermostatically controlled loads (TCLs) to participate in real-time retail electricity markets under a transactive control paradigm. An agent-based approach is used to develop an effective and low-complexity demand response control scheme for TCLs. The proposed scheme adjusts aggregated thermostatic loads according to real-time grid conditions under both heating and cooling modes. Here, a case study is presented showing that the method reduces consumer electricity costs by over 10% compared to uncoordinated operation.

  2. A SiPM based real time dosimeter for radiotherapic beams

    NASA Astrophysics Data System (ADS)

    Berra, A.; Conti, V.; Lietti, D.; Milan, L.; Novati, C.; Ostinelli, A.; Prest, M.; Romanó, C.; Vallazza, E.

    2015-02-01

    This paper describes the development of a scintillator dosimeter prototype for radiotherapy applications based on plastic scintillating fibers read out by Silicon PhotoMultipliers. The dosimeter, whose probes are water equivalent, could be used for quality control measurements, beam characterization, and in vivo dosimetry, allowing a real-time measurement of the dose spatial distribution. This paper describes the preliminary percentage depth dose scans performed with clinical 6 and 18 MV photon beams, comparing the results with a reference curve. The measurements were performed using a Varian Clinac iX linear accelerator at the Radiotherapy Department of the St. Anna Hospital in Como (IT). The prototype has given promising results, allowing real-time measurements of relative dose without applying any correction factors.

  3. Transactive control of fast-acting demand response based on thermostatic loads in real-time retail electricity markets

    DOE PAGES

    Behboodi, Sahand; Chassin, David P.; Djilali, Ned; ...

    2017-07-29

    Coordinated operation of distributed thermostatic loads such as heat pumps and air conditioners can reduce energy costs and prevent grid congestion, while maintaining room temperatures in the comfort range set by consumers. This paper furthers efforts towards enabling thermostatically controlled loads (TCLs) to participate in real-time retail electricity markets under a transactive control paradigm. An agent-based approach is used to develop an effective and low-complexity demand response control scheme for TCLs. The proposed scheme adjusts aggregated thermostatic loads according to real-time grid conditions under both heating and cooling modes. Here, a case study is presented showing that the method reduces consumer electricity costs by over 10% compared to uncoordinated operation.

  4. Real-Time Imaging with Frequency Scanning Array Antenna for Industrial Inspection Applications at W band

    NASA Astrophysics Data System (ADS)

    Larumbe, Belen; Laviada, Jaime; Ibáñez-Loinaz, Asier; Teniente, Jorge

    2018-01-01

    A real-time imaging system based on a frequency scanning antenna for conveyor belt setups is presented in this paper. The frequency scanning antenna together with an inexpensive parabolic reflector operates at the W band enabling the detection of details with dimensions in the order of 2 mm. In addition, a low level of sidelobes is achieved by optimizing unequal dividers to window the power distribution for sidelobe reduction. Furthermore, the quality of the images is enhanced by the radiation pattern properties. The performance of the system is validated by showing simulation as well as experimental results obtained in real time, proving the feasibility of these kinds of frequency scanning antennas for cost-effective imaging applications.

  5. Structural health monitoring of cylindrical bodies under impulsive hydrodynamic loading by distributed FBG strain measurements

    NASA Astrophysics Data System (ADS)

    Fanelli, Pierluigi; Biscarini, Chiara; Jannelli, Elio; Ubertini, Filippo; Ubertini, Stefano

    2017-02-01

    Various mechanical, ocean, aerospace and civil engineering problems involve solid bodies impacting the water surface and often result in complex coupled dynamics, characterized by impulsive loading conditions, high amplitude vibrations and large local deformations. Monitoring in such problems for purposes such as remaining fatigue life estimation and real time damage detection is a technical and scientific challenge of primary concern in this context. Open issues include the need for developing distributed sensing systems able to operate at very high acquisition frequencies, to be utilized to study rapidly varying strain fields, with high resolution and very low noise, while scientific challenges mostly relate to the definition of appropriate signal processing and modeling tools enabling the extraction of useful information from distributed sensing signals. Building on previous work by some of the authors, we propose an enhanced method for real time deformed shape reconstruction using distributed FBG strain measurements in curved bodies subjected to impulsive loading and we establish a new framework for applying this method for structural health monitoring purposes, as the main focus of the work. Experiments are carried out on a cylinder impacting the water at various speeds, proving improved performance in displacement reconstruction of the enhanced method compared to its previous version. A numerical study is then carried out considering the same physical problem with different delamination damages affecting the body. The potential for detecting, localizing and quantifying this damage using the reconstruction algorithm is thoroughly investigated. Overall, the results presented in the paper show the potential of distributed FBG strain measurements for real time structural health monitoring of curved bodies under impulsive hydrodynamic loading, defining damage sensitive features in terms of strain or displacement reconstruction errors at selected locations along the structure.

  6. Near real-time imaging of molasses injections using time-lapse electrical geophysics at the Brandywine DRMO, Brandywine, Maryland

    NASA Astrophysics Data System (ADS)

    Versteeg, R. J.; Johnson, T.; Major, B.; Day-Lewis, F. D.; Lane, J. W.

    2010-12-01

    Enhanced bioremediation, which involves introduction of amendments to promote biodegradation, increasingly is used to accelerate cleanup of recalcitrant compounds and has been identified as the preferred remedial treatment at many contaminated sites. Although blind introduction of amendments can lead to sub-optimal or ineffective remediation, the distribution of amendment throughout the treatment zone is difficult to measure using conventional sampling. Because amendments and their degradation products commonly have electrical properties that differ from those of ambient soil, time-lapse electrical geophysical monitoring has the potential to verify amendment emplacement and distribution. In order for geophysical monitoring to be useful, however, results of the injection ideally should be accessible in near real time. In August 2010, we demonstrated the feasibility of near real-time, autonomous electrical geophysical monitoring of amendment injections at the former Defense Reutilization and Marketing Office (DRMO) in Brandywine, Maryland. Two injections of about 1000 gallons each of molasses, a widely used amendment for enhanced bioremediation, were monitored using measurements taken with borehole and surface electrodes. During the injections, multi-channel resistance data were recorded; data were transmitted to a server and processed using a parallel resistivity inversion code; and results in the form of time-lapse imagery subsequently were posted to a website. This process occurred automatically without human intervention. The resulting time-lapse imagery clearly showed the evolution of the molasses plume. The delay between measurements and online delivery of images was between 45 and 60 minutes, thus providing actionable information that could support decisions about field procedures and a check on whether amendment reached target zones. This experiment demonstrates the feasibility of using electrical imaging as a monitoring tool both during amendment emplacement and post-injection to track amendment distribution, geochemical breakdown, and other remedial effects.

  7. End-User Applications of Real-Time Earthquake Information in Europe

    NASA Astrophysics Data System (ADS)

    Cua, G. B.; Gasparini, P.; Giardini, D.; Zschau, J.; Filangieri, A. R.; Reakt Wp7 Team

    2011-12-01

    The primary objective of European FP7 project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction) is to improve the efficiency of real-time earthquake risk mitigation methods and their capability of protecting structures, infrastructures, and populations. REAKT aims to address the issues of real-time earthquake hazard and response from end-to-end, with efforts directed along the full spectrum of methodology development in earthquake forecasting, earthquake early warning, and real-time vulnerability systems, through optimal decision-making, and engagement and cooperation of scientists and end users for the establishment of best practices for use of real-time information. Twelve strategic test cases/end users throughout Europe have been selected. This diverse group of applications/end users includes civil protection authorities, railway systems, hospitals, schools, industrial complexes, nuclear plants, lifeline systems, national seismic networks, and critical structures. The scale of target applications covers a wide range, from two school complexes in Naples, to individual critical structures, such as the Rion Antirion bridge in Patras, and the Fatih Sultan Mehmet bridge in Istanbul, to large complexes, such as the SINES industrial complex in Portugal and the Thessaloniki port area, to distributed lifeline and transportation networks and nuclear plants. Some end-users are interested in in-depth feasibility studies for use of real-time information and development of rapid response plans, while others intend to install real-time instrumentation and develop customized automated control systems. From the onset, REAKT scientists and end-users will work together on concept development and initial implementation efforts using the data products and decision-making methodologies developed with the goal of improving end-user risk mitigation. The aim of this scientific/end-user partnership is to ensure that scientific efforts are applicable to operational, real-world problems.

  8. Modeling solvation effects in real-space and real-time within density functional approaches

    NASA Astrophysics Data System (ADS)

    Delgado, Alain; Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea

    2015-10-01

    The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary elements method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the Octopus code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.

  9. Modeling solvation effects in real-space and real-time within density functional approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delgado, Alain; Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, Calle 30 # 502, 11300 La Habana; Corni, Stefano

    2015-10-14

    The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary elements method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the OCTOPUS code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.

  10. Three dimensional stress vector sensor array and method therefor

    DOEpatents

    Pfeifer, Kent Bryant; Rudnick, Thomas Jeffery

    2005-07-05

    A sensor array is configured based upon capacitive sensor techniques to measure stresses at various positions in a sheet simultaneously and allow a stress map to be obtained in near real-time. The device consists of single capacitive elements applied in a one or two dimensional array to measure the distribution of stresses across a mat surface in real-time as a function of position for manufacturing and test applications. In-plane and normal stresses in rolling bodies such as tires may thus be monitored.

  11. Combining real-time monitoring and knowledge-based analysis in MARVEL

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M.; Quan, A. G.; Angelino, R.; Veregge, J. R.

    1993-01-01

    Real-time artificial intelligence is gaining increasing attention for applications in which conventional software methods are unable to meet technology needs. One such application area is the monitoring and analysis of complex systems. MARVEL, a distributed monitoring and analysis tool with multiple expert systems, was developed and successfully applied to the automation of interplanetary spacecraft operations at NASA's Jet Propulsion Laboratory. MARVEL implementation and verification approaches, the MARVEL architecture, and the specific benefits that were realized by using MARVEL in operations are described.

  12. BRAIN initiative: fast and parallel solver for real-time monitoring of the eddy current in the brain for TMS applications.

    PubMed

    Sabouni, Abas; Pouliot, Philippe; Shmuel, Amir; Lesage, Frederic

    2014-01-01

    This paper introduces a fast and efficient solver for simulating the induced (eddy) current distribution in the brain during a transcranial magnetic stimulation procedure. This solver has been integrated with MRI and neuronavigation software to accurately model the electromagnetic field and show the eddy current in the head almost in real time. To examine the performance of the proposed technique, we used a 3D anatomically accurate MRI model of a 25-year-old female subject.

  13. Responsive systems - The challenge for the nineties

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw

    1990-01-01

    A concept of responsive computer systems will be introduced. The emerging responsive systems demand fault-tolerant and real-time performance in parallel and distributed computing environments. The design methodologies for fault-tolerant, real-time, and responsive systems will be presented. Novel techniques of introducing redundancy for improved performance and dependability will be illustrated. The methods of system responsiveness evaluation will be proposed. The issues of determinism and of closed and open systems will also be discussed from the perspective of responsive systems design.

  14. Real-time distribution of pelagic fish: combining hydroacoustics, GIS and spatial modelling at a fine spatial scale.

    PubMed

    Muška, Milan; Tušer, Michal; Frouzová, Jaroslava; Mrkvička, Tomáš; Ricard, Daniel; Seďa, Jaromír; Morelli, Federico; Kubečka, Jan

    2018-03-29

    Understanding the spatial distribution of organisms in a heterogeneous environment remains one of the chief issues in ecology. The spatial organization of freshwater fish has been investigated predominantly at large scales, neglecting important local conditions and ecological processes. However, small-scale processes are of essential importance for individual habitat preferences and hence for structuring trophic cascades and species coexistence. In this work, we analysed the real-time spatial distribution of pelagic freshwater fish in the Římov Reservoir (Czechia), observed by hydroacoustics in relation to important environmental predictors during 48 hours at 3-h intervals. The effect of the diurnal cycle was the most significant in all spatial models, with generally inverse trends between fish distribution and predictors during day and night. Our findings highlighted the daytime pelagic fish distribution as highly aggregated, with general fish preferences for central, deep, and highly illuminated areas, whereas the nighttime distribution was more dispersed and fish preferred nearshore, steeply sloped areas of greater depth. This turnover suggests prominent movements of a significant part of the fish assemblage between pelagic and nearshore areas on a diel basis. In conclusion, hydroacoustics, GIS, and spatial modelling proved to be valuable tools for predicting local fish distribution and elucidating its drivers, which has far-reaching implications for understanding freshwater ecosystem functioning.

  15. Characteristics of service requests and service processes of fire and rescue service dispatch centers: analysis of real world data and the underlying probability distributions.

    PubMed

    Krueger, Ute; Schimmelpfeng, Katja

    2013-03-01

    A sufficient staffing level in fire and rescue dispatch centers is crucial for saving lives. Therefore, it is important to estimate the expected workload properly. For this purpose, we analyzed whether a dispatch center can be considered a call center. Current call center publications very often model call arrivals as a non-homogeneous Poisson process. This is based on the underlying assumption that each caller decides independently whether or not to call. In case of an emergency, however, there are often calls from more than one person reporting the same incident, and thus these calls are not independent. Therefore, this paper focuses on the dependency of calls in a fire and rescue dispatch center. We analyzed and evaluated several distributions in this setting. Results are illustrated using real-world data collected from a typical German dispatch center in Cottbus ("Leitstelle Lausitz"). We identified the Pólya distribution as being superior to the Poisson distribution in describing the call arrival rate, and the Weibull distribution to be more suitable than the exponential distribution for interarrival times and service times. However, the commonly used distributions offer acceptable approximations. This is important for estimating a sufficient staffing level in practice using, e.g., the Erlang-C model.
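
    The comparison between a Poisson fit and a Pólya (negative binomial) fit can be sketched as follows (Python with NumPy/SciPy); the per-interval call counts are synthetic and the method-of-moments estimates are a simplification of a full fitting procedure.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    counts = rng.negative_binomial(n=3, p=0.3, size=1000)   # synthetic clustered call counts

    mean, var = counts.mean(), counts.var(ddof=1)

    # Poisson: single parameter lambda = sample mean.
    ll_poisson = stats.poisson(mean).logpmf(counts).sum()

    # Polya / negative binomial: method-of-moments estimates (valid when var > mean).
    p_hat = mean / var
    n_hat = mean * p_hat / (1.0 - p_hat)
    ll_polya = stats.nbinom(n_hat, p_hat).logpmf(counts).sum()

    print(f"mean={mean:.2f} var={var:.2f} (overdispersed: {var > mean})")
    print(f"log-likelihood  Poisson: {ll_poisson:.1f}   Polya/neg-binomial: {ll_polya:.1f}")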

  16. Data Quality Control of the French Permanent Broadband Network in the RESIF Framework.

    NASA Astrophysics Data System (ADS)

    Grunberg, M.; Lambotte, S.; Engels, F.

    2014-12-01

    In the framework of the RESIF (Réseau Sismologique et géodésique Français) project, a new information system is being set up, allowing improved management and distribution of high-quality data from the different elements of RESIF. Within this information system, EOST (in Strasbourg) is in charge of collecting real-time permanent broadband seismic waveforms and performing quality control on these data. The real-time and validated data sets are pushed to the French National Distribution Center (Isterre/Grenoble) to make them publicly available. Furthermore, EOST hosts the BCSF-ReNaSS, in charge of the French metropolitan seismic bulletin. This allows us to benefit from high-end quality control based on the national and world-wide seismicity. Here we present the real-time seismic data flow from the stations of the French National Broadband Network to EOST, and then the data quality control procedures that were recently installed, including some new developments. The data quality control consists in applying a variety of processes to check the consistency of the whole system from the stations to the data center. This allows us to verify that instruments and data transmission are operating correctly. Moreover, time quality is critical for most scientific data applications. To face this challenge and check the consistency of polarities and amplitudes, we deployed several high-end processes, including a noise correlation procedure to check for timing accuracy (instrumental time errors result in a time-shift of the whole cross-correlation, clearly distinct from shifts due to changes in the medium's physical properties), and a systematic comparison of synthetic and real data for teleseismic earthquakes of magnitude larger than 6.5 to detect timing errors as well as polarity and amplitude problems.
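
    The noise-correlation timing check can be illustrated with a small sketch (Python/NumPy): an instrumental clock error shifts the peak of the cross-correlation between two station records. The signals and the 0.5 s offset below are synthetic.

    import numpy as np

    fs = 100.0                                  # samples per second
    n = 6000                                    # 60 s of data
    rng = np.random.default_rng(0)
    common = rng.standard_normal(n)             # shared ambient-noise wavefield
    shift = 50                                  # station B records the field 0.5 s late

    station_a = common + 0.1 * rng.standard_normal(n)
    station_b = np.roll(common, shift) + 0.1 * rng.standard_normal(n)

    xcorr = np.correlate(station_a, station_b, mode="full")
    lags = np.arange(-n + 1, n)                 # lag of A relative to B, in samples
    offset = lags[np.argmax(xcorr)] / fs
    print(f"estimated time offset of A relative to B: {offset:+.2f} s (expected -0.50 s)")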

  17. Convection-enhanced delivery of MANF--volume of distribution analysis in porcine putamen and substantia nigra.

    PubMed

    Barua, N U; Bienemann, A S; Woolley, M; Wyatt, M J; Johnson, D; Lewis, O; Irving, C; Pritchard, G; Gill, S

    2015-10-15

    Mesencephalic astrocyte-derived neurotrophic factor (MANF) is a 20kDa human protein which has both neuroprotective and neurorestorative activity on dopaminergic neurons and therefore may have application for the treatment of Parkinson's Disease. The aims of this study were to determine the translational potential of convection-enhanced delivery (CED) of MANF for the treatment of PD by studying its distribution in porcine putamen and substantia nigra and to correlate histological distribution with co-infused gadolinium-DTPA using real-time magnetic resonance imaging. We describe the distribution of MANF in porcine putamen and substantia nigra using an implantable CED catheter system using co-infused gadolinium-DTPA to allow real-time MRI tracking of infusate distribution. The distribution of gadolinium-DTPA on MRI correlated well with immunohistochemical analysis of MANF distribution. Volumetric analysis of MANF IHC staining indicated a volume of infusion (Vi) to volume of distribution (Vd) ratio of 3 in putamen and 2 in substantia nigra. This study confirms the translational potential of CED of MANF as a novel treatment strategy in PD and also supports the co-infusion of gadolinium as a proxy measure of MANF distribution in future clinical studies. Further study is required to determine the optimum infusion regime, flow rate and frequency of infusions in human trials. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Distributed operating system for NASA ground stations

    NASA Technical Reports Server (NTRS)

    Doyle, John F.

    1987-01-01

    NASA ground stations are characterized by ever changing support requirements, so application software is developed and modified on a continuing basis. A distributed operating system was designed to optimize the generation and maintenance of those applications. Unusual features include automatic program generation from detailed design graphs, on-line software modification in the testing phase, and the incorporation of a relational database within a real-time, distributed system.

  19. Trans-oceanic Remote Power Hardware-in-the-Loop: Multi-site Hardware, Integrated Controller, and Electric Network Co-simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundstrom, Blake R.; Palmintier, Bryan S.; Rowe, Daniel

    Electric system operators are increasingly concerned with the potential system-wide impacts of the large-scale integration of distributed energy resources (DERs) including voltage control, protection coordination, and equipment wear. This prompts a need for new simulation techniques that can simultaneously capture all the components of these large integrated smart grid systems. This paper describes a novel platform that combines three emerging research areas: power systems co-simulation, power hardware in the loop (PHIL) simulation, and lab-lab links. The platform is distributed, real-time capable, allows for easy internet-based connection from geographically-dispersed participants, and is software platform agnostic. We demonstrate its utility by studying real-time PHIL co-simulation of coordinated solar PV firming control of two inverters connected in multiple electric distribution network models, prototypical of U.S. and Australian systems. Here, the novel trans-pacific closed-loop system simulation was conducted in real-time using a power network simulator and physical PV/battery inverter at power at the National Renewable Energy Laboratory in Golden, CO, USA and a physical PV inverter at power at the Commonwealth Scientific and Industrial Research Organisation's Energy Centre in Newcastle, NSW, Australia. This capability enables smart grid researchers throughout the world to leverage their unique simulation capabilities for multi-site collaborations that can effectively simulate and validate emerging smart grid technology solutions.

  20. Trans-oceanic Remote Power Hardware-in-the-Loop: Multi-site Hardware, Integrated Controller, and Electric Network Co-simulation

    DOE PAGES

    Lundstrom, Blake R.; Palmintier, Bryan S.; Rowe, Daniel; ...

    2017-07-24

    Electric system operators are increasingly concerned with the potential system-wide impacts of the large-scale integration of distributed energy resources (DERs) including voltage control, protection coordination, and equipment wear. This prompts a need for new simulation techniques that can simultaneously capture all the components of these large integrated smart grid systems. This paper describes a novel platform that combines three emerging research areas: power systems co-simulation, power hardware in the loop (PHIL) simulation, and lab-lab links. The platform is distributed, real-time capable, allows for easy internet-based connection from geographically-dispersed participants, and is software platform agnostic. We demonstrate its utility by studying real-time PHIL co-simulation of coordinated solar PV firming control of two inverters connected in multiple electric distribution network models, prototypical of U.S. and Australian systems. Here, the novel trans-pacific closed-loop system simulation was conducted in real-time using a power network simulator and physical PV/battery inverter at power at the National Renewable Energy Laboratory in Golden, CO, USA and a physical PV inverter at power at the Commonwealth Scientific and Industrial Research Organisation's Energy Centre in Newcastle, NSW, Australia. This capability enables smart grid researchers throughout the world to leverage their unique simulation capabilities for multi-site collaborations that can effectively simulate and validate emerging smart grid technology solutions.

  1. Integrated photoacoustic/ultrasound/HFU system based on a clinical ultrasound imaging platform

    NASA Astrophysics Data System (ADS)

    Kim, Jeesu; Choi, Wonseok; Park, Eun-Yeong; Kim, Chulhong

    2018-02-01

    Non-invasive treatment of tumors is beneficial for a favorable prognosis of the patients. High Intensity Focused Ultrasound (HIFU) is an emerging non-invasive treatment tool that ablates tumor lesions by increasing local temperature without damaging surrounding tissues. In HIFU therapy, accurate focusing of the HIFU energy into the target lesion and real-time assessment of thermal distribution are critical for successful and safe treatment. Photoacoustic (PA) imaging is a novel biomedical imaging technique that can visualize functional information of biological tissues based on optical absorption and thermoelastic expansion. One unique feature of PA imaging is that the amplitude of the PA signal reflects the local temperature. Here, we demonstrate a real-time temperature monitoring system that can evaluate thermal distribution during HIFU therapy. We have integrated a HIFU treatment system, a clinical ultrasound (US) machine, and a tunable laser system, and have acquired real-time PA/US images of in vitro phantoms and in vivo animals during HIFU therapy without interference from the therapeutic US waves. We have also evaluated the temperature monitoring capability of the system by comparing the amplitude of PA signals with the measured temperature in melanoma tumor-bearing mice. Although many more refinements are required for clinical applications, the results show the promising potential of the system to ensure accurate and safe HIFU therapy by monitoring the thermal distribution of the treatment area.

  2. The application of connectionism to query planning/scheduling in intelligent user interfaces

    NASA Technical Reports Server (NTRS)

    Short, Nicholas, Jr.; Shastri, Lokendra

    1990-01-01

    In the mid nineties, the Earth Observing System (EOS) will generate an estimated 10 terabytes of data per day. This enormous amount of data will require the use of sophisticated technologies from real-time distributed Artificial Intelligence (AI) and data management. Without addressing the overall problems of distributed AI, efficient models were developed for query planning and/or scheduling in intelligent user interfaces that reside in a network environment. Before intelligent query planning can be done, a model for real-time AI planning and/or scheduling must be developed. As Connectionist Models (CM) have shown promise in improving run times, a connectionist approach to AI planning and/or scheduling is proposed. The solution involves merging a CM rule-based system with a general spreading-activation model for the generation and selection of plans. The system was implemented in the Rochester Connectionist Simulator and runs on a Sun 3/260.

  3. Deduction of initial strategy distributions of agents in mix-game models

    NASA Astrophysics Data System (ADS)

    Gou, Chengling

    2006-11-01

    This paper reports the effort of deducing the initial strategy distributions (ISDs) of agents in mix-game models that are used to predict a real financial time series generated from a target financial market. Using mix-games to predict the Shanghai Index, we find that the time series of prediction accuracy rates is sensitive to the ISDs of agents in group 2, who play a minority game, but less sensitive to the ISDs of agents in group 1, who play a majority game. Agents in group 2 tend to cluster in the full strategy space (FSS) if the real financial time series has an obvious tendency (upward or downward); otherwise they tend to scatter in the FSS. We also find that the ISDs and the number of agents in group 1 influence the level of prediction accuracy rates. Finally, this paper gives suggestions for further research.

  4. Random walks on activity-driven networks with attractiveness

    NASA Astrophysics Data System (ADS)

    Alessandretti, Laura; Sun, Kaiyuan; Baronchelli, Andrea; Perra, Nicola

    2017-05-01

    Virtually all real-world networks are dynamical entities. In social networks, the propensity of nodes to engage in social interactions (activity) and their chances to be selected by active nodes (attractiveness) are heterogeneously distributed. Here, we present a time-varying network model where each node and the dynamical formation of ties are characterized by these two features. We study how these properties affect random-walk processes unfolding on the network when the time scales describing the process and the network evolution are comparable. We derive analytical solutions for the stationary state and the mean first-passage time of the process, and we study cases informed by empirical observations of social networks. Our work shows that previously disregarded properties of real social systems, such as heterogeneous distributions of activity and attractiveness as well as the correlations between them, substantially affect the dynamical process unfolding on the network.
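
    A compact simulation in the spirit of this model can be sketched as follows (Python/NumPy; all parameter values are illustrative): heterogeneous activity and attractiveness are drawn once, the instantaneous network is rebuilt at every step, and a single walker moves along it.

    import numpy as np

    rng = np.random.default_rng(42)
    N, steps, m = 200, 5000, 2
    activity = np.clip(rng.pareto(2.5, N) * 0.05 + 0.01, None, 1.0)   # activation probabilities
    attractiveness = rng.pareto(2.5, N) + 1.0
    attractiveness /= attractiveness.sum()                            # probability of being chosen

    walker = 0
    visits = np.zeros(N, dtype=int)
    for _ in range(steps):
        active = np.nonzero(rng.random(N) < activity)[0]
        edges = set()
        for src in active:
            for dst in rng.choice(N, size=m, replace=False, p=attractiveness):
                if src != dst:
                    edges.add((int(src), int(dst)))
                    edges.add((int(dst), int(src)))
        neighbours = [dst for (src, dst) in edges if src == walker]
        if neighbours:
            walker = int(rng.choice(neighbours))
        visits[walker] += 1

    print("most-visited nodes:", np.argsort(visits)[-5:][::-1])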

  5. High performance real-time flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1994-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. This includes simulation mathematical model computations and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide for high-bandwidth, low-latency data acquisition and distribution. The Computer Automated Measurement and Control technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. This technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed the development of the use of supercomputers for simulation mathematical model computations to support real-time flight simulation. This includes the development of a real-time operating system and the development of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.

  6. Alternative majority-voting methods for real-time computing systems

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Dolter, James W.

    1989-01-01

    Two techniques that provide a compromise between the high time overhead in maintaining synchronous voting and the difficulty of combining results in asynchronous voting are proposed. These techniques are specifically suited for real-time applications with a single-source/single-sink structure that need instantaneous error masking. They provide a compromise between a tightly synchronized system in which the synchronization overhead can be quite high, and an asynchronous system which lacks suitable algorithms for combining the output data. Both quorum-majority voting (QMV) and compare-majority voting (CMV) are most applicable to distributed real-time systems with single-source/single-sink tasks. All real-time systems eventually have to resolve their outputs into a single action at some stage. The development of the advanced information processing system (AIPS) and other similar systems serve to emphasize the importance of these techniques. Time bounds suggest that it is possible to reduce the overhead for quorum-majority voting to below that for synchronous voting. All the bounds assume that the computation phase is nonpreemptive and that there is no multitasking.
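
    The core idea of instantaneous error masking by voting can be sketched as follows (Python); this is a generic quorum-style majority vote, not the specific QMV or CMV protocols analyzed above.

    from collections import Counter

    def majority_vote(outputs, quorum):
        """Return the value produced by at least `quorum` replicas, else raise."""
        value, count = Counter(outputs).most_common(1)[0]
        if count < quorum:
            raise RuntimeError("no quorum: too many disagreeing replicas")
        return value

    if __name__ == "__main__":
        # Three replicas, one faulty: the fault is masked without waiting for a retry.
        print(majority_vote([42, 42, 17], quorum=2))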

  7. GNSS in real-time: Demonstration experiment at Berlin Airport International

    NASA Astrophysics Data System (ADS)

    Wickert, Jens; Dick, Galina; Ge, Maorong; Heise, Stefan; Li, XingXing; Ming, Shangguan; Nischan, Thomas; Ramatschi, Markus; Schuh, Harald; Alberding, Jürgen; Weigmann, Uwe

    2013-04-01

    Real-time (RT) applications are a focus of recent GNSS research. International activities related to RT data collection and distribution, as well as the provision of specific RT data products (e.g., satellite orbits and clocks, station coordinates), are coordinated within the Real-Time Project of the International GNSS Service (IGS). Currently IGS provides real-time data from more than 100 globally distributed GNSS ground stations. This number, in parallel with the extension of various additional international real-time networks, is continuously increasing. In parallel with the rapid development of GNSS RT activities, innovative geophysical applications have been pioneered by GNSS research groups and institutions, including GFZ. One prominent example is the use of GNSS components in early warning systems. GNSS measurements can be used there for the rapid detection and characterization of deformation fields related to earthquakes that induce tsunamis. Such deformation data cannot be provided by seismometer measurements, but are important for the prediction of the tsunami wave propagation caused by earthquakes. The GNSS real-time group at GFZ is involved in several research projects related to geophysical RT GNSS applications, and also operates one of the RT analysis centers of the IGS. We introduce results of a real-time GNSS demonstration project, which was performed in 2012 at the new Berlin International Airport BER at Schönefeld, south-east of Berlin city center. The main goal of the project was the demonstration of the functionality of a complex RT-PPP server-client solution for dynamic applications, which was developed within a joint research project of GFZ and the company Alberding GmbH. Compared to standard PPP (clock & orbit), this solution uses additional information (ionosphere, uncalibrated phase delays, UPD) to increase the positioning accuracy and to reduce the convergence time. The major challenges of the experiment were the stable operation of the entire server-client system, the adaptation of software developed mainly for scientific purposes into a potentially commercial positioning solution, the real-time GNSS data management, and the generation and usage of the correction data. We evaluate the server-client system functionality and PPP results of the experiment in view of the project goals and indicate problems to be addressed in future work. In addition, the GNSS data from a temporary ground station at the airfield were used to derive vertically integrated water vapor (IWV) data to demonstrate the potential of real-time water vapor data to improve the weather forecast at the airport. The IWV data are compared with measurements from nearby stations of the permanent German GNSS network for atmosphere sounding and with a water vapor radiometer operated at GFZ.

  8. Online SVT Commissioning and Monitoring using a Service-Oriented Architecture Framework

    NASA Astrophysics Data System (ADS)

    Ruger, Justin; Gotra, Yuri; Weygand, Dennis; Ziegler, Veronique; Heddle, David; Gore, David

    2014-03-01

    Silicon Vertex Tracker detectors are devices used in high-energy experiments for the precision measurement of charged tracks close to the collision point. Early detection of faulty hardware is essential, and the development of monitoring and commissioning software is therefore a key task. The computing framework for the CLAS12 experiment at Jefferson Lab is a service-oriented architecture that allows efficient data-flow from one service to another through loose coupling. I will present the strategy and development of services for CLAS12 Silicon Tracker data monitoring and commissioning within this framework, as well as preliminary results using test data.

  9. The CSM testbed matrix processors internal logic and dataflow descriptions

    NASA Technical Reports Server (NTRS)

    Regelbrugge, Marc E.; Wright, Mary A.

    1988-01-01

    This report constitutes the final report for subtask 1 of Task 5 of NASA Contract NAS1-18444, Computational Structural Mechanics (CSM) Research. This report contains a detailed description of the coded workings of selected CSM Testbed matrix processors (i.e., TOPO, K, INV, SSOL) and of the arithmetic utility processor AUS. These processors and the current sparse matrix data structures are studied and documented. Items examined include: details of the data structures, interdependence of data structures, data-blocking logic in the data structures, processor data flow and architecture, and processor algorithmic logic flow.

  10. Requirements Specification Language (RSL) and supporting tools

    NASA Technical Reports Server (NTRS)

    Frincke, Deborah; Wolber, Dave; Fisher, Gene; Cohen, Gerald C.

    1992-01-01

    This document describes a general purpose Requirement Specification Language (RSL). RSL is a hybrid of features found in several popular requirement specification languages. The purpose of RSL is to describe precisely the external structure of a system comprised of hardware, software, and human processing elements. To overcome the deficiencies of informal specification languages, RSL includes facilities for mathematical specification. Two RSL interface tools are described. The Browser view contains a complete document with all details of the objects and operations. The Dataflow view is a specialized, operation-centered depiction of a specification that shows how specified operations relate in terms of inputs and outputs.

  11. Droplet size distributions of adjuvant-amended sprays from an air-assisted five-port PWM nozzle

    USDA-ARS?s Scientific Manuscript database

    Verification of droplet size distributions is essential for the development of real-time variable-rate sprayers that synchronize spray outputs with canopy structures. Droplet sizes from a custom-designed, air-assisted, five-port nozzle coupled with a pulse-width-modulated (PWM) solenoid valve were m...

  12. Digitally controlled chirped pulse laser for sub-terahertz-range fiber structure interrogation.

    PubMed

    Chen, Zhen; Hefferman, Gerald; Wei, Tao

    2017-03-01

    This Letter reports a sweep velocity-locked laser pulse generator controlled using a digital phase-locked loop (DPLL) circuit. This design is used for the interrogation of sub-terahertz-range fiber structures for sensing applications that require real-time data collection with millimeter-level spatial resolution. A distributed feedback laser was employed to generate chirped laser pulses via injection current modulation. A DPLL circuit was developed to lock the optical frequency sweep velocity. A high-quality linearly chirped laser pulse with a frequency excursion of 117.69 GHz at an optical communication band was demonstrated. The system was further adopted to interrogate a continuously distributed sub-terahertz-range fiber structure (sub-THz-fs) for sensing applications. A strain test was conducted in which the sub-THz-fs showed a linear response to longitudinal strain change with predicted sensitivity. Additionally, temperature testing was conducted in which a heat source was used to generate a temperature distribution along the fiber structure to demonstrate its distributed sensing capability. A Gaussian temperature profile was measured using the described system and tracked in real time, as the heat source was moved.

  13. Real-time ArcGIS and heterotrophic plate count based chloramine disinfectant control in water distribution system.

    PubMed

    Bai, Xiaohui; Zhi, Xinghua; Zhu, Huifeng; Meng, Mingqun; Zhang, Mingde

    2015-01-01

    This study investigates the effect of chloramine residual on bacterial growth and regrowth and the relationship between heterotrophic plate counts (HPCs) and the concentration of chloramine residual in the Shanghai drinking water distribution system (DWDS). In this study, models to control HPCs in the water distribution system and at consumer taps are also developed. Real-time ArcGIS was applied, using these models, to display the distribution of and changes in the chloramine residual concentration in the pipe system. Residual regression analysis was used to obtain a reasonable range of threshold values that allow the chloramine residual to efficiently inhibit bacterial growth in the Shanghai DWDS; the threshold should be between 0.45 and 0.5 mg/L in pipe water and between 0.2 and 0.25 mg/L in tap water. The low residual chloramine value (0.05 mg/L) of the Chinese drinking water quality standard may pose a potential health risk from microorganisms and should be improved. Disinfection by-products (DBPs) were detected, but no health risk was identified.

  14. Advanced algorithms for distributed fusion

    NASA Astrophysics Data System (ADS)

    Gelfand, A.; Smith, C.; Colony, M.; Bowman, C.; Pei, R.; Huynh, T.; Brown, C.

    2008-03-01

    The US Military has been undergoing a radical transition from a traditional "platform-centric" force to one capable of performing in a "Network-Centric" environment. This transformation will place all of the data needed to efficiently meet tactical and strategic goals at the warfighter's fingertips. With access to this information, the challenge of fusing data from across the battlespace into an operational picture for real-time Situational Awareness emerges. In such an environment, centralized fusion approaches will have limited application due to the constraints of real-time communications networks and computational resources. To overcome these limitations, we are developing a formalized architecture for fusion and track adjudication that allows the distribution of fusion processes over a dynamically created and managed information network. This network will support the incorporation and utilization of low-level tracking information within the Army Distributed Common Ground System (DCGS-A) or Future Combat System (FCS). The framework is based on Bowman's Dual Node Network (DNN) architecture, which utilizes a distributed network of interlaced fusion and track adjudication nodes to build and maintain a globally consistent picture across all assets.

  15. Synthetic Foveal Imaging Technology

    NASA Technical Reports Server (NTRS)

    Nikzad, Shouleh (Inventor); Monacos, Steve P. (Inventor); Hoenk, Michael E. (Inventor)

    2013-01-01

    Apparatuses and methods are disclosed that create a synthetic fovea in order to identify and highlight interesting portions of an image for further processing and rapid response. Synthetic foveal imaging implements a parallel processing architecture that uses reprogrammable logic to implement embedded, distributed, real-time foveal image processing from different sensor types while simultaneously allowing for lossless storage and retrieval of raw image data. Real-time, distributed, adaptive processing of multi-tap image sensors with coordinated processing hardware used for each output tap is enabled. In mosaic focal planes, a parallel-processing network can be implemented that treats the mosaic focal plane as a single ensemble rather than a set of isolated sensors. Various applications are enabled for imaging and robotic vision where processing and responding to enormous amounts of data quickly and efficiently is important.

  16. Feedback mechanisms including real-time electronic alerts to achieve near 100% timely prophylactic antibiotic administration in surgical cases.

    PubMed

    Nair, Bala G; Newman, Shu-Fang; Peterson, Gene N; Wu, Wei-Ying; Schwid, Howard A

    2010-11-01

    Administration of prophylactic antibiotics during surgery is generally performed by the anesthesia providers. Timely antibiotic administration within the optimal time window before incision is critical for prevention of surgical site infections. However, this often becomes a difficult task for the anesthesia team during the busy part of a case when the patient is being anesthetized. Starting with the implementation of an anesthesia information management system (AIMS), we designed and implemented several feedback mechanisms to improve compliance of proper antibiotic delivery and documentation. This included generating e-mail feedback of missed documentation, distributing monthly summary reports, and generating real-time electronic alerts with a decision support system. In 20,974 surgical cases for the period, June 2008 to January 2010, the interventions of AIMS install, e-mail feedback, summary reports, and real-time alerts changed antibiotic compliance by -1.5%, 2.3%, 4.9%, and 9.3%, respectively, when compared with the baseline value of 90.0% ± 2.9% when paper anesthesia records were used. Highest antibiotic compliance was achieved when using real-time alerts. With real-time alerts, monthly compliance was >99% for every month between June 2009 and January 2010. Installation of AIMS itself did not improve antibiotic compliance over that achieved with paper anesthesia records. However, real-time guidance and reminders through electronic messages generated by a computerized decision support system (Smart Anesthesia Messenger, or SAM) significantly improved compliance. With such a system a consistent compliance of >99% was achieved.
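
    A decision rule of the kind such a reminder system might evaluate can be sketched as follows (Python); the 60-minute window, function name, and timestamps are illustrative and do not reproduce the actual SAM logic.

    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=60)

    def needs_alert(incision_time, antibiotic_time):
        """True if no antibiotic dose is documented inside the window before incision."""
        if antibiotic_time is None:
            return True
        return not (incision_time - WINDOW <= antibiotic_time <= incision_time)

    if __name__ == "__main__":
        incision = datetime(2010, 1, 15, 9, 30)
        print(needs_alert(incision, None))                          # True  -> raise an alert
        print(needs_alert(incision, datetime(2010, 1, 15, 8, 50)))  # False -> compliant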

  17. Real-time management of an urban groundwater well field threatened by pollution.

    PubMed

    Bauser, Gero; Franssen, Harrie-Jan Hendricks; Kaiser, Hans-Peter; Kuhlmann, Ulrich; Stauffer, Fritz; Kinzelbach, Wolfgang

    2010-09-01

    We present an optimal real-time control approach for the management of drinking water well fields. The methodology is applied to the Hardhof field in the city of Zurich, Switzerland, which is threatened by diffuse pollution. The risk of attracting pollutants is higher if the pumping rate is increased and can be reduced by increasing artificial recharge (AR) or by adaptive allocation of the AR. The method was first tested in offline simulations with a three-dimensional finite element variably saturated subsurface flow model for the period January 2004-August 2005. The simulations revealed that (1) optimal control results were more effective than the historical control results and (2) the spatial distribution of AR should be different from the historical one. Next, the methodology was extended to a real-time control method based on the Ensemble Kalman Filter method, using 87 online groundwater head measurements, and tested at the site. The real-time control of the well field resulted in a decrease of the electrical conductivity of the water at critical measurement points which indicates a reduced inflow of water originating from contaminated sites. It can be concluded that the simulation and the application confirm the feasibility of the real-time control concept.
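
    The abstract above relies on an Ensemble Kalman Filter to assimilate 87 online head measurements into the groundwater model used for control. A minimal, generic sketch of a stochastic EnKF analysis step is given below; the linear observation operator, ensemble size, and all numerical values are illustrative assumptions, not the Hardhof configuration.

    # Minimal stochastic Ensemble Kalman Filter analysis step, assuming a linear
    # observation operator H that picks monitored head locations.
    import numpy as np

    def enkf_update(ensemble, H, obs, obs_err_std, rng):
        """ensemble: (n_state, n_ens); H: (n_obs, n_state); obs: (n_obs,)."""
        n_obs, n_ens = H.shape[0], ensemble.shape[1]
        x_mean = ensemble.mean(axis=1, keepdims=True)
        A = ensemble - x_mean                                # ensemble anomalies
        HA = H @ A
        P_hh = HA @ HA.T / (n_ens - 1) + np.eye(n_obs) * obs_err_std**2
        K = (A @ HA.T / (n_ens - 1)) @ np.linalg.inv(P_hh)   # Kalman gain
        # perturbed observations (stochastic EnKF variant)
        D = obs[:, None] + rng.normal(0.0, obs_err_std, size=(n_obs, n_ens))
        return ensemble + K @ (D - H @ ensemble)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        n_state, n_ens, n_obs = 50, 100, 10
        ens = rng.normal(400.0, 1.0, size=(n_state, n_ens))      # prior heads [m]
        H = np.zeros((n_obs, n_state))
        H[np.arange(n_obs), np.arange(0, 50, 5)] = 1.0           # observed nodes
        obs = 401.0 * np.ones(n_obs)                             # measured heads [m]
        ens_a = enkf_update(ens, H, obs, obs_err_std=0.05, rng=rng)
        print("prior mean %.2f -> posterior mean %.2f at observed nodes"
              % ((H @ ens).mean(), (H @ ens_a).mean()))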

  18. Combined use of real-time PCR and nested sequence-based typing in survey of human Legionella infection.

    PubMed

    Qin, T; Zhou, H; Ren, H; Shi, W; Jin, H; Jiang, X; Xu, Y; Zhou, M; Li, J; Wang, J; Shao, Z; Xu, X

    2016-07-01

    Legionnaires' disease (LD) is a globally distributed systemic infectious disease. The burden of LD in many regions is still unclear, especially in Asian countries including China. A survey of Legionella infection using real-time PCR and nested sequence-based typing (SBT) was performed in two hospitals in Shanghai, China. A total of 265 bronchoalveolar lavage fluid (BALF) specimens were collected from hospital A between January 2012 and December 2013, and 359 sputum specimens were collected from hospital B throughout 2012. A total of 71 specimens were positive for Legionella according to real-time PCR focusing on the 5S rRNA gene. Seventy of these specimens were identified as Legionella pneumophila as a result of real-time PCR amplification of the dotA gene. Results of nested SBT revealed high genetic polymorphism in these L. pneumophila and ST1 was the predominant sequence type. These data revealed that the burden of LD in China is much greater than that recognized previously, and real-time PCR may be a suitable monitoring technology for LD in large sample surveys in regions lacking the economic and technical resources to perform other methods, such as urinary antigen tests and culture methods.

  19. Building occupancy simulation and data assimilation using a graph-based agent-oriented model

    NASA Astrophysics Data System (ADS)

    Rai, Sanish; Hu, Xiaolin

    2018-07-01

    Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
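
    The Sequential Monte Carlo data assimilation mentioned above can be illustrated with a bootstrap particle filter. The sketch below runs such a filter over a toy three-room graph with Poisson-noise occupancy sensors; the transition matrix, sensor model, and counts are illustrative assumptions, not the paper's graph-based agent-oriented model.

    # Bootstrap particle filter sketch for occupancy estimation on a tiny room graph.
    import numpy as np

    N_ROOMS, N_PART = 3, 500
    MOVE = np.array([[0.8, 0.1, 0.1],      # row-stochastic room-to-room transitions
                     [0.1, 0.8, 0.1],
                     [0.1, 0.1, 0.8]])

    def step(particles, rng):
        """Move each occupant of each particle according to the transition graph."""
        new = np.zeros_like(particles)
        for p in range(particles.shape[0]):
            for room in range(N_ROOMS):
                dests = rng.choice(N_ROOMS, size=int(particles[p, room]), p=MOVE[room])
                for d in dests:
                    new[p, d] += 1
        return new

    def assimilate(particles, counts, rng):
        """Weight particles by a Poisson sensor likelihood and resample."""
        lam = np.maximum(particles, 1e-6)
        logw = (counts * np.log(lam) - lam).sum(axis=1)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(len(w), size=len(w), p=w)
        return particles[idx]

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        particles = rng.integers(0, 20, size=(N_PART, N_ROOMS)).astype(float)
        for sensor_counts in ([12, 3, 5], [10, 4, 6], [9, 6, 5]):
            particles = step(particles, rng)
            particles = assimilate(particles, np.array(sensor_counts), rng)
            print("estimated occupancy:", particles.mean(axis=0).round(1))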

  20. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery

    PubMed Central

    Qi, Baogui; Zhuang, Yin; Chen, He; Chen, Liang

    2018-01-01

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited. PMID:29693585

  1. Real-Time Safety Risk Assessment Based on a Real-Time Location System for Hydropower Construction Sites

    PubMed Central

    Fan, Qixiang; Qiang, Maoshan

    2014-01-01

    The concern for workers' safety in the construction industry is reflected in many studies focusing on static safety risk identification and assessment. However, studies on real-time safety risk assessment aimed at reducing uncertainty and supporting quick response are rare. A method for real-time safety risk assessment (RTSRA) to implement a dynamic evaluation of worker safety states on construction sites is proposed in this paper. The method provides construction managers in charge of safety with richer information to reduce the uncertainty of the site. A quantitative calculation formula, integrating the influence of static and dynamic hazards and that of safety supervisors, is established to link the safety risk of workers with the locations of on-site assets. By employing the hidden Markov model (HMM), the RTSRA provides a mechanism for processing location data provided by the real-time location system (RTLS) and analyzing the probability distributions of different states in terms of false positives and negatives. Simulation analysis demonstrated the logic of the proposed method and how it works. An application case shows that the proposed RTSRA is both feasible and effective in managing construction project safety concerns. PMID:25114958
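
    The RTSRA described above uses a hidden Markov model to turn RTLS location data into probabilities of worker safety states. A minimal forward-filtering sketch under assumed (illustrative) transition, emission, and prior probabilities is shown below; the two states and three location zones are stand-ins, not the paper's calibrated model.

    # Hidden Markov model forward-filtering sketch for a worker's safety state
    # ("safe" vs "at-risk") from discretized RTLS zone observations; all
    # probabilities below are illustrative, not the paper's values.
    import numpy as np

    A = np.array([[0.9, 0.1],          # state transition probabilities
                  [0.3, 0.7]])
    # P(observed zone | state): zones 0 = work area, 1 = hazard buffer, 2 = hazard zone
    B = np.array([[0.7, 0.25, 0.05],
                  [0.1, 0.30, 0.60]])
    pi = np.array([0.95, 0.05])        # prior over [safe, at-risk]

    def forward_filter(observed_zones):
        """Return P(state_t | observations up to t) for each time step."""
        belief = pi.copy()
        out = []
        for z in observed_zones:
            belief = (A.T @ belief) * B[:, z]     # predict, then update
            belief /= belief.sum()
            out.append(belief.copy())
        return np.array(out)

    if __name__ == "__main__":
        zones = [0, 0, 1, 2, 2, 1, 0]             # RTLS-derived zone sequence
        for t, b in enumerate(forward_filter(zones)):
            print(f"t={t} zone={zones[t]} P(at-risk)={b[1]:.2f}")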

  2. Real-time safety risk assessment based on a real-time location system for hydropower construction sites.

    PubMed

    Jiang, Hanchen; Lin, Peng; Fan, Qixiang; Qiang, Maoshan

    2014-01-01

    The concern for workers' safety in the construction industry is reflected in many studies focusing on static safety risk identification and assessment. However, studies on real-time safety risk assessment aimed at reducing uncertainty and supporting quick response are rare. A method for real-time safety risk assessment (RTSRA) to implement a dynamic evaluation of worker safety states on construction sites is proposed in this paper. The method provides construction managers in charge of safety with richer information to reduce the uncertainty of the site. A quantitative calculation formula, integrating the influence of static and dynamic hazards and that of safety supervisors, is established to link the safety risk of workers with the locations of on-site assets. By employing the hidden Markov model (HMM), the RTSRA provides a mechanism for processing location data provided by the real-time location system (RTLS) and analyzing the probability distributions of different states in terms of false positives and negatives. Simulation analysis demonstrated the logic of the proposed method and how it works. An application case shows that the proposed RTSRA is both feasible and effective in managing construction project safety concerns.

  3. Real-Time Continuous Response Spectra Exceedance Calculation

    NASA Astrophysics Data System (ADS)

    Vernon, Frank; Harvey, Danny; Lindquist, Kent; Franke, Mathias

    2017-04-01

    A novel approach is presented for near real-time earthquake alarms for critical structures at distributed locations using real-time estimation of response spectra obtained from near free-field motions. Influential studies dating back to the 1980s identified spectral response acceleration as a key ground motion characteristic that correlates well with observed damage in structures. Thus, monitoring and reporting on exceedance of spectra-based thresholds are useful tools for assessing the potential for damage to facilities or multi-structure campuses based on input ground motions only. With as little as one strong-motion station per site, this scalable approach can provide rapid alarms on the damage status of remote towns, critical infrastructure (e.g., hospitals, schools) and points of interests (e.g., bridges) for a very large number of locations enabling better rapid decision making during critical and difficult immediate post-earthquake response actions. Real-time calculation of PSA exceedance and alarm dissemination are enabled with Bighorn, a module included in the Antelope software package that combines real-time spectral monitoring and alarm capabilities with a robust built-in web display server. Examples of response spectra from several M 5 events recorded by the ANZA seismic network in southern California will be presented.
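
    Spectral-response exceedance alarms of the kind described above reduce to computing the peak response of a damped single-degree-of-freedom oscillator from a ground acceleration record and comparing it with a threshold. The Python sketch below does this with a simple semi-implicit Euler integration on a synthetic record; the damping ratio, periods, threshold, and record are illustrative assumptions, and the scheme is not the Bighorn implementation.

    # Sketch of a pseudo-spectral-acceleration exceedance check on a synthetic record.
    import numpy as np

    def psa(ground_acc, dt, period, damping=0.05):
        """Approximate peak pseudo-spectral acceleration of a unit-mass SDOF
        oscillator, integrated with semi-implicit Euler (fine for small dt)."""
        wn = 2.0 * np.pi / period
        k, c = wn**2, 2.0 * damping * wn
        u, v, umax = 0.0, 0.0, 0.0
        for ag in ground_acc:
            a = -ag - c * v - k * u          # relative acceleration
            v += dt * a
            u += dt * v
            umax = max(umax, abs(u))
        return wn**2 * umax                  # pseudo-spectral acceleration

    if __name__ == "__main__":
        dt = 0.005
        t = np.arange(0.0, 20.0, dt)
        rng = np.random.default_rng(3)
        ag = 0.05 * 9.81 * rng.normal(size=t.size) * np.exp(-0.2 * t)  # synthetic record
        threshold = 0.2 * 9.81                                         # alarm level [m/s^2]
        for T in (0.2, 0.5, 1.0):
            sa = psa(ag, dt, T)
            print(f"T={T:.1f} s  PSA={sa:.2f} m/s^2  exceeds={sa > threshold}")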

  4. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery.

    PubMed

    Qi, Baogui; Shi, Hao; Zhuang, Yin; Chen, He; Chen, Liang

    2018-04-25

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited.

  5. Rnomads: An R Interface with the NOAA Operational Model Archive and Distribution System

    NASA Astrophysics Data System (ADS)

    Bowman, D. C.; Lees, J. M.

    2014-12-01

    The National Oceanic and Atmospheric Administration Operational Model Archive and Distribution System (NOMADS) facilitates rapid delivery of real time and archived environmental data sets from multiple agencies. These data are distributed free to the scientific community, industry, and the public. The rNOMADS package provides an interface between NOMADS and the R programming language. Like R itself, rNOMADS is open source and cross platform. It utilizes server-side functionality on the NOMADS system to subset model outputs for delivery to client R users. There are currently 57 real time and 10 archived models available through rNOMADS. Atmospheric models include the Global Forecast System and North American Mesoscale. Oceanic models include WAVEWATCH III and U. S. Navy Operational Global Ocean Model. rNOMADS has been downloaded 1700 times in the year since it was released. At the time of writing, it is being used for wind and solar power modeling, climate monitoring related to food security concerns, and storm surge/inundation calculations, among others. We introduce this new package and show how it can be used to extract data for infrasonic waveform modeling in the atmosphere.

  6. Real-time MR imaging of adeno-associated viral vector delivery to the primate brain

    PubMed Central

    Fiandaca, Massimo S.; Varenika, Vanja; Eberling, Jamie; McKnight, Tracy; Bringas, John; Pivirotto, Phillip; Beyer, Janine; Hadaczek, Piotr; Bowers, William; Park, John; Federoff, Howard; Forsayeth, John; Bankiewicz, Krystof S.

    2009-01-01

    We are developing a method for real-time magnetic resonance imaging (MRI) visualization of convection-enhanced delivery (CED) of adeno-associated viral vectors (AAV) to the primate brain. By including gadolinium-loaded liposomes (GDL) with AAV, we can track the convective movement of viral particles by continuous monitoring of distribution of surrogate GDL. In order to validate this approach, we infused two AAV (AAV1-GFP and AAV2-hAADC) into three different regions of non-human primate brain (corona radiata, putamen, and thalamus). The procedure was tolerated well by all three animals in the study. The distribution of GFP determined by immunohistochemistry in both brain regions correlated closely with distribution of GDL determined by MRI. Co-distribution was weaker with AAV2-hAADC, although in vivo PET scanning with FMT for AADC activity correlated well with immunohistochemistry of AADC. Although this is a relatively small study, it appears that AAV1 correlates better with MRI-monitored delivery than does AAV2. It seems likely that the difference in distribution may be due to differences in tissue specificity of the two serotypes. PMID:19095069

  7. Poisson-process generalization for the trading waiting-time distribution in a double-auction mechanism

    NASA Astrophysics Data System (ADS)

    Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico

    2005-05-01

    In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model such empirical evidence. The empirical study is performed on the best bid and best ask data of 7 U.S. financial markets, for 30-stock time series. In particular, the statistical properties of trading waiting times have been analyzed, and the quality of the fits is evaluated by suitable statistical tests, i.e., by comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered as a generalization of a Poisson process. Moreover, the renewal process can approximate real data, and implementation in the artificial stock market can reproduce the trading activity in a realistic way.
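
    To make the waiting-time model above concrete, the sketch below draws inter-order waiting times from a two-component mixture of exponentials and compares the tail of the resulting distribution with a plain exponential (Poisson-process) model of the same mean rate; the weights and rates are illustrative assumptions, not values estimated from the markets studied.

    # Renewal-process sketch: waiting times from a mixture of exponentials versus
    # a single exponential of the same mean.
    import numpy as np

    def mixture_waiting_times(n, weights, rates, rng):
        """Draw n waiting times from a mixture of exponential distributions."""
        comp = rng.choice(len(weights), size=n, p=weights)
        return rng.exponential(1.0 / np.asarray(rates)[comp])

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        w, lam = [0.7, 0.3], [2.0, 0.2]          # fast and slow trading regimes
        tau = mixture_waiting_times(100_000, w, lam, rng)
        poisson_tau = rng.exponential(tau.mean(), size=tau.size)   # same mean rate
        # Survival function P(tau > t): the mixture has a heavier tail
        for t in (1.0, 5.0, 10.0):
            print(f"t={t:4.1f}  mixture P(tau>t)={np.mean(tau > t):.3f}"
                  f"  exponential P(tau>t)={np.mean(poisson_tau > t):.3f}")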

  8. Detection of individual atoms in helium buffer gas and observation of their real-time motion

    NASA Technical Reports Server (NTRS)

    Pan, C. L.; Prodan, J. V.; Fairbank, W. M., Jr.; She, C. Y.

    1980-01-01

    Single atoms are detected and their motion measured for the first time to our knowledge by the fluorescence photon-burst method in the presence of large quantities of buffer gas. A single-clipped digital correlator records the photon burst in real time and displays the atom's transit time across the laser beam. A comparison is made of the special requirements for single-atom detection in vacuum and in a buffer gas. Finally, the probability distribution of the bursts from many atoms is measured. It further proves that the bursts observed on resonance are due to single atoms and not simply to noise fluctuations.

  9. [Dynamic road vehicle emission inventory simulation study based on real time traffic information].

    PubMed

    Huang, Cheng; Liu, Juan; Chen, Chang-Hong; Zhang, Jian; Liu, Deng-Guo; Zhu, Jing-Yu; Huang, Wei-Ming; Chao, Yuan

    2012-11-01

    A vehicle activity survey, covering traffic flow distribution, driving conditions, and vehicle technologies, was conducted in Shanghai. Databases of vehicle flow, VSP distribution, and vehicle categories were established from the surveyed data. Based on this, a dynamic vehicle emission inventory simulation method was designed using real-time traffic information data, such as traffic flow and average speed. Some roads in Shanghai city were selected for hourly vehicle emission simulation as a case study. The survey results show that light-duty passenger cars and taxis are the major vehicles on the roads of Shanghai city, accounting for 48% - 72% and 15% - 43% of the total flow in each hour, respectively. The VSP distribution has a good relationship with the average speed. The peak of the VSP distribution tends to move to the high-load section and become lower as the average speed increases. Vehicles meeting the Euro 2 and Euro 3 standards make up the majority of the current vehicle population in Shanghai. Based on the calibration of vehicle travel mileage data, the proportions of Euro 2 and Euro 3 standard vehicles are 11% - 70% and 17% - 51% in the real-world situation, respectively. The emission simulation results indicate that the peak-to-valley ratios for the pollutants CO, VOC, NO(x) and PM are 3.7, 4.6, 9.6 and 19.8, respectively. CO and VOC emissions mainly come from light-duty passenger cars and taxis, and correlate well with traffic flow. NO(x) and PM emissions mainly come from heavy-duty buses and public buses and are concentrated in the morning and evening peak hours. The established dynamic vehicle emission simulation method can reflect the change of actual road emissions and output high-emission road sectors and hours in real time. The method can provide an important technical means and decision-making basis for transportation environment management.

  10. High-resolution distributed temperature sensing with the multiphoton-timing technique

    NASA Astrophysics Data System (ADS)

    Höbel, M.; Ricka, J.; Wüthrich, M.; Binkert, Th.

    1995-06-01

    We report on a multiphoton-timing distributed temperature sensor (DTS) based on the concept of distributed anti-Stokes Raman thermometry. The sensor combines the advantage of very high spatial resolution (40 cm) with moderate measurement times. In 5 min it is possible to determine the temperature of as many as 4000 points along an optical fiber with an accuracy of ΔT < 2 °C. The new feature of the DTS system is the combination of a fast single-photon avalanche diode with specially designed real-time signal-processing electronics. We discuss various parameters that affect the operation of analog and photon-timing DTS systems. Particular emphasis is put on the consequences of the nonideal behavior of sensor components and the corresponding correction procedures.

  11. FX-87 performance measurements: data-flow implementation. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammel, R.T.; Gifford, D.K.

    1988-11-01

    This report documents a series of experiments performed to explore the thesis that the FX-87 effect system permits a compiler to schedule imperative programs (i.e., programs that may contain side-effects) for execution on a parallel computer. The authors analyze how much the FX-87 static effect system can improve the execution times of five benchmark programs on a parallel graph interpreter. Three of their benchmark programs do not use side-effects (factorial, fibonacci, and polynomial division) and thus did not have any effect-induced constraints. Their FX-87 performance was comparable to their performance in a purely functional language. Two of the benchmark programs use side effects (DNA sequence matching and Scheme interpretation), and the compiler was able to use effect information to reduce their execution times by factors of 1.7 to 5.4 when compared with sequential execution times. These results support the thesis that a static effect system is a powerful tool for compilation to multiprocessor computers. However, the graph interpreter used was based on unrealistic assumptions, and thus the results may not accurately reflect the performance of a practical FX-87 implementation. The results also suggest that conventional loop analysis would complement the FX-87 effect system.

  12. Real-Time Impact Visualization Inspection of Aerospace Composite Structures with Distributed Sensors.

    PubMed

    Si, Liang; Baier, Horst

    2015-07-08

    For the future design of smart aerospace structures, the development and application of a reliable, real-time and automatic monitoring and diagnostic technique is essential. Thus, with distributed sensor networks, a real-time automatic structural health monitoring (SHM) technique is designed and investigated to monitor and predict the locations and force magnitudes of unforeseen foreign impacts on composite structures and to estimate in real time mode the structural state when impacts occur. The proposed smart impact visualization inspection (IVI) technique mainly consists of five functional modules, which are the signal data preprocessing (SDP), the forward model generator (FMG), the impact positioning calculator (IPC), the inverse model operator (IMO) and structural state estimator (SSE). With regard to the verification of the practicality of the proposed IVI technique, various structure configurations are considered, which are a normal CFRP panel and another CFRP panel with "orange peel" surfaces and a cutout hole. Additionally, since robustness against several background disturbances is also an essential criterion for practical engineering demands, investigations and experimental tests are carried out under random vibration interfering noise (RVIN) conditions. The accuracy of the predictions for unknown impact events on composite structures using the IVI technique is validated under various structure configurations and under changing environmental conditions. The evaluated errors all fall well within a satisfactory limit range. Furthermore, it is concluded that the IVI technique is applicable for impact monitoring, diagnosis and assessment of aerospace composite structures in complex practical engineering environments.

  13. A uniform laminar air plasma plume with large volume excited by an alternating current voltage

    NASA Astrophysics Data System (ADS)

    Li, Xuechen; Bao, Wenting; Chu, Jingdi; Zhang, Panpan; Jia, Pengying

    2015-12-01

    Using a plasma jet composed of two needle electrodes, a laminar plasma plume with large volume is generated in air through an alternating current voltage excitation. Based on high-speed photography, a train of filaments is observed to propagate periodically away from their birth place along the gas flow. The laminar plume is in fact a temporal superposition of the arched filament train. The filament consists of a negative glow near the real time cathode, a positive column near the real time anode, and a Faraday dark space between them. It has been found that the propagation velocity of the filament increases with increasing the gas flow rate. Furthermore, the filament lifetime tends to follow a normal distribution (Gaussian distribution). The most probable lifetime decreases with increasing the gas flow rate or decreasing the averaged peak voltage. Results also indicate that the real time peak current decreases and the real time peak voltage increases with the propagation of the filament along the gas flow. The voltage-current curve indicates that, in every discharge cycle, the filament evolves from a Townsend discharge to a glow one and then the discharge quenches. Characteristic regions including a negative glow, a Faraday dark space, and a positive column can be discerned from the discharge filament. Furthermore, the plasma parameters such as the electron density, the vibrational temperature and the gas temperature are investigated based on the optical spectrum emitted from the laminar plume.

  14. Statistical Properties of Real-Time Amplitude Estimate of Harmonics Affected by Frequency Instability

    NASA Astrophysics Data System (ADS)

    Bellan, Diego; Pignari, Sergio A.

    2016-07-01

    This work deals with the statistical characterization of real-time digital measurement of the amplitude of harmonics affected by frequency instability. In fact, in modern power systems both the presence of harmonics and frequency instability are well-known and widespread phenomena mainly due to nonlinear loads and distributed generation, respectively. As a result, real-time monitoring of voltage/current frequency spectra is of paramount importance as far as power quality issues are addressed. Within this framework, a key point is that in many cases real-time continuous monitoring prevents the application of sophisticated algorithms to extract all the information from the digitized waveforms because of the required computational burden. In those cases only simple evaluations such as peak search of discrete Fourier transform are implemented. It is well known, however, that a slight change in waveform frequency results in lack of sampling synchronism and uncertainty in amplitude estimate. Of course the impact of this phenomenon increases with the order of the harmonic to be measured. In this paper an approximate analytical approach is proposed in order to describe the statistical properties of the measured magnitude of harmonics affected by frequency instability. By providing a simplified description of the frequency behavior of the windows used against spectral leakage, analytical expressions for mean value, variance, cumulative distribution function, and probability density function of the measured harmonics magnitude are derived in closed form as functions of waveform frequency treated as a random variable.
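
    The effect described above, loss of sampling synchronism producing uncertainty in DFT peak-search amplitude estimates, can be reproduced numerically. The sketch below measures the Hann-windowed DFT peak amplitude of a unit harmonic as the fundamental frequency drifts off the bin centre; the sampling rate, record length, window, and drift values are illustrative assumptions.

    # Numerical sketch of scalloping/leakage in DFT peak-search amplitude estimation.
    import numpy as np

    def peak_magnitude(freq_hz, fs=10_000.0, n=1000):
        """Hann-windowed DFT peak-search amplitude estimate of a unit cosine."""
        t = np.arange(n) / fs
        w = np.hanning(n)
        X = np.fft.rfft(np.cos(2 * np.pi * freq_hz * t) * w)
        return 2.0 * np.abs(X).max() / w.sum()   # ~1.0 when the tone sits on a bin

    if __name__ == "__main__":
        f0 = 50.0                                 # nominal fundamental [Hz]
        for h in (5, 15, 25):                     # harmonic order
            for df in (0.0, 0.1, 0.2):            # drift of the fundamental [Hz]
                est = peak_magnitude(h * (f0 + df))
                print(f"h={h:2d}  drift={df:.1f} Hz  estimated amplitude={est:.3f}")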

  15. Real-Time Impact Visualization Inspection of Aerospace Composite Structures with Distributed Sensors

    PubMed Central

    Si, Liang; Baier, Horst

    2015-01-01

    For the future design of smart aerospace structures, the development and application of a reliable, real-time and automatic monitoring and diagnostic technique is essential. Thus, with distributed sensor networks, a real-time automatic structural health monitoring (SHM) technique is designed and investigated to monitor and predict the locations and force magnitudes of unforeseen foreign impacts on composite structures and to estimate in real time mode the structural state when impacts occur. The proposed smart impact visualization inspection (IVI) technique mainly consists of five functional modules, which are the signal data preprocessing (SDP), the forward model generator (FMG), the impact positioning calculator (IPC), the inverse model operator (IMO) and structural state estimator (SSE). With regard to the verification of the practicality of the proposed IVI technique, various structure configurations are considered, which are a normal CFRP panel and another CFRP panel with “orange peel” surfaces and a cutout hole. Additionally, since robustness against several background disturbances is also an essential criterion for practical engineering demands, investigations and experimental tests are carried out under random vibration interfering noise (RVIN) conditions. The accuracy of the predictions for unknown impact events on composite structures using the IVI technique is validated under various structure configurations and under changing environmental conditions. The evaluated errors all fall well within a satisfactory limit range. Furthermore, it is concluded that the IVI technique is applicable for impact monitoring, diagnosis and assessment of aerospace composite structures in complex practical engineering environments. PMID:26184196

  16. Efficient scatter model for simulation of ultrasound images from computed tomography data

    NASA Astrophysics Data System (ADS)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is a growing interest in the use of this technology and the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computational efficiency of scattering-map generation was revised with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results, where a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
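
    A common way to realize a multiplicative-noise-plus-PSF-convolution scatter model of the kind mentioned above is shown in the sketch below: an echogenicity map is multiplied by Rayleigh-distributed scatterer noise and convolved with a separable Gaussian point spread function. The Rayleigh statistics, Gaussian PSF, and kernel sizes are illustrative assumptions, not the authors' tailored PSFs.

    # Minimal speckle-simulation sketch: multiplicative noise followed by PSF blur.
    import numpy as np

    def gaussian_kernel(size, sigma):
        x = np.arange(size) - size // 2
        k = np.exp(-0.5 * (x / sigma) ** 2)
        return k / k.sum()

    def convolve2d_separable(img, k_lateral, k_axial):
        tmp = np.apply_along_axis(lambda r: np.convolve(r, k_lateral, mode="same"), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, k_axial, mode="same"), 0, tmp)

    def simulate_speckle(echo_map, rng, axial_sigma=1.0, lateral_sigma=2.5):
        noise = rng.rayleigh(scale=1.0, size=echo_map.shape)   # multiplicative scatterers
        scattered = echo_map * noise
        psf_axial = gaussian_kernel(9, axial_sigma)
        psf_lateral = gaussian_kernel(15, lateral_sigma)
        return convolve2d_separable(scattered, psf_lateral, psf_axial)

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        tissue = np.ones((128, 128))
        tissue[40:90, 40:90] = 0.3                              # hypoechoic inclusion
        frame = simulate_speckle(tissue, rng)
        print("speckle frame stats: mean %.2f, std %.2f" % (frame.mean(), frame.std()))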

  17. A Fault Oblivious Extreme-Scale Execution Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKie, Jim

    The FOX project, funded under the ASCR X-stack I program, developed systems software and runtime libraries for a new approach to the data and work distribution for massively parallel, fault oblivious application execution. Our work was motivated by the premise that exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today’s machines. To deliver the capability of exascale hardware, the systems software must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. Our OS research has prototyped new methods to provide efficient resource sharing, synchronization, and protection in a many-core compute node. We have experimented with alternative task/dataflow programming models and shown scalability in some cases to hundreds of thousands of cores. Much of our software is in active development through open source projects. Concepts from FOX are being pursued in next generation exascale operating systems. Our OS work focused on adaptive, application tailored OS services optimized for multi → many core processors. We developed a new operating system NIX that supports role-based allocation of cores to processes which was released to open source. We contributed to the IBM FusedOS project, which promoted the concept of latency-optimized and throughput-optimized cores. We built a task queue library based on distributed, fault tolerant key-value store and identified scaling issues. A second fault tolerant task parallel library was developed, based on the Linda tuple space model, that used low level interconnect primitives for optimized communication. We designed fault tolerance mechanisms for task parallel computations employing work stealing for load balancing that scaled to the largest existing supercomputers. Finally, we implemented the Elastic Building Blocks runtime, a library to manage object-oriented distributed software components. To support the research, we won two INCITE awards for time on Intrepid (BG/P) and Mira (BG/Q). Much of our work has had impact in the OS and runtime community through the ASCR Exascale OS/R workshop and report, leading to the research agenda of the Exascale OS/R program. Our project was, however, also affected by attrition of multiple PIs. While the PIs continued to participate and offer guidance as time permitted, losing these key individuals was unfortunate both for the project and for the DOE HPC community.

  18. Final Technical Report for Contract No. DE-EE0006332, "Integrated Simulation Development and Decision Support Tool-Set for Utility Market and Distributed Solar Power Generation"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cormier, Dallas; Edra, Sherwin; Espinoza, Michael

    This project will enable utilities to develop long-term strategic plans that integrate high levels of renewable energy generation, and to better plan power system operations under high renewable penetration. The program developed forecast data streams for decision support and effective integration of centralized and distributed solar power generation in utility operations. This toolset focused on real-time simulation of distributed power generation within utility grids with the emphasis on potential applications in day ahead (market) and real time (reliability) utility operations. The project team developed and demonstrated methodologies for quantifying the impact of distributed solar generation on core utility operations, identified protocols for internal data communication requirements, and worked with utility personnel to adapt the new distributed generation (DG) forecasts seamlessly within existing Load and Generation procedures through a sophisticated DMS. This project supported the objectives of the SunShot Initiative and SUNRISE by enabling core utility operations to enhance their simulation capability to analyze and prepare for the impacts of high penetrations of solar on the power grid. The impact of high penetration solar PV on utility operations is not only limited to control centers, but across many core operations. Benefits of an enhanced DMS using state-of-the-art solar forecast data were demonstrated within this project and have had an immediate direct operational cost savings for Energy Marketing for Day Ahead generation commitments, Real Time Operations, Load Forecasting (at an aggregate system level for Day Ahead), Demand Response, Long term Planning (asset management), Distribution Operations, and core ancillary services as required for balancing and reliability. This provided power system operators with the necessary tools and processes to operate the grid in a reliable manner under high renewable penetration.

  19. Real-time frequency-to-time mapping based on spectrally-discrete chromatic dispersion.

    PubMed

    Dai, Yitang; Li, Jilong; Zhang, Ziping; Yin, Feifei; Li, Wangzhe; Xu, Kun

    2017-07-10

    Traditional photonics-assisted real-time Fourier transform (RTFT) usually suffers from limited chromatic dispersion, huge volume, or large time delay and attendant loss. In this paper we propose frequency-to-time mapping (FTM) by spectrally-discrete dispersion to greatly increase frequency sensitivity. The novel medium has a periodic ON/OFF intensity frequency response with a quadratic phase distribution along disconnected channels, which de-chirps matched optical input to repeated Fourier-transform-limited output. Real-time FTM is then obtained within each period. Since only discrete phase retardation rather than continuously-changed true time delay is required, huge equivalent dispersion is available from a compact device. Such FTM is theoretically analyzed, and an implementation by cascaded optical ring resonators is proposed. After a numerical example, our theory is demonstrated by a proof-of-concept experiment, where a single loop containing a 0.5-meter-long fiber is used. FTM with 400-MHz unambiguous bandwidth and 25-MHz resolution is reported. Highly sensitive and linear mapping is achieved at 6.25 ps/MHz, equivalent to ~4.6 × 10^4 km of standard single-mode fiber. Extended instantaneous bandwidth is expected by ring cascading. Our proposal may provide a promising method for real-time, low-latency Fourier transform.

  20. A distributed real-time model of degradation in a solid oxide fuel cell, part II: Analysis of fuel cell performance and potential failures

    NASA Astrophysics Data System (ADS)

    Zaccaria, V.; Tucker, D.; Traverso, A.

    2016-09-01

    Solid oxide fuel cells are characterized by very high efficiency, low emission levels, and large fuel flexibility. Unfortunately, their elevated costs and relatively short lifetimes reduce the economic feasibility of these technologies at the present time. Several mechanisms degrade fuel cell performance over time, and the study of these degradation modes and potential mitigation actions is critical to ensure the durability of the fuel cell and its long-term stability. In this work, localized degradation of a solid oxide fuel cell is modeled in real time and its effects on various cell parameters are analyzed. Profile distributions of overpotential, temperature, heat generation, and temperature gradients in the stack are investigated during degradation. Several failures could occur in the fuel cell if no proper control actions are applied. A local analysis of critical parameters shows where the issues arise and how they could be mitigated in order to extend the life of the cell.

  1. Research on human physiological parameters intelligent clothing based on distributed Fiber Bragg Grating

    NASA Astrophysics Data System (ADS)

    Miao, Changyun; Shi, Boya; Li, Hongqiang

    2008-12-01

    Intelligent clothing for monitoring human physiological parameters is investigated using FBG sensor technology. In this paper, the principles and methods of measuring human physiological parameters, including body temperature and heart rate, in intelligent clothing with distributed FBGs are studied, and mathematical models of the physiological parameter measurements are built. The processing method for body temperature and heart rate detection signals is presented; a human physiological parameters detection module is designed, interference signals are filtered out, and the measurement accuracy is improved; the integration of the intelligent clothing is given. The intelligent clothing can implement real-time measurement, processing, storage, and output of body temperature and heart rate. It offers accurate measurement, portability, low cost, real-time monitoring, and other advantages. The intelligent clothing enables non-contact monitoring between doctors and patients, helps detect diseases such as cancer and infectious diseases in a timely manner, and allows patients to receive prompt treatment. It has great significance and value for safeguarding the health of the elderly and of children with language dysfunction.

  2. ControlShell: A real-time software framework

    NASA Technical Reports Server (NTRS)

    Schneider, Stanley A.; Chen, Vincent W.; Pardo-Castellote, Gerardo

    1994-01-01

    The ControlShell system is a programming environment that enables the development and implementation of complex real-time software. It includes many building tools for complex systems, such as a graphical finite state machine (FSM) tool to provide strategic control. ControlShell has a component-based design, providing interface definitions and mechanisms for building real-time code modules along with providing basic data management. Some of the system-building tools incorporated in ControlShell are a graphical data flow editor, a component data requirement editor, and a state-machine editor. It also includes a distributed data flow package, an execution configuration manager, a matrix package, and an object database and dynamic binding facility. This paper presents an overview of ControlShell's architecture and examines the functions of several of its tools.

  3. EOS: A project to investigate the design and construction of real-time distributed Embedded Operating Systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Essick, Ray B.; Johnston, Gary; Kenny, Kevin; Russo, Vince

    1987-01-01

    Project EOS is studying the problems of building adaptable real-time embedded operating systems for the scientific missions of NASA. Choices (A Class Hierarchical Open Interface for Custom Embedded Systems) is an operating system designed and built by Project EOS to address the following specific issues: the software architecture for adaptable embedded parallel operating systems, the achievement of high-performance and real-time operation, the simplification of interprocess communications, the isolation of operating system mechanisms from one another, and the separation of mechanisms from policy decisions. Choices is written in C++ and runs on a ten processor Encore Multimax. The system is intended for use in constructing specialized computer applications and research on advanced operating system features including fault tolerance and parallelism.

  4. Noninvasive and Real-Time Plasmon Waveguide Resonance Thermometry

    PubMed Central

    Zhang, Pengfei; Liu, Le; He, Yonghong; Zhou, Yanfei; Ji, Yanhong; Ma, Hui

    2015-01-01

    In this paper, noninvasive and real-time plasmon waveguide resonance (PWR) thermometry is reported theoretically and demonstrated experimentally. Owing to the enhanced evanescent field and the thermal shield effect of its dielectric layer, a PWR thermometer permits accurate temperature sensing and has a wide dynamic range. A temperature measurement sensitivity of 9.4 × 10⁻³ °C is achieved, and the thermo-optic coefficient nonlinearity is measured in the experiment. The measurement of water cooling processes distributed in one dimension reveals that a PWR thermometer allows real-time temperature sensing and has the potential to be applied to thermal gradient analysis. Apart from this, the PWR thermometer has the advantages of low cost and simple structure, since our transduction scheme can be constructed with conventional optical components and commercial coating techniques. PMID:25871718

  5. Real-time passive acoustic detection of marine mammals from a variety of autonomous platforms

    NASA Astrophysics Data System (ADS)

    Baumgartner, M.; Van Parijs, S. M.; Hotchkin, C. F.; Gurnee, J.; Stafford, K.; Winsor, P.; Davies, K. T. A.; Taggart, C. T.

    2016-02-01

    Over the past two decades, passive acoustic monitoring has proven to be an effective means of estimating the occurrence of marine mammals. The vast majority of applications involve archival recordings from bottom-mounted instruments or towed hydrophones from moving ships; however, there is growing interest in assessing marine mammal occurrence from autonomous platforms, particularly in real time. The Woods Hole Oceanographic Institution has developed the capability to detect, classify, and remotely report in near real time the calls of marine mammals via passive acoustics from a variety of autonomous platforms, including Slocum gliders, wave gliders, and moored buoys. The mobile Slocum glider can simultaneously measure marine mammal occurrence and oceanographic conditions throughout the water column, making it well suited for studying both marine mammal distribution and habitat. Wave gliders and moored buoys provide complementary observations over much larger spatial scales and longer temporal scales, respectively. The near real-time reporting capability of these platforms enables follow-up visual observations, on-water research, or responsive management action. We have recently begun to use this technology to regularly monitor baleen whales off the coast of New England, USA and Nova Scotia, Canada, as well as baleen whales, beluga whales, and bearded seals in the Chukchi Sea off the northwest coast of Alaska, USA. Our long-range goal is to monitor occurrence over wide spatial and temporal extents as part of the regional and global ocean observatory initiatives to improve marine mammal conservation and management and to study changes in marine mammal distribution over multi-annual time scales in response to climate change.

  6. Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2008-07-15

    In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper's capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ≈ 10⁻⁷).

  7. Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol

    NASA Astrophysics Data System (ADS)

    Molotkov, S. N.

    2008-07-01

    In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper's capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ≈ 10⁻⁷).

  8. Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks

    NASA Astrophysics Data System (ADS)

    Karpov, Kirill; Fedotova, Irina; Siemens, Eduard

    2017-07-01

    In this paper we present a measurement study characterizing the impact of hardware virtualization on basic software timing, as well as on precise sleep operations of an operating system. We investigated how timer hardware is shared among heavily CPU-, I/O- and network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM were chosen as commonly used examples of hypervisor- and host-based models. Based on statistical parameters of the retrieved distributions, our results provide a good characterization of timing behavior, which is essential for real-time and performance-critical applications such as image processing or real-time control.
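
    Timing behavior of the kind studied above can be sampled with a small script that repeatedly issues a nominal sleep and records the oversleep; running it on the host and inside a guest gives distributions to compare. The sketch below is such an illustration; the nominal period and sample count are arbitrary assumptions, and no hypervisor-specific API is used.

    # Sketch of a sleep-jitter measurement: distribution of oversleep for a 1 ms sleep.
    import time
    import statistics

    def sleep_jitter(nominal_s=0.001, samples=2000):
        """Return observed oversleep (actual - nominal) in microseconds."""
        errors = []
        for _ in range(samples):
            t0 = time.perf_counter()
            time.sleep(nominal_s)
            errors.append((time.perf_counter() - t0 - nominal_s) * 1e6)
        return errors

    if __name__ == "__main__":
        err = sleep_jitter()
        err.sort()
        print(f"median oversleep: {statistics.median(err):8.1f} us")
        print(f"95th percentile : {err[int(0.95 * len(err))]:8.1f} us")
        print(f"max oversleep   : {err[-1]:8.1f} us")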

  9. Autonomous watersheds: Reducing flooding and stream erosion through real-time control

    NASA Astrophysics Data System (ADS)

    Kerkez, B.; Wong, B. P.

    2017-12-01

    We introduce an analytical toolchain, based on dynamical system theory and feedback control, to determine how many control points (valves, gates, pumps, etc.) are needed to transform urban watersheds from static to adaptive. Advances in distributed sensing and control stand to fundamentally change how we manage urban watersheds. In lieu of new and costly infrastructure, the real-time control of stormwater systems will reduce flooding, mitigate stream erosion, and improve the treatment of polluted runoff. We discuss how open-source technologies, in the form of wireless sensor nodes and remotely controllable valves (open-storm.org), have been deployed to build "smart" stormwater systems in the Midwestern US. Unlike "static" infrastructure, which cannot readily adapt to changing inputs and land uses, these distributed control assets allow entire watersheds to be reconfigured on a storm-by-storm basis. Our results show how the control of even just a few valves within urban catchments (1-10 km²) allows for the real-time "shaping" of hydrographs, which reduces downstream erosion and flooding. We also introduce an equivalence framework that decision-makers can use to objectively compare investments in "smart" systems against more traditional solutions, such as gray and green stormwater infrastructure.

  10. Substation Reactive Power Regulation Strategy

    NASA Astrophysics Data System (ADS)

    Zhang, Junfeng; Zhang, Chunwang; Ma, Daqing

    2018-01-01

    With the increasing requirements on the power supply quality and reliability of distribution networks, voltage and reactive power regulation of substations has become one of the indispensable ways to ensure voltage quality and reactive power balance and to improve the economy and reliability of the distribution network. A general concern of power system workers and operators is therefore what kind of flexible and effective control method should be used to adjust the on-load tap-changer (OLTC) transformer and shunt compensation capacitors in a substation so as to achieve local reactive power balance, improve the voltage pass rate, increase the power factor, and reduce active power loss. In this paper, based on the traditional nine-zone diagram and the characteristics of the substation, a fuzzy variable-center nine-zone diagram control method is proposed and used for comprehensive regulation of substation voltage and reactive power. Through the calculation and simulation of an example, this method is shown to satisfactorily reconcile the conflict between reactive power and voltage in real-time control and to achieve the basic goal of real-time substation control, providing a reference for the practical application of substation real-time control methods.

  11. Online decoding of object-based attention using real-time fMRI.

    PubMed

    Niazi, Adnan M; van den Broek, Philip L C; Klanke, Stefan; Barth, Markus; Poel, Mannes; Desain, Peter; van Gerven, Marcel A J

    2014-01-01

    Visual attention is used to selectively filter relevant information depending on current task demands and goals. Visual attention is called object-based attention when it is directed to coherent forms or objects in the visual field. This study used real-time functional magnetic resonance imaging for moment-to-moment decoding of attention to spatially overlapped objects belonging to two different object categories. First, a whole-brain classifier was trained on pictures of faces and places. Subjects then saw transparently overlapped pictures of a face and a place, and attended to only one of them while ignoring the other. The category of the attended object, face or place, was decoded on a scan-by-scan basis using the previously trained decoder. The decoder performed at 77.6% accuracy indicating that despite competing bottom-up sensory input, object-based visual attention biased neural patterns towards that of the attended object. Furthermore, a comparison between different classification approaches indicated that the representation of faces and places is distributed rather than focal. This implies that real-time decoding of object-based attention requires a multivariate decoding approach that can detect these distributed patterns of cortical activity.

  12. Load sharing in distributed real-time systems with state-change broadcasts

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Chang, Yi-Chieh

    1989-01-01

    A decentralized dynamic load-sharing (LS) method based on state-change broadcasts is proposed for a distributed real-time system. Whenever the state of a node changes from underloaded to fully loaded and vice versa, the node broadcasts this change to a set of nodes, called a buddy set, in the system. The performance of the method is evaluated with both analytic modeling and simulation. It is modeled first by an embedded Markov chain for which numerical solutions are derived. The model solutions are then used to calculate the distribution of queue lengths at the nodes and the probability of meeting task deadlines. The analytical results show that buddy sets of 10 nodes outperform those of less than 10 nodes, and the incremental benefit gained from increasing the buddy set size beyond 15 nodes is insignificant. These and other analytical results are verified by simulation. The proposed LS method is shown to meet task deadlines with a very high probability.
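
    The state-change broadcast mechanism described above can be illustrated with a toy simulation: each node notifies its buddy set when it crosses a load threshold, and an overloaded node transfers newly arriving tasks only to buddies last reported underloaded. The sketch below is such a toy, with symmetric ring-shaped buddy sets and an arbitrary threshold; it is not the authors' analytic model or simulator.

    # Toy simulation of state-change broadcast load sharing among buddy sets.
    import random

    N_NODES, BUDDY_SIZE, THRESHOLD = 16, 10, 3

    class Node:
        def __init__(self, i):
            self.i, self.queue = i, 0
            self.buddies = [(i + k) % N_NODES
                            for k in range(-(BUDDY_SIZE // 2), BUDDY_SIZE // 2 + 1)
                            if k != 0]
            self.known_underloaded = set(self.buddies)   # buddies' last broadcast state

    def broadcast(nodes, node):
        """Notify the buddy set whenever this node crosses the load threshold."""
        for b in node.buddies:
            if node.queue >= THRESHOLD:
                nodes[b].known_underloaded.discard(node.i)
            else:
                nodes[b].known_underloaded.add(node.i)

    def arrive(nodes, node, rng):
        """Enqueue a new task locally, or transfer it to a buddy believed underloaded."""
        if node.queue >= THRESHOLD and node.known_underloaded:
            target = nodes[rng.choice(sorted(node.known_underloaded))]
            target.queue += 1
            broadcast(nodes, target)
        else:
            node.queue += 1
            broadcast(nodes, node)

    if __name__ == "__main__":
        rng = random.Random(6)
        nodes = [Node(i) for i in range(N_NODES)]
        for _ in range(400):
            arrive(nodes, nodes[rng.randrange(N_NODES)], rng)   # random task arrival
            busy = rng.randrange(N_NODES)                       # one random service completion
            nodes[busy].queue = max(0, nodes[busy].queue - 1)
            broadcast(nodes, nodes[busy])
        print("queue lengths:", [n.queue for n in nodes])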

  13. Detection of distribution of avian influenza H5N1 virus by immunohistochemistry, chromogenic in situ hybridization and real-time PCR techniques in experimentally infected chickens.

    PubMed

    Chamnanpood, Chanpen; Sanguansermsri, Donruedee; Pongcharoen, Sutatip; Sanguansermsri, Phanchana

    2011-03-01

    Ten specific pathogen free (SPF) chickens were inoculated intranasally with avian influenza virus subtype H5N1. Evaluation revealed distribution of the virus in twelve organs: liver, intestine, bursa, lung, trachea, thymus, heart, pancreas, brain, spleen, kidney, and esophagus. Immunohistochemistry (IHC), chromogenic in situ hybridization (CISH), and real-time polymerase chain reaction (PCR) were developed and compared for detection of the virus in these organs. The distribution of avian influenza H5N1 in chickens varied by animal and by detection technique. The heart, kidneys, intestines, lungs, and pancreas were positive with all three techniques, while the other organs varied by technique. All three techniques can be used to detect avian influenza effectively, but the pros and cons of each need to be weighed. The choice of technique depends on the objective of the examination, budget, type and quality of samples, laboratory facilities, and technician skills.

  14. Fermilab Muon Campus g-2 Cryogenic Distribution Remote Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pei, L.; Theilacker, J.; Klebaner, A.

    2015-11-05

    The Muon Campus (MC) is able to measure the muon g-2 with high precision and compare its value to the theoretical prediction. The MC has four 300 kW screw compressors and four liquid helium refrigerators. The centerpiece of the Muon g-2 experiment at Fermilab is a large, 50-foot-diameter superconducting muon storage ring. This one-of-a-kind ring, made of steel, aluminum, and superconducting wire, was built for the previous g-2 experiment at Brookhaven. Because each subsystem must be located far from the others at a distant site, the Siemens Process Control System PCS7-400, Automation Direct DL205 and DL05 PLCs, Synoptic, and the Fermilab ACNET HMI are the ideal choices for the MC g-2 cryogenic distribution real-time and online remote control system. This paper presents a method that has been used successfully by many Fermilab cryogenic distribution real-time and online remote control systems.

  15. Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.

    2013-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) is valuable because it estimates the spread, or uncertainty, in CME arrival-time predictions that arises from uncertainties in determining the CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL cone model available at the Community Coordinated Modeling Center (CCMC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits) and of geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real time. The observed CME arrival was within the range of ensemble arrival-time predictions for 5 out of the 12 ensemble runs containing hits. The average arrival-time prediction was computed for each of the twelve ensembles predicting hits; compared with the actual arrival times, the average absolute error across these ensembles was 8.20 hours, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival-time predictions include the initial distribution of CME input parameters, particularly its mean and spread. When the observed arrival falls outside the predicted range, prediction errors caused by the tested CME input parameters can still be ruled out. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and from other limitations. Additionally, the ensemble modeling setup was used to complete a parametric case study of the sensitivity of the CME arrival-time prediction to the free parameters of the ambient solar wind model and of the CME.
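
    A minimal sketch of how such an ensemble might be summarized, assuming a list of predicted arrival times: it reports the spread, whether the observed arrival falls inside the predicted range, and the absolute error of the mean prediction. All times below are invented for illustration.

      # Summarizing a hypothetical ensemble of predicted CME arrival times.
      from datetime import datetime, timedelta
      from statistics import mean

      base = datetime(2013, 3, 17, 5, 0)
      # Hypothetical ensemble of predicted arrival times (hours offset from base).
      predicted = [base + timedelta(hours=h) for h in (-4.0, -1.5, 0.0, 2.0, 5.5)]
      observed = base + timedelta(hours=1.0)

      earliest, latest = min(predicted), max(predicted)
      hit_within_range = earliest <= observed <= latest
      mean_prediction = base + timedelta(
          seconds=mean((p - base).total_seconds() for p in predicted))
      abs_error_hours = abs((mean_prediction - observed).total_seconds()) / 3600.0

      print("ensemble spread (hours):", (latest - earliest).total_seconds() / 3600.0)
      print("observed within predicted range:", hit_within_range)
      print("absolute error of mean prediction (hours):", round(abs_error_hours, 2))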

  16. Automated Collection of Real-Time Alerts of Citizens as a Useful Tool to Continuously Monitor Malodorous Emissions.

    PubMed

    Brattoli, Magda; Mazzone, Antonio; Giua, Roberto; Assennato, Giorgio; de Gennaro, Gianluigi

    2016-02-26

    The evaluation of odor emissions and their dispersion is a difficult task: real-time monitoring of odor emissions, identification of their chemical components, and attribution of the source of annoyance with adequate certainty represent a challenge for stakeholders such as local authorities. Citizen complaints, which are often unsystematic and unevenly distributed, generally do not allow the perceived annoyance to be quantified. Experimental research has been performed to detect and evaluate olfactory annoyance, based on field testing of an innovative monitoring methodology grounded in the automatic recording of citizen alerts. It was applied in Taranto, in the south of Italy, where a large industrial area is located, using Odortel(®) for the automated collection of citizen alerts. To evaluate its reliability, the collection system was integrated with automated samplers able to sample odorous air in real time in response to citizen alerts of annoyance, and with meteorological data (especially wind direction) and trends in odor marker compounds recorded by air quality monitoring stations. The results have allowed us, for the first time, to manage annoyance complaints, test their reliability, and obtain information about the distribution and extent of the odor phenomena, such that we were able to identify, with supporting evidence, the source as an oil refinery plant.
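
    The cross-check between alerts and wind direction that this methodology relies on can be illustrated with a toy calculation: for each alert, test whether the suspected source lies roughly upwind of the complainant at the alert time. All bearings, records, and the tolerance below are invented assumptions.

      # Illustrative sketch only: checking whether a suspected source was upwind
      # of the complainant when each alert was recorded.

      def is_upwind(source_bearing_deg, wind_from_deg, tolerance_deg=30.0):
          # wind_from_deg: direction the wind is blowing FROM at the station.
          # The suspected source supports the alert if it lies roughly in that direction.
          diff = abs((source_bearing_deg - wind_from_deg + 180.0) % 360.0 - 180.0)
          return diff <= tolerance_deg

      alerts = [
          {"time": "2015-06-01T22:40", "wind_from_deg": 310.0},  # invented alert records
          {"time": "2015-06-02T03:15", "wind_from_deg": 120.0},
      ]
      SOURCE_BEARING_DEG = 300.0   # assumed bearing from the receptor to the suspected plant

      for a in alerts:
          print(a["time"], "consistent with suspected source:",
                is_upwind(SOURCE_BEARING_DEG, a["wind_from_deg"]))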

  17. BIO-Plex Information System Concept

    NASA Technical Reports Server (NTRS)

    Jones, Harry; Boulanger, Richard; Arnold, James O. (Technical Monitor)

    1999-01-01

    This paper describes a suggested design for an integrated information system for the proposed BIO-Plex (Bioregenerative Planetary Life Support Systems Test Complex) at Johnson Space Center (JSC), including distributed control systems, central control, networks, database servers, personal computers and workstations, applications software, and external communications. The system will have an open, commercial computing and networking architecture. The network will provide automatic real-time transfer of information to database server computers that perform data collection and validation. This information system will support integrated, data-sharing applications for everything from system alarms to management summaries. Most existing complex process control systems have information gaps between the different real-time subsystems, between these subsystems and the central controller, between the central controller and system-level planning and analysis application software, and between the system-level applications and management overview reporting. An integrated information system is vitally necessary as the basis for the integration of planning, scheduling, modeling, monitoring, and control, which will allow improved monitoring and control based on timely, accurate, and complete data. Data describing the system configuration and the real-time processes can be collected, checked and reconciled, analyzed, and stored in database servers that can be accessed by all applications. The required technology is available. The only opportunity to design a distributed, nonredundant, integrated system is before it is built; retrofit is extremely difficult and costly.
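
    The collect-validate-store path described above might look roughly like the following sketch, in which subsystem readings are range-checked and written to a shared store that any application can query. The table layout, sensor names, and limits are assumptions made for illustration, not the BIO-Plex design.

      # Minimal collect/validate/store sketch using an in-memory SQLite database.
      import sqlite3

      LIMITS = {"cabin_co2_ppm": (0, 10000), "tank_level_pct": (0, 100)}  # assumed ranges

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE telemetry (ts REAL, sensor TEXT, value REAL, valid INTEGER)")

      def collect(ts, sensor, value):
          lo, hi = LIMITS.get(sensor, (float("-inf"), float("inf")))
          valid = int(lo <= value <= hi)   # simple validation step before storage
          conn.execute("INSERT INTO telemetry VALUES (?, ?, ?, ?)", (ts, sensor, value, valid))

      collect(0.0, "cabin_co2_ppm", 450.0)
      collect(1.0, "tank_level_pct", 132.0)    # out of range: flagged, not silently dropped
      print(conn.execute("SELECT * FROM telemetry WHERE valid = 0").fetchall())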

  18. A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks

    NASA Technical Reports Server (NTRS)

    Cui, Zhenqian

    1999-01-01

    With the development of high-speed networking technology, computer networks, including local-area networks (LANs), wide-area networks (WANs), and the Internet, are extending their traditional role of carrying computer data. They are being used for Internet telephony, multimedia applications such as conferencing and video on demand, distributed simulations, and other real-time applications. LANs are even used for distributed real-time process control and computing as a cost-effective approach. Unlike traditional data transfer, these new classes of high-speed network applications (video, audio, real-time process control, and others) are delay sensitive. The usefulness of data depends not only on the correctness of the received data, but also on the time at which the data are received. In other words, these new classes of applications require networks to provide guaranteed services, or quality of service (QoS). Quality of service can be defined by a set of parameters and reflects a user's expectation about the underlying network's behavior. Traditionally, distinct services have been provided by different kinds of networks: voice services by telephone networks, video services by cable networks, and data transfer services by computer networks. A single network providing different services is called an integrated-services network.
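
    To make the notion of a QoS parameter set concrete, the sketch below expresses a flow's requirements as a small record and applies a naive admission check. The field names, values, and the check itself are illustrative assumptions rather than anything defined in the study.

      # A QoS requirement as a parameter set, with a toy admission check.
      from dataclasses import dataclass

      @dataclass
      class QoSRequest:
          bandwidth_kbps: float     # sustained rate the flow needs
          max_delay_ms: float       # end-to-end delay bound
          max_jitter_ms: float      # delay variation bound
          max_loss_rate: float      # tolerable packet loss fraction

      def admit(req, link_free_kbps, link_delay_ms):
          # Admit the flow only if the link can still carry it within its delay bound.
          return req.bandwidth_kbps <= link_free_kbps and link_delay_ms <= req.max_delay_ms

      voice = QoSRequest(bandwidth_kbps=64, max_delay_ms=150, max_jitter_ms=30, max_loss_rate=0.01)
      print(admit(voice, link_free_kbps=1000, link_delay_ms=40))   # True: flow can be admitted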

  19. Southern California Seismic Network: New Design and Implementation of Redundant and Reliable Real-time Data Acquisition Systems

    NASA Astrophysics Data System (ADS)

    Saleh, T.; Rico, H.; Solanki, K.; Hauksson, E.; Friberg, P.

    2005-12-01

    The Southern California Seismic Network (SCSN) handles more than 2500 high-data-rate channels from more than 380 seismic stations distributed across southern California. These data are imported in real time from dataloggers, earthworm hubs, and partner networks. The SCSN also exports data to eight different partner networks. Both the imported and exported data are critical for emergency response and scientific research. Previous data acquisition systems were complex and difficult to operate because they grew in an ad hoc fashion to meet the increasing needs for distributing real-time waveform data. To maximize reliability and redundancy, we apply best-practice methods from computer science in implementing the software and hardware configurations for the import, export, and acquisition of real-time seismic data. Our approach makes use of failover software designs, methods for dividing labor diligently among the network nodes, and state-of-the-art networking redundancy technologies. To facilitate maintenance and daily operations, we seek to provide some separation between major functions such as data import, export, acquisition, archiving, real-time processing, and alarming. As an example, we make waveform import and export functions independent by operating them on separate servers. Similarly, two independent servers provide waveform export, allowing data recipients to implement their own redundancy. Data import is handled differently, using one primary server and a live backup server. These data import servers run fail-over software that allows automatic role switching from primary to shadow in case of failure. Similar to the classic earthworm design, all the acquired waveform data are broadcast onto a private network, which allows multiple machines to acquire and process the data. As we separate data import and export from acquisition, we are also working on new approaches to separate real-time processing from rapid, reliable archiving of real-time data. Further, improved network security is an integral part of the new design: redundant firewalls will provide secure data imports, exports, and acquisition, as well as DMZ zones for web servers and other publicly available servers. We will present the detailed design of this new configuration, which is currently being implemented by the SCSN at Caltech. The design principles are general enough to be of use to most regional seismic networks.
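
    A primary/shadow arrangement with automatic role switching, as described above, can be sketched with a simple heartbeat timeout. The timing value, class structure, and names are illustrative assumptions, not the SCSN's actual fail-over software.

      # Minimal sketch of primary/shadow role switching driven by missed heartbeats.
      import time

      HEARTBEAT_TIMEOUT_S = 3.0    # assumed: shadow takes over after this much silence

      class ImportServer:
          def __init__(self, role):
              self.role = role                     # "primary" or "shadow"
              self.last_heartbeat = time.monotonic()

          def heartbeat_received(self):
              self.last_heartbeat = time.monotonic()

          def check_peer(self):
              # Called periodically on the shadow: promote itself if the primary is silent.
              silent_for = time.monotonic() - self.last_heartbeat
              if self.role == "shadow" and silent_for > HEARTBEAT_TIMEOUT_S:
                  self.role = "primary"
              return self.role

      shadow = ImportServer("shadow")
      shadow.heartbeat_received()     # primary is alive
      print(shadow.check_peer())      # still "shadow"
      shadow.last_heartbeat -= 10     # simulate a silent primary
      print(shadow.check_peer())      # shadow promotes itself and reports "primary"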

  20. Feasibility of real-time echocardiographic evaluation during patient transport.

    PubMed

    Garrett, Paul D; Boyd, Sheri Y N; Bauch, Terry D; Rubal, Bernard J; Bulgrin, James R; Kinkler, E Sterling

    2003-03-01

    Echocardiography is a key diagnostic tool in evaluating patients with cardiac emergencies and chest trauma. The lack of qualified real-time interpretation limits its use by emergency first responders. Early diagnosis of cardiac emergencies has the potential to facilitate triage and medical intervention and to improve outcomes. We investigated the feasibility of remote, real-time interpretation of echocardiograms during patient transport. Echocardiograms acquired with a hand-carried ultrasound device were transmitted from an ambulance in transit to a tertiary care facility using a distributed mobile local area network. Transmitted studies were reviewed by a cardiologist for the ability to interpret predefined features. Transmission quality and reliability were assessed. Echocardiographic images were successfully transmitted for more than 88% of the transport time. Evaluations of left-ventricular size and function and of the presence of pericardial effusion were greater than 90% concordant, but only 66% of all echocardiographic features were concordant. Most transmission losses were brief (
