Sample records for dynamic coprocessor management

  1. Optimizing legacy molecular dynamics software with directive-based offload

    NASA Astrophysics Data System (ADS)

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.

    2015-10-01

    Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel® Xeon Phi™ coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS.
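
    The concurrent CPU/coprocessor calculation described above can be illustrated with Intel's (now retired) Language Extensions for Offload pragmas, which codes of this generation commonly used. The sketch below is ours, not LAMMPS source: the array sizes, the compute_forces() kernel, and the 50/50 work split are illustrative assumptions. The point is that one subroutine is compiled for both host and coprocessor, and the two halves execute concurrently.

    ```cpp
    // Hedged sketch of asynchronous offload with concurrent host work,
    // in the spirit of the approach above; all names are illustrative.
    #include <cstdio>

    const int N = 100000;
    // Data referenced on the coprocessor side carries the target attribute.
    __attribute__((target(mic))) float x[N], f[N];

    // The same subroutine is compiled for host and MIC.
    __attribute__((target(mic)))
    void compute_forces(const float* xs, float* fs, int begin, int end) {
        for (int i = begin; i < end; ++i)
            fs[i] = -2.0f * xs[i];          // placeholder pair-style kernel
    }

    int main() {
        for (int i = 0; i < N; ++i) { x[i] = i * 1e-4f; f[i] = 0.0f; }
        const int split = N / 2;            // offloaded fraction, tuned in practice
        char done;                          // completion tag for the async offload

        // Launch the first half asynchronously on coprocessor 0 ...
        #pragma offload target(mic:0) in(x[0:split]) out(f[0:split]) signal(&done)
        compute_forces(x, f, 0, split);

        // ... while the host concurrently computes the second half.
        compute_forces(x, f, split, N);

        #pragma offload_wait target(mic:0) wait(&done)
        std::printf("f[0]=%g f[N-1]=%g\n", f[0], f[N - 1]);
    }
    ```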

  2. Optimizing legacy molecular dynamics software with directive-based offload

    DOE PAGES

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; ...

    2015-05-14

    Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In our paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We also demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel® Xeon Phi™ coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS. © 2015 Elsevier B.V. All rights reserved.

  3. Unobtrusive Software and System Health Management with R2U2 on a Parallel MIMD Coprocessor

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Moosbrugger, Patrick

    2017-01-01

    Dynamic monitoring of software and system health of a complex cyber-physical system requires observers that continuously monitor variables of the embedded software in order to detect anomalies and reason about root causes. There exists a variety of techniques for code instrumentation, but instrumentation might change runtime behavior and could require costly software re-certification. In this paper, we present R2U2E, a novel realization of our real-time, Realizable, Responsive, and Unobtrusive Unit (R2U2). The R2U2E observers are executed in parallel on a dedicated 16-core EPIPHANY co-processor, thereby avoiding additional computational overhead to the system under observation. A DMA-based shared memory access architecture allows R2U2E to operate without any code instrumentation or program interference.

  4. SHAMROCK: A Synthesizable High Assurance Cryptography and Key Management Coprocessor

    DTIC Science & Technology

    2016-11-01

    and excluding devices from a communicating group as they become trusted or untrusted. An example of using rekeying to dynamically adjust group...algorithms, such as the Elliptic Curve Digital Signature Algorithm (ECDSA), work by computing a cryptographic hash of a message using, for example, the...material is based upon work supported by the Assistant Secretary of Defense for Research and Engineering under Air Force Contract No. FA8721-05-C

  5. Extension of the AMBER molecular dynamics software to Intel's Many Integrated Core (MIC) architecture

    NASA Astrophysics Data System (ADS)

    Needham, Perri J.; Bhuiyan, Ashraf; Walker, Ross C.

    2016-04-01

    We present an implementation of explicit solvent particle mesh Ewald (PME) classical molecular dynamics (MD) within the PMEMD molecular dynamics engine, which forms part of the AMBER v14 MD software package and makes use of Intel Xeon Phi coprocessors by offloading portions of the PME direct summation and neighbor list build to the coprocessor. We refer to this implementation as pmemd MIC offload, and in this paper we present the technical details of the algorithm, including basic models for MPI and OpenMP configuration, and analyze the resultant performance. The algorithm provides the best performance improvement for large systems (>400,000 atoms), achieving a ∼35% performance improvement for satellite tobacco mosaic virus (1,067,095 atoms) when 2 Intel E5-2697 v2 processors (2 × 12 cores, 30M cache, 2.7 GHz) are coupled to an Intel Xeon Phi coprocessor (Model 7120P, 1.238/1.333 GHz, 61 cores). The implementation utilizes a two-fold decomposition strategy: spatial decomposition using an MPI library and thread-based decomposition using OpenMP. We also present compiler optimization settings that improve performance on Intel Xeon processors while retaining simulation accuracy.
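
    A minimal sketch of the two-fold decomposition named above, assuming nothing about the pmemd internals: spatial decomposition is represented by giving each MPI rank a slab of "atoms", and OpenMP threads split a placeholder direct-sum loop inside the rank. All names and the toy energy expression are ours.

    ```cpp
    #include <mpi.h>
    #include <vector>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int n_total = 1 << 20;               // total "atoms"
        const int n_local = n_total / size;        // this rank's spatial slab
        std::vector<double> r(n_local), e(n_local);
        for (int i = 0; i < n_local; ++i) r[i] = 0.001 * (rank * n_local + i);

        double local_sum = 0.0;
        // Thread-level decomposition of the work inside the rank; in the
        // real code, loops like this (and the neighbor-list build) are the
        // parts offloaded to the Xeon Phi.
        #pragma omp parallel for reduction(+:local_sum) schedule(static)
        for (int i = 0; i < n_local; ++i) {
            e[i] = 1.0 / (1.0 + r[i] * r[i]);      // placeholder pair energy
            local_sum += e[i];
        }

        double total = 0.0;
        MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) std::printf("energy = %.6f\n", total);
        MPI_Finalize();
    }
    ```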

  6. Vectorization for Molecular Dynamics on Intel Xeon Phi Coprocessors

    NASA Astrophysics Data System (ADS)

    Yi, Hongsuk

    2014-03-01

    Many modern processors are capable of exploiting data-level parallelism through the use of single instruction multiple data (SIMD) execution. The new Intel Xeon Phi coprocessor supports 512-bit vector registers for high performance computing. In this paper, we have developed a hierarchical parallelization scheme for accelerated molecular dynamics simulations with the Tersoff potential for covalently bonded solid crystals on Intel Xeon Phi coprocessor systems. The scheme exploits multiple levels of parallelism, combining tightly coupled thread-level and task-level parallelism with the 512-bit vector registers. The simulation results show that the parallel performance of the SIMD implementation on the Xeon Phi is markedly superior to that of the x86 CPU architecture.
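
    The role of the 512-bit vector registers can be made concrete with an OpenMP SIMD directive over a force-style loop. This is a generic illustration, not the paper's Tersoff kernel; the alignment choice, array names, and placeholder force expression are assumptions.

    ```cpp
    #include <cstdio>

    int main() {
        const int N = 4096;
        alignas(64) static float x[N], f[N];   // 64-byte alignment suits 512-bit loads
        for (int i = 0; i < N; ++i) { x[i] = 0.01f * i; f[i] = 0.0f; }

        // omp simd asks the compiler for one vector lane per iteration;
        // aligned() permits full-width aligned loads and stores.
        #pragma omp simd aligned(x, f : 64)
        for (int i = 0; i < N; ++i) {
            float r2 = x[i] * x[i] + 1.0f;
            f[i] = x[i] / (r2 * r2);           // placeholder force expression
        }
        std::printf("f[1] = %g\n", f[1]);
    }
    ```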

  7. Low-power cryptographic coprocessor for autonomous wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Olszyna, Jakub; Winiecki, Wiesław

    2013-10-01

    The concept of autonomous wireless sensor networks involves energy harvesting as well as effective management of system resources. Public-key cryptography (PKC) offers the advantage of elegant key agreement schemes with which a secret key can be securely established over insecure channels. In addition to solving the key management problem, the other major application of PKC is digital signatures, with which non-repudiation of message exchanges can be achieved. The motivation for studying low-power and area-efficient modular arithmetic algorithms comes from enabling public-key security for low-power devices operating in constrained environments such as autonomous wireless sensor networks. This paper presents a cryptographic coprocessor tailored to the constraints of autonomous wireless sensor networks. Such a hardware circuit is intended to support the implementation of different public-key cryptosystems based on modular arithmetic in GF(p) and GF(2^m). Key components of the coprocessor are described as GEZEL models and can be easily transformed to VHDL and implemented in hardware.
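
    As a rough illustration of the GF(p) arithmetic such a coprocessor supports, the sketch below shows textbook square-and-multiply modular exponentiation, the primitive underneath Diffie-Hellman-style key agreement. A hardware datapath would normally use Montgomery multiplication rather than the % operator; the prime and exponent are arbitrary.

    ```cpp
    #include <cstdint>
    #include <cstdio>

    // 64-bit modular multiply via a 128-bit intermediate (GCC/Clang extension).
    uint64_t mulmod(uint64_t a, uint64_t b, uint64_t p) {
        return (uint64_t)((__uint128_t)a * b % p);
    }

    // Right-to-left square-and-multiply: computes g^e mod p.
    uint64_t powmod(uint64_t g, uint64_t e, uint64_t p) {
        uint64_t acc = 1;
        g %= p;
        while (e) {
            if (e & 1) acc = mulmod(acc, g, p);
            g = mulmod(g, g, p);
            e >>= 1;
        }
        return acc;
    }

    int main() {
        const uint64_t p = 0xFFFFFFFFFFFFFFC5ull;  // 2^64 - 59, a prime
        uint64_t y = powmod(2, 0x123456789ull, p);
        std::printf("2^e mod p = %llu\n", (unsigned long long)y);
    }
    ```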

  8. Chapter 13. Exploring Use of the Reserved Core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holmen, John; Humphrey, Alan; Berzins, Martin

    2015-07-29

    In this chapter, we illustrate the benefits of thinking in terms of thread management techniques when using a centralized scheduler model along with interoperability of MPI and PThreads. This is facilitated through an exploration of thread placement strategies for an algorithm modeling radiative heat transfer, with special attention to the 61st core. This algorithm plays a key role within the Uintah Computational Framework (UCF) and current efforts taking place at the University of Utah to model next-generation, large-scale clean coal boilers. In such simulations, this algorithm models the dominant form of heat transfer and consumes a large portion of compute time. Exemplified by a real-world example, this chapter presents our early efforts in porting a key portion of a scalability-centric codebase to the Intel Xeon Phi coprocessor. Specifically, this chapter presents results from our experiments profiling the native execution of a reverse Monte-Carlo ray tracing-based radiation model on a single coprocessor. These results demonstrate that our fastest run configurations utilized the 61st core and that performance was not profoundly impacted when explicitly oversubscribing the core running the coprocessor's operating system thread. Additionally, this chapter presents a portion of the radiation model source code, a MIC-centric UCF cross-compilation example, and a less conventional thread management technique for developers utilizing the PThreads threading model.
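
    Explicit thread placement of the kind explored in this chapter can be expressed on Linux with the PThreads affinity API. The sketch below pins one thread to a chosen logical CPU; the particular id standing in for "the 61st core" is an assumption, since the core-to-logical-CPU mapping on the Xeon Phi is platform-dependent.

    ```cpp
    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>

    void* pinned_task(void*) {
        std::printf("running on logical CPU %d\n", sched_getcpu());
        return nullptr;
    }

    int main() {
        cpu_set_t cpus;
        CPU_ZERO(&cpus);
        CPU_SET(240, &cpus);       // assumed: a hardware thread of the 61st core

        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

        pthread_t t;
        if (pthread_create(&t, &attr, pinned_task, nullptr) != 0)
            std::printf("create failed (CPU 240 may not exist on this host)\n");
        else
            pthread_join(t, nullptr);
        pthread_attr_destroy(&attr);
    }
    ```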

  9. Optimizing meridional advection of the Advanced Research WRF (ARW) dynamics for Intel Xeon Phi coprocessor

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    The most widely used community weather forecast and research model in the world is the Weather Research and Forecast (WRF) model. Two distinct varieties of WRF exist. The one we are interested in, the Advanced Research WRF (ARW), is an experimental, advanced research version featuring very high resolution. The WRF Nonhydrostatic Mesoscale Model (WRF-NMM) has been designed for forecasting operations. WRF consists of dynamics code and several physics modules. The WRF-ARW core is based on an Eulerian solver for the fully compressible nonhydrostatic equations. In this paper, we optimize a meridional (north-south direction) advection subroutine for the Intel Xeon Phi coprocessor. Advection is one of the most time-consuming routines in the ARW dynamics core. It advances the explicit perturbation horizontal momentum equations by adding in the large-timestep tendency along with the small-timestep pressure-gradient tendency. We describe the challenges we met during the development of a high-speed dynamics code subroutine for the MIC architecture, and discuss lessons learned from the code optimization process. The results show that the optimizations improved performance of the original code on the Xeon Phi 7120P by a factor of 1.2x.

  10. Toward a formal verification of a floating-point coprocessor and its composition with a central processing unit

    NASA Technical Reports Server (NTRS)

    Pan, Jing; Levitt, Karl N.; Cohen, Gerald C.

    1991-01-01

    Discussed here is work to formally specify and verify a floating point coprocessor based on the MC68881. The HOL verification system developed at Cambridge University was used. The coprocessor consists of two independent units: the bus interface unit, used to communicate with the CPU, and the arithmetic processing unit, used to perform the actual calculation. Reasoning about the interaction and synchronization among processes using higher order logic is demonstrated.

  11. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor

    NASA Astrophysics Data System (ADS)

    Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.

    2014-03-01

    List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi coprocessor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than for the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with a Xeon Phi co-processor card against that of a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging applications.
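
    The pthreads-to-OpenMP restructuring reported above can be pictured as one parallel loop over list-mode events with per-thread accumulators merged at the end. This shows only the pattern; the event-to-voxel ray projection is replaced by a stand-in hash, and all names are ours.

    ```cpp
    #include <vector>
    #include <cstdio>

    int main() {
        const int n_events = 1 << 20, n_voxels = 1 << 18;
        std::vector<float> image(n_voxels, 0.0f);

        #pragma omp parallel
        {
            std::vector<float> priv(n_voxels, 0.0f);         // per-thread image
            #pragma omp for schedule(dynamic, 4096)
            for (int e = 0; e < n_events; ++e) {
                int v = (int)((e * 2654435761u) % n_voxels); // stand-in for ray tracing
                priv[v] += 1.0f;                             // no write conflicts
            }
            #pragma omp critical                             // merge once per thread
            for (int v = 0; v < n_voxels; ++v) image[v] += priv[v];
        }
        std::printf("image[0] = %g\n", image[0]);
    }
    ```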

  12. Coprocessors for quantum devices

    NASA Astrophysics Data System (ADS)

    Kay, Alastair

    2018-03-01

    Quantum devices, from simple fixed-function tools to the ultimate goal of a universal quantum computer, will require high-quality, frequent repetition of a small set of core operations, such as the preparation of entangled states. These tasks are perfectly suited to realization by a coprocessor or supplementary instruction set, as is common practice in modern CPUs. In this paper, we present two quintessentially quantum coprocessor functions: production of a Greenberger-Horne-Zeilinger state and implementation of optimal universal (asymmetric) quantum cloning. Both are based on the evolution of a fixed Hamiltonian. We introduce a technique for deriving the parameters of these Hamiltonians based on the numerical integration of Toda-like flows.

  13. Acceleration of boundary element method for linear elasticity

    NASA Astrophysics Data System (ADS)

    Zapletal, Jan; Merta, Michal; Čermák, Martin

    2017-07-01

    In this work we describe the accelerated assembly of system matrices for the boundary element method using the Intel Xeon Phi coprocessors. We present a model problem, provide a brief overview of its discretization and acceleration of the system matrices assembly using the coprocessors, and test the accelerated version using a numerical benchmark.

  14. DFT algorithms for bit-serial GaAs array processor architectures

    NASA Technical Reports Server (NTRS)

    Mcmillan, Gary B.

    1988-01-01

    Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.

  15. An object-oriented, coprocessor-accelerated model for ice sheet simulations

    NASA Astrophysics Data System (ADS)

    Seddik, H.; Greve, R.

    2013-12-01

    Recently, numerous models capable of simulating the thermodynamics of ice sheets have been developed within the ice sheet modeling community. Their capabilities span a wide range of features, with different numerical methods (finite difference or finite element), different implementations of the ice flow mechanics (shallow-ice, higher-order, full Stokes) and different treatments of the basal and coastal areas (basal hydrology, basal sliding, ice shelves). Shallow-ice models (SICOPOLIS, IcIES, PISM, etc.) have been widely used for modeling whole ice sheets (Greenland and Antarctica) due to the relatively low computational cost of the shallow-ice approximation, but higher-order (ISSM, AIF) and full Stokes (Elmer/Ice) models have recently been used to model the Greenland ice sheet. The advance in processor speed and the decrease in cost of accessing large amounts of memory and storage have undoubtedly been the driving force in the commoditization of models with higher capabilities, and the popularity of Elmer/Ice (http://elmerice.elmerfem.com) with an active user base is a notable representation of this trend. Elmer/Ice is a full Stokes model built on top of the multi-physics package Elmer (http://www.csc.fi/english/pages/elmer), which provides the full machinery for the complex finite element procedure and is fully parallel (mesh partitioning with OpenMPI communication). Elmer is mainly written in Fortran 90 and essentially targets traditional processors, as the code base was not initially written to run on modern coprocessors (although adding support for the recently introduced x86-based coprocessors is possible). Furthermore, a truly modular and object-oriented implementation is required for quick adaptation to fast-evolving hardware capabilities (Fortran 2003 provides an object-oriented programming model, but it is not clean and would require a tricky refactoring of the Elmer code). In this work, the object-oriented, coprocessor-accelerated finite element code Sainou is introduced. Sainou is an Elmer fork, reimplemented in Objective-C and used for experimenting with ice sheet models running on coprocessors, essentially GPU devices. GPUs are highly parallel processors that provide opportunities for fine-grained parallelization of the full Stokes problem using the standard OpenCL language (http://www.khronos.org/opencl/) to access the device. Sainou is built upon a collection of Objective-C base classes that service a modular kernel (itself a base class) which provides the core methods for solving the finite element problem. An early implementation of Sainou will be presented, with emphasis on the object architecture and the parallelization strategies. The computation of a simple heat conduction problem is used to test the implementation, which also provides experimental support for running the global matrix assembly on the GPU.

  16. Porting ONETEP to graphical processing unit-based coprocessors. 1. FFT box operations.

    PubMed

    Wilkinson, Karl; Skylaris, Chris-Kriton

    2013-10-30

    We present the first graphical processing unit (GPU) coprocessor-enabled version of the Order-N Electronic Total Energy Package (ONETEP) code for linear-scaling first principles quantum mechanical calculations on materials. This work focuses on porting to the GPU the parts of the code that involve atom-localized fast Fourier transform (FFT) operations. These are among the most computationally intensive parts of the code and are used in core algorithms such as the calculation of the charge density, the local potential integrals, the kinetic energy integrals, and the nonorthogonal generalized Wannier function gradient. We have found that direct porting of the isolated FFT operations did not provide any benefit. Instead, it was necessary to tailor the port to each of the aforementioned algorithms to optimize data transfer to and from the GPU. A detailed discussion of the methods used and tests of the resulting performance are presented, which show that individual steps in the relevant algorithms are accelerated by a significant amount. However, the transfer of data between the GPU and host machine is a significant bottleneck in the reported version of the code. In addition, an initial investigation into a dynamic precision scheme for the ONETEP energy calculation has been performed to take advantage of the enhanced single precision capabilities of GPUs. The methods used here result in no disruption to the existing code base. Furthermore, as the developments reported here concern the core algorithms, they will benefit the full range of ONETEP functionality. Our use of a directive-based programming model ensures portability to other forms of coprocessors and will allow this work to form the basis of future developments to the code designed to support emerging high-performance computing platforms. Copyright © 2013 Wiley Periodicals, Inc.

  17. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    NASA Astrophysics Data System (ADS)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer arithmetic with significantly less precision than the standard (e.g. integer, floating-point) operations of general purpose processors. Using synthesis, targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help estimate the full capacity and performance of an FPGA-based coprocessor.

  18. Structural Dynamics of Maneuvering Aircraft.

    DTIC Science & Technology

    1987-09-01

    MANDYN. Written in Fortran 77, it was compiled and executed with Microsoft Fortran, Vers. 4.0 on an IBM PC-AT, with a co-processor, and a 20M hard disk...to the pivot area. Presumably, the pivot area is a hard point in the wing structure...Results: The final mass and flexural rigidity...lowest mode) is an important parameter. If it is less than three, the load factor approach can be problematical.

  19. A programmable controller based on CAN field bus embedded microprocessor and FPGA

    NASA Astrophysics Data System (ADS)

    Cai, Qizhong; Guo, Yifeng; Chen, Wenhei; Wang, Mingtao

    2008-10-01

    One kind of new programmable controller (PLC) is introduced in this paper. An advanced embedded microprocessor and a Field-Programmable Gate Array (FPGA) device are applied in the PLC system. The PLC system structure is presented: it uses a 32-bit Advanced RISC Machines (ARM) embedded microprocessor as the control core, an FPGA as the control-arithmetic coprocessor, and the CAN bus as the data communication protocol connecting the host controller and its various extension modules. The circuits and working principles are described in detail, including the I/O interface circuit between the ARM and the FPGA and the interface circuit between the ARM and the FPGA coprocessor. Furthermore, the interface circuit diagrams between the various modules are given. In addition, we describe how the ladder-chart program controls the transfer of information to the control-arithmetic part in the FPGA coprocessor. Through nearly two months of operation, the PLC has met the basic design requirements.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luszczek, Piotr R; Tomov, Stanimire Z; Dongarra, Jack J

    We present an efficient and scalable programming model for the development of linear algebra in heterogeneous multi-coprocessor environments. The model incorporates some of the current best design and implementation practices for the heterogeneous acceleration of dense linear algebra (DLA). Examples are given for the basic algorithms for solving linear systems: the LU, QR, and Cholesky factorizations. To generate the extreme level of parallelism needed for the efficient use of coprocessors, algorithms of interest are redesigned and then split into well-chosen computational tasks. Task execution is scheduled over the computational components of a hybrid system of multi-core CPUs and coprocessors using a light-weight runtime system. The use of lightweight runtime systems keeps scheduling overhead low while enabling the expression of parallelism through otherwise sequential code. This simplifies development and allows exploration of the unique strengths of the various hardware components.
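
    The task-splitting idea can be sketched with plain OpenMP task dependences standing in for the light-weight runtime. Below, a tiled Cholesky factorization uses naive tile kernels in place of the tuned CPU/coprocessor kernels a production DLA library would dispatch; tile counts and sizes are arbitrary.

    ```cpp
    #include <cmath>
    #include <cstdio>
    #include <vector>

    constexpr int T = 4, B = 32;                     // T x T tiles of size B x B

    void potrf(double* a) {                          // in-place Cholesky of one tile
        for (int j = 0; j < B; ++j) {
            for (int m = 0; m < j; ++m) a[j*B+j] -= a[j*B+m] * a[j*B+m];
            a[j*B+j] = std::sqrt(a[j*B+j]);
            for (int i = j + 1; i < B; ++i) {
                for (int m = 0; m < j; ++m) a[i*B+j] -= a[i*B+m] * a[j*B+m];
                a[i*B+j] /= a[j*B+j];
            }
        }
    }
    void trsm(const double* l, double* b) {          // b <- b * l^{-T}
        for (int r = 0; r < B; ++r)
            for (int j = 0; j < B; ++j) {
                for (int m = 0; m < j; ++m) b[r*B+j] -= b[r*B+m] * l[j*B+m];
                b[r*B+j] /= l[j*B+j];
            }
    }
    void gemm(const double* a, const double* bt, double* c) {  // c -= a * bt^T
        for (int i = 0; i < B; ++i)
            for (int j = 0; j < B; ++j) {
                double s = 0;
                for (int m = 0; m < B; ++m) s += a[i*B+m] * bt[j*B+m];
                c[i*B+j] -= s;
            }
    }

    int main() {
        std::vector<std::vector<double>> A(T*T, std::vector<double>(B*B, 0.01));
        for (int i = 0; i < T; ++i)
            for (int d = 0; d < B; ++d) A[i*T+i][d*B+d] = T * B;   // make it SPD

        #pragma omp parallel
        #pragma omp single
        for (int k = 0; k < T; ++k) {
            double* akk = A[k*T+k].data();
            #pragma omp task depend(inout: akk[0])
            potrf(akk);
            for (int i = k + 1; i < T; ++i) {
                double* aik = A[i*T+k].data();
                #pragma omp task depend(in: akk[0]) depend(inout: aik[0])
                trsm(akk, aik);
            }
            for (int i = k + 1; i < T; ++i) {
                double *aik = A[i*T+k].data(), *aii = A[i*T+i].data();
                #pragma omp task depend(in: aik[0]) depend(inout: aii[0])
                gemm(aik, aik, aii);                 // syrk: A_ii -= A_ik A_ik^T
                for (int j = k + 1; j < i; ++j) {
                    double *ajk = A[j*T+k].data(), *aij = A[i*T+j].data();
                    #pragma omp task depend(in: aik[0], ajk[0]) depend(inout: aij[0])
                    gemm(aik, ajk, aij);             // A_ij -= A_ik A_jk^T
                }
            }
        }
        std::printf("L(0,0) first diagonal entry: %.4f\n", A[0][0]);
    }
    ```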

  1. Accelerating gravitational microlensing simulations using the Xeon Phi coprocessor

    NASA Astrophysics Data System (ADS)

    Chen, B.; Kantowski, R.; Dai, X.; Baron, E.; Van der Mark, P.

    2017-04-01

    Recently Graphics Processing Units (GPUs) have been used to speed up very CPU-intensive gravitational microlensing simulations. In this work, we use the Xeon Phi coprocessor to accelerate such simulations and compare its performance on a microlensing code with that of NVIDIA's GPUs. For the selected set of parameters evaluated in our experiment, we find that the speedup by Intel's Knights Corner coprocessor is comparable to that by NVIDIA's Fermi family of GPUs with compute capability 2.0, but less significant than GPUs with higher compute capabilities such as the Kepler. However, the very recently released second generation Xeon Phi, Knights Landing, is about 5.8 times faster than the Knights Corner, and about 2.9 times faster than the Kepler GPU used in our simulations. We conclude that the Xeon Phi is a very promising alternative to GPUs for modern high performance microlensing simulations.

  2. Fast 2D FWI on a multi and many-cores workstation.

    NASA Astrophysics Data System (ADS)

    Thierry, Philippe; Donno, Daniela; Noble, Mark

    2014-05-01

    Following the introduction of x86 co-processors (Xeon Phi) and the performance increase of standard 2-socket workstations using the latest 12-core E5-v2 x86-64 CPUs, we present here a MPI + OpenMP implementation of an acoustic 2D FWI (full waveform inversion) code which runs simultaneously on the CPUs and on the co-processors installed in a workstation. The main advantage of running a 2D FWI on a workstation is the ability to quickly evaluate new features such as more complicated wave equations, new cost functions, finite-difference stencils or boundary conditions. Since the co-processor is made of 61 in-order x86 cores, each of them having up to 4 threads, this many-core can be seen as a shared memory SMP (symmetric multiprocessing) machine with its own IP address. Depending on the vendor, a single workstation can host several co-processors, turning the workstation into a personal cluster under the desk. The original Fortran 90 CPU version of the 2D FWI code is simply recompiled to get a Xeon Phi x86 binary. This multi- and many-core configuration uses standard compilers and the associated MPI and math libraries under Linux; therefore, the cost of code development remains constant while computation time improves. We chose to implement the code in the so-called symmetric mode to fully use the capacity of the workstation, but we also evaluate the scalability of the code in native mode (i.e. running only on the co-processor) thanks to the Linux ssh and NFS capabilities. Usual care is taken with optimization and SIMD vectorization to ensure optimal performance, and to analyze the application's performance and bottlenecks on both platforms. The 2D FWI implementation uses finite-difference time-domain forward modeling and a quasi-Newton (L-BFGS) optimization scheme for the model parameter updates. Parallelization is achieved through standard MPI distribution of shot gathers and OpenMP for domain decomposition within the co-processor. Taking advantage of the 16 GB of memory available on the co-processor, we are able to keep wavefields in memory and compute the gradient by cross-correlation of the forward and back-propagated wavefields needed by our time-domain FWI scheme, without heavy traffic on the I/O subsystem and PCIe bus. In this presentation we also review some simple methodologies for comparing expected performance with measured performance, in order to estimate the optimization effort before starting any major modification or rewrite of research codes. The key message is the ease of use and development of this hybrid configuration, which reaches not the absolute peak performance but the optimal one, ensuring the best balance between geophysical and computing developments.
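
    The parallel layout described above, MPI over shot gathers with OpenMP decomposing each shot's finite-difference grid, reduces to a short skeleton. Everything below (stencil coefficient, shot count, the stand-in "misfit") is illustrative rather than taken from the authors' Fortran code.

    ```cpp
    #include <mpi.h>
    #include <algorithm>
    #include <vector>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int n_shots = 64, nx = 512, nz = 512, nt = 100;
        std::vector<float> p(nx * nz), q(nx * nz);
        double misfit = 0.0;

        for (int s = rank; s < n_shots; s += size) {    // shots round-robin over ranks
            std::fill(p.begin(), p.end(), 0.0f);
            std::fill(q.begin(), q.end(), 0.0f);
            p[(nz / 2) * nx + nx / 2] = 1.0f;           // point source for shot s
            for (int t = 0; t < nt; ++t) {
                #pragma omp parallel for                 // grid decomposition in-node
                for (int iz = 1; iz < nz - 1; ++iz)
                    for (int ix = 1; ix < nx - 1; ++ix) {
                        int i = iz * nx + ix;
                        float lap = p[i-1] + p[i+1] + p[i-nx] + p[i+nx] - 4.0f * p[i];
                        q[i] = 2.0f * p[i] - q[i] + 0.1f * lap;  // 2nd order in time
                    }
                std::swap(p, q);
            }
            misfit += p[(nz / 2) * nx + nx / 2];        // stand-in for a data residual
        }
        double total = 0.0;
        MPI_Reduce(&misfit, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) std::printf("misfit = %g\n", total);
        MPI_Finalize();
    }
    ```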

  3. A CPU/MIC Collaborated Parallel Framework for GROMACS on Tianhe-2 Supercomputer.

    PubMed

    Peng, Shaoliang; Yang, Shunyun; Su, Wenhe; Zhang, Xiaoyu; Zhang, Tenglilang; Liu, Weiguo; Zhao, Xingming

    2017-06-16

    Molecular Dynamics (MD) is the simulation of the dynamic behavior of atoms and molecules. As the most popular software for molecular dynamics, GROMACS cannot work on large-scale data because of limited computing resources. In this paper, we propose a CPU and Intel® Xeon Phi Many Integrated Core (MIC) collaborated parallel framework to accelerate GROMACS using the offload mode on a MIC coprocessor, with which the performance of GROMACS is improved significantly, especially on the Tianhe-2 supercomputer. Furthermore, we optimize GROMACS so that it can run on both the CPU and MIC at the same time. In addition, we accelerate multi-node GROMACS so that it can be used in practice. Benchmarking on real data, our accelerated GROMACS performs very well and reduces computation time significantly. Source code: https://github.com/tianhe2/gromacs-mic.

  4. HECWRC, Flood Flow Frequency Analysis Computer Program 723-X6-L7550

    DTIC Science & Technology

    1989-02-14

    Price includes documentation (price code DO1, $50.00)...memory required is 256 K...The software is...disk drive (360 KB or 1.2 MB). A 10 MB or larger hard disk is recommended. A math coprocessor (8087/80287/80387) is highly recommended but not required.

  5. Implementing Legacy-C Algorithms in FPGA Co-Processors for Performance Accelerated Smart Payloads

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.; Hartzell, Christine

    2008-01-01

    Accurate, on-board classification of instrument data is used to increase science return by autonomously identifying regions of interest for priority transmission or generating summary products to conserve transmission bandwidth. Due to on-board processing constraints, such classification has been limited to using the simplest functions on a small subset of the full instrument data. FPGA co-processor designs for support vector machine (SVM) classifiers will lead to significant improvement in on-board classification capability and accuracy.

  6. Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans

    2017-04-01

    Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor's pixel frequency and immediate usage of each input pixel for the feature-construction process avoid dependence on memory-intensive conventional strategies like integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors, applied in the speeded-up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at 1.8 V supply voltage is achieved during VGA video processing at 120 MHz frequency with more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated in the practical application of vehicle recognition, achieving the expected high accuracy, comparable to previous work.

  7. Evaluating the transport layer of the ALFA framework for the Intel® Xeon Phi™ Coprocessor

    NASA Astrophysics Data System (ADS)

    Santogidis, Aram; Hirstius, Andreas; Lalis, Spyros

    2015-12-01

    The ALFA framework supports the software development of major High Energy Physics experiments. As part of our research effort to optimize the transport layer of ALFA, we focus on profiling its data transfer performance for inter-node communication on the Intel Xeon Phi Coprocessor. In this article we present the collected performance measurements with the related analysis of the results. The optimization opportunities that are discovered, help us to formulate the future plans of enabling high performance data transfer for ALFA on the Intel Xeon Phi architecture.

  8. Parallel Mutual Information Based Construction of Genome-Scale Networks on the Intel® Xeon Phi™ Coprocessor.

    PubMed

    Misra, Sanchit; Pamnany, Kiran; Aluru, Srinivas

    2015-01-01

    Construction of whole-genome networks from large-scale gene expression data is an important problem in systems biology. While several techniques have been developed, most cannot handle network reconstruction at the whole-genome scale, and the few that can require large clusters. In this paper, we present a solution on the Intel Xeon Phi coprocessor, taking advantage of its multi-level parallelism, including many x86-based cores, multiple threads per core, and vector processing units. We also present a solution on the Intel® Xeon® processor. Our solution is based on TINGe, a fast parallel network reconstruction technique that uses mutual information and permutation testing for assessing statistical significance. We demonstrate the first-ever inference of a plant whole-genome regulatory network on a single chip by constructing a 15,575-gene network of the plant Arabidopsis thaliana from 3,137 microarray experiments in only 22 minutes. In addition, our optimization for parallelizing mutual information computation on the Intel Xeon Phi coprocessor holds out lessons that are applicable to other domains.
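
    The embarrassingly parallel core of this reconstruction, mutual information over independent gene pairs, is easy to sketch. The plain histogram estimator below stands in for TINGe's estimator and the permutation testing, which the abstract does not detail; data and sizes are toy values.

    ```cpp
    #include <cmath>
    #include <vector>
    #include <cstdio>

    // Mutual information of two equally binned variables, histogram estimate.
    double mi(const std::vector<int>& a, const std::vector<int>& b, int bins) {
        const int n = (int)a.size();
        std::vector<double> pa(bins, 0), pb(bins, 0), pab(bins * bins, 0);
        for (int k = 0; k < n; ++k) {
            pa[a[k]] += 1.0 / n;
            pb[b[k]] += 1.0 / n;
            pab[a[k] * bins + b[k]] += 1.0 / n;
        }
        double m = 0;
        for (int i = 0; i < bins; ++i)
            for (int j = 0; j < bins; ++j)
                if (pab[i * bins + j] > 0)
                    m += pab[i*bins+j] * std::log(pab[i*bins+j] / (pa[i] * pb[j]));
        return m;
    }

    int main() {
        const int genes = 256, samples = 300, bins = 8;
        std::vector<std::vector<int>> x(genes, std::vector<int>(samples));
        for (int g = 0; g < genes; ++g)
            for (int s = 0; s < samples; ++s)
                x[g][s] = (g * 31 + s * 17) % bins;     // toy binned expression data

        std::vector<double> net(genes * genes, 0.0);
        #pragma omp parallel for schedule(dynamic)       // pairs are independent
        for (int i = 0; i < genes; ++i)
            for (int j = i + 1; j < genes; ++j)
                net[i * genes + j] = mi(x[i], x[j], bins);
        std::printf("MI(0,1) = %f\n", net[1]);
    }
    ```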

  9. Unified Compact ECC-AES Co-Processor with Group-Key Support for IoT Devices in Wireless Sensor Networks

    PubMed Central

    Castillo, Encarnación; López-Ramos, Juan A.; Morales, Diego P.

    2018-01-01

    Security is a critical challenge for the effective expansion of all new emerging applications in the Internet of Things paradigm. Therefore, it is necessary to define and implement different mechanisms for guaranteeing the security and privacy of data interchanged within the multiple wireless sensor networks that form part of the Internet of Things. However, in this context, low power and low area are required, limiting the resources available for security and thus hindering the implementation of adequate security protocols. Group keys can save resources and communications bandwidth, but should be combined with public-key cryptography to be really secure. In this paper, a compact and unified co-processor enabling Elliptic Curve Cryptography along with the Advanced Encryption Standard, with low area requirements and Group-Key support, is presented. The designed co-processor allows securing wireless sensor networks independently of the communications protocols used. With an area occupancy of only 2101 LUTs on Spartan 6 devices from Xilinx, it requires 15% less area while achieving near 490% better performance when compared to cryptoprocessors with similar features in the literature. PMID:29337921

  10. Unified Compact ECC-AES Co-Processor with Group-Key Support for IoT Devices in Wireless Sensor Networks.

    PubMed

    Parrilla, Luis; Castillo, Encarnación; López-Ramos, Juan A; Álvarez-Bermejo, José A; García, Antonio; Morales, Diego P

    2018-01-16

    Security is a critical challenge for the effective expansion of all new emerging applications in the Internet of Things paradigm. Therefore, it is necessary to define and implement different mechanisms for guaranteeing the security and privacy of data interchanged within the multiple wireless sensor networks that form part of the Internet of Things. However, in this context, low power and low area are required, limiting the resources available for security and thus hindering the implementation of adequate security protocols. Group keys can save resources and communications bandwidth, but should be combined with public-key cryptography to be really secure. In this paper, a compact and unified co-processor enabling Elliptic Curve Cryptography along with the Advanced Encryption Standard, with low area requirements and Group-Key support, is presented. The designed co-processor allows securing wireless sensor networks independently of the communications protocols used. With an area occupancy of only 2101 LUTs on Spartan 6 devices from Xilinx, it requires 15% less area while achieving near 490% better performance when compared to cryptoprocessors with similar features in the literature.

  11. Quantum-classical interface based on single flux quantum digital logic

    NASA Astrophysics Data System (ADS)

    McDermott, R.; Vavilov, M. G.; Plourde, B. L. T.; Wilhelm, F. K.; Liebermann, P. J.; Mukhanov, O. A.; Ohki, T. A.

    2018-04-01

    We describe an approach to the integrated control and measurement of a large-scale superconducting multiqubit array comprising up to 10^8 physical qubits using a proximal coprocessor based on the Single Flux Quantum (SFQ) digital logic family. Coherent control is realized by irradiating the qubits directly with classical bitstreams derived from optimal control theory. Qubit measurement is performed by a Josephson photon counter, which provides access to the classical result of projective quantum measurement at the millikelvin stage. We analyze the power budget and physical footprint of the SFQ coprocessor and discuss challenges and opportunities associated with this approach.

  12. FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods.

    PubMed

    Zierke, Stephanie; Bakos, Jason D

    2010-04-12

    Maximum likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10x speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference, as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs).
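
    The PLF loop structure the abstract highlights (no conditionals, no cross-iteration dependencies) looks roughly like the following sketch for a parent node with two children. The 4x4 transition values are placeholders, and production codes add scaling to avoid numerical underflow.

    ```cpp
    #include <vector>
    #include <cstdio>

    constexpr int S = 4;   // nucleotide states

    // cl_*: conditional likelihoods (sites x states); p_*: per-branch 4x4
    // transition probability matrices for the two children.
    void plf(const std::vector<double>& cl_l, const std::vector<double>& cl_r,
             const double p_l[S][S], const double p_r[S][S],
             std::vector<double>& cl_parent, int sites) {
        for (int site = 0; site < sites; ++site)   // independent iterations
            for (int a = 0; a < S; ++a) {
                double sl = 0, sr = 0;
                for (int b = 0; b < S; ++b) {
                    sl += p_l[a][b] * cl_l[site * S + b];
                    sr += p_r[a][b] * cl_r[site * S + b];
                }
                cl_parent[site * S + a] = sl * sr; // product over the two children
            }
    }

    int main() {
        const int sites = 1000;
        std::vector<double> l(sites * S, 0.25), r(sites * S, 0.25), out(sites * S);
        double pm[S][S];                           // toy transition matrix
        for (int a = 0; a < S; ++a)
            for (int b = 0; b < S; ++b) pm[a][b] = (a == b) ? 0.91 : 0.03;
        plf(l, r, pm, pm, out, sites);
        std::printf("cl[0] = %f\n", out[0]);
    }
    ```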

  13. Kernel optimization for short-range molecular dynamics

    NASA Astrophysics Data System (ADS)

    Hu, Changjun; Wang, Xianmeng; Li, Jianjiang; He, Xinfu; Li, Shigang; Feng, Yangde; Yang, Shaofeng; Bai, He

    2017-02-01

    To optimize short-range force computations in Molecular Dynamics (MD) simulations, multi-threading and SIMD optimizations are presented in this paper. With respect to multi-threading optimization, a Partition-and-Separate-Calculation (PSC) method is designed to avoid the write conflicts caused by using Newton's third law. Serial bottlenecks are eliminated with no additional memory usage. The method is implemented using the OpenMP model. Furthermore, the PSC method is employed on Intel Xeon Phi coprocessors in both native and offload models. We also evaluate the performance of the PSC method under different thread affinities on the MIC architecture. For SIMD execution, we explain how the "if-clause" of the cutoff radius check influences performance in the PSC method. The experimental results show that our PSC method is more efficient than some traditional methods. In double precision, our 256-bit SIMD implementation is about 3 times faster than the scalar version.
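
    The abstract does not spell out the PSC method, so the following is only our loose reading of how the write conflicts from Newton's third law can be avoided without atomics or extra force buffers: each thread owns a disjoint atom range, applies the f_j -= f_i shortcut only to pairs it fully owns, and boundary-straddling pairs are evaluated once by each owning side, which writes only its own atoms.

    ```cpp
    #include <omp.h>
    #include <cmath>
    #include <vector>
    #include <cstdio>

    int main() {
        const int N = 1 << 14;
        std::vector<double> x(N), f(N, 0.0);
        for (int i = 0; i < N; ++i) x[i] = 0.01 * i;
        auto pair_f = [&](int i, int j) {            // toy force on atom i from j
            double d = x[i] - x[j];
            return d / std::pow(d * d + 1.0, 2);
        };

        #pragma omp parallel
        {
            int t = omp_get_thread_num(), nt = omp_get_num_threads();
            int lo = (int)((long long)N * t / nt);
            int hi = (int)((long long)N * (t + 1) / nt);
            for (int i = lo; i < hi; ++i)
                for (int j = i + 1; j < i + 8 && j < N; ++j) { // toy neighbor list
                    double fi = pair_f(i, j);
                    if (j < hi) { f[i] += fi; f[j] -= fi; }    // owned pair: Newton's 3rd
                    else        { f[i] += fi; }                // straddling: own side only
                }
            // Separate calculation: re-evaluate straddling pairs for owned atoms.
            for (int j = lo; j < hi; ++j)
                for (int i = j - 7; i < lo; ++i)
                    if (i >= 0) f[j] -= pair_f(i, j);
        }
        std::printf("f[100] = %g\n", f[100]);
    }
    ```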

  14. MATCHED FILTER COMPUTATION ON FPGA, CELL, AND GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAKER, ZACHARY K.; GOKHALE, MAYA B.; TRIPP, JUSTIN L.

    2007-01-08

    The matched filter is an important kernel in the processing of hyperspectral data. The filter enables researchers to sift useful data from instruments that span large frequency bands. In this work, they evaluate the performance of a matched filter algorithm implementation on an accelerated co-processor (XD1000), the IBM Cell microprocessor, and the NVIDIA GeForce 6900 GTX GPU graphics card. They provide extensive discussion of the challenges and opportunities afforded by each platform. In particular, they explore the problem of partitioning the filter most efficiently between the host CPU and the co-processor. Using their results, they derive several performance metrics that provide the optimal solution for a variety of application situations.
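
    For reference, the per-pixel work of the hyperspectral matched filter is tiny: with a whitened target signature w precomputed once (for instance w proportional to Σ⁻¹(s − μ)), each pixel reduces to one dot product, which is what makes the FPGA/Cell/GPU partitionings discussed above natural. A minimal scalar sketch with made-up data:

    ```cpp
    #include <vector>
    #include <cstdio>

    int main() {
        const int bands = 64, pixels = 10000;
        std::vector<float> cube(bands * pixels), w(bands), mu(bands), y(pixels);
        for (int b = 0; b < bands; ++b) { w[b] = 0.1f; mu[b] = 0.5f; }   // made up
        for (size_t i = 0; i < cube.size(); ++i) cube[i] = (i % 7) * 0.2f;

        for (int p = 0; p < pixels; ++p) {       // pixels are independent
            float acc = 0.0f;
            for (int b = 0; b < bands; ++b)
                acc += w[b] * (cube[p * bands + b] - mu[b]);
            y[p] = acc;                          // matched-filter score
        }
        std::printf("y[0] = %f\n", y[0]);
    }
    ```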

  15. Implementation of High-Order Multireference Coupled-Cluster Methods on Intel Many Integrated Core Architecture.

    PubMed

    Aprà, E; Kowalski, K

    2016-03-08

    In this paper we discuss the implementation of multireference coupled-cluster formalism with singles, doubles, and noniterative triples (MRCCSD(T)), which is capable of taking advantage of the processing power of the Intel Xeon Phi coprocessor. We discuss the integration of two levels of parallelism underlying the MRCCSD(T) implementation with computational kernels designed to offload the computationally intensive parts of the MRCCSD(T) formalism to Intel Xeon Phi coprocessors. Special attention is given to the enhancement of the parallel performance by task reordering that has improved load balancing in the noniterative part of the MRCCSD(T) calculations. We also discuss aspects regarding efficient optimization and vectorization strategies.

  16. Requirements analysis for a hardware, discrete-event, simulation engine accelerator

    NASA Astrophysics Data System (ADS)

    Taylor, Paul J., Jr.

    1991-12-01

    An analysis of a general Discrete Event Simulation (DES), executing on the distributed architecture of an eight-node Intel iPSC/2 hypercube, was performed. The most time-consuming portions of the general DES algorithm were determined to be the functions associated with message passing of required simulation data between processing nodes of the hypercube architecture. A behavioral description, using the IEEE standard VHSIC Hardware Description Language (VHDL), of a general DES hardware accelerator is presented. The behavioral description specifies the operational requirements for a DES coprocessor to augment the hypercube's execution of DES simulations. The DES coprocessor design implements the functions necessary to perform distributed discrete event simulations using a conservative time synchronization protocol.

  17. A high performance computing framework for physics-based modeling and simulation of military ground vehicles

    NASA Astrophysics Data System (ADS)

    Negrut, Dan; Lamb, David; Gorsich, David

    2011-06-01

    This paper describes a software infrastructure made up of tools and libraries designed to assist developers in implementing computational dynamics applications running on heterogeneous and distributed computing environments. Together, these tools and libraries compose a so-called Heterogeneous Computing Template (HCT). The heterogeneous and distributed computing hardware infrastructure is assumed herein to be made up of a combination of CPUs and Graphics Processing Units (GPUs). The computational dynamics applications targeted to execute on such a hardware topology include many-body dynamics, smoothed-particle hydrodynamics (SPH) fluid simulation, and fluid-solid interaction analysis. The underlying theme of the solution approach embraced by HCT is that of partitioning the domain of interest into a number of subdomains that are each managed by a separate core/accelerator (CPU/GPU) pair. Five components at the core of HCT enable the envisioned distributed computing approach to large-scale dynamical system simulation: (a) the ability to partition the problem according to the one-to-one mapping, i.e., the spatial subdivision discussed above (pre-processing); (b) a protocol for passing data between any two co-processors; (c) algorithms for element proximity computation; and (d) the ability to carry out post-processing in a distributed fashion. In this contribution, components (a) and (b) of the HCT are demonstrated via the example of the Discrete Element Method (DEM) for rigid body dynamics with friction and contact. The collision detection task required in frictional-contact dynamics (task (c) above) is shown to benefit from a two-order-of-magnitude gain in efficiency on the GPU when compared to traditional sequential implementations. Note: Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise does not imply its endorsement, recommendation, or favoring by the United States Army. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Army, and shall not be used for advertising or product endorsement purposes.

  18. Multi-Kepler GPU vs. multi-Intel MIC for spin systems simulations

    NASA Astrophysics Data System (ADS)

    Bernaschi, M.; Bisson, M.; Salvadore, F.

    2014-10-01

    We present and compare the performance of two many-core architectures, the Nvidia Kepler and the Intel MIC, both in a single system and in cluster configuration, for the simulation of spin systems. As a benchmark we consider the time required to update a single spin of the 3D Heisenberg spin glass model using the over-relaxation algorithm. We present data also for a traditional high-end multi-core architecture, the Intel Sandy Bridge. The results show that although on the two Intel architectures it is possible to use basically the same code, the performance of the Intel MIC changes dramatically depending on (apparently) minor details. Another issue is that to obtain reasonable scalability with the Intel Xeon Phi coprocessor (the Phi is the coprocessor that implements the MIC architecture) in a cluster configuration, it is necessary to use the so-called offload mode, which reduces the performance of the single system. As to the GPU, the Kepler architecture offers a clear advantage with respect to the previous Fermi architecture while maintaining exactly the same source code. Scalability of the multi-GPU implementation remains very good when using the CPU as a communication co-processor for the GPU. All source codes are provided for inspection and for double-checking the results.
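
    The benchmarked update itself is compact: microcanonical over-relaxation reflects a Heisenberg spin about its local field, s' = 2(s·h/h·h)h − s, leaving the energy unchanged. A scalar sketch of one such update (lattice sweep and checkerboard ordering omitted):

    ```cpp
    #include <cstdio>

    struct V3 { double x, y, z; };
    double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Reflect spin s about its local field h; preserves |s| and s.h,
    // so the move is energy-neutral (microcanonical).
    V3 overrelax(V3 s, V3 h) {
        double c = 2.0 * dot(s, h) / dot(h, h);
        return { c * h.x - s.x, c * h.y - s.y, c * h.z - s.z };
    }

    int main() {
        V3 s{1, 0, 0}, h{0.3, 0.4, 0.5};   // spin and its local field
        V3 t = overrelax(s, h);
        std::printf("s' = (%f, %f, %f), |s'|^2 = %f, s'.h = %f\n",
                    t.x, t.y, t.z, dot(t, t), dot(t, h));
    }
    ```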

  19. WATEQ4F - a personal computer Fortran translation of the geochemical model WATEQ2 with revised data base

    USGS Publications Warehouse

    Ball, J.W.; Nordstrom, D. Kirk; Zachmann, D.W.

    1987-01-01

    A FORTRAN 77 version of the PL/1 computer program for the geochemical model WATEQ2, which computes major and trace element speciation and mineral saturation for natural waters, has been developed. The code (WATEQ4F) has been adapted to execute on an IBM PC or compatible microcomputer. Two versions of the code are available, one operating with IBM Professional FORTRAN and an 8087 or 80287 numeric coprocessor, and one which operates without a numeric coprocessor using Microsoft FORTRAN 77. The calculation procedure is identical to WATEQ2, which has been installed on many mainframes and minicomputers. Limited data base revisions include the addition of the following ions: AlHSO4(++), BaSO4, CaHSO4(++), FeHSO4(++), NaF, SrCO3, and SrHCO3(+). This report provides the reactions and references for the data base revisions, instructions for program operation, and an explanation of the input and output files. Attachments contain sample output from three water analyses used as test cases and the complete FORTRAN source listing. The U.S. Geological Survey geochemical simulation program PHREEQE and mass balance program BALANCE have also been adapted to execute on an IBM PC or compatible microcomputer with a numeric coprocessor and the IBM Professional FORTRAN compiler. (Author's abstract)

  20. Exact diagonalization of quantum lattice models on coprocessors

    NASA Astrophysics Data System (ADS)

    Siro, T.; Harju, A.

    2016-10-01

    We implement the Lanczos algorithm on an Intel Xeon Phi coprocessor and compare its performance to a multi-core Intel Xeon CPU and an NVIDIA graphics processor. The Xeon and the Xeon Phi are parallelized with OpenMP and the graphics processor is programmed with CUDA. The performance is evaluated by measuring the execution time of a single step in the Lanczos algorithm. We study two quantum lattice models with different particle numbers, and conclude that for small systems, the multi-core CPU is the fastest platform, while for large systems, the graphics processor is the clear winner, reaching speedups of up to 7.6 compared to the CPU. The Xeon Phi outperforms the CPU with sufficiently large particle number, reaching a speedup of 2.5.
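
    The measured unit of work, a single Lanczos step, is a sparse matrix-vector product plus a few vector operations, each an OpenMP-parallel loop. A compact CSR sketch on a stand-in tridiagonal "Hamiltonian" (the paper's lattice models are of course different):

    ```cpp
    #include <cmath>
    #include <vector>
    #include <cstdio>

    int main() {
        // Toy CSR matrix: 1D chain, diagonal 2, off-diagonals -1.
        const int n = 100000;
        std::vector<int> ptr{0};
        std::vector<int> col;
        std::vector<double> val;
        for (int i = 0; i < n; ++i) {
            if (i > 0)     { col.push_back(i - 1); val.push_back(-1.0); }
            col.push_back(i); val.push_back(2.0);
            if (i + 1 < n) { col.push_back(i + 1); val.push_back(-1.0); }
            ptr.push_back((int)col.size());
        }
        std::vector<double> v(n, 1.0 / std::sqrt((double)n)), v_prev(n, 0.0), w(n);
        double beta_prev = 0.0, alpha = 0.0, norm2 = 0.0;

        #pragma omp parallel for reduction(+:alpha)   // w = H v and alpha = v.w
        for (int i = 0; i < n; ++i) {
            double s = 0.0;
            for (int k = ptr[i]; k < ptr[i + 1]; ++k) s += val[k] * v[col[k]];
            w[i] = s;
            alpha += v[i] * s;
        }
        #pragma omp parallel for reduction(+:norm2)   // orthogonalize, get beta
        for (int i = 0; i < n; ++i) {
            w[i] -= alpha * v[i] + beta_prev * v_prev[i];
            norm2 += w[i] * w[i];
        }
        double beta = std::sqrt(norm2);               // next off-diagonal element
        std::printf("alpha = %f, beta = %f\n", alpha, beta);
    }
    ```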

  1. B-MIC: An Ultrafast Three-Level Parallel Sequence Aligner Using MIC.

    PubMed

    Cui, Yingbo; Liao, Xiangke; Zhu, Xiaoqian; Wang, Bingqiang; Peng, Shaoliang

    2016-03-01

    Sequence alignment is the central step in sequence analysis, mapping raw sequencing data to a reference genome. The large amount of data generated by NGS is far beyond the processing capabilities of existing alignment tools, so sequence alignment becomes the bottleneck of sequence analysis. Intensive computing power is required to address this challenge. Intel recently announced the MIC coprocessor, which can provide massive computing power. Tianhe-2, currently the world's fastest supercomputer, is equipped with three MIC coprocessors per compute node. A key feature of sequence alignment is that different reads are independent. Considering this property, we proposed a MIC-oriented three-level parallelization strategy to speed up BWA, a widely used sequence alignment tool, and developed our ultrafast parallel sequence aligner, B-MIC. B-MIC contains three levels of parallelization: first, parallelization of data IO and read alignment by a three-stage parallel pipeline; second, parallelization enabled by MIC coprocessor technology; third, inter-node parallelization implemented by MPI. In this paper, we demonstrate that B-MIC outperforms BWA by a combination of those techniques, using an Inspur NF5280M server and the Tianhe-2 supercomputer. To the best of our knowledge, B-MIC is the first sequence alignment tool to run on the Intel MIC, and it can achieve more than fivefold speedup over the original BWA while maintaining alignment precision.

  2. Efficient Variational Quantum Simulator Incorporating Active Error Minimization

    NASA Astrophysics Data System (ADS)

    Li, Ying; Benjamin, Simon C.

    2017-04-01

    One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
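
    In its simplest linear form, the boost-and-extrapolate idea is Richardson extrapolation: measure the observable at error rate r and at an artificially boosted 2r, then estimate E(0) ≈ 2E(r) − E(2r). A toy numerical illustration under an assumed linear noise model:

    ```cpp
    #include <cstdio>

    // Stand-in for the noisy quantum coprocessor: expectation value whose
    // deviation from the exact answer is (assumed) linear in the error rate.
    double noisy_expectation(double r) {
        const double exact = -1.137;    // pretend ground-state energy
        return exact + 0.8 * r;         // error slope unknown to the protocol
    }

    int main() {
        double r  = 0.05;
        double e1 = noisy_expectation(r);
        double e2 = noisy_expectation(2.0 * r);        // artificially boosted error
        double e0 = 2.0 * e1 - e2;                     // extrapolate to r = 0
        std::printf("E(r)=%f  E(2r)=%f  E(0)~%f\n", e1, e2, e0);
    }
    ```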

  3. Implementation of the DAST ARW II control laws using an 8086 microprocessor and an 8087 floating-point coprocessor. [drones for aeroelasticity research

    NASA Technical Reports Server (NTRS)

    Kelly, G. L.; Berthold, G.; Abbott, L.

    1982-01-01

    A 5 MHz single-board microprocessor system which incorporates an 8086 CPU and an 8087 Numeric Data Processor is used to implement the control laws for the NASA Drones for Aerodynamic and Structural Testing, Aeroelastic Research Wing II. The control laws program was executed in 7.02 msec, with initialization consuming 2.65 msec and the control law loop 4.38 msec. The software emulator execution times for these two tasks were 36.67 and 61.18 msec, respectively, for a total of 97.68 msec. The space, weight and cost reductions achieved in the present aircraft control application of this combination of a 16-bit microprocessor with an 80-bit floating point coprocessor may be obtainable in other real-time control applications.

  4. ASICs Approach for the Implementation of a Symmetric Triangular Fuzzy Coprocessor and Its Application to Adaptive Filtering

    NASA Technical Reports Server (NTRS)

    Starks, Scott; Abdel-Hafeez, Saleh; Usevitch, Bryan

    1997-01-01

    This paper discusses the implementation of a fuzzy logic system using an ASIC design approach. The approach is based upon combining the inherent advantages of symmetric triangular membership functions and fuzzy singleton sets to obtain a novel structure for fuzzy logic system application development. The resulting structure utilizes a fuzzy static RAM to store the rule-base and the end-points of the triangular membership functions. This provides advantages over other approaches, in which all sampled values of membership functions for all universes must be stored. The fuzzy coprocessor structure implements the fuzzification and defuzzification processes through a two-stage parallel pipeline architecture which is capable of executing complex fuzzy computations in less than 0.55 μs with an accuracy of more than 95%, thus making it suitable for a wide range of applications. Using the approach presented in this paper, a fuzzy logic rule-base can be directly downloaded via a host processor to an on-chip rule-base memory with a size of 64 words. The fuzzy coprocessor's design supports up to 49 rules for seven fuzzy membership functions associated with each of the chip's two input variables. This feature allows designers to create fuzzy logic systems without the need for additional on-board memory. Finally, the paper reports on simulation studies that were conducted for several adaptive filter applications using the least-mean-squares adaptive algorithm for adjusting the knowledge rule-base.
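
    The storage saving claimed above can be seen in a software analogue: a symmetric triangular membership function is fully described by its center and half-width, and singleton consequents reduce defuzzification to a weighted average. A sketch of that idea, not the ASIC's microarchitecture:

        #include <algorithm>
        #include <cmath>
        #include <vector>

        // A symmetric triangular membership function needs only two end-point
        // parameters, not a table of sampled values.
        struct TriangularMF { double center, half_width; };

        double membership(const TriangularMF& mf, double x) {
            double d = std::abs(x - mf.center);
            return std::max(0.0, 1.0 - d / mf.half_width);  // peak 1 at center
        }

        // Rule i fires with strength w_i = min(mu_A(x1), mu_B(x2)); with
        // singleton consequents y_i, defuzzification is a weighted average.
        double infer(const std::vector<TriangularMF>& a,
                     const std::vector<TriangularMF>& b,
                     const std::vector<double>& y, double x1, double x2) {
            double num = 0, den = 0;
            for (size_t i = 0; i < y.size(); ++i) {
                double w = std::min(membership(a[i], x1), membership(b[i], x2));
                num += w * y[i]; den += w;
            }
            return den > 0 ? num / den : 0.0;
        }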

  5. Accelerating the Pace of Protein Functional Annotation With Intel Xeon Phi Coprocessors.

    PubMed

    Feinstein, Wei P; Moreno, Juana; Jarrell, Mark; Brylinski, Michal

    2015-06-01

    Intel Xeon Phi is a new addition to the family of powerful parallel accelerators. The range of its potential applications in computationally driven research is broad; however, at present, the repository of scientific codes is still relatively limited. In this study, we describe the development and benchmarking of a parallel version of eFindSite, a structural bioinformatics algorithm for the prediction of ligand-binding sites in proteins. Implemented for the Intel Xeon Phi platform, the parallelization of the structure alignment portion of eFindSite using pragma-based OpenMP brings about the desired performance improvements, which scale well with the number of computing cores. Compared to a serial version, the parallel code runs 11.8 and 10.1 times faster on the CPU and the coprocessor, respectively; when both resources are utilized simultaneously, the speedup is 17.6. For example, ligand-binding predictions for 501 benchmarking proteins are completed in 2.1 hours on a single Stampede node equipped with the Intel Xeon Phi card compared to 3.1 hours without the accelerator and 36.8 hours required by a serial version. In addition to the satisfactory parallel performance, porting existing scientific codes to the Intel Xeon Phi architecture is relatively straightforward with a short development time due to the support of common parallel programming models by the coprocessor. The parallel version of eFindSite is freely available to the academic community at www.brylinski.org/efindsite.
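
    The pragma-based OpenMP pattern described above exploits the independence of per-template structure alignments. A minimal sketch of the idea (align_to_template is a hypothetical stand-in for the eFindSite kernel, not its actual API):

        #include <omp.h>
        #include <vector>

        // Hypothetical stand-in for the per-template alignment kernel.
        double align_to_template(int t) { return 0.5 * t; }

        std::vector<double> align_all(int n_templates) {
            std::vector<double> score(n_templates);
            // Templates vary in size, so dynamic scheduling balances the load;
            // compiled with -fopenmp the loop scales with the available cores
            // on either the CPU or the coprocessor.
            #pragma omp parallel for schedule(dynamic)
            for (int t = 0; t < n_templates; ++t)
                score[t] = align_to_template(t);
            return score;
        }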

  6. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    NASA Astrophysics Data System (ADS)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss in detail the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices, including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques, together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assess their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
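
    The Roofline correlation mentioned above rests on a one-line model: a kernel with arithmetic intensity AI (flop/byte) on a machine with a given peak compute rate and memory bandwidth cannot exceed min(peak, AI x bandwidth). A sketch with placeholder numbers, not the paper's measurements:

        #include <algorithm>
        #include <cstdio>

        // Attainable rate for a kernel of arithmetic intensity `ai` (flop/byte)
        // on a machine with peak compute `peak_gf` (Gflop/s) and memory
        // bandwidth `bw_gbs` (GB/s).
        double roofline(double peak_gf, double bw_gbs, double ai) {
            return std::min(peak_gf, ai * bw_gbs);
        }

        int main() {
            // Placeholder machine: 1000 Gflop/s peak, 200 GB/s bandwidth; a
            // kernel at 0.5 flop/byte is memory bound at 100 Gflop/s.
            std::printf("kernel bound: %.0f Gflop/s\n",
                        roofline(1000.0, 200.0, 0.5));
        }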

  7. CoreTSAR: Core Task-Size Adapting Runtime

    DOE PAGES

    Scogland, Thomas R. W.; Feng, Wu-chun; Rountree, Barry; ...

    2014-10-27

    Heterogeneity continues to increase at all levels of computing, with the rise of accelerators such as GPUs, FPGAs, and other co-processors into everything from desktops to supercomputers. As a consequence, efficiently managing such disparate resources has become increasingly complex. CoreTSAR seeks to reduce this complexity by adaptively worksharing parallel-loop regions across compute resources without requiring any transformation of the code within the loop. Our results show performance improvements of up to three-fold over a current state-of-the-art heterogeneous task scheduler, as well as linear performance scaling from a single GPU to four GPUs for many codes. In addition, CoreTSAR demonstrates a robust ability to adapt to both a variety of workloads and underlying system configurations.
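
    One plausible core of such adaptive worksharing is rate-proportional partitioning: give each device a share of the next pass's loop iterations proportional to the throughput it achieved on the previous pass. A sketch under that assumption (illustrative only, not the CoreTSAR API):

        #include <vector>

        struct Device { double rate; int chunk; };  // measured rate; assigned work

        // Split `total_iters` across devices in proportion to measured rates;
        // the remainder goes to the last device so the counts sum exactly.
        void partition(std::vector<Device>& devs, int total_iters) {
            double sum = 0;
            for (const auto& d : devs) sum += d.rate;
            int assigned = 0;
            for (size_t i = 0; i + 1 < devs.size(); ++i) {
                devs[i].chunk = static_cast<int>(total_iters * devs[i].rate / sum);
                assigned += devs[i].chunk;
            }
            devs.back().chunk = total_iters - assigned;
        }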

  8. Long sequence correlation coprocessor

    NASA Astrophysics Data System (ADS)

    Gage, Douglas W.

    1994-09-01

    A long sequence correlation coprocessor (LSCC) accelerates the bitwise correlation of arbitrarily long digital sequences by calculating in parallel the correlation scores for, for example, 16 adjacent bit alignments between two binary sequences. The LSCC integrated circuit is incorporated into a computer system with memory storage buffers and a separate general-purpose computer processor which serves as its controller. Each of the LSCC's set of sequential counters simultaneously tallies a separate correlation coefficient. During each LSCC clock cycle, enable logic associated with each counter compares one bit of a first sequence with one bit of a second sequence and increments the counter if the bits are the same. A shift register ensures that the same bit of the first sequence is simultaneously compared to different bits of the second sequence, so that the different counters simultaneously calculate correlation coefficients for different alignments of the two sequences.
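
    A software analogue makes the parallel tally concrete: XNOR marks agreeing bit positions and a population count plays the role of one hardware counter per alignment offset. A sketch, assuming both sequences are packed into equal-length arrays of 64-bit words:

        #include <bitset>
        #include <cstdint>
        #include <vector>

        // Correlation scores for alignment offsets 0..15; assumes a and b hold
        // the same number (>= 2) of 64-bit words.
        std::vector<int> correlate16(const std::vector<uint64_t>& a,
                                     const std::vector<uint64_t>& b) {
            std::vector<int> score(16, 0);
            for (int off = 0; off < 16; ++off)
                for (size_t w = 0; w + 1 < b.size(); ++w) {
                    // sequence b shifted by `off` bits across the word boundary
                    uint64_t shifted = (b[w] >> off) |
                                       (off ? (b[w + 1] << (64 - off)) : uint64_t{0});
                    // XNOR marks agreements; popcount tallies them
                    score[off] += std::bitset<64>(~(a[w] ^ shifted)).count();
                }
            return score;
        }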

  9. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over two decades, the Monte Carlo technique has become a gold standard in the simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows the execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of MC simulation and yields a simulation speed-up comparable to that of a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.

  10. Flight code validation simulator

    NASA Astrophysics Data System (ADS)

    Sims, Brent A.

    1996-05-01

    An End-To-End Simulation capability for software development and validation of missile flight software on the actual embedded computer has been developed utilizing a 486 PC, i860 DSP coprocessor, embedded flight computer and custom dual port memory interface hardware. This system allows real-time interrupt driven embedded flight software development and checkout. The flight software runs in a Sandia Digital Airborne Computer and reads and writes actual hardware sensor locations in which Inertial Measurement Unit data resides. The simulator provides six degree of freedom real-time dynamic simulation, accurate real-time discrete sensor data and acts on commands and discretes from the flight computer. This system was utilized in the development and validation of the successful premier flight of the Digital Miniature Attitude Reference System in January of 1995 at the White Sands Missile Range on a two stage attitude controlled sounding rocket.

  11. Flexible digital signal processing architecture for narrowband and spread-spectrum lock-in detection in multiphoton microscopy and time-resolved spectroscopy

    PubMed Central

    Wilson, Jesse W.; Park, Jong Kang; Warren, Warren S.

    2015-01-01

    The lock-in amplifier is a critical component in many different types of experiments, because of its ability to reduce spurious or environmental noise components by restricting detection to a single frequency and phase. One example application is pump-probe microscopy, a multiphoton technique that leverages excited-state dynamics for imaging contrast. With this application in mind, we present here the design and implementation of a high-speed lock-in amplifier on the field-programmable gate array (FPGA) coprocessor of a data acquisition board. The most important advantage is the inherent ability to filter signals based on more complex modulation patterns. As an example, we use the flexibility of the FPGA approach to enable a novel pump-probe detection scheme based on spread-spectrum communications techniques. PMID:25832238

  12. Flexible digital signal processing architecture for narrowband and spread-spectrum lock-in detection in multiphoton microscopy and time-resolved spectroscopy.

    PubMed

    Wilson, Jesse W; Park, Jong Kang; Warren, Warren S; Fischer, Martin C

    2015-03-01

    The lock-in amplifier is a critical component in many different types of experiments, because of its ability to reduce spurious or environmental noise components by restricting detection to a single frequency and phase. One example application is pump-probe microscopy, a multiphoton technique that leverages excited-state dynamics for imaging contrast. With this application in mind, we present here the design and implementation of a high-speed lock-in amplifier on the field-programmable gate array (FPGA) coprocessor of a data acquisition board. The most important advantage is the inherent ability to filter signals based on more complex modulation patterns. As an example, we use the flexibility of the FPGA approach to enable a novel pump-probe detection scheme based on spread-spectrum communications techniques.
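
    For context, the conventional single-frequency lock-in that both records above generalize can be sketched in a few lines: multiply the input by quadrature references at the modulation frequency and low-pass the products (a textbook illustration, not the authors' FPGA design):

        #include <cmath>
        #include <vector>

        constexpr double kPi = 3.14159265358979323846;

        struct IQ { double i, q; };

        // Mix the input with quadrature references at f_mod and average over
        // the record (a boxcar low-pass); the magnitude of (i, q) recovers the
        // component at f_mod regardless of its phase.
        IQ lock_in(const std::vector<double>& x, double f_mod, double fs) {
            double i = 0.0, q = 0.0, w = 1.0 / x.size();
            for (size_t n = 0; n < x.size(); ++n) {
                double ph = 2.0 * kPi * f_mod * static_cast<double>(n) / fs;
                i += w * x[n] * std::cos(ph);
                q += w * x[n] * std::sin(ph);
            }
            return {2.0 * i, 2.0 * q};  // factor 2 restores the input amplitude
        }

    In the spread-spectrum scheme described above, the sinusoidal references would be replaced by a pseudo-random modulation pattern, which is exactly the kind of flexibility the FPGA implementation makes practical.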

  13. Single event effect testing of the Intel 80386 family and the 80486 microprocessor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moran, A.; LaBel, K.; Gates, M.

    The authors present single event effect test results for the Intel 80386 microprocessor, the 80387 coprocessor, the 82380 peripheral device, and the 80486 microprocessor. Both single event upset and latchup conditions were monitored.

  14. Evaluation of the Intel Xeon Phi Co-processor to accelerate the sensitivity map calculation for PET imaging

    NASA Astrophysics Data System (ADS)

    Dey, T.; Rodrigue, P.

    2015-07-01

    We aim to evaluate the Intel Xeon Phi coprocessor for the acceleration of 3D Positron Emission Tomography (PET) image reconstruction. We focus on the sensitivity map calculation as one computationally intensive part of PET image reconstruction, since it is a promising candidate for acceleration with the Many Integrated Core (MIC) architecture of the Xeon Phi. The computation of the voxels in the field of view (FoV) can be done in parallel, and the 10^3 to 10^4 samples needed to calculate the detection probability of each voxel can take advantage of vectorization. We use the ray tracing kernels of the Embree project to calculate the hit points of the sample rays with the detector; in a second step, the sum of the radiological path, taking attenuation into account, is determined. The core components are implemented using the Intel single instruction multiple data compiler (ISPC) to enable a portable implementation with efficient vectorization on both the Xeon Phi and the host platform. On the Xeon Phi, the calculation of the radiological path is also implemented with hardware-specific intrinsic instructions (so-called `intrinsics') to allow manually-optimized vectorization. For parallelization, both OpenMP and ISPC tasking (based on pthreads) are evaluated. Our implementation achieved a scalability factor of 0.90 on the Xeon Phi coprocessor (model 5110P) with 60 cores at 1 GHz. Only minor differences were found between parallelization with OpenMP and the ISPC tasking feature. The implementation using intrinsics was found to be about 12% faster than the portable ISPC version. With this version, a speedup of 1.43 was achieved on the Xeon Phi coprocessor compared to the host system (HP SL250s Gen8) equipped with two Xeon (E5-2670) CPUs, with 8 cores at 2.6 to 3.3 GHz each. Using a second Xeon Phi card, the speedup could be further increased to 2.77. No significant differences were found between the results of the different Xeon Phi and host implementations. The examination showed that a reasonable speedup of the sensitivity map calculation can be achieved on the Xeon Phi by either a portable or a hardware-specific implementation.

  15. Earth system modelling on system-level heterogeneous architectures: EMAC (version 2.42) on the Dynamical Exascale Entry Platform (DEEP)

    NASA Astrophysics Data System (ADS)

    Christou, Michalis; Christoudias, Theodoros; Morillo, Julián; Alvarez, Damian; Merx, Hendrik

    2016-09-01

    We examine an alternative approach to heterogeneous cluster computing in the many-core era for Earth system models, using the European Centre for Medium-Range Weather Forecasts Hamburg (ECHAM)/Modular Earth Submodel System (MESSy) Atmospheric Chemistry (EMAC) model as a pilot application on the Dynamical Exascale Entry Platform (DEEP). A set of interconnected autonomous coprocessors, called the Booster, complements a conventional HPC cluster and increases its computing performance, offering extra flexibility to expose multiple levels of parallelism and achieve better scalability. The EMAC model's atmospheric chemistry code (Module Efficiently Calculating the Chemistry of the Atmosphere, MECCA) was taskified with an offload mechanism implemented using OmpSs directives. The model was ported to the MareNostrum 3 supercomputer to allow testing with Intel Xeon Phi accelerators on a production-size machine. The changes proposed in this paper are expected to contribute to the eventual adoption of the Cluster-Booster division and Many Integrated Core (MIC) accelerated architectures in presently available implementations of Earth system models, towards exploiting the potential of a fully Exascale-capable platform.

  16. Adaptive track scheduling to optimize concurrency and vectorization in GeantV

    DOE PAGES

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...

    2015-05-22

    The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics requiring tuning of the thresholds to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events to the transport pipeline to boost locality, dynamically adjusting the particle vector size, or switching from vector to single-track mode when vectorization causes only overhead. This work requires a comprehensive study to optimize these parameters and make the scheduler's behaviour self-adapting; we present its initial results here.

  17. High-performance floating-point image computing workstation for medical applications

    NASA Astrophysics Data System (ADS)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), in multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e.g., three 1280 x 1024 monitors, each with a 16-Mbyte frame buffer). Each add-in board provides an expansion connector to which an optional image computing coprocessor board may be added. Each coprocessor board supports up to four processors for a peak performance of 160 MFLOPS. The coprocessors can execute programs from external high-speed microcode memory as well as built-in internal microcode routines. The internal microcode routines provide support for 2-D and 3-D graphics operations, matrix and vector arithmetic, and image processing in integer, IEEE single-precision floating point, or IEEE double-precision floating point. In addition to providing a library of C functions which links the NeXT computer to the add-in board and supports its various operational modes, algorithms and medical imaging application programs are being developed and implemented for image display and enhancement. As an extension to the built-in algorithms of the coprocessors, 2-D Fast Fourier Transform (FFT), 2-D inverse FFT, convolution, warping and other algorithms (e.g., Discrete Cosine Transform) which exploit the parallel architecture of the coprocessor board are being implemented.

  18. Fast Fourier Transform Co-processor (FFTC), towards embedded GFLOPs

    NASA Astrophysics Data System (ADS)

    Kuehl, Christopher; Liebstueckel, Uwe; Tejerina, Isaac; Uemminghaus, Michael; Witte, Felix; Kolb, Michael; Suess, Martin; Weigand, Roland; Kopp, Nicholas

    2012-10-01

    Many signal processing applications and algorithms perform their operations on the data in the transform domain to gain efficiency. The Fourier Transform Co-Processor has been developed with the aim to offload General Purpose Processors from performing these transformations and therefore to boost the overall performance of a processing module. The IP of the commercial PowerFFT processor has been selected and adapted to meet the constraints of the space environment. In the frame of the ESA activity "Fast Fourier Transform DSP Co-processor (FFTC)" (ESTEC/Contract No. 15314/07/NL/LvH/ma) the objectives were the following: • Production of prototypes of a space qualified version of the commercial PowerFFT chip called FFTC based on the PowerFFT IP. • The development of a stand-alone FFTC Accelerator Board (FTAB) based on the FFTC, including the Controller FPGA and SpaceWire interfaces, to verify the FFTC function and performance. The FFTC chip performs its calculations with floating-point precision. Stand-alone, it is capable of computing FFTs of up to 1K complex samples in length in only 10 μs. This corresponds to an equivalent processing performance of 4.7 GFlops. In this mode the maximum sustained data throughput reaches 6.4 Gbit/s. When connected to up to 4 EDAC-protected SDRAM memory banks, the FFTC can perform long FFTs with up to 1M complex samples in length or multidimensional FFT-based processing tasks. A Controller FPGA on the FTAB takes care of the SDRAM addressing. The instructions commanded via the Controller FPGA are used to set up the data flow and generate the memory addresses. The paper will give an overview on the project, including the results of the validation of the FFTC ASIC prototypes.
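
    As a rough cross-check of the quoted figure (our estimate, using the standard radix-2 operation count rather than the PowerFFT's exact pipeline):

        E \approx \frac{5 N \log_2 N}{t}
          = \frac{5 \cdot 1024 \cdot 10}{10\ \mu\text{s}}
          \approx 5.1\ \text{Gflop/s},

    which is of the same order as the 4.7 GFlops equivalent performance stated above; the precise value depends on the operation count assumed for the transform.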

  19. GPU-Accelerated Molecular Modeling Coming Of Age

    PubMed Central

    Stone, John E.; Hardy, David J.; Ufimtsev, Ivan S.

    2010-01-01

    Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. PMID:20675161

  20. GPU-accelerated molecular modeling coming of age.

    PubMed

    Stone, John E; Hardy, David J; Ufimtsev, Ivan S; Schulten, Klaus

    2010-09-01

    Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. (c) 2010 Elsevier Inc. All rights reserved.

  1. Reviews.

    ERIC Educational Resources Information Center

    Journal of Chemical Education, 1988

    1988-01-01

    Reviews two computer programs: "Molecular Graphics," which allows molecule manipulation in three-dimensional space (requiring IBM PC with 512K, EGA monitor, and math coprocessor); and "Periodic Law," a database which contains up to 20 items of information on each of the first 103 elements (Apple II or IBM PC). (MVL)

  2. Optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme for Intel Many Integrated Core (MIC) architecture

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture; it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of the Xeon Phi requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the original code on the Xeon Phi 7120P by a factor of 1.3x.

  3. Accelerating finite-rate chemical kinetics with coprocessors: Comparing vectorization methods on GPUs, MICs, and CPUs

    NASA Astrophysics Data System (ADS)

    Stone, Christopher P.; Alferman, Andrew T.; Niemeyer, Kyle E.

    2018-05-01

    Accurate and efficient methods for solving stiff ordinary differential equations (ODEs) are a critical component of turbulent combustion simulations with finite-rate chemistry. The ODEs governing the chemical kinetics at each mesh point are decoupled by operator-splitting allowing each to be solved concurrently. An efficient ODE solver must then take into account the available thread and instruction-level parallelism of the underlying hardware, especially on many-core coprocessors, as well as the numerical efficiency. A stiff Rosenbrock and a nonstiff Runge-Kutta ODE solver are both implemented using the single instruction, multiple thread (SIMT) and single instruction, multiple data (SIMD) paradigms within OpenCL. Both methods solve multiple ODEs concurrently within the same instruction stream. The performance of these parallel implementations was measured on three chemical kinetic models of increasing size across several multicore and many-core platforms. Two separate benchmarks were conducted to clearly determine any performance advantage offered by either method. The first benchmark measured the run-time of evaluating the right-hand-side source terms in parallel and the second benchmark integrated a series of constant-pressure, homogeneous reactors using the Rosenbrock and Runge-Kutta solvers. The right-hand-side evaluations with SIMD parallelism on the host multicore Xeon CPU and many-core Xeon Phi co-processor performed approximately three times faster than the baseline multithreaded C++ code. The SIMT parallel model on the host and Phi was 13%-35% slower than the baseline while the SIMT model on the NVIDIA Kepler GPU provided approximately the same performance as the SIMD model on the Phi. The runtimes for both ODE solvers decreased significantly with the SIMD implementations on the host CPU (2.5-2.7 ×) and Xeon Phi coprocessor (4.7-4.9 ×) compared to the baseline parallel code. The SIMT implementations on the GPU ran 1.5-1.6 times faster than the baseline multithreaded CPU code; however, this was significantly slower than the SIMD versions on the host CPU or the Xeon Phi. The performance difference between the three platforms was attributed to thread divergence caused by the adaptive step-sizes within the ODE integrators. Analysis showed that the wider vector width of the GPU incurs a higher level of divergence than the narrower Sandy Bridge or Xeon Phi. The significant performance improvement provided by the SIMD parallel strategy motivates further research into more ODE solver methods that are both SIMD-friendly and computationally efficient.
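
    The SIMD advantage reported above comes from evaluating the same right-hand side for many reactors in lockstep. A minimal data-layout sketch (the chemistry is a placeholder first-order decay, not a real mechanism):

        #include <array>
        #include <vector>

        constexpr int W = 8;                 // vector width: one lane per reactor
        using Lane = std::array<double, W>;

        // Right-hand side for W independent reactors in lockstep; state is
        // stored species-major so each species row is a contiguous vector of
        // lanes and the inner loop vectorizes cleanly.
        void rhs(const std::vector<Lane>& y, std::vector<Lane>& dydt) {
            for (size_t s = 0; s < y.size(); ++s)
                for (int j = 0; j < W; ++j)
                    dydt[s][j] = -0.5 * y[s][j];  // placeholder kinetics
        }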

  4. Neuromorphic Computing: A Post-Moore's Law Complementary Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuman, Catherine D; Birdwell, John Douglas; Dean, Mark

    2016-01-01

    We describe our approach to post-Moore's law computing with three neuromorphic computing models that share a RISC philosophy, featuring simple components combined with a flexible and programmable structure. We envision these to be leveraged as co-processors, or as data filters to provide in situ data analysis in supercomputing environments.

  5. Stream network and stream segment temperature models software

    USGS Publications Warehouse

    Bartholow, John

    2010-01-01

    This set of programs simulates steady-state stream temperatures throughout a dendritic stream network handling multiple time periods per year. The software requires a math co-processor and 384K RAM. Also included is a program (SSTEMP) designed to predict the steady state stream temperature within a single stream segment for a single time period.

  6. Dr. Sanger's Apprentice: A Computer-Aided Instruction to Protein Sequencing.

    ERIC Educational Resources Information Center

    Schmidt, Thomas G.; Place, Allen R.

    1985-01-01

    Modeled after the program "Mastermind," this program teaches students the art of protein sequencing. The program (written in Turbo Pascal for the IBM PC, requiring 128K, a graphics adapter, and an 8087 mathematics coprocessor) generates a polypeptide whose sequence and length can be user-defined (for practice) or computer-generated (for…

  7. 15 CFR 740.19 - Consumer Communications Devices (CCD).

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... EAR99 or classified under Export Control Classification Number (ECCN) 4A994.b that do not exceed an... classified under ECCN 5A992 or designated EAR99; (3) Input/output control units (other than industrial... coprocessors designated EAR99; (5) Monitors classified under ECCN 5A992 or designated EAR99; (6) Printers...

  8. 15 CFR 740.19 - Consumer Communications Devices (CCD).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... EAR99 or classified under Export Control Classification Number (ECCN) 4A994.b that do not exceed an... classified under ECCN 5A992 or designated EAR99; (3) Input/output control units (other than industrial... coprocessors designated EAR99; (5) Monitors classified under ECCN 5A992 or designated EAR99; (6) Printers...

  9. 15 CFR 740.19 - Consumer Communications Devices (CCD).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... EAR99 or classified under Export Control Classification Number (ECCN) 4A994.b that do not exceed an... classified under ECCN 5A992 or designated EAR99; (3) Input/output control units (other than industrial... coprocessors designated EAR99; (5) Monitors classified under ECCN 5A992 or designated EAR99; (6) Printers...

  10. 15 CFR 740.19 - Consumer Communications Devices (CCD).

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... EAR99 or classified under Export Control Classification Number (ECCN) 4A994.b that do not exceed an... classified under ECCN 5A992 or designated EAR99; (3) Input/output control units (other than industrial... coprocessors designated EAR99; (5) Monitors classified under ECCN 5A992 or designated EAR99; (6) Printers...

  11. Particle-in-Cell laser-plasma simulation on Xeon Phi coprocessors

    NASA Astrophysics Data System (ADS)

    Surmin, I. A.; Bastrakov, S. I.; Efimenko, E. S.; Gonoskov, A. A.; Korzhimanov, A. V.; Meyerov, I. B.

    2016-05-01

    This paper concerns the development of a high-performance implementation of the Particle-in-Cell method for plasma simulation on Intel Xeon Phi coprocessors. We discuss the suitability of the method for Xeon Phi architecture and present our experience in the porting and optimization of the existing parallel Particle-in-Cell code PICADOR. Direct porting without code modification gives performance on Xeon Phi close to that of an 8-core CPU on a benchmark problem with 50 particles per cell. We demonstrate step-by-step optimization techniques, such as improving data locality, enhancing parallelization efficiency and vectorization leading to an overall 4.2 × speedup on CPU and 7.5 × on Xeon Phi compared to the baseline version. The optimized version achieves 16.9 ns per particle update on an Intel Xeon E5-2660 CPU and 9.3 ns per particle update on an Intel Xeon Phi 5110P. For a real problem of laser ion acceleration in targets with surface grating, where a large number of macroparticles per cell is required, the speedup of Xeon Phi compared to CPU is 1.6 ×.
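
    Typical of the data-locality and vectorization work described above is a structure-of-arrays particle store, which gives the position update unit-stride accesses. An illustrative sketch, not PICADOR code:

        #include <vector>

        struct Particles {                 // SoA: contiguous per-component arrays
            std::vector<double> x, y, z, vx, vy, vz;
        };

        void push(Particles& p, double dt) {
            const size_t n = p.x.size();
            #pragma omp simd               // unit-stride loads/stores vectorize
            for (size_t i = 0; i < n; ++i) {
                p.x[i] += p.vx[i] * dt;
                p.y[i] += p.vy[i] * dt;
                p.z[i] += p.vz[i] * dt;
            }
        }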

  12. A low power biomedical signal processor ASIC based on hardware software codesign.

    PubMed

    Nie, Z D; Wang, L; Chen, W G; Zhang, T; Zhang, Y T

    2009-01-01

    A low-power biomedical digital signal processor ASIC based on a hardware/software codesign methodology is presented in this paper. The codesign methodology was used to achieve higher system performance and design flexibility. The hardware implementation included a low-power 32-bit RISC CPU (ARM7TDMI), a low-power AHB-compatible bus, and a scalable digital co-processor optimized for low-power Fast Fourier Transform (FFT) calculations. The co-processor can be scaled for 8-point, 16-point, and 32-point FFTs, taking approximately 50, 100, and 150 clock cycles, respectively. The complete design was intensively simulated using the ARM DSM model and emulated on the ARM Versatile platform before being committed to silicon. The multi-million-gate ASIC was fabricated using SMIC 0.18 μm mixed-signal CMOS 1P6M technology. The die area measures 5,000 μm x 2,350 μm. The power consumption was approximately 3.6 mW at a 1.8 V power supply and 1 MHz clock rate. The power consumption for FFT calculations was less than 1.5% of that of the conventional embedded software-based solution.

  13. Discrete Particle Model for Porous Media Flow using OpenFOAM at Intel Xeon Phi Coprocessors

    NASA Astrophysics Data System (ADS)

    Shang, Zhi; Nandakumar, Krishnaswamy; Liu, Honggao; Tyagi, Mayank; Lupo, James A.; Thompson, Karten

    2015-11-01

    The discrete particle model (DPM) in OpenFOAM was used to study turbulent solid particle suspension flows through the porous media of a natural dual-permeability rock. The 2D and 3D pore geometries of the porous media were generated by sphere packing with a radius ratio of 3; the porosity is about 38%, the same as the natural dual-permeability rock. In the 2D case, the mesh reaches 5 million cells with 1 million solid particles; in the 3D case, the mesh exceeds 10 million cells with 5 million solid particles. The solid particle sizes follow a Gaussian distribution from 20 μm to 180 μm with a mean of 100 μm. Through the numerical simulations, both the HPC performance of Intel Xeon Phi coprocessors and the behavior of large-scale solid suspension flows in porous media were studied. The authors would like to thank the support by IPCC@LSU-Intel Parallel Computing Center (LSU # Y1SY1-1) and the HPC resources at Louisiana State University (http://www.hpc.lsu.edu).

  14. Speeding-up Bioinformatics Algorithms with Heterogeneous Architectures: Highly Heterogeneous Smith-Waterman (HHeterSW).

    PubMed

    Gálvez, Sergio; Ferusic, Adis; Esteban, Francisco J; Hernández, Pilar; Caballero, Juan A; Dorado, Gabriel

    2016-10-01

    The Smith-Waterman algorithm has great sensitivity when used for biological sequence-database searches, but at the expense of high computing-power requirements. To overcome this problem, implementations exist in the literature that exploit the different hardware architectures available in a standard PC, such as the GPU, CPU, and coprocessors. We introduce an application that splits the original database-search problem into smaller parts, resolves each of them by executing the most efficient implementations of the Smith-Waterman algorithm on different hardware architectures, and finally unifies the generated results. Using non-overlapping hardware allows simultaneous execution and yields up to a 2.58-fold performance gain when compared with any other algorithm to search sequence databases. Even the performance of the popular BLAST heuristic is exceeded in 78% of the tests. The application has been tested with standard hardware: an Intel i7-4820K CPU, Intel Xeon Phi 31S1P coprocessors, and nVidia GeForce GTX 960 graphics cards. An important increase in performance has been obtained in a wide range of situations, effectively exploiting the available hardware.

  15. High performance flight computer developed for deep space applications

    NASA Technical Reports Server (NTRS)

    Bunker, Robert L.

    1993-01-01

    The development of an advanced space flight computer for real-time embedded deep space applications, which embodies the lessons learned on Galileo and modern computer technology, is described. The requirements are listed and the design implementation that meets those requirements is described. The development of SPACE-16 (Spaceborne Advanced Computing Engine, where 16 designates the databus width) was initiated to support the MM2 (Mariner Mark II) project. The computer is based on a radiation-hardened emulation of a modern 32-bit microprocessor and its family of support devices, including a high-performance floating-point accelerator. Additional custom devices are described, including a coprocessor to improve input/output capabilities, a memory interface chip, and an additional support chip that provides management of all fault-tolerant features. Detailed supporting analyses and rationale that justify specific design and architectural decisions are provided. The six chip types were designed and fabricated. Testing and evaluation of a brassboard was initiated.

  16. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An OpenMP implementation on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieving parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
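
    The OpenACC route credited above amounts to annotating the existing stencil loops and letting the compiler generate device code. A sketch of the pattern on a generic five-point update (grid layout and the coefficient c are placeholders, not the cardiac model from the paper):

        // One explicit update of a five-point stencil; the pragma asks the
        // compiler to parallelize the nest on the accelerator and manage the
        // host/device data transfers named in the clauses.
        void step(float* u_new, const float* u, int nx, int ny, float c) {
            #pragma acc parallel loop collapse(2) copyin(u[0:nx*ny]) copy(u_new[0:nx*ny])
            for (int i = 1; i < ny - 1; ++i)
                for (int j = 1; j < nx - 1; ++j)
                    u_new[i * nx + j] = u[i * nx + j]
                        + c * (u[(i + 1) * nx + j] + u[(i - 1) * nx + j]
                             + u[i * nx + j + 1] + u[i * nx + j - 1]
                             - 4.0f * u[i * nx + j]);
        }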

  17. All-memristive neuromorphic computing with level-tuned neurons

    NASA Astrophysics Data System (ADS)

    Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos

    2016-09-01

    In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from vast amount of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture along with the homogenous neuro-synaptic dynamics implemented with nanoscale phase-change memristors represent a significant step towards the development of ultrahigh-density neuromorphic co-processors.

  18. All-memristive neuromorphic computing with level-tuned neurons.

    PubMed

    Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos

    2016-09-02

    In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from vast amount of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture along with the homogenous neuro-synaptic dynamics implemented with nanoscale phase-change memristors represent a significant step towards the development of ultrahigh-density neuromorphic co-processors.

  19. PC-BASED MIE SCATTERING PROGRAM FOR THEORETICAL INVESTIGATIONS OF THE OPTICAL PROPERTIES OF ATMOSPHERIC AEROSOLS AS A FUNCTION OF COMPOSITION AND RELATIVE HUMIDITY

    EPA Science Inventory

    Over the past decade there has been interest in exploring possible relationships between atmospheric visibility (extinction of light) and the chemical form of aerosols in the atmosphere. A user-friendly, menu-driven program for the personal computer (AT 286 with math co-processor or...

  20. Analog optical computing primitives in silicon photonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Yunshan; DeVore, Peter T. S.; Jalali, Bahram

    Optical computing accelerators help alleviate bandwidth and power consumption bottlenecks in electronics. In this paper, we show an approach to implementing logarithmic-type analog co-processors in silicon photonics and use it to perform the exponentiation operation and the recovery of a signal in the presence of multiplicative distortion. The function is realized by exploiting nonlinear-absorption-enhanced Raman amplification saturation in a silicon waveguide.

  1. Spring 2006. Industry Study. Information Technology Industry

    DTIC Science & Technology

    2006-01-01

    ...integration of processors, coprocessors, memory, storage, etc. into a user-programmable final product. C. Software (Apple, Oracle): These firms...able to support the U.S. national security interests. C. Manufacturing: The personal computer manufacturing industry has also changed considerably

  2. High Speed Oblivious Random Access Memory (HS-ORAM)

    DTIC Science & Technology

    2015-09-01

    Bryan Parno, “Non-interactive verifiable computing: Outsourcing computation to untrusted workers”, 30th International Cryptology Conference, pp. 465...secure outsourced data access protocols. HS-ORAM deploys a number of server-side software components running inside tamper-proof secure coprocessors

  3. Scaling Support Vector Machines On Modern HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Fu, Haohuan; Song, Shuaiwen

    2015-02-01

    We designed and implemented MIC-SVM, a highly efficient parallel SVM for x86 based multicore and many-core architectures, such as the Intel Ivy Bridge CPUs and Intel Xeon Phi co-processor (MIC). We propose various novel analysis methods and optimization techniques to fully utilize the multilevel parallelism provided by these architectures and serve as general optimization methods for other machine learning tools.

  4. Analog optical computing primitives in silicon photonics

    DOE PAGES

    Jiang, Yunshan; DeVore, Peter T. S.; Jalali, Bahram

    2016-03-15

    Optical computing accelerators help alleviate bandwidth and power consumption bottlenecks in electronics. In this paper, we show an approach to implementing logarithmic-type analog co-processors in silicon photonics and use it to perform the exponentiation operation and the recovery of a signal in the presence of multiplicative distortion. The function is realized by exploiting nonlinear-absorption-enhanced Raman amplification saturation in a silicon waveguide.

  5. Implementation and Assessment of Advanced Analog Vector-Matrix Processor

    NASA Technical Reports Server (NTRS)

    Gary, Charles K.; Bualat, Maria G.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper discusses the design and implementation of an analog optical vector-matrix coprocessor with a throughput of 128 Mops for a personal computer. Vector-matrix calculations are inherently parallel, providing a promising domain for the use of optical calculators. However, to date, digital optical systems have proven too cumbersome to replace electronics, and analog processors have not demonstrated sufficient accuracy in large-scale systems. The goal of the work described in this paper is to demonstrate a viable optical coprocessor for linear operations. The analog optical processor presented has been integrated with a personal computer to provide full functionality and is the first demonstration of an optical linear algebra processor with a throughput greater than 100 Mops. The optical vector-matrix processor consists of a laser diode source, an acousto-optical modulator array to input the vector information, a liquid crystal spatial light modulator to input the matrix information, an avalanche photodiode array to read out the result vector of the vector-matrix multiplication, as well as transport optics and the electronics necessary to drive the optical modulators and interface to the computer. The intent of this research is to provide a low-cost, highly energy-efficient coprocessor for linear operations. Measurements of the analog accuracy of the processor performing 128 Mops are presented along with an assessment of the implications for future systems. A range of noise sources, including cross-talk, source amplitude fluctuations, shot noise at the detector, and non-linearities of the optoelectronic components, are measured and compared to determine the most significant source of error. The possibilities for reducing these sources of error are discussed. Also, the total error is compared with that expected from a statistical analysis of the individual components and their relation to the vector-matrix operation. The sufficiency of the measured accuracy of the processor is compared with that required for a range of typical problems. Calculations resolving alloy concentrations from spectral plume data of rocket engines are implemented on the optical processor, demonstrating its sufficiency for this problem. We also show how this technology can be easily extended to a 100 x 100 10 MHz (200 Gops) processor.

  6. Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner

    2017-11-01

    Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.
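
    Rejection ABC, the ingredient proposed above, replaces optimization with sampling: draw parameters from a prior, simulate, and keep draws whose summary statistics land within a tolerance of the observed ones. A generic sketch (the forward model and the scalar summary here are hypothetical stand-ins for the closure problem):

        #include <cmath>
        #include <random>
        #include <vector>

        double simulate_summary(double theta);  // hypothetical forward model

        // Keep prior draws whose simulated summary lies within eps of the
        // observation; the accepted set approximates the posterior.
        std::vector<double> abc_rejection(double observed, double eps, int n_trials) {
            std::mt19937 rng(42);
            std::uniform_real_distribution<double> prior(-1.0, 1.0);
            std::vector<double> accepted;
            for (int t = 0; t < n_trials; ++t) {
                double theta = prior(rng);
                if (std::abs(simulate_summary(theta) - observed) < eps)
                    accepted.push_back(theta);
            }
            return accepted;
        }

        double simulate_summary(double theta) { return theta * theta; }  // stub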

  7. Efficient sparse matrix-matrix multiplication for computing periodic responses by shooting method on Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Stoykov, S.; Atanassov, E.; Margenov, S.

    2016-10-01

    Many scientific applications involve sparse or dense matrix operations, such as solving linear systems, matrix-matrix products, eigensolvers, etc. In structural nonlinear dynamics, the computation of periodic responses and the determination of the stability of the solution are of primary interest. The shooting method is widely used for obtaining periodic responses of nonlinear systems. The method involves simultaneous operations with sparse and dense matrices. One of the computationally expensive operations in the method is the multiplication of sparse by dense matrices. In the current work, a new algorithm for sparse matrix by dense matrix products is presented. The algorithm takes into account the structure of the sparse matrix, which is obtained by space discretization of the nonlinear Mindlin's plate equation of motion by the finite element method. The algorithm is developed to use the vector engine of Intel Xeon Phi coprocessors. It is compared with the standard sparse matrix by dense matrix algorithm and the one developed by Intel MKL, and it is shown that better algorithms can be developed by considering the properties of the sparse matrix.
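
    As a baseline for the specialized kernel described above, the generic CSR sparse-by-dense product is sketched below; the paper's algorithm specializes this access pattern to the FEM matrix structure and the coprocessor's vector engine:

        #include <vector>

        // C = A * B with A (n x n) in CSR form and B, C dense row-major with
        // k columns; C is assumed zero-initialized. The inner loop over the
        // columns of B is contiguous, which is what vectorizes.
        void csr_mm(int n, int k,
                    const std::vector<int>& rowptr, const std::vector<int>& col,
                    const std::vector<double>& val,
                    const std::vector<double>& B, std::vector<double>& C) {
            for (int i = 0; i < n; ++i)
                for (int p = rowptr[i]; p < rowptr[i + 1]; ++p) {
                    const double a = val[p];
                    const int j = col[p];
                    for (int c = 0; c < k; ++c)
                        C[i * k + c] += a * B[j * k + c];
                }
        }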

  8. Transparently Interposing User Code at the System Interface

    DTIC Science & Technology

    1992-09-01

    ...communication. Finally, both the Norton AntiVirus [Symantec 91b] and PC-cillin [Trend 90] anti-virus applications intercept destructive file operations made... Trend Micro Devices, Incorporated, 1990. [Tygar & Yee 91] J. D. Tygar, Bennet Yee. Dyad: A System for Using Physically Secure Coprocessors

  9. HPC Programming on Intel Many-Integrated-Core Hardware with MAGMA Port to Xeon Phi

    DOE PAGES

    Dongarra, Jack; Gates, Mark; Haidar, Azzam; ...

    2015-01-01

    This paper presents the design and implementation of several fundamental dense linear algebra (DLA) algorithms for multicore with Intel Xeon Phi coprocessors. In particular, we consider algorithms for solving linear systems. Further, we give an overview of the MAGMA MIC library, an open source, high performance library that incorporates the developments presented here and, more broadly, provides the DLA functionality equivalent to that of the popular LAPACK library while targeting heterogeneous architectures that feature a mix of multicore CPUs and coprocessors. The LAPACK-compliance simplifies the use of the MAGMA MIC library in applications, while providing them with portably performant DLA. High performance is obtained through the use of the high-performance BLAS, hardware-specific tuning, and a hybridization methodology whereby we split the algorithm into computational tasks of various granularities. Execution of those tasks is properly scheduled over the heterogeneous hardware by minimizing data movements and mapping algorithmic requirements to the architectural strengths of the various heterogeneous hardware components. Our methodology and programming techniques are incorporated into the MAGMA MIC API, which abstracts the application developer from the specifics of the Xeon Phi architecture and is therefore applicable to algorithms beyond the scope of DLA.

  10. Comparison of the new intermediate complex atmospheric research (ICAR) model with the WRF model in a mesoscale catchment in Central Europe

    NASA Astrophysics Data System (ADS)

    Härer, Stefan; Bernhardt, Matthias; Gutmann, Ethan; Bauer, Hans-Stefan; Schulz, Karsten

    2017-04-01

    Until recently, a large gap existed in atmospheric downscaling strategies: on the one hand, computationally efficient statistical approaches are widely used; on the other hand, dynamic but CPU-intensive numerical atmospheric models such as the Weather Research and Forecasting (WRF) model exist. The Intermediate Complexity Atmospheric Research (ICAR) model developed at NCAR (Boulder, Colorado, USA) addresses this gap by combining the strengths of both approaches: the process-based structure of a dynamic model and its applicability in a changing climate, as well as the speed of a parsimonious modelling approach, which facilitates the modelling of ensembles and offers a straightforward way to test new parametrization schemes as well as various input data sources. However, the ICAR model had not yet been tested in Europe or on slightly undulating terrain. This study evaluates, for the first time, the ICAR model against WRF model runs in Central Europe, comparing a complete year of model results in the mesoscale Attert catchment (Luxembourg). In addition to these modelling results, we also describe the first implementation of ICAR on an Intel Xeon Phi architecture and perform speed tests between the Vienna cluster, a standard workstation, and an Intel Xeon Phi coprocessor. Finally, the study gives an outlook on sensitivity studies using slightly different input data sources.

  11. Using Secure Coprocessors

    DTIC Science & Technology

    1994-05-01

    can easily envision using a microkernel such as Mach 3.0 [31], the NT executive [20], or QNX [40]. We only need to add a communications server and...of the system software architecture. Both hardware subsystems run the CMU Mach 3.0 microkernel [31]: the host has special device drivers to support...in the Mach microkernel and a higher-level driver in the Unix server. The low-level drivers handle interrupts and simple device data transfers to

  12. A Qualitative Security Analysis of a New Class of 3-D Integrated Crypto Co-processors

    DTIC Science & Technology

    2012-01-01

    and mobile phones, lottery ticket vending machines, and various electronic payment systems. The main reason for their use in such applications is that...military applications such as secure communication links. However, the proliferation of Automated Teller Machines (ATMs) in the ’80s introduced them to...commercial applications. Today many popular consumer devices have cryptographic processors in them, for example, smart-cards for pay-TV access machines

  13. (M-CAT) Minor Caliber Weapons Trainer MK-19, 40mm Machine Gun

    DTIC Science & Technology

    1989-07-24

    microprocessor chip with an Intel 387 math coprocessor. The Nova 620 is a digital time base corrector; it is used to time-base correct the video data...the circuit. After filtering, the horizontal and vertical position signals are converted to digital values by the Data Translation (DTX-311) analog...from the computer. Each frame of the video disk is individually digitized as to target size, location, and range. The gun's azimuth and elevation are

  14. Cascaded VLSI neural network architecture for on-line learning

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P. (Inventor); Duong, Tuan A. (Inventor); Daud, Taher (Inventor)

    1992-01-01

    High-speed, analog, fully-parallel, and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A computation intensive feature classification application was demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as an application specific coprocessor for solving real world problems at extremely high data rates.

  15. Cascaded VLSI neural network architecture for on-line learning

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor); Daud, Taher (Inventor); Thakoor, Anilkumar P. (Inventor)

    1995-01-01

    High-speed, analog, fully-parallel and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware-compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A comparison-intensive feature classification application has been demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as application-specific-coprocessors for solving real-world problems at extremely high data rates.

  16. Evaluation of a Constrained Facet Analysis Efficiency Model for Identifying the Efficiency of Medical Treatment Facilities in the Army Medical Department

    DTIC Science & Technology

    1990-07-31

    examples of their use are available with the PASS User Documentation Manual. The data structure of PASS requires a three-level organizational...files, and missing control variables. A specific problem noted involved the absence of an 8087 mathematical co-processor on the target IBM-XT machine...System, required an operational understanding of the advanced mathematical technique used in the model. Problems with the original release of the PASS

  17. A SLAM II simulation model for analyzing space station mission processing requirements

    NASA Technical Reports Server (NTRS)

    Linton, D. G.

    1985-01-01

    Space station mission processing is modeled via the SLAM II simulation language on an IBM 4381 mainframe and an IBM PC microcomputer with 620K of RAM, two double-sided disk drives, and an 8087 coprocessor chip. Using a time-phased mission (payload) schedule and parameters associated with the mission, orbiter (space shuttle), and ground facility databases, estimates of ground facility utilization are computed. Simulation output associated with the science and applications database is used to assess alternative mission schedules.

  18. Floating-point function generation routines for 16-bit microcomputers

    NASA Technical Reports Server (NTRS)

    Mackin, M. A.; Soeder, J. F.

    1984-01-01

    Several computer subroutines have been developed that interpolate three types of nonanalytic functions: univariate, bivariate, and map. The routines use data in floating-point form. However, because they are written for use on a 16-bit Intel 8086 system with an 8087 mathematical coprocessor, they execute as fast as routines using data in scaled integer form. Although all of the routines are written in assembly language, they have been implemented in a modular fashion so as to facilitate their use with high-level languages.
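
    The routines themselves are not reproduced in the abstract; as a minimal illustration of the univariate case, a linear table-interpolation function in C (assuming strictly increasing breakpoints, and written here as a sketch rather than the original assembly) might look as follows:

        /* Minimal sketch of univariate table interpolation: given breakpoints
         * x[0..n-1] (strictly increasing) and values y[0..n-1], return the
         * linearly interpolated value at xq, clamping outside the table. */
        double interp1(const double *x, const double *y, int n, double xq)
        {
            if (xq <= x[0])     return y[0];
            if (xq >= x[n - 1]) return y[n - 1];
            int lo = 0, hi = n - 1;
            while (hi - lo > 1) {           /* binary search for the segment */
                int mid = (lo + hi) / 2;
                if (x[mid] <= xq) lo = mid; else hi = mid;
            }
            double t = (xq - x[lo]) / (x[lo + 1] - x[lo]);
            return y[lo] + t * (y[lo + 1] - y[lo]);
        }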

  19. Functional Specification and Simulation of a Floating Point Co-Processor for SPUR

    DTIC Science & Technology

    1986-08-01

    depend on this state will not be stable until the next phase; this leaves the problem of how to control events that must occur on phi 1 of a cycle. The...problems with the structure of the chip description. The worst of these problems is the absence of Slang constructs for coding separate chip component...constructs such as UNK as well. Another related problem was the inability to explicitly declare the size of Slang node values. While the correct

  20. The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware

    NASA Astrophysics Data System (ADS)

    Kathiara, Jainik

    There has been an increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores have floating-point operations. Due to the complexity and expense of floating-point hardware, these algorithms are usually converted to fixed-point operations or implemented using floating-point emulation in software. As the technology advances, more and more homogeneous computational resources and fixed-function embedded blocks are added to FPGAs, and hence the implementation of floating-point hardware becomes a feasible option. In this research we have implemented a high-performance, autonomous floating-point vector coprocessor (FPVC) that works independently within an embedded processor system. We have presented a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of the FPVC results in greater overall performance for most applications, along with improved peak performance, compared to other approaches. By parameterizing the vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also initiated the design of a software library of computational kernels, each of which adapts to the FPVC's configuration and provides maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decomposition. We have demonstrated the operation of the FPVC on a Xilinx Virtex 5 using the embedded PowerPC.

  1. TH-AB-207A-07: Radiation Dose Simulation for a Newly Proposed Dynamic Bowtie Filter for CT Using Fast Monte Carlo Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T; Lin, H; Gao, Y

    Purpose: The dynamic bowtie filter is an innovative design capable of modulating the X-ray beam and balancing the flux in the detectors, and it introduces a new way of patient-specific CT scan optimization. This study demonstrates the feasibility of performing fast Monte Carlo dose calculation for a type of dynamic bowtie filter for cone-beam CT (Liu et al., PLoS ONE 9(7), 2014) using MIC coprocessors. Methods: The dynamic bowtie filter in question consists of a highly attenuating bowtie component (HB) and a weakly attenuating bowtie (WB). The HB is filled with CeCl3 solution and its surface is defined by a transcendental equation. The WB is an elliptical cylinder filled with air and immersed in the HB. As the scanner rotates, the orientation of the WB remains fixed with respect to the static patient. In our Monte Carlo simulation, the HB was approximated by 576 boxes. The phantom was a voxelized elliptical cylinder composed of PMMA and surrounded by air (44 cm × 44 cm × 40 cm, 1000×1000×1 voxels). The dose to the PMMA phantom was tallied with 0.15% statistical uncertainty under a 100 kVp source. Two Monte Carlo codes, ARCHER and MCNP-6.1, were compared. Both used double precision, and compiler flags that may trade accuracy for speed were avoided. Results: The wall time of the simulation was 25.4 seconds for ARCHER on a 5110P MIC, 40 seconds on an X5650 CPU, and 523 seconds for the multithreaded MCNP on the same CPU. The high performance of ARCHER is attributed to the parameterized geometry and vectorization of the program hotspots. Conclusion: The dynamic bowtie filter modeled in this study is able to effectively reduce the dynamic range of the detected signals for the photon-counting detectors. With appropriate software optimization methods, accelerator-based (MIC and GPU) Monte Carlo dose engines have shown good performance and can contribute to patient-specific CT scan optimizations.
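
    As a hedged illustration of one inner operation such accelerator-based dose engines must make thread-safe, the sketch below tallies energy deposition into a voxel grid under OpenMP; all names are illustrative, and ARCHER's actual kernels are not reproduced here:

        #include <stddef.h>

        /* Sketch of a thread-safe voxel dose tally: deposit energy edep at
         * position (x, y, z) inside an nx*ny*nz grid with voxel pitch
         * dx, dy, dz. The atomic update avoids races between histories. */
        void tally(double *dose, int nx, int ny, int nz,
                   double dx, double dy, double dz,
                   double x, double y, double z, double edep)
        {
            int ix = (int)(x / dx), iy = (int)(y / dy), iz = (int)(z / dz);
            if (ix < 0 || ix >= nx || iy < 0 || iy >= ny || iz < 0 || iz >= nz)
                return;                               /* outside the phantom */
            #pragma omp atomic
            dose[((size_t)iz * ny + iy) * nx + ix] += edep;
        }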

  2. DCMS: A data analytics and management system for molecular simulation.

    PubMed

    Kumar, Anand; Grupcev, Vladimir; Berrada, Meryem; Fogarty, Joseph C; Tu, Yi-Cheng; Zhu, Xingquan; Pandit, Sagar A; Xia, Yuni

    Molecular Simulation (MS) is a powerful tool for studying physical/chemical features of large systems and has seen applications in many scientific and engineering domains. During the simulation process, the experiments generate data for a very large number of atoms, whose spatial and temporal relationships must be observed for scientific analysis. The sheer data volumes and their intensive interactions impose significant challenges for data accessing, managing, and analysis. To date, existing MS software systems fall short on storage and handling of MS data, mainly because they lack a platform to support applications that involve intensive data access and analytical processing. In this paper, we present the database-centric molecular simulation (DCMS) system our team developed in the past few years. The main idea behind DCMS is to store MS data in a relational database management system (DBMS) to take advantage of the declarative query interface (i.e., SQL), data access methods, query processing, and optimization mechanisms of modern DBMSs. A unique challenge is to handle the analytical queries that are often compute-intensive. For that, we developed novel indexing and query processing strategies (including algorithms running on modern co-processors) as integrated components of the DBMS. As a result, researchers can upload and analyze their data using efficient functions implemented inside the DBMS. Index structures are generated to store analysis results that may be interesting to other users, so that the results are readily available without duplicating the analysis. We have developed a prototype of DCMS based on the PostgreSQL system, and experiments using real MS data and workloads show that DCMS significantly outperforms existing MS software systems. We also used it as a platform to test other data management issues such as security and compression.

  3. INM. Integrated Noise Model Version 4.11. User’s Guide - Supplement

    DTIC Science & Technology

    1993-12-01

    KB of Random Access Memory (RAM) or 3 MB of RAM, if operating the INM from a RAM disk, as discussed in Section 1.2.1 below; math co-processor, Series...accessible from the Data Base using the ACDB11.EXE computer program, supplied with the Version 4.11 release. With the exception of INM airplane numbers 1, 6...

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poivey, C.; Notebaert, O.; Garnier, P.

    The ARIANE5 On Board Computer (OBC) and Inertial Reference System (SRI) are based on the Motorola MC68020 processor and MC68882 coprocessor. The SRI data acquisition board also uses the TMS320C25 DSP from Texas Instruments. These devices were characterized for proton-induced SEUs. However, the representativeness of SEU test results on processors was questioned during ARIANE5 studies, so proton tests of these devices were also performed in the actual equipment running flight software (or software representative of it). The results show that the On Board Computer and the Inertial Reference System can satisfy the requirements of the ARIANE5 missions.

  5. International Aerospace and Ground Conference on Lightning and Static Electricity (15th) Held in Atlantic City, New Jersey on October 6 - 8, 1992. Addendum

    DTIC Science & Technology

    1992-11-01

    1992 International Aerospace and Ground Conference on Lightning and Static Electricity - Addendum, October 6-8, 1992, sponsored with the Federal Aviation Administration Technical Center (ACD-230). ...The program runs well on an IBM PC or compatible 386 with a math co-processor 387 chip and a VGA monitor. For this study, streamers were added

  6. Implementing Molecular Dynamics on Hybrid High Performance Computers - Three-Body Potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Yamada, Masako

    The use of coprocessors or accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, defined as machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. Although there has been extensive research into methods to efficiently use accelerators to improve the performance of molecular dynamics (MD) employing pairwise potential energy models, little is reported in the literature for models that include many-body effects. 3-body terms are required for many popular potentials such as MEAM, Tersoff, REBO, AIREBO, Stillinger-Weber, Bond-Order Potentials, and others. Because the per-atom simulation times are much higher for models incorporating 3-body terms, there is a clear need for efficient algorithms usable on hybrid high performance computers. Here, we report a shared-memory force-decomposition for 3-body potentials that avoids memory conflicts to allow for a deterministic code with substantial performance improvements on hybrid machines. We describe modifications necessary for use in distributed memory MD codes and show results for the simulation of water with Stillinger-Weber on the hybrid Titan supercomputer. We compare performance of the 3-body model to the SPC/E water model when using accelerators. Finally, we demonstrate that our approach can attain a speedup of 5.1 with acceleration on Titan for production simulations to study water droplet freezing on a surface.

  7. Heterogeneous computing architecture for fast detection of SNP-SNP interactions.

    PubMed

    Sluga, Davor; Curk, Tomaz; Zupan, Blaz; Lotric, Uros

    2014-06-25

    The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphics Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, the MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. We have developed a heterogeneous, GPU- and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their use resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as the one for the MIC architecture-based Xeon Phi 5110P coprocessor, but also requires considerably more programming effort. General-purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems.
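
    The exhaustive pair scan itself is embarrassingly parallel, which is why both GPUs and MICs suit it. A hedged C/OpenMP skeleton of the pair enumeration is shown below; score_pair() is a placeholder for the real interaction score (SNPsyn uses an information-theoretic measure), not part of its API:

        #include <omp.h>

        /* Hypothetical skeleton of an exhaustive SNP-SNP interaction scan.
         * score_pair() stands in for the real interaction score (e.g. an
         * information-gain measure); it is a placeholder, not SNPsyn's API. */
        extern double score_pair(int i, int j);

        void scan_pairs(int n_snps, int *best_i, int *best_j, double *best)
        {
            double b = -1.0; int bi = -1, bj = -1;
            #pragma omp parallel
            {
                double lb = -1.0; int li = -1, lj = -1;
                /* dynamic schedule balances the triangular iteration space */
                #pragma omp for schedule(dynamic, 64)
                for (int i = 0; i < n_snps - 1; ++i)
                    for (int j = i + 1; j < n_snps; ++j) {
                        double s = score_pair(i, j);
                        if (s > lb) { lb = s; li = i; lj = j; }
                    }
                #pragma omp critical
                if (lb > b) { b = lb; bi = li; bj = lj; }
            }
            *best = b; *best_i = bi; *best_j = bj;
        }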

  8. Revisiting Intel Xeon Phi optimization of Thompson cloud microphysics scheme in Weather Research and Forecasting (WRF) model

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2015-10-01

    The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility: it allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel Many Integrated Core (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.
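
    For context, directive-based offload to the Xeon Phi in this era typically used Intel's Language Extensions for Offload (LEO). The sketch below illustrates the mechanism only; the array, bounds, and update are placeholders, not WRF source:

        /* Hedged sketch of Intel Language Extensions for Offload (LEO), the
         * directive style used with first-generation Xeon Phi coprocessors.
         * A non-Intel compiler simply ignores the offload pragma and runs
         * the loop on the host. */
        void sweep_columns(int n, const float *src, float *dst)
        {
            #pragma offload target(mic:0) in(src : length(n)) out(dst : length(n))
            {
                #pragma omp parallel for
                for (int i = 0; i < n; ++i)
                    dst[i] = src[i] * 0.5f;  /* stand-in for the physics update */
            }
        }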

  9. Heterogeneous computing architecture for fast detection of SNP-SNP interactions

    PubMed Central

    2014-01-01

    Background The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphics Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, the MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. Results We have developed a heterogeneous, GPU- and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their use resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as the one for the MIC architecture-based Xeon Phi 5110P coprocessor, but also requires considerably more programming effort. Conclusions General-purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems. PMID:24964802

  10. FPGA Coprocessor for Accelerated Classification of Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.

    2008-01-01

    An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration, enabling greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as on EO-1, have limited computing power and extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, and two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.
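
    For reference, the linear-kernel SVM decision rule being accelerated reduces to one dot product per class per pixel. A minimal C sketch follows; the weights, classes, and data layout are illustrative, not the flight code:

        /* Minimal linear-kernel SVM classifier: for a pixel feature vector x
         * of length d, with one weight vector w and bias b per class, the
         * class with the largest decision value f_c(x) = w_c . x + b_c wins. */
        int classify_pixel(const float *x, const float *w /* n_cls x d */,
                           const float *b, int d, int n_cls)
        {
            int best = 0;
            float best_f = -1e30f;
            for (int c = 0; c < n_cls; ++c) {
                float f = b[c];
                for (int k = 0; k < d; ++k)
                    f += w[c * d + k] * x[k];
                if (f > best_f) { best_f = f; best = c; }
            }
            return best;  /* index into, e.g., {snow, water, ice, land, cloud} */
        }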

  11. Performance Study of Monte Carlo Codes on Xeon Phi Coprocessors — Testing MCNP 6.1 and Profiling ARCHER Geometry Module on the FS7ONNi Problem

    NASA Astrophysics Data System (ADS)

    Liu, Tianyu; Wolfe, Noah; Lin, Hui; Zieb, Kris; Ji, Wei; Caracappa, Peter; Carothers, Christopher; Xu, X. George

    2017-09-01

    This paper contains two parts revolving around Monte Carlo transport simulation on Intel Many Integrated Core coprocessors (MIC, also known as Xeon Phi). (1) MCNP 6.1 was recompiled into multithreading (OpenMP) and multiprocessing (MPI) forms, respectively, without modification to the source code. The new codes were tested on a 60-core 5110P MIC. The test case was FS7ONNi, a radiation shielding problem used in MCNP's verification and validation suite. It was observed that both codes became slower on the MIC than on a 6-core X5650 CPU, by a factor of 4 for the MPI code and, abnormally, 20 for the OpenMP code, and both exhibited limited strong-scaling capability. (2) We have recently added a Constructive Solid Geometry (CSG) module to our ARCHER code to provide better support for geometry modelling in radiation shielding simulation. The functions of this module are frequently called in the particle random walk process. To identify the performance bottleneck we developed a CSG proxy application and profiled the code using the geometry data from FS7ONNi. The profiling data showed that the code was primarily memory-latency bound on the MIC. This study suggests that, despite the low initial porting effort, Monte Carlo codes do not naturally lend themselves to the MIC platform, just as with GPUs, and that the memory latency problem needs to be addressed in order to achieve a decent performance gain.

  12. Validity of the iPhone M7 motion co-processor as a pedometer for able-bodied ambulation.

    PubMed

    Major, Matthew J; Alford, Micah

    2016-12-01

    Physical activity benefits for disease prevention are well established. Smartphones offer a convenient platform for community-based step count estimation to monitor and encourage physical activity. Accuracy is dependent on hardware-software platforms, creating a recurring challenge for validation, but the Apple iPhone® M7 motion co-processor provides a standardised method that helps address this issue. Validity of the M7 in recording step count for level-ground, able-bodied walking at three self-selected speeds, and its agreement with the StepWatch™, were assessed. Steps were measured concurrently with the iPhone® (using a custom application to extract step count), the StepWatch™, and manual counting. Agreement between the iPhone® and manual/StepWatch™ counts was estimated through Pearson correlation and Bland-Altman analyses. Data from 20 participants suggested that iPhone® step count correlations with manual and StepWatch™ counts were strong for customary (1.3 ± 0.1 m/s) and fast (1.8 ± 0.2 m/s) speeds, but weak for the slow (1.0 ± 0.1 m/s) speed. Mean absolute error (manual minus iPhone®) was 21%, 8%, and 4% for the slow, customary, and fast speeds, respectively. The M7 accurately records step count during customary and fast walking speeds, but is prone to considerable inaccuracies at slow speeds, which has important implications for certain patient groups. The iPhone® may be a suitable alternative to the StepWatch™ for faster walking speeds only.
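
    As a worked sketch of the agreement analysis used here: the Bland-Altman 95% limits of agreement are the mean paired difference plus or minus 1.96 standard deviations of the differences. A small C routine (illustrative, not the authors' code) follows:

        #include <math.h>

        /* Bland-Altman limits of agreement for paired measurements a and b:
         * bias = mean(a - b), limits = bias +/- 1.96 * sd(a - b). */
        void bland_altman(const double *a, const double *b, int n,
                          double *bias, double *lo, double *hi)
        {
            double sum = 0.0, sum2 = 0.0;
            for (int i = 0; i < n; ++i) {
                double d = a[i] - b[i];
                sum += d; sum2 += d * d;
            }
            double mean = sum / n;
            double sd = sqrt((sum2 - n * mean * mean) / (n - 1));
            *bias = mean;
            *lo = mean - 1.96 * sd;
            *hi = mean + 1.96 * sd;
        }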

  13. Toward GEOS-6, A Global Cloud System Resolving Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putman, William M.

    2010-01-01

    NASA is committed to observing and understanding the weather and climate of our home planet through the use of multi-scale modeling systems and space-based observations. Global climate models have evolved to take advantage of the influx of multi- and many-core computing technologies and the availability of large clusters of multi-core microprocessors. GEOS-6 is a next-generation cloud-system-resolving atmospheric model that will place NASA at the forefront of scientific exploration of our atmosphere and climate. Model simulations with GEOS-6 will produce a realistic representation of our atmosphere on the scale of typical satellite observations, bringing a visual comprehension of model results to a new level among climate enthusiasts. In preparation for GEOS-6, the agency's flagship Earth System Modeling Framework has been enhanced to support cutting-edge high-resolution global climate and weather simulations. Improvements include a cubed-sphere grid that exposes parallelism, a non-hydrostatic finite-volume dynamical core, and algorithms designed for co-processor technologies, among others. GEOS-6 represents a fundamental advancement in the capability of global Earth system models. The ability to directly compare global simulations at the resolution of spaceborne satellite images will lead to algorithm improvements and better utilization of space-based observations within the GEOS data assimilation system.

  14. Implementation and Optimization of miniGMG - a Compact Geometric Multigrid Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Kalamkar, Dhiraj; Singh, Amik

    2012-12-01

    Multigrid methods are widely used to accelerate the convergence of iterative solvers for linear systems in a number of different application areas. In this report, we describe miniGMG, our compact geometric multigrid benchmark designed to proxy the multigrid solves found in AMR applications. We explore optimization techniques for geometric multigrid on existing and emerging multicore systems including the Opteron-based Cray XE6 and Intel Sandy Bridge- and Nehalem-based Infiniband clusters, as well as manycore-based architectures including NVIDIA's Fermi and Kepler GPUs and Intel's Knights Corner (KNC) co-processor. This report examines a variety of novel techniques including communication aggregation, threaded wavefront-based DRAM communication avoiding, dynamic threading decisions, SIMDization, and fusion of operators. We quantify performance through each phase of the V-cycle for both single-node and distributed-memory experiments and provide detailed analysis for each class of optimization. Results show our optimizations yield significant speedups across a variety of subdomain sizes while simultaneously demonstrating the potential of multi- and manycore processors to dramatically accelerate single-node performance. However, our analysis also indicates that improvements in networks and communication will be essential to reap the potential of manycore processors in large-scale multigrid calculations.

  15. Aging in the three-dimensional random-field Ising model

    NASA Astrophysics Data System (ADS)

    von Ohr, Sebastian; Manssen, Markus; Hartmann, Alexander K.

    2017-07-01

    We studied the nonequilibrium aging behavior of the random-field Ising model in three dimensions for various values of the disorder strength. This allowed us to investigate how the aging behavior changes across the ferromagnetic-paramagnetic phase transition. We investigated a large system size of N = 256³ spins and up to 10⁸ Monte Carlo sweeps. To reach these necessarily long simulation times, we employed an implementation running on Intel Xeon Phi coprocessors, reaching single-spin-flip times as short as 6 ps. We measured typical correlation functions in space and time to extract a growing length scale and corresponding exponents.
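
    For orientation, the elementary operation being timed (about 6 ps per attempted flip) is the Metropolis update of the random-field Ising Hamiltonian H = -J Σ s_i s_j - Σ h_i s_i. A scalar C sketch is given below; the paper's implementation is heavily vectorized, and the neighbor-sum helper here is a placeholder:

        #include <math.h>
        #include <stdlib.h>

        /* Scalar sketch of one Metropolis sweep of the 3D random-field Ising
         * model on an L^3 lattice: spins s in {-1,+1}, coupling J = 1,
         * quenched random fields h[i]. nbr_sum() sums the six neighbors
         * under periodic boundaries; it is a placeholder helper. */
        extern int nbr_sum(const signed char *s, int L, int i);

        void metropolis_sweep(signed char *s, const double *h, int L, double beta)
        {
            int n = L * L * L;
            for (int i = 0; i < n; ++i) {
                /* energy change if spin i flips */
                double dE = 2.0 * s[i] * (nbr_sum(s, L, i) + h[i]);
                if (dE <= 0.0 || drand48() < exp(-beta * dE))
                    s[i] = (signed char)-s[i];
            }
        }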

  16. Intel Many Integrated Core (MIC) architecture optimization strategies for a memory-bound Weather Research and Forecasting (WRF) Goddard microphysics scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Goddard cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The WRF is a widely used weather prediction system, and its development is a collaborative effort around the globe. The Goddard microphysics scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Goddard scheme incorporates a large number of improvements. Thus, we have optimized the code of this important part of WRF. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The Intel MIC is capable of executing a full operating system and entire programs, rather than just kernels as GPUs do. The MIC coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the original code on the Xeon Phi 7120P by a factor of 4.7x. Furthermore, the same optimizations improved performance on a dual-socket Intel Xeon E5-2670 system by a factor of 2.8x compared to the original code.

  17. Performance tuning Weather Research and Forecasting (WRF) Goddard longwave radiative transfer scheme on Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2015-10-01

    The next-generation mesoscale numerical weather prediction system, the Weather Research and Forecasting (WRF) model, is designed for dual use, forecasting and research. WRF offers multiple physics options that can be combined in any way; one of these options is radiance computation. The major source of energy for the earth's climate is solar radiation, so it is imperative to accurately model the horizontal and vertical distribution of the heating. The Goddard solar radiative transfer model includes absorption due to water vapor, ozone, oxygen, carbon dioxide, clouds, and aerosols. The model computes the interactions among absorption and scattering by clouds, aerosols, molecules, and the surface. Finally, fluxes are integrated over the entire longwave spectrum. In this paper, we present our results of optimizing the Goddard longwave radiative transfer scheme on Intel Many Integrated Core (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques, which are discussed in this paper. The optimizations improved the performance of the original Goddard longwave radiative transfer scheme on the Xeon Phi 7120P by a factor of 2.2x. Furthermore, the same optimizations improved the performance on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 2.1x compared to the original Goddard longwave radiative transfer scheme code.

  18. AnRAD: A Neuromorphic Anomaly Detection Framework for Massive Concurrent Data Streams.

    PubMed

    Chen, Qiuwen; Luley, Ryan; Wu, Qing; Bishop, Morgan; Linderman, Richard W; Qiu, Qinru

    2018-05-01

    The evolution of high performance computing technologies has enabled the large-scale implementation of neuromorphic models and pushed research in computational intelligence into a new era. Among machine learning applications, unsupervised detection of anomalous streams is especially challenging due to the requirements of detection accuracy and real-time performance. Designing a computing framework that harnesses the growing computing power of multicore systems while maintaining high sensitivity and specificity to anomalies is an urgent research topic. In this paper, we propose anomaly recognition and detection (AnRAD), a bio-inspired detection framework that performs probabilistic inferences. We analyze the feature dependency and develop a self-structuring method that learns an efficient confabulation network using unlabeled data. This network is capable of fast incremental learning, which continuously refines the knowledge base using streaming data. Compared with several existing anomaly detection approaches, our method provides competitive detection quality. Furthermore, we exploit the massively parallel structure of the AnRAD framework. Our implementations of the detection algorithm on the graphics processing unit and the Xeon Phi coprocessor both obtain substantial speedups over the sequential implementation on a general-purpose microprocessor. The framework provides real-time service to concurrent data streams within diversified knowledge contexts, and can be applied to large problems with multiple local patterns. Experimental results demonstrate high computing performance and memory efficiency. For vehicle behavior detection, the framework is able to monitor up to 16000 vehicles (data streams) and their interactions in real time with a single commodity coprocessor, and uses less than 0.2 ms for one testing subject. Finally, the detection network is ported to our spiking neural network simulator to show the potential of adapting to the emerging neuromorphic architectures.

  19. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures.

    PubMed

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-04-01

    Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suited to MC methods. The class-II condensed history algorithm of MCsquare provides a fast yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/Geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with GATE/Geant4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, the simulation time is below 25 s for 10⁷ primary 200 MeV protons in average soft tissues, using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
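
    The class-II split described here can be caricatured in a few lines: per step, energy transfers above a user threshold are sampled as discrete ionizations, while sub-threshold losses are lumped into a restricted stopping power. In the hedged C sketch below, both helper functions are placeholders, not MCsquare's API:

        /* Caricature of one class-II condensed-history step: energy transfers
         * above t_cut are sampled as discrete ionizations, while softer
         * collisions are absorbed into a restricted stopping power. Both
         * helpers are placeholders, not MCsquare's API. */
        extern double restricted_dedx(double E, double t_cut);  /* soft part */
        extern int    sample_hard_loss(double E, double t_cut, double *loss);

        double do_step(double E, double step_cm, double t_cut)
        {
            E -= restricted_dedx(E, t_cut) * step_cm;  /* continuous soft loss */
            double loss;
            if (sample_hard_loss(E, t_cut, &loss))     /* occasional hard event */
                E -= loss;           /* a secondary electron could start here */
            return E;
        }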

  20. Fast Fourier Transform Co-Processor (FFTC)- Towards Embedded GFLOPs

    NASA Astrophysics Data System (ADS)

    Kuehl, Christopher; Liebstueckel, Uwe; Tejerina, Isaac; Uemminghaus, Michael; Wite, Felix; Kolb, Michael; Suess, Martin; Weigand, Roland

    2012-08-01

    Many signal processing applications and algorithms perform their operations on data in the transform domain to gain efficiency. The Fourier Transform Co-Processor has been developed with the aim of offloading general-purpose processors from performing these transformations and thereby boosting the overall performance of a processing module. The IP of the commercial PowerFFT processor was selected and adapted to meet the constraints of the space environment. In the frame of the ESA activity “Fast Fourier Transform DSP Co-processor (FFTC)” (ESTEC Contract No. 15314/07/NL/LvH/ma), the objectives were the following: production of prototypes of a space-qualified version of the commercial PowerFFT chip, called FFTC, based on the PowerFFT IP; and development of a stand-alone FFTC Accelerator Board (FTAB) based on the FFTC, including the controller FPGA and SpaceWire interfaces, to verify the FFTC function and performance. The FFTC chip performs its calculations with floating-point precision. Stand-alone, it is capable of computing FFTs of up to 1K complex samples in length in only 10 μs, which corresponds to an equivalent processing performance of 4.7 GFlops. In this mode the maximum sustained data throughput reaches 6.4 Gbit/s. When connected to up to 4 EDAC-protected SDRAM memory banks, the FFTC can perform long FFTs with up to 1M complex samples in length, or multidimensional FFT-based processing tasks. A controller FPGA on the FTAB takes care of the SDRAM addressing. The instructions commanded via the controller FPGA are used to set up the data flow and generate the memory addresses. The presentation will give an overview of the project, including the results of the validation of the FFTC ASIC prototypes.

  1. P-Hint-Hunt: a deep parallelized whole genome DNA methylation detection tool.

    PubMed

    Peng, Shaoliang; Yang, Shunyun; Gao, Ming; Liao, Xiangke; Liu, Jie; Yang, Canqun; Wu, Chengkun; Yu, Wenqiang

    2017-03-14

    A growing number of studies have used whole-genome DNA methylation detection, one of the most important parts of epigenetics research, to find significant relationships between DNA methylation and several typical diseases, such as cancers and diabetes. In many of those studies, mapping bisulfite-treated sequences to the whole genome has been the main method of studying DNA cytosine methylation. However, existing tools suffer from inaccuracy and long run times. In our study, we designed a new DNA methylation prediction tool ("Hint-Hunt") to solve these problems. Through an optimized complex alignment computation and Smith-Waterman matrix dynamic programming, Hint-Hunt can analyze and predict DNA methylation status. But when Hint-Hunt predicts DNA methylation status on large-scale datasets, speed and temporal-spatial efficiency remain problems. To address the limits of the Smith-Waterman dynamic programming and the low temporal-spatial efficiency, we further designed a deeply parallelized whole-genome DNA methylation detection tool ("P-Hint-Hunt") for the Tianhe-2 (TH-2) supercomputer. To the best of our knowledge, P-Hint-Hunt is the first parallel DNA methylation detection tool with a high speed-up for processing large-scale datasets, and it can run both on CPUs and Intel Xeon Phi coprocessors. Moreover, we deploy and evaluate Hint-Hunt and P-Hint-Hunt on the TH-2 supercomputer at different scales. The experimental results show that our tools eliminate the deviation caused by bisulfite treatment in the mapping procedure, and that the multi-level parallel program yields a 48-fold speed-up with 64 threads. P-Hint-Hunt gains a deep acceleration on the CPU/Intel Xeon Phi heterogeneous platform, taking full advantage of the multi-core CPUs and many-core Phi coprocessors.
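
    Since the computational core is Smith-Waterman dynamic programming, a minimal scalar C version of the local-alignment recurrence is shown below; the scoring constants are illustrative, and the real tool parallelizes and tunes this heavily:

        #include <string.h>

        /* Minimal Smith-Waterman local-alignment score with a linear gap
         * penalty: O(m*n) time, O(n) memory via two rolling DP rows.
         * Scoring constants are illustrative. */
        int smith_waterman(const char *a, const char *b)
        {
            enum { MATCH = 2, MISMATCH = -1, GAP = -2 };
            int m = (int)strlen(a), n = (int)strlen(b), best = 0;
            int prev[n + 1], cur[n + 1];  /* VLAs; heap-allocate for long reads */
            memset(prev, 0, sizeof prev);
            for (int i = 1; i <= m; ++i) {
                cur[0] = 0;
                for (int j = 1; j <= n; ++j) {
                    int diag = prev[j - 1] + (a[i - 1] == b[j - 1] ? MATCH : MISMATCH);
                    int up = prev[j] + GAP, left = cur[j - 1] + GAP;
                    int v = diag;
                    if (up > v) v = up;
                    if (left > v) v = left;
                    if (v < 0) v = 0;     /* local alignment floors at zero */
                    cur[j] = v;
                    if (v > best) best = v;
                }
                memcpy(prev, cur, sizeof cur);
            }
            return best;
        }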

  2. Does the Intel Xeon Phi processor fit HEP workloads?

    NASA Astrophysics Data System (ADS)

    Nowak, A.; Bitzes, G.; Dotti, A.; Lazzaro, A.; Jarp, S.; Szostek, P.; Valsan, L.; Botezatu, M.; Leduc, J.

    2014-06-01

    This paper summarizes the five years of CERN openlab's efforts focused on the Intel Xeon Phi co-processor, from the time of its inception to public release. We consider the architecture of the device vis-à-vis the characteristics of HEP software and identify key opportunities for HEP processing, as well as scaling limitations. We report on improvements and speedups linked to parallelization and vectorization on benchmarks involving software frameworks such as Geant4 and ROOT. Finally, we extrapolate current software and hardware trends and project them onto accelerators of the future, with the specifics of offline and online HEP processing in mind.

  3. A floating-point/multiple-precision processor for airborne applications

    NASA Technical Reports Server (NTRS)

    Yee, R.

    1982-01-01

    A compact input output (I/O) numerical processor capable of performing floating-point, multiple precision and other arithmetic functions at execution times which are at least 100 times faster than comparable software emulation is described. The I/O device is a microcomputer system containing a 16 bit microprocessor, a numerical coprocessor with eight 80 bit registers running at a 5 MHz clock rate, 18K random access memory (RAM) and 16K electrically programmable read only memory (EPROM). The processor acts as an intelligent slave to the host computer and can be programmed in high order languages such as FORTRAN and PL/M-86.

  4. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putman, William M.

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  5. Efficient Implementation of MrBayes on Multi-GPU

    PubMed Central

    Zhou, Jianfu; Liu, Xiaoguang; Wang, Gang

    2013-01-01

    MrBayes, using Metropolis-coupled Markov chain Monte Carlo (MCMCMC or (MC)³), is a popular program for Bayesian inference. As a leading method of using DNA data to infer phylogeny, the (MC)³ Bayesian algorithm and its improved and parallel versions are now not fast enough for biologists to analyze massive real-world DNA data. Recently, the graphics processing unit (GPU) has shown its power as a coprocessor (or rather, an accelerator) in many fields. This article describes an efficient implementation, a(MC)³ (aMCMCMC), of MrBayes (MC)³ on the compute unified device architecture (CUDA). By dynamically adjusting the task granularity to adapt to input data size and hardware configuration, it makes full use of GPU cores with different data sets. An adaptive method is also developed to split and combine DNA sequences to make full use of a large number of GPU cards. Furthermore, a new “node-by-node” task scheduling strategy is developed to improve concurrency, and several optimizing methods are used to reduce extra overhead. Experimental results show that a(MC)³ achieves up to 63× speedup over serial MrBayes on a single machine with one GPU card, up to 170× speedup with four GPU cards, and up to 478× speedup with a 32-node GPU cluster. a(MC)³ is dramatically faster than all previous (MC)³ algorithms and scales well to large GPU clusters. PMID:23493260

  6. Efficient implementation of MrBayes on multi-GPU.

    PubMed

    Bao, Jie; Xia, Hongju; Zhou, Jianfu; Liu, Xiaoguang; Wang, Gang

    2013-06-01

    MrBayes, using Metropolis-coupled Markov chain Monte Carlo (MCMCMC or (MC)³), is a popular program for Bayesian inference. As a leading method of using DNA data to infer phylogeny, the (MC)³ Bayesian algorithm and its improved and parallel versions are now not fast enough for biologists to analyze massive real-world DNA data. Recently, the graphics processing unit (GPU) has shown its power as a coprocessor (or rather, an accelerator) in many fields. This article describes an efficient implementation, a(MC)³ (aMCMCMC), of MrBayes (MC)³ on the compute unified device architecture (CUDA). By dynamically adjusting the task granularity to adapt to input data size and hardware configuration, it makes full use of GPU cores with different data sets. An adaptive method is also developed to split and combine DNA sequences to make full use of a large number of GPU cards. Furthermore, a new "node-by-node" task scheduling strategy is developed to improve concurrency, and several optimizing methods are used to reduce extra overhead. Experimental results show that a(MC)³ achieves up to 63× speedup over serial MrBayes on a single machine with one GPU card, up to 170× speedup with four GPU cards, and up to 478× speedup with a 32-node GPU cluster. a(MC)³ is dramatically faster than all previous (MC)³ algorithms and scales well to large GPU clusters.

  7. Communication and control in an integrated manufacturing system

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Throne, Robert D.; Muthuswamy, Yogesh K.

    1987-01-01

    Typically, components in a manufacturing system are all centrally controlled. Due to possible communication bottlenecking, unreliability, and inflexibility caused by using a centralized controller, a new concept of system integration called an Integrated Multi-Robot System (IMRS) was developed. The IMRS can be viewed as a distributed real time system. Some of the current research issues being examined to extend the framework of the IMRS to meet its performance goals are presented. These issues include the use of communication coprocessors to enhance performance, the distribution of tasks and the methods of providing fault tolerance in the IMRS. An application example of real time collision detection, as it relates to the IMRS concept, is also presented and discussed.

  8. Performance of GeantV EM Physics Models

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2017-10-01

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  9. Electromagnetic Physics Models for Parallel Computing Architectures

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  10. SAMIS- STANDARD ASSEMBLY-LINE MANUFACTURING INDUSTRY SIMULATION

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1994-01-01

    The Standard Assembly-Line Manufacturing Industry Simulation (SAMIS) program was originally developed to model a hypothetical U. S. industry which manufactures silicon solar modules for use in electricity generation. The SAMIS program has now been generalized to the extent that it should be useful for simulating many different production-line manufacturing industries and companies. The most important capability of SAMIS is its ability to "simulate" an industry based on a model developed by the user with the aid of the SAMIS program. The results of the simulation are a set of financial reports which detail the requirements, including quantities and cost, of the companies and processes which comprise the industry. SAMIS provides a fair, consistent, and reliable means of comparing manufacturing processes being developed by numerous independent efforts. It can also be used to assess the industry-wide impact of changes in financial parameters, such as cost of resources and services, inflation rates, interest rates, tax policies, and required return on equity. Because of the large amount of data needed to describe an industry, a major portion of SAMIS is dedicated to data entry and maintenance. This activity in SAMIS is referred to as model management. Model management requires a significant amount of interaction through a system of "prompts" which make it possible for persons not familiar with computers, or the SAMIS program, to provide all of the data necessary to perform a simulation. SAMIS is written in TURBO PASCAL (version 2.0 required for compilation) and requires 10 meg of hard disk space, an 8087 coprocessor, and an IBM color graphics monitor. Executables and source code are provided. SAMIS was originally developed in 1978; the IBM PC version was developed in 1985. Release 6.1 was made available in 1986, and includes the PC-IPEG program.

  11. Development of seismic tomography software for hybrid supercomputers

    NASA Astrophysics Data System (ADS)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique for computing a velocity model of a geologic structure from first-arrival travel times of seismic waves. The technique is used in the processing of regional and global seismic data, in seismic exploration for mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of the development of seismic monitoring systems and the increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for seismic tomography applications, with improved performance, accuracy, and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and a software package for such systems, to be used in processing large volumes of seismic data (hundreds of gigabytes and more). These algorithms and the software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators, and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using an eikonal equation solver, arrival times of seismic waves are computed based on an assumed velocity model of the geologic structure being analyzed. To solve the linearized inverse problem, a tomographic matrix is computed that connects model adjustments with travel-time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on the target architectures is considered. During the first stage of this work, algorithms were developed for execution on supercomputers using multicore CPUs only, with preliminary performance tests showing good parallel efficiency on large numerical grids. Porting of the algorithms to hybrid supercomputers is currently ongoing.
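
    The inversion step described above (a tomographic matrix A connecting model adjustments dm to travel-time residuals r, with regularization) can be sketched as a damped Landweber iteration on the normal equations (A^T A + lambda*I) dm = A^T r. The matrix-free C++ sketch below is illustrative only, not the authors' software; A and At stand for user-supplied forward and adjoint operators.

        #include <functional>
        #include <vector>
        #include <cstddef>

        // Damped Landweber iteration for (A^T A + lambda*I) dm = A^T r, written
        // matrix-free: only products with A (forward) and A^T (adjoint) are needed.
        using Vec = std::vector<double>;
        using Op  = std::function<Vec(const Vec&)>;

        Vec landweber(const Op& A, const Op& At, const Vec& r,
                      std::size_t nmodel, double alpha, double lambda, int iters) {
            Vec dm(nmodel, 0.0);
            for (int k = 0; k < iters; ++k) {
                Vec res = A(dm);                                  // A*dm
                for (std::size_t i = 0; i < res.size(); ++i)
                    res[i] = r[i] - res[i];                       // r - A*dm
                Vec g = At(res);                                  // gradient direction
                for (std::size_t i = 0; i < nmodel; ++i)
                    dm[i] += alpha * (g[i] - lambda * dm[i]);     // damped update
            }
            return dm;
        }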

  12. Intel Xeon Phi accelerated Weather Research and Forecasting (WRF) Goddard microphysics scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, J.; Huang, B.; Huang, A. H.-L.

    2014-12-01

    The Weather Research and Forecasting (WRF) model is a numerical weather prediction system designed to serve both atmospheric research and operational forecasting needs. WRF development is a global collaboration, and the model is used by academic atmospheric scientists, operational weather forecasters, and others. WRF contains several physics components, of which the most time-consuming is the microphysics. One such scheme is the Goddard cloud microphysics scheme, a sophisticated scheme that incorporates a large number of improvements over earlier microphysics schemes. The Goddard scheme is well suited to massively parallel computation, as there are no interactions among horizontal grid points. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture; it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. Unlike a GPU, the MIC is capable of executing a full operating system and entire programs rather than just kernels, and it supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers. Obtaining maximum performance from the MIC nevertheless requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the Goddard microphysics scheme on a Xeon Phi 7120P by a factor of 4.7× and reduced the scheme's share of the total WRF processing time from 20.0% to 7.5%. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2670 by a factor of 2.8× compared to the original code.
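
    Because the scheme couples grid points only in the vertical, each horizontal column can be updated independently; the sketch below shows the generic threads-over-columns, SIMD-over-the-contiguous-index pattern such optimizations exploit. It is a hedged illustration with placeholder physics, not the WRF code.

        #include <vector>
        #include <cstddef>

        // Generic pattern only: the horizontal (i, j) loops parallelize freely
        // because columns are independent; the contiguous i index vectorizes.
        // The per-cell update is a toy placeholder, not the Goddard scheme.
        void microphysics_like_update(std::vector<float>& q,
                                      std::size_t ni, std::size_t nj, std::size_t nk) {
            #pragma omp parallel for collapse(2)
            for (std::size_t j = 0; j < nj; ++j)
                for (std::size_t k = 0; k < nk; ++k) {
                    float* row = &q[(j * nk + k) * ni];
                    #pragma omp simd
                    for (std::size_t i = 0; i < ni; ++i)
                        row[i] = row[i] > 0.f ? 0.99f * row[i] : 0.f;  // toy update
                }
        }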

  13. Separable projection integrals for higher-order correlators of the cosmic microwave sky: Acceleration by factors exceeding 100

    NASA Astrophysics Data System (ADS)

    Briggs, J. P.; Pennycook, S. J.; Fergusson, J. R.; Jäykkä, J.; Shellard, E. P. S.

    2016-04-01

    We present a case study describing efforts to optimise and modernise "Modal", the simulation and analysis pipeline used by the Planck satellite experiment for constraining general non-Gaussian models of the early universe via the bispectrum (or three-point correlator) of the cosmic microwave background radiation. We focus on one particular element of the code: the projection of bispectra from the end of inflation to the spherical shell at decoupling, which defines the CMB we observe today. This code involves a three-dimensional inner product between two functions, one of which requires an integral, on a non-rectangular domain containing a sparse grid. We show that by employing separable methods this calculation can be reduced to a one-dimensional summation plus two integrations, reducing the overall dimensionality from four to three. The introduction of separable functions also solves the issue of the non-rectangular sparse grid. This separable method can become unstable in certain scenarios and so the slower non-separable integral must be calculated instead. We present a discussion of the optimisation of both approaches. We demonstrate significant speed-ups of ≈100×, arising from a combination of algorithmic improvements and architecture-aware optimisations targeted at improving thread and vectorisation behaviour. The resulting MPI/OpenMP hybrid code is capable of executing on clusters containing processors and/or coprocessors, with strong-scaling efficiency of 98.6% on up to 16 nodes. We find that a single coprocessor outperforms two processor sockets by a factor of 1.3× and that running the same code across a combination of both microarchitectures improves performance-per-node by a factor of 3.38×. By making bispectrum calculations competitive with those for the power spectrum (or two-point correlator) we are now able to consider joint analysis for cosmological science exploitation of new data.
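
    The dimensional reduction rests on separability. Schematically (our notation, not the paper's): if the integrand factorizes, the 3D inner product collapses into products of 1D pieces,

        \[
        \int f(x)\,g(y)\,h(z)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z
          = \Big(\int f(x)\,\mathrm{d}x\Big)
            \Big(\int g(y)\,\mathrm{d}y\Big)
            \Big(\int h(z)\,\mathrm{d}z\Big),
        \]

    so expanding the bispectrum in a separable basis, B(k1,k2,k3) ≈ Σ_n α_n q_{n1}(k1) q_{n2}(k2) q_{n3}(k3), turns the projection into 1D integrals of each q followed by a 1D summation over n, which is also what sidesteps the non-rectangular sparse grid.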

  14. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo; Sterpin, Edmond

    2016-04-15

    Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the latest generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suited to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%/1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. The optimized code enables accurate MC calculation within a computation time adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
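
    In caricature, a class-II condensed-history loop looks like the sketch below: energy losses above the threshold are sampled as discrete hard events, while sub-threshold losses are grouped into a continuous restricted stopping power (with a multiple-scattering deflection per step). All functions and constants here are toy placeholders, not MCsquare's models.

        #include <random>

        // Caricature of a class-II condensed-history step loop (illustrative only).
        static double restricted_stopping_power(double E) {   // soft losses < threshold
            return 2.0 / (E + 1.0);                           // toy dE/dx, MeV/cm
        }
        static double sample_hard_loss(double E, std::mt19937& g) {
            return E * std::uniform_real_distribution<>(0.01, 0.1)(g);  // toy ionization
        }

        void transport(double E /*MeV*/, double Ecut, std::mt19937& rng) {
            std::uniform_real_distribution<> u(0.0, 1.0);
            while (E > Ecut) {
                const double step = 0.1;                   // cm; a stepper would choose this
                E -= restricted_stopping_power(E) * step;  // soft events, grouped continuously
                // (a multiple-scattering deflection would also be applied per step)
                if (u(rng) < 0.05)                         // toy hard-event probability
                    E -= sample_hard_loss(E, rng);         // hard ionization, sampled individually
            }
        }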

  15. [Hardware for graphics systems].

    PubMed

    Goetz, C

    1991-02-01

    In all personal computer applications, whether for private or professional use, the decision of which "brand" of computer to buy is of central importance. In the USA, Apple computers are mainly used in universities, while in Europe computers of the so-called "industry standard" by IBM (or clones thereof) have been increasingly used for many years. Independently of any brand-name considerations, the computer components purchased must meet the current (and projected) needs of the user. Graphics capabilities and standards, processor speed, the use of co-processors, and input and output devices such as mice, printers, and scanners are discussed. This overview is meant to serve as a decision aid, giving potential users a short but detailed summary of current technical features.

  16. Electromagnetic physics models for parallel computing architectures

    DOE PAGES

    Amadio, G.; Ananya, A.; Apostolakis, J.; ...

    2016-11-21

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next-generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and the multi-threading capabilities of coprocessors, including NVidia GPUs and the Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and the type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as part of the GeantV project. Finally, the results of a preliminary performance evaluation and physics validation are presented as well.

  17. Mold heating and cooling microprocessor conversion. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, D.P.

    Conversion of the microprocessors and software for the Mold Heating and Cooling (MHAC) pump package control systems was initiated to allow required system enhancements and provide data communications capabilities with the Plastics Information and Control System (PICS). The existing microprocessor-based control systems for the pump packages use an Intel 8088-based microprocessor board with a maximum of 64 Kbytes of program memory. The requirements for the system conversion were developed, and hardware has been selected to allow maximum reuse of existing hardware and software while providing the required additional capabilities and capacity. The new hardware will incorporate an Intel 80286-based microprocessor board with an 80287 math coprocessor; the system includes additional memory, I/O, and RS232 communication ports.

  18. First experience of vectorizing electromagnetic physics models for detector simulation

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Bianchini, C.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; de Fine Licht, J.; Duhem, L.; Elvira, D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Presbyterian, M.; Shadura, O.; Seghal, R.; Wenzel, S.

    2015-12-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The GeantV vector prototype for detector simulations has been designed to exploit both the vector capability of mainstream CPUs and the multi-threading capabilities of coprocessors, including NVidia GPUs and the Intel Xeon Phi. The characteristics of these architectures are very different in terms of vectorization depth, the parallelization needed to achieve optimal performance, and memory access latency and speed. An additional challenge is to avoid the code duplication often inherent in supporting heterogeneous platforms. In this paper we present the first experience of vectorizing electromagnetic physics models developed for the GeantV project.

  19. The Software Design for the Wide-Field Infrared Explorer Attitude Control System

    NASA Technical Reports Server (NTRS)

    Anderson, Mark O.; Barnes, Kenneth C.; Melhorn, Charles M.; Phillips, Tom

    1998-01-01

    The Wide-Field Infrared Explorer (WIRE), currently scheduled for launch in September 1998, is the fifth of five spacecraft in the NASA/Goddard Small Explorer (SMEX) series. This paper presents the design of WIRE's Attitude Control System flight software (ACS FSW). WIRE is a momentum-biased, three-axis stabilized stellar pointer which provides high-accuracy pointing and autonomous acquisition for eight to ten stellar targets per orbit. WIRE's short mission life and limited cryogen supply motivate requirements for Sun and Earth avoidance constraints, which are designed to prevent catastrophic instrument damage and to minimize the heat load on the cryostat. The FSW implements autonomous fault detection and handling (FDH) to enforce these instrument constraints and to perform several other checks that ensure the safety of the spacecraft. The ACS FSW implements modules for sensor data processing, attitude determination, attitude control, guide star acquisition, actuator command generation, command/telemetry processing, and FDH. These software components are integrated with a hierarchical mode-managing module that dictates which software components are currently active. The lowest mode in the hierarchy is the 'safest' one, in the sense that it utilizes a minimal complement of sensors and actuators to keep the spacecraft in a stable configuration (power and pointing constraints are maintained). As higher modes in the hierarchy are achieved, the various software functions are activated by the mode manager, and an increasing level of attitude control accuracy is provided. If FDH detects a constraint violation or other anomaly, it triggers a safing transition to a lower control mode. The WIRE ACS FSW satisfies all target acquisition and pointing accuracy requirements, enforces all pointing constraints, provides the ground with a simple means for reconfiguring the system via table load, and meets all the demands of its real-time embedded environment (a 16 MHz Intel 80386 processor with an 80387 coprocessor running under the VRTX operating system). The mode manager organizes and controls all the software modules used to accomplish these goals; in particular, the FDH module is tightly coupled with the mode manager.

  20. MIC-SVM: Designing A Highly Efficient Support Vector Machine For Advanced Modern Multi-Core and Many-Core Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Song, Shuaiwen; Fu, Haohuan

    2014-08-16

    Support Vector Machine (SVM) has been widely used in data-mining and Big Data applications as modern commercial databases start to attach increasing importance to analytic capabilities. In recent years, SVM has been adapted to the field of High Performance Computing for power/performance prediction, auto-tuning, and runtime scheduling. However, even at the risk of losing prediction accuracy due to insufficient runtime information, researchers can only afford offline model training in order to avoid significant runtime training overhead. To address these challenges, we designed and implemented MIC-SVM, a highly efficient parallel SVM for x86-based multi-core and many-core architectures, such as Intel Ivy Bridge CPUs and the Intel Xeon Phi coprocessor (MIC).

  1. FPGA-based coprocessor for matrix algorithms implementation

    NASA Astrophysics Data System (ADS)

    Amira, Abbes; Bensaali, Faycal

    2003-03-01

    Matrix algorithms are important in many types of applications, including image and signal processing. These areas require enormous computing power. A close examination of the algorithms used in these and related applications reveals that many of the fundamental actions involve matrix operations such as matrix multiplication, which has complexity O(N^3) on a sequential computer and O(N^3/p) on a parallel system with p processors. This paper presents an investigation into the design and implementation of different matrix algorithms, such as matrix operations, matrix transforms, and matrix decompositions, using an FPGA-based environment. Solutions for the problem of processing large matrices are proposed. The proposed system architectures are scalable and modular, and require less area and lower time complexity with reduced latency when compared with existing structures.
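
    For reference, the kernel in question is the textbook triple loop below: O(N^3) work sequentially, O(N^3/p) per worker when the outer loop is split across p processors. This is the baseline that the proposed FPGA architectures restructure, not their implementation.

        #include <vector>
        #include <cstddef>

        // Textbook O(N^3) matrix multiply, row-major, C assumed zero-initialized.
        // The ikj loop order keeps the inner loop unit-stride; with p workers the
        // outer loop splits into O(N^3/p) work each.
        void matmul(const std::vector<double>& A, const std::vector<double>& B,
                    std::vector<double>& C, std::size_t N) {
            #pragma omp parallel for
            for (std::size_t i = 0; i < N; ++i)
                for (std::size_t k = 0; k < N; ++k) {
                    const double a = A[i * N + k];
                    for (std::size_t j = 0; j < N; ++j)
                        C[i * N + j] += a * B[k * N + j];
                }
        }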

  2. PyPWA: A partial-wave/amplitude analysis software framework

    NASA Astrophysics Data System (ADS)

    Salgado, Carlos

    2016-05-01

    The PyPWA project aims to develop a software framework for partial-wave and amplitude analysis of data, providing the user with software tools to identify resonances from multi-particle final states in photoproduction. Most of the code is written in Python. The software is divided into two main branches: one is a general shell in which amplitude parameters (or those of any parametric model) are estimated from the data; this branch also includes software to produce simulated data sets using the fitted amplitudes. A second branch contains a specific realization of the isobar model (with room to include Deck-type and other isobar-model extensions) to perform PWA with an interface to the computing resources at Jefferson Lab. We are currently implementing parallelism and vectorization using Intel's Xeon Phi family of coprocessors.

  3. Tempest: Accelerated MS/MS database search software for heterogeneous computing platforms

    PubMed Central

    Adamo, Mark E.; Gerber, Scott A.

    2017-01-01

    MS/MS database search algorithms derive a set of candidate peptide sequences from in silico digest of a protein sequence database, and compute theoretical fragmentation patterns to match these candidates against observed MS/MS spectra. The original Tempest publication described these operations mapped to a CPU-GPU model, in which the CPU generates peptide candidates that are asynchronously sent to a discrete GPU to be scored against experimental spectra in parallel (Milloy et al., 2012). The current version of Tempest expands this model, incorporating OpenCL to offer seamless parallelization across multicore CPUs, GPUs, integrated graphics chips, and general-purpose coprocessors. Three protocols describe how to configure and run a Tempest search, including discussion of how to leverage Tempest's unique feature set to produce optimal results. PMID:27603022

  4. SIMPLIFIED CALCULATION OF SOLAR FLUX ON THE SIDE WALL OF CYLINDRICAL CAVITY SOLAR RECEIVERS

    NASA Technical Reports Server (NTRS)

    Bhandari, P.

    1994-01-01

    The Simplified Calculation of Solar Flux Distribution on the Side Wall of Cylindrical Cavity Solar Receivers program employs a simple solar flux calculation algorithm for a cylindrical cavity type solar receiver. Applications of this program include the study of solar energy, heat transfer, and space power-solar dynamics engineering. The aperture plate of the receiver is assumed to be located in the focal plane of a paraboloidal concentrator, and the geometry is assumed to be axisymmetric. The concentrator slope error is assumed to be the only surface error; it is assumed that there are no pointing or misalignment errors. Using cone optics, the contour error method is utilized to handle the slope error of the concentrator. The flux distribution on the side wall is calculated by integration of the energy incident from cones emanating from all the differential elements on the concentrator. The calculations are done for any set of dimensions and properties of the receiver and the concentrator, and account for any spillover on the aperture plate. The results of this algorithm compared excellently with those predicted by more complicated programs, and because of the utilization of axial symmetry and overall simplification, it is extremely fast. It can be easily extended to other axisymmetric receiver geometries. The program was written in Fortran 77, compiled using a Ryan-McFarland compiler, and run on an IBM PC-AT with a math coprocessor. It requires 60K of memory and has been implemented under MS-DOS 3.2.1. The program was developed in 1988.

  5. Sustainability through Dynamic Energy Management - Continuum Magazine |

    Science.gov Websites

    NREL Continuum Magazine: Sustainability through Dynamic Energy Management. Integrating behavior change with advanced building systems is the new model in energy efficiency; to realize these gains, dynamic energy management must be integrated with occupant behavior change.

  6. Evaluating Multi-core Architectures through Accelerating the Three-Dimensional Lax–Wendroff Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Fu, Haohuan; Song, Shuaiwen

    2014-07-18

    Wave propagation forward modeling is a widely used computational method in oil and gas exploration. The iterative stencil loops in such problems have broad applications in scientific computing. However, executing such loops can be highly time-consuming, which greatly limits application performance and power efficiency. In this paper, we accelerate the forward modeling technique on the latest multi-core and many-core architectures, including Intel Sandy Bridge CPUs, the NVIDIA Fermi C2070 GPU, the NVIDIA Kepler K20x GPU, and the Intel Xeon Phi co-processor. For the GPU platforms, we propose two parallel strategies to explore the performance optimization opportunities for our stencil kernels. For Sandy Bridge CPUs and the MIC, we also employ various optimization techniques in order to achieve the best performance.
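
    The iterative stencil loops referred to above follow the generic pattern sketched here: threads across the outer dimensions, SIMD across the contiguous inner one. The 7-point stencil and coefficients are illustrative only; the actual Lax-Wendroff correction uses wider, higher-order stencils and architecture-specific blocking.

        #include <vector>
        #include <cstddef>

        // Generic 7-point stencil sweep (illustrative, not the paper's kernel).
        void sweep(const std::vector<float>& in, std::vector<float>& out,
                   std::size_t nx, std::size_t ny, std::size_t nz, float c0, float c1) {
            auto idx = [=](std::size_t i, std::size_t j, std::size_t k) {
                return (k * ny + j) * nx + i;
            };
            #pragma omp parallel for collapse(2)   // threads over k and j planes
            for (std::size_t k = 1; k + 1 < nz; ++k)
                for (std::size_t j = 1; j + 1 < ny; ++j)
                    #pragma omp simd               // unit-stride i -> SIMD lanes
                    for (std::size_t i = 1; i + 1 < nx; ++i)
                        out[idx(i,j,k)] = c0 * in[idx(i,j,k)]
                            + c1 * (in[idx(i-1,j,k)] + in[idx(i+1,j,k)]
                                  + in[idx(i,j-1,k)] + in[idx(i,j+1,k)]
                                  + in[idx(i,j,k-1)] + in[idx(i,j,k+1)]);
        }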

  7. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high-computing-density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance, and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  8. Evaluation of the Xeon phi processor as a technology for the acceleration of real-time control in high-order adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah; Vick, Andy; Schnetler, Hermine

    2014-08-01

    We present wavefront reconstruction acceleration of high-order AO systems using an Intel Xeon Phi processor. The Xeon Phi is a coprocessor providing many integrated cores and designed for accelerating compute-intensive numerical codes. Unlike other accelerator technologies, it allows virtually unchanged C/C++ to be recompiled to run on the Xeon Phi, giving the potential of making development, upgrade, and maintenance faster and less complex. We benchmark the Xeon Phi in the context of AO real-time control by running a matrix-vector multiply (MVM) algorithm. We investigate variability in execution time and demonstrate a substantial speed-up in loop frequency. We examine the integration of a Xeon Phi into an existing RTC system and show that performance improvements can be achieved with limited development effort.
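
    The benchmarked kernel maps wavefront-sensor slopes to actuator commands with a dense matrix-vector multiply; a minimal threaded sketch of that kernel (generic, not the authors' benchmark code) follows.

        #include <vector>
        #include <cstddef>

        // Dense MVM: actuator commands = reconstruction matrix * slope vector.
        // In AO real-time control this must finish within a fraction of the loop
        // period, so it is threaded over rows and vectorized along each row.
        void reconstruct(const std::vector<float>& R,      // nact x nslope, row-major
                         const std::vector<float>& slopes, // nslope
                         std::vector<float>& cmds,         // nact
                         std::size_t nact, std::size_t nslope) {
            #pragma omp parallel for
            for (std::size_t a = 0; a < nact; ++a) {
                float acc = 0.f;
                const float* row = &R[a * nslope];
                #pragma omp simd reduction(+:acc)
                for (std::size_t s = 0; s < nslope; ++s)
                    acc += row[s] * slopes[s];
                cmds[a] = acc;
            }
        }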

  9. A distributed microcomputer-controlled system for data acquisition and power spectral analysis of EEG.

    PubMed

    Vo, T D; Dwyer, G; Szeto, H H

    1986-04-01

    A relatively powerful and inexpensive microcomputer-based system for the spectral analysis of the EEG is presented. High resolution and speed are achieved with the use of recently available large-scale integrated circuit technology with enhanced functionality (the Intel 8087 math coprocessor), which can perform transcendental functions rapidly. The versatility of the system is achieved with a hardware organization that has distributed data acquisition capability, performed by a microprocessor-based analog-to-digital converter with large resident memory (Cyborg ISAAC-2000). Compiled BASIC programs and assembly language subroutines perform, on-line or off-line, the fast Fourier transform and spectral analysis of the EEG, which is stored as soft as well as hard copy. Some results obtained from test application of the entire system in animal studies are presented.

  10. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George

    2017-09-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and NVIDIA GPUs using ARCHER, and explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a TomoTherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.

  11. SoAx: A generic C++ Structure of Arrays for handling particles in HPC codes

    NASA Astrophysics Data System (ADS)

    Homann, Holger; Laenen, Francois

    2018-03-01

    The numerical study of physical problems often requires integrating the dynamics of a large number of particles evolving according to a given set of equations. Particles are characterized by the information they carry, such as an identity, a position, and other properties. Generally speaking, there are two possibilities for handling particles in high performance computing (HPC) codes. The concept of an Array of Structures (AoS) is in the spirit of the object-oriented programming (OOP) paradigm in that the particle information is implemented as a structure. Here, an object (realization of the structure) represents one particle, and a set of many particles is stored in an array. In contrast, using the concept of a Structure of Arrays (SoA), a single structure holds several arrays, each representing one property (such as the identity) of the whole set of particles. The AoS approach is often implemented in HPC codes due to its handiness and flexibility. For a class of problems, however, it is known that the performance of SoA is much better than that of AoS. We confirm this observation for our particle problem. Using a benchmark we show that on modern Intel Xeon processors the SoA implementation is typically several times faster than the AoS one. On Intel's MIC co-processors the performance gap even attains a factor of ten. The same is true for GPU computing, using both computational and multi-purpose GPUs. Combining performance and handiness, we present the library SoAx, which has optimal performance (on CPUs, MICs, and GPUs) while providing the same handiness as AoS. For this, SoAx uses modern C++ design techniques such as template metaprogramming, which allow code to be generated automatically for user-defined heterogeneous data structures.
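
    The layout difference is easy to see in code. In the hypothetical sketch below (our names, not the SoAx API), the SoA version gives each field a unit-stride stream that vectorizes cleanly, whereas the equivalent AoS loop would stride through memory by the size of the whole particle record.

        #include <vector>
        #include <cstddef>

        // Array of Structures: one object per particle; fields interleaved in memory.
        struct ParticleAoS { long id; double x, y, z; };
        using AoS = std::vector<ParticleAoS>;

        // Structure of Arrays: one contiguous array per field.
        struct SoA {
            std::vector<long> id;
            std::vector<double> x, y, z;
        };

        // Position update on the SoA layout: every access is unit-stride, so the
        // loop vectorizes; the AoS equivalent strides by sizeof(ParticleAoS).
        void advance(SoA& p, const std::vector<double>& vx, double dt) {
            #pragma omp simd
            for (std::size_t i = 0; i < p.x.size(); ++i)
                p.x[i] += vx[i] * dt;          // contiguous loads/stores
        }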

  12. Parallel tiled Nussinov RNA folding loop nest generated using both dependence graph transitive closure and loop skewing.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2017-06-02

    RNA secondary structure prediction is a compute-intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, RNA folding approaches such as Nussinov base-pair maximization involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for the optimization of dense array codes. However, the classical affine loop nest transformations used with these techniques do not effectively optimize dynamic programming codes for RNA structure prediction. The purpose of this paper is to present a novel approach allowing generation of a parallel tiled Nussinov RNA loop nest with significantly higher performance than known related codes. This effect is achieved by improving code locality and parallelizing the calculation. To improve code locality, we apply our previously published technique of automatic loop nest tiling to all three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by applying the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to the tiled Nussinov loop nest. The technique is implemented as part of the publicly available polyhedral source-to-source TRACO compiler. The generated code was run on modern Intel multi-core processors and coprocessors. We present the speed-up factor of the generated parallel Nussinov RNA code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
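
    For orientation, the untiled textbook form of the Nussinov loop nest is sketched below; the paper's contribution is tiling and skewing exactly this kind of triple loop, so the code is the baseline, not the generated version. A minimum hairpin of one unpaired base is assumed.

        #include <vector>
        #include <string>
        #include <algorithm>

        // Untiled Nussinov base-pair maximization: N[i][j] is the maximum number
        // of pairings in subsequence i..j of an RNA string over {A, C, G, U}.
        static bool can_pair(char a, char b) {
            auto m = [](char x, char y, char p, char q) {
                return (x == p && y == q) || (x == q && y == p);
            };
            return m(a,b,'A','U') || m(a,b,'G','C') || m(a,b,'G','U');
        }

        int nussinov(const std::string& s) {
            const int n = static_cast<int>(s.size());
            std::vector<std::vector<int>> N(n, std::vector<int>(n, 0));
            for (int j = 1; j < n; ++j)              // increasing subsequence length
                for (int i = j - 1; i >= 0; --i) {
                    int best = std::max(N[i + 1][j], N[i][j - 1]);  // i or j unpaired
                    if (can_pair(s[i], s[j]) && j - i > 1)
                        best = std::max(best, N[i + 1][j - 1] + 1); // i pairs with j
                    for (int k = i; k < j; ++k)                     // bifurcation
                        best = std::max(best, N[i][k] + N[k + 1][j]);
                    N[i][j] = best;
                }
            return N[0][n - 1];
        }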

  13. OSCAR: A Compact, Powerful and Versatile On Board Computer Based on LEON3 Core

    NASA Astrophysics Data System (ADS)

    Poupat, Jean-Luc; Lefevre, Aurelien; Koebel, Franck

    2011-08-01

    Satellites are controlled via a platform On Board Computer (OBC) that manages different parameters (attitude, orbit, modes, temperatures, ...) with respect to its payload mission (telecommunication, earth observation, scientific mission). The platform OBC is connected to the satellite and the ground control via digital links, and executes on-board software. The main functions of a platform OBC are to provide the satellite flight segment with the following features: processing resources for the flight mission software; TM/TC services and interfaces with the RF communication chain; general communication services with the avionics and payload equipment through an on-board communication bus based on the MIL-1553B standard or CAN; time synchronization and distribution; and a failure-tolerant architecture based on the use of redundant reconfiguration units and redundancy implementation. From a hardware point of view, it groups together many digital functions usually dispatched across numerous chips (processor, co-processor, digital link IPs, ...). In order to reach an ultimate level of integration, Astrium has designed an ASIC gathering all the required digital functions on a single chip: the SCOC3 ASIC. Astrium has developed an OBC based on this SCOC3 ASIC: the OSCAR (Optimized Spacecraft Computer Architecture with Reconfiguration). It is now available off-the-shelf as the new OBC product family of Astrium. This paper presents the major innovations introduced by Astrium in SCOC3 and OSCAR, with the objective of saving cost and mass through a solution compatible with any quality class of project, using a unique software development environment for the user.

  14. On-board attitude determination for the Explorer Platform satellite

    NASA Technical Reports Server (NTRS)

    Jayaraman, C.; Class, B.

    1992-01-01

    This paper describes the attitude determination algorithm for the Explorer Platform satellite. The algorithm, which is baselined on the Landsat code, is a six-element linear quadratic state estimation processor, in the form of a Kalman filter augmented by an adaptive filter process. Improvements to the original Landsat algorithm were required to meet mission pointing requirements. These consisted of a more efficient sensor processing algorithm and the addition of an adaptive filter which acts as a check on the Kalman filter during satellite slew maneuvers. A 1750A processor will be flown on board the satellite for the first time as a coprocessor (COP) in addition to the NASA Standard Spacecraft Computer. The attitude determination algorithm, which will be resident in the COP's memory, will make full use of its improved processing capabilities to meet mission requirements. Additional benefits were gained by writing the attitude determination code in Ada.
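
    In caricature, such an estimator alternates a propagation step with a gain-weighted measurement update. The one-dimensional sketch below illustrates the mechanics only; the actual six-element flight filter operates on attitude error states, not a scalar.

        // One-dimensional Kalman filter step (illustrative caricature of the
        // six-element flight estimator; real attitude filters work on error
        // states of a quaternion, not a single scalar).
        struct Kf1D {
            double x = 0.0;   // state estimate (e.g., attitude error about one axis)
            double P = 1.0;   // estimate variance
        };

        void kf_step(Kf1D& f, double u, double q, double z, double r) {
            // predict: propagate with rate input u (e.g., integrated gyro),
            // inflating the variance by process noise q
            f.x += u;
            f.P += q;
            // update: blend in measurement z (e.g., star tracker) with variance r
            const double K = f.P / (f.P + r);  // Kalman gain
            f.x += K * (z - f.x);
            f.P *= (1.0 - K);
        }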

  15. Wireless augmented reality communication system

    NASA Technical Reports Server (NTRS)

    Devereaux, Ann (Inventor); Agan, Martin (Inventor); Jedrey, Thomas (Inventor)

    2006-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  16. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE PAGES

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...

    2015-05-22

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high-computing-density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance, and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  17. An optical Fourier transform coprocessor with direct phase determination.

    PubMed

    Macfaden, Alexander J; Gordon, George S D; Wilkinson, Timothy D

    2017-10-20

    The Fourier transform is a ubiquitous mathematical operation which arises naturally in optics. We propose and demonstrate a practical method to optically evaluate a complex-to-complex discrete Fourier transform. By implementing the Fourier transform optically we can overcome the limiting O(n log n) complexity of fast Fourier transform algorithms. Efficiently extracting the phase from the well-known optical Fourier transform is challenging. By appropriately decomposing the input and exploiting symmetries of the Fourier transform we are able to determine the phase directly from straightforward intensity measurements, creating an optical Fourier transform with O(n) apparent complexity. Performing larger optical Fourier transforms requires higher resolution spatial light modulators, but the execution time remains unchanged. This method could unlock the potential of the optical Fourier transform to permit 2D complex-to-complex discrete Fourier transforms with a performance that is currently unattainable, with applications across information processing and computational physics.

  18. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform.

    PubMed

    Wu, Xin; Koslowski, Axel; Thiel, Walter

    2012-07-10

    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.

  19. A GaAs vector processor based on parallel RISC microprocessors

    NASA Astrophysics Data System (ADS)

    Misko, Tim A.; Rasset, Terry L.

    A vector processor architecture has been developed based on a 32-bit microprocessor implemented in gallium arsenide (GaAs) technology. The McDonnell Douglas vector processor (MVP) will be fabricated entirely from GaAs digital integrated circuits. The MVP architecture includes a vector memory of 1 megabyte, a parallel bus architecture with eight processing elements connected in parallel, and a control processor. Each processing element consists of a reduced instruction set computer (RISC) CPU with four floating-point coprocessor units and the necessary memory interface functions. This architecture has been simulated for several benchmark programs, including a complex fast Fourier transform (FFT), a complex inner product, trigonometric functions, and a sort-merge routine. The results of this study indicate that the MVP can process a 1024-point complex FFT in 112 microseconds (389 megaflops) while consuming approximately 618 W of power in a volume of approximately 0.1 cubic feet.

  20. Wireless Augmented Reality Communication System

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas (Inventor); Agan, Martin (Inventor); Devereaux, Ann (Inventor)

    2014-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  1. Wireless Augmented Reality Communication System

    NASA Technical Reports Server (NTRS)

    Agan, Martin (Inventor); Devereaux, Ann (Inventor); Jedrey, Thomas (Inventor)

    2016-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  2. Non-Boolean computing with nanomagnets for computer vision applications

    NASA Astrophysics Data System (ADS)

    Bhanja, Sanjukta; Karunaratne, D. K.; Panchumarthy, Ravi; Rajaram, Srinath; Sarkar, Sudeep

    2016-02-01

    The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the quadratic optimization problems that arise in computer vision applications, which are computationally expensive. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.

  3. Tempest: Accelerated MS/MS Database Search Software for Heterogeneous Computing Platforms.

    PubMed

    Adamo, Mark E; Gerber, Scott A

    2016-09-07

    MS/MS database search algorithms derive a set of candidate peptide sequences from in silico digest of a protein sequence database, and compute theoretical fragmentation patterns to match these candidates against observed MS/MS spectra. The original Tempest publication described these operations mapped to a CPU-GPU model, in which the CPU (central processing unit) generates peptide candidates that are asynchronously sent to a discrete GPU (graphics processing unit) to be scored against experimental spectra in parallel. The current version of Tempest expands this model, incorporating OpenCL to offer seamless parallelization across multicore CPUs, GPUs, integrated graphics chips, and general-purpose coprocessors. Three protocols describe how to configure and run a Tempest search, including discussion of how to leverage Tempest's unique feature set to produce optimal results. © 2016 John Wiley & Sons, Inc.

  4. Dynamic ocean management increases the efficiency and efficacy of fisheries management.

    PubMed

    Dunn, Daniel C; Maxwell, Sara M; Boustany, Andre M; Halpin, Patrick N

    2016-01-19

    In response to the inherent dynamic nature of the oceans and continuing difficulty in managing ecosystem impacts of fisheries, interest in the concept of dynamic ocean management, or real-time management of ocean resources, has accelerated in the last several years. However, scientists have yet to quantitatively assess the efficiency of dynamic management over static management. Of particular interest is how scale influences effectiveness, both in terms of how it reflects underlying ecological processes and how this relates to potential efficiency gains. Here, we address the empirical evidence gap and further the ecological theory underpinning dynamic management. We illustrate, through the simulation of closures across a range of spatiotemporal scales, that dynamic ocean management can address previously intractable problems at scales associated with coactive and social patterns (e.g., competition, predation, niche partitioning, parasitism, and social aggregations). Furthermore, it can significantly improve the efficiency of management: as the resolution of the closures used increases (i.e., as the closures become more targeted), the percentage of target catch forgone or displaced decreases, the reduction ratio (bycatch/catch) increases, and the total time-area required to achieve the desired bycatch reduction decreases. In the scenario examined, coarser scale management measures (annual time-area closures and monthly full-fishery closures) would displace up to four to five times the target catch and require 100-200 times more square kilometer-days of closure than dynamic measures (grid-based closures and move-on rules). To achieve similar reductions in juvenile bycatch, the fishery would forgo or displace between USD 15-52 million in landings using a static approach over a dynamic management approach.

  5. Dynamic Ocean Management Increases the Efficiency and Efficacy of Fisheries Management

    NASA Astrophysics Data System (ADS)

    Dunn, D. C.; Maxwell, S.; Boustany, A. M.; Halpin, P. N.

    2016-12-01

    In response to the inherent dynamic nature of the oceans and continuing difficulty in managing ecosystem impacts of fisheries, interest in the concept of dynamic ocean management, or real-time management of ocean resources, has accelerated in the last several years. However, scientists have yet to quantitatively assess the efficiency of dynamic management over static management. Of particular interest is how scale influences effectiveness, both in terms of how it reflects underlying ecological processes and how this relates to potential efficiency gains. In this presentation, we attempt to address both the empirical evidence gap and further the ecological theory underpinning dynamic management. We illustrate, through the simulation of closures across a range of spatiotemporal scales, that dynamic ocean management can address previously intractable problems at scales associated with coactive and social patterns (e.g., competition, predation, niche partitioning, parasitism and social aggregations). Further, it can significantly improve the efficiency of management: as the resolution of the individual closures used increases (i.e., as the closures become more targeted) the percent of target catch forgone or displaced decreases, the reduction ratio (bycatch/catch) increases, and the total time-area required to achieve the desired bycatch reduction decreases. The coarser management measures (annual time-area closures and monthly full fishery closures) affected up to 4-5x the target catch and required 100-200x the time-area of the dynamic measures (grid-based closures and move-on rules). To achieve similar reductions in juvenile bycatch, the fishery would forgo or displace between USD 15-52 million in landings using a static approach over a dynamic management approach.

  6. Dynamic Trust Management for Mobile Networks and Its Applications

    ERIC Educational Resources Information Center

    Bao, Fenye

    2013-01-01

    Trust management in mobile networks is challenging due to dynamically changing network environments and the lack of a centralized trusted authority. In this dissertation research, we "design" and "validate" a class of dynamic trust management protocols for mobile networks, and demonstrate the utility of dynamic trust management…

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, J; Dossa, D; Gokhale, M

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance to GPU-accelerated performance. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with an NVIDIA graphics card (see Chapter 5 for the full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of its time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order-of-magnitude speedups over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  8. Holo-Chidi video concentrator card

    NASA Astrophysics Data System (ADS)

    Nwodoh, Thomas A.; Prabhakar, Aditya; Benton, Stephen A.

    2001-12-01

    The Holo-Chidi Video Concentrator Card is a frame buffer for the Holo-Chidi holographic video processing system. Holo-Chidi is designed at the MIT Media Laboratory for real-time computation of computer-generated holograms and the subsequent display of the holograms at video frame rates. The Holo-Chidi system is made of two sets of cards: the set of Processor cards and the set of Video Concentrator Cards (VCCs). The Processor cards are used for hologram computation, data archival/retrieval from a host system, and higher-level control of the VCCs. The VCC formats computed holographic data from multiple hologram-computing Processor cards, converting the digital data to analog form to feed the acousto-optic modulators of the Media Lab's Mark-II holographic display system. The Video Concentrator Card is made of: a High-Speed I/O (HSIO) interface through which data is transferred from the hologram-computing Processor cards; a set of FIFOs and video RAM used as a buffer for the hololines being displayed; a one-chip integrated microprocessor and peripheral combination that handles communication with other VCCs and furnishes the card with a USB port; a co-processor which controls display data formatting; and D-to-A converters that convert digital fringes to analog form. The co-processor is implemented with an SRAM-based FPGA with over 500,000 gates and controls all the signals needed to format the data from the multiple Processor cards into the format required by Mark-II. A VCC has three HSIO ports through which up to 500 megabytes of computed holographic data can flow from the Processor cards to the VCC per second. A Holo-Chidi system with three VCCs has enough frame buffering capacity to hold up to thirty-two 36-megabyte hologram frames at a time. Pre-computed holograms may also be loaded into the VCC from a host computer through the low-speed USB port. Both the microprocessor and the co-processor in the VCC can access the main system memory used to store control programs and data for the VCC. The card also generates the control signals used by the scanning mirrors of Mark-II. In this paper we discuss the design of the VCC and its implementation in the Holo-Chidi system.

  9. Extending the BEAGLE library to a multi-FPGA platform.

    PubMed

    Jin, Zheming; Bakos, Jason D

    2013-01-19

    Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE's CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor.
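
    The bandwidth-bound performance model quoted above is easy to reproduce: plugging the abstract's own numbers (130 flops per 64 bytes, 76.8 GB/s peak bandwidth, ~50% memory efficiency) into throughput = arithmetic intensity x bandwidth x efficiency recovers the reported ~78 Gflops.

        #include <cstdio>

        // Bandwidth-bound throughput model from the abstract:
        //   flops = arithmetic intensity (ops/byte) * peak bandwidth * memory efficiency
        int main() {
            const double ops_per_byte = 130.0 / 64.0;  // ~2.03, from the abstract
            const double peak_gbps    = 76.8;          // GB/s, Convey HC-1
            const double efficiency   = 0.50;          // achieved memory efficiency
            std::printf("estimated throughput: %.1f Gflops\n",
                        ops_per_byte * peak_gbps * efficiency);  // ~78 Gflops
            return 0;
        }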

  10. Operationalizing Dynamic Ocean Management (DOM): Understanding the Incentive Structure, Policy and Regulatory Context for DOM in Practice

    NASA Astrophysics Data System (ADS)

    Lewison, R. L.; Saumweber, W. J.; Erickson, A.; Martone, R. G.

    2016-12-01

    Dynamic ocean management, or management that uses near real-time data to guide the spatial distribution of commercial activities, is an emerging approach to balance ocean resource use and conservation. Employing a wide range of data types, dynamic ocean management in a fisheries context can be used to meet multiple objectives - managing target quota, reducing bycatch, and reducing interactions with species of conservation concern. There is a growing list of DOM applications currently in practice in fisheries around the world, yet the approach is new enough that both fishers and fisheries managers are unclear how DOM can be applied to their fishery. Here, we use the experience from dynamic ocean management applications currently in practice to address the commonly asked question "How can dynamic management approaches be implemented in a traditionally managed fishery?" Combining knowledge from DOM participants with a review of regulatory frameworks and incentive structures, stakeholder participation, and the technological requirements of DOM in practice, we identify ingredients that have supported successful implementation of this new management approach.

  11. 77 FR 19408 - Dynamic Mobility Applications and Data Capture Management Programs; Notice of Public Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-30

    ... DEPARTMENT OF TRANSPORTATION Dynamic Mobility Applications and Data Capture Management Programs...) Intelligent Transportation System Joint Program Office (ITS JPO) will host a free public meeting to provide stakeholders an update on the Data Capture and Management (DCM) and Dynamic Mobility Applications (DMA...

  12. NASA Center for Climate Simulation (NCCS) Presentation

    NASA Technical Reports Server (NTRS)

    Webster, William P.

    2012-01-01

    The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor-cores and most recently 240 Intel Xeon Phi Many Integrated Core (MIC) co-processors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.

  13. High-Performance Data Analysis Tools for Sun-Earth Connection Missions

    NASA Technical Reports Server (NTRS)

    Messmer, Peter

    2011-01-01

    The data analysis tool of choice for many Sun-Earth Connection missions is the Interactive Data Language (IDL) by ITT VIS. The increasing amount of data produced by these missions and the increasing complexity of image processing algorithms requires access to higher computing power. Parallel computing is a cost-effective way to increase the speed of computation, but algorithms oftentimes have to be modified to take advantage of parallel systems. Enhancing IDL to work on clusters gives scientists access to increased performance in a familiar programming environment. The goal of this project was to enable IDL applications to benefit from both computing clusters as well as graphics processing units (GPUs) for accelerating data analysis tasks. The tool suite developed in this project enables scientists now to solve demanding data analysis problems in IDL that previously required specialized software, and it allows them to be solved orders of magnitude faster than on conventional PCs. The tool suite consists of three components: (1) TaskDL, a software tool that simplifies the creation and management of task farms, collections of tasks that can be processed independently and require only small amounts of data communication; (2) mpiDL, a tool that allows IDL developers to use the Message Passing Interface (MPI) inside IDL for problems that require large amounts of data to be exchanged among multiple processors; and (3) GPULib, a tool that simplifies the use of GPUs as mathematical coprocessors from within IDL. mpiDL is unique in its support for the full MPI standard and its support of a broad range of MPI implementations. GPULib is unique in enabling users to take advantage of an inexpensive piece of hardware, possibly already installed in their computer, and achieve orders of magnitude faster execution time for numerically complex algorithms. TaskDL enables the simple setup and management of task farms on compute clusters. The products developed in this project have the potential to interact, so one can build a cluster of PCs, each equipped with a GPU, and use mpiDL to communicate between the nodes and GPULib to accelerate the computations on each node.
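
    The task-farm pattern that TaskDL packages for IDL reduces to a master handing out task indices and workers returning results. A generic master/worker sketch in MPI C (an illustration of the pattern only; the actual TaskDL, mpiDL, and GPULib APIs are not shown here):

        #include <mpi.h>
        #include <stdio.h>

        #define NTASKS   100
        #define TAG_WORK 1
        #define TAG_STOP 2

        /* A farm task is independent: workers never communicate with each other */
        static double do_task(int id) { return 0.5 * id; }

        int main(int argc, char **argv) {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            if (rank == 0) {                  /* master: deal tasks out on demand */
                int next = 0, active = 0;
                for (int w = 1; w < size; ++w) {
                    int tag = (next < NTASKS) ? TAG_WORK : TAG_STOP;
                    MPI_Send(&next, 1, MPI_INT, w, tag, MPI_COMM_WORLD);
                    if (tag == TAG_WORK) { ++next; ++active; }
                }
                while (active > 0) {
                    double r;
                    MPI_Status st;
                    MPI_Recv(&r, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                             MPI_COMM_WORLD, &st);
                    int tag = (next < NTASKS) ? TAG_WORK : TAG_STOP;
                    MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, tag, MPI_COMM_WORLD);
                    if (tag == TAG_WORK) ++next; else --active;
                }
            } else {                          /* worker: loop until told to stop */
                for (;;) {
                    int task;
                    MPI_Status st;
                    MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
                    if (st.MPI_TAG == TAG_STOP) break;
                    double r = do_task(task);
                    MPI_Send(&r, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
                }
            }
            MPI_Finalize();
            return 0;
        }

    Because tasks are dealt out on demand rather than statically, this pattern load-balances automatically when task costs vary.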

  14. Automatic translation of MPI source into a latency-tolerant, data-driven form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. We reformulate MPI source into a task dependency graph representation, which partially orders the tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo’s performance meets or exceeds that of labor-intensive hand coding. As a result, the translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.
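
    The data-driven execution Bamboo sets up reduces to dependency counting: a task fires once all of its predecessors have completed. A toy scheduler over a four-task graph (a generic sketch of the execution model, not Bamboo's actual runtime):

        #include <stdio.h>
        #define NT 4

        int main(void) {
            /* DAG: task 0 -> {1,2}, tasks 1 and 2 -> 3 (edges mean "runs before") */
            int deps[NT]    = {0, 1, 1, 2};   /* unmet-dependency counters */
            int succ[NT][2] = {{1, 2}, {3, -1}, {3, -1}, {-1, -1}};

            int ready[NT], head = 0, tail = 0;
            for (int t = 0; t < NT; ++t)
                if (deps[t] == 0) ready[tail++] = t;

            while (head < tail) {             /* fire tasks as their inputs arrive */
                int t = ready[head++];
                printf("running task %d\n", t);
                for (int s = 0; s < 2 && succ[t][s] >= 0; ++s)
                    if (--deps[succ[t][s]] == 0)
                        ready[tail++] = succ[t][s];
            }
            return 0;
        }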

  15. Automatic translation of MPI source into a latency-tolerant, data-driven form

    DOE PAGES

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric; ...

    2017-03-06

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. We reformulate MPI source into a task dependency graph representation, which partially orders the tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo’s performance meets or exceeds that of labor-intensive hand coding. As a result, the translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.

  16. A comparison of SuperLU solvers on the Intel MIC architecture

    NASA Astrophysics Data System (ADS)

    Tuncel, Mehmet; Duran, Ahmet; Celebi, M. Serdar; Akaydin, Bora; Topkaya, Figen O.

    2016-10-01

    In many science and engineering applications, problems may require solving a sparse linear system AX=B. For example, SuperLU_MCDT, a linear solver, was used for the large penta-diagonal matrices for 2D problems and hepta-diagonal matrices for 3D problems coming from incompressible blood flow simulation (see [1]). It is important to test the status and potential improvements of state-of-the-art solvers on new technologies. In this work, sequential, multithreaded and distributed versions of SuperLU solvers (see [2]) are examined on Intel Xeon Phi coprocessors using the offload programming model at the EURORA cluster of CINECA in Italy. We consider a portfolio of test matrices containing patterned matrices from UFMM ([3]) and randomly located matrices. This architecture can benefit from high parallelism and large vectors. We find that sequential SuperLU gained up to a 45% performance improvement from offload programming, depending on the sparse matrix type and the size of the transferred and processed data.
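
    The offload model referred to above marks host-code regions for execution on the coprocessor, with explicit in/out data-transfer clauses. A minimal sketch using the classic Intel compiler's offload pragmas (the kernel is an illustrative scaling loop, not SuperLU's factorization; compilers without offload support ignore the pragma and run the loop on the host):

        #include <stdio.h>
        #define N 1024

        /* The marked region runs on the Phi; x is copied in, y is copied back out */
        void scale_on_phi(double *x, double *y, int n) {
        #pragma offload target(mic : 0) in(x : length(n)) out(y : length(n))
            {
        #pragma omp parallel for
                for (int i = 0; i < n; ++i)
                    y[i] = 2.0 * x[i];
            }
        }

        int main(void) {
            static double x[N], y[N];
            for (int i = 0; i < N; ++i) x[i] = i;
            scale_on_phi(x, y, N);
            printf("y[N-1] = %g\n", y[N - 1]); /* prints 2046 */
            return 0;
        }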

  17. Automatic translation of MPI source into a latency-tolerant, data-driven form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. Bamboo reformulates MPI source into the form of a task dependency graph that expresses a partial ordering among tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo's performance meets or exceeds that of labor-intensive hand coding. The translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.

  18. Automated Software Acceleration in Programmable Logic for an Efficient NFFT Algorithm Implementation: A Case Study.

    PubMed

    Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian

    2017-03-28

    Non-equispaced Fast Fourier transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as the Processing System, with Programmable Logic for high-performance digital signal processing through parallelism and pipelining techniques. The algorithm has been coded in C language with pragma directives to optimize the architecture of the system. We have used the novel Software-Defined System-on-Chip (SDSoC) development tool, which simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperformed the software-based implementation.
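
    For orientation, the transform the NFFT approximates is the non-equispaced discrete Fourier transform, whose direct evaluation costs O(MN) operations; the NFFT reduces this to roughly O(N log N + M). A naive direct-evaluation sketch for reference semantics (illustrative only, not the paper's accelerated implementation):

        #include <complex.h>
        #include <math.h>
        #include <stdio.h>

        #ifndef M_PI
        #define M_PI 3.14159265358979323846
        #endif

        /* Direct NDFT: f[j] = sum_k fhat[k] * exp(-2*pi*I*k*x[j]), x[j] in [-0.5, 0.5) */
        void ndft(int M, int N, const double *x,
                  const double complex *fhat, double complex *f) {
            for (int j = 0; j < M; ++j) {
                f[j] = 0;
                for (int k = -N / 2; k < N / 2; ++k)
                    f[j] += fhat[k + N / 2] * cexp(-2.0 * M_PI * I * k * x[j]);
            }
        }

        int main(void) {
            double x[3] = {-0.4, 0.05, 0.3};        /* non-equispaced sample points */
            double complex fhat[4] = {1, 2, 3, 4};  /* Fourier coefficients */
            double complex f[3];
            ndft(3, 4, x, fhat, f);
            printf("f[0] = %g + %gi\n", creal(f[0]), cimag(f[0]));
            return 0;
        }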

  19. Automated Software Acceleration in Programmable Logic for an Efficient NFFT Algorithm Implementation: A Case Study

    PubMed Central

    Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian

    2017-01-01

    Non-equispaced Fast Fourier transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as the Processing System, with Programmable Logic for high-performance digital signal processing through parallelism and pipelining techniques. The algorithm has been coded in C language with pragma directives to optimize the architecture of the system. We have used the novel Software-Defined System-on-Chip (SDSoC) development tool, which simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperformed the software-based implementation. PMID:28350358

  20. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors.

    PubMed

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-05-28

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on low-power field-programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application; as such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements, and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.
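
    For comparison with the authors' linked-list tree, the labeling problem itself can be shown with a compact union-find pass over a small binary image (a generic software stand-in, not the paper's algorithm, which is built around a single streaming scan and fixed memory):

        #include <stdio.h>
        #define W 8
        #define H 5

        static int parent[W * H];
        static int find(int a) {              /* path-halving find */
            while (parent[a] != a) a = parent[a] = parent[parent[a]];
            return a;
        }

        int main(void) {
            int img[H][W] = {
                {0,1,1,0,0,0,1,0},
                {0,1,1,0,0,0,1,0},
                {0,0,0,0,1,1,1,0},
                {1,0,0,0,0,0,0,0},
                {1,1,0,0,0,0,0,0},
            };
            for (int i = 0; i < W * H; ++i) parent[i] = i;

            /* One scan: merge each foreground pixel with its left/upper neighbors */
            for (int y = 0; y < H; ++y)
                for (int x = 0; x < W; ++x) {
                    if (!img[y][x]) continue;
                    int id = y * W + x;
                    if (x > 0 && img[y][x - 1]) parent[find(id)] = find(id - 1);
                    if (y > 0 && img[y - 1][x]) parent[find(id)] = find(id - W);
                }

            /* Resolution sweep: print the blob root of every foreground pixel */
            for (int y = 0; y < H; ++y) {
                for (int x = 0; x < W; ++x)
                    printf("%3d", img[y][x] ? find(y * W + x) : -1);
                printf("\n");
            }
            return 0;
        }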

  1. A nanostructure based on metasurfaces for optical interconnects

    NASA Astrophysics Data System (ADS)

    Lin, Shulang; Gu, Huarong

    2017-08-01

    The optical-electronic integrated neural co-processor plays a vital part in optical neural networks, and it is mainly realized through optical interconnects. Because of accuracy requirements and the long-term goal of integration, optical interconnects should be both efficient and compact. Traditional solutions used either holography recorded in crystalline media or Fresnel diffraction through zone plates. However, the holographic method cannot meet the efficiency requirement, and a zone plate is too bulky to allow miniaturization of the optical neural unit. This paper therefore seeks a replacement for the holographic method and the zone plate that offers sufficient diffraction efficiency in a smaller footprint. Metasurfaces are composed of subwavelength-spaced phase shifters at a medium interface; they allow unprecedented control of light properties and enable versatile functionalities in a planar structure. In this paper, a metasurface-based nanostructure is presented for optical interconnects, and the light-splitting ability and simulated crosstalk of the nanostructure and a zone plate are compared.

  2. Pulse-by-pulse energy measurement at the Stanford Linear Collider

    NASA Astrophysics Data System (ADS)

    Blaylock, G.; Briggs, D.; Collins, B.; Petree, M.

    1992-01-01

    The Stanford Linear Collider (SLC) collides a beam of electrons and positrons at 92 GeV. It is the first colliding linac, and produces Z^0 particles for High-Energy Physics measurements. The energy of each beam must be measured to one part in 10^4 on every collision (120 Hz). An Energy Spectrometer in each beam line after the collision produces two stripes of high-energy synchrotron radiation with critical energy of a few MeV. The distance between these two stripes at an imaging plane measures the beam energy. The Wire-Imaging Synchrotron Radiation Detector (WISRD) system comprises a novel detector, data acquisition electronics, readout, and analysis. The detector comprises an array of wires for each synchrotron stripe. The electronics measure secondary emission charge on each wire of each array. A Macintosh II (using THINK C, THINK Class Library) and DSP coprocessor (using ANSI C) acquire and analyze the data, and display and report the results for SLC operation.

  3. Social Dynamics Management and Functional Behavioral Assessment

    ERIC Educational Resources Information Center

    Lee, David L.

    2018-01-01

    Managing social dynamics is a critical aspect of creating a positive learning environment in classrooms. In this paper three key interrelated ideas, reinforcement, function, and motivating operations, are discussed with relation to managing social behavior.

  4. Key Management Scheme Based on Route Planning of Mobile Sink in Wireless Sensor Networks.

    PubMed

    Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Jiang, Shengming; Chen, Wei

    2016-01-29

    In many wireless sensor network application scenarios the key management scheme with a Mobile Sink (MS) should be fully investigated. This paper proposes a key management scheme based on dynamic clustering and optimal route choice for the MS. The concept of the Traveling Salesman Problem with Neighbor areas (TSPN) is applied to dynamic clustering for data exchange, and selection probability is used in MS route planning. The proposed scheme extends static key management to dynamic key management by considering dynamic clustering and the mobility of MSs, which can effectively balance the total energy consumption during the activities. Considering the different resources available to the member nodes and the sink node, the session key between a cluster head and the MS is established by a modified ECC encryption with Diffie-Hellman key exchange (ECDH) algorithm, and the session key between a member node and its cluster head is built with a binary symmetric polynomial. Analysis of data storage security, data transfer security, and the dynamic key management mechanism shows that the proposed scheme improves the resilience of the network's key management system while satisfying high connectivity and storage efficiency.
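
    The binary symmetric polynomial half of the scheme is easy to show concretely: with a bivariate polynomial satisfying f(x,y) = f(y,x) over a prime field, node i stores the share g_i(y) = f(i,y), and any two nodes derive the same pairwise key with no further communication. A toy sketch with deliberately small constants (illustrative only; a real deployment uses large fields, higher-degree polynomials, and the ECDH exchange described above):

        #include <stdio.h>
        #define P 257  /* toy prime modulus */

        /* f(x,y) = a + b*(x+y) + c*x*y (mod P) is symmetric in x and y */
        static int f_share(int a, int b, int c, int i, int y) {
            return (a + b * ((i + y) % P) + c * ((i * y) % P)) % P;
        }

        int main(void) {
            int a = 17, b = 91, c = 58;  /* secret coefficients held by the setup server */
            int i = 5, j = 12;           /* two sensor node ids */

            /* Node i evaluates its share g_i at j; node j evaluates g_j at i */
            int key_i = f_share(a, b, c, i, j);
            int key_j = f_share(a, b, c, j, i);
            printf("key_i = %d, key_j = %d (equal: %s)\n",
                   key_i, key_j, key_i == key_j ? "yes" : "no");
            return 0;
        }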

   5. Effective user management with high strength crypto-key in dynamic group environment in cloud

    NASA Astrophysics Data System (ADS)

    Kumar, P. J.; Suganya, P.; Karthik, G.

    2017-11-01

    Cloud clusters consist of various collections of files that are accessed by multiple users of the cloud. The users are managed as groups, and a user's association with a particular group is dynamic in nature. Every group has a manager who handles the membership of users by issuing keys for encryption and decryption. Because membership is dynamic, a user may leave a group frequently, yet a user who has recently left may still attempt to access a file maintained by that group. Key distribution therefore becomes a critical issue while user behavior is dynamic. Existing techniques for managing group users in terms of security and key distribution have been investigated with the objective of identifying ways to strengthen security and key management in the cloud. The use of various key combinations to measure the strength of security and the efficiency of user management in a dynamic cloud environment has also been investigated.

  6. Extending the BEAGLE library to a multi-FPGA platform

    PubMed Central

    2013-01-01

    Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine-grained parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and a 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet the timing requirements of the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor. PMID:23331707

  7. A "Sweet 16" of Rules About Teamwork

    NASA Technical Reports Server (NTRS)

    Laufer, Alexander (Editor)

    2002-01-01

    The following "Sweet 16" rules included in this paper derive from a longer paper by APPL Director Dr. Edward Hoffman and myself entitled " 99 Rules for Managing Faster, Better, Cheaper Projects." Our sources consisted mainly of "war stories" told by master project managers in my book Simultaneous Management: Managing Projects in a Dynamic Environment (AMACOM, The American Management Association, 1996). The Simultaneous Management model was a result of 10 years of intensive research and testing conducted with the active participation of master project managers from leading private organizations such as AT&T, DuPont, Exxon, General Motors, IBM, Motorola and Procter & Gamble. In a more recent study, led by Dr. Hoffman, we learned that master project managers in leading public organizations employ most of these rules as well. Both studies, in private and public organizations, found that a dynamic environment calls for dynamic management, and that is especially clear in how successful project managers think about their teams.

  8. Ecological and evolutionary approaches to managing honey bee disease

    PubMed Central

    Brosi, Berry J.; Delaplane, Keith S.; Boots, Michael; de Roode, Jacobus C.

    2017-01-01

    Honey bee declines are a serious threat to global agricultural security and productivity. While multiple factors contribute to these declines, parasites are a key driver. Disease problems in honey bees have intensified in recent years, despite increasing attention to addressing them. Here we argue that we must focus on the principles of disease ecology and evolution to understand disease dynamics, assess the severity of disease threats, and manage these threats via honey bee management. We cover the ecological context of honey bee disease, including both host and parasite factors driving current transmission dynamics, and then discuss evolutionary dynamics including how beekeeping management practices may drive selection for more virulent parasites. We then outline how ecological and evolutionary principles can guide disease mitigation in honey bees, including several practical management suggestions for addressing short- and long-term disease dynamics and consequences. PMID:29046562

  9. Organization Design for Dynamic Fit: A Review and Projection

    DTIC Science & Technology

    2014-01-01

    contingency misfits. Management Science 48(11): 1461-1485. Burton RM, Obel B. 2004. Strategic organizational diagnosis and design: The dynamics of... organizational diagnosis and design: Developing theory for application (2nd ed.). Kluwer, Boston, MA. D’Aveni RA. 1994. Hypercompetition: Managing the dynamics

  10. Software Management Environment (SME) concepts and architecture, revision 1

    NASA Technical Reports Server (NTRS)

    Hendrick, Robert; Kistler, David; Valett, Jon

    1992-01-01

    This document presents the concepts and architecture of the Software Management Environment (SME), developed for the Software Engineering Branch of the Flight Dynamics Division (FDD) of GSFC. The SME provides an integrated set of experience-based management tools that can assist software development managers in managing and planning flight dynamics software development projects. This document provides a high-level description of the types of information required to implement such an automated management tool.

  11. Role of lake-wide prey fish survey in understanding ecosystem dynamics and managing fisheries of Lake Michigan

    USGS Publications Warehouse

    Madenjian, Charles P.; Edsall, T.; Munawar, M.

    2005-01-01

    With this study, the role of this lake-wide prey fish survey in both understanding the dynamics of the Lake Michigan ecosystem and managing Lake Michigan fisheries was documented. The complexity of ecosystems is such that long-term study is required before the dynamics of the ecosystem can be understood. Furthermore, long-term observation is needed before important or meaningful questions about ecosystem dynamics can be asked. My approach is to first illustrate, by example, the usefulness of the survey results in providing insights into the dynamics of the Lake Michigan ecosystem. Then, examples of direct application of the survey results toward Lake Michigan fisheries management are presented.

  12. Ecological and evolutionary approaches to managing honeybee disease.

    PubMed

    Brosi, Berry J; Delaplane, Keith S; Boots, Michael; de Roode, Jacobus C

    2017-09-01

    Honeybee declines are a serious threat to global agricultural security and productivity. Although multiple factors contribute to these declines, parasites are a key driver. Disease problems in honeybees have intensified in recent years, despite increasing attention to addressing them. Here we argue that we must focus on the principles of disease ecology and evolution to understand disease dynamics, assess the severity of disease threats, and control these threats via honeybee management. We cover the ecological context of honeybee disease, including both host and parasite factors driving current transmission dynamics, and then discuss evolutionary dynamics including how beekeeping management practices may drive selection for more virulent parasites. We then outline how ecological and evolutionary principles can guide disease mitigation in honeybees, including several practical management suggestions for addressing short- and long-term disease dynamics and consequences.

  13. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs — evaluation summary for ATDM program.

    DOT National Transportation Integrated Search

    2017-07-04

    The primary objective of this project is to develop multiple simulation testbeds/transportation models to evaluate the impacts of Dynamic Mobility Application (DMA) connected vehicle applications and Active Transportation and Demand Management (ATDM...

  14. Symposium on Business and Management and Dynamic Simulation Models Supporting Management Strategies

    NASA Astrophysics Data System (ADS)

    Seimenis, Ioannis; Sakas, Damianos P.

    2009-08-01

    This preface presents the purpose, content and results of one of the ICCMSE 2008 symposiums organized by Prof. Ioannis Seimenis and Dr. Damianos P. Sakas. The present symposium aims at investigating Business and Management disciplines, as well as the prospect of strategic decision analysis by means of dynamic simulation models.

  15. Proceedings: integrated management and dynamics of forest defoliating insects

    Treesearch

    A.M. Liebhold; M.L. McManus; I.S. Otvos; S.L.C Fosbroke

    2001-01-01

    This publication contains 18 research papers about the population ecology and management of forest insect defoliators. These papers were presented at a joint meeting of working parties S7.03.06, "Integrated Management of Forest Defoliating Insects," and S7.03.07, "Population Dynamics of Forest Insects," of the International Union of...

  16. Simulating the effects of the southern pine beetle on regional dynamics 60 years into the future

    Treesearch

    Jennifer K. Costanza; Jiri Hulcr; Frank H. Koch; Todd Earnhardt; Alexa J. McKerrow; Rob R. Dunn; Jaime A. Collazo

    2012-01-01

    We developed a spatially explicit model that simulated future southern pine beetle (Dendroctonus frontalis, SPB) dynamics and pine forest management for a real landscape over 60 years to inform regional forest management. The SPB has a considerable effect on forest dynamics in the Southeastern United States, especially in loblolly pine (...

  17. Clustering Timber Harvests and the Effects of Dynamic Forest Management Policy on Forest Fragmentation

    Treesearch

    Eric J. Gustafson

    1998-01-01

    To integrate multiple uses (mature forest and commodity production) better on forested lands, timber management strategies that cluster harvests have been proposed. One such approach clusters harvest activity in space and time, and rotates timber production zones across the landscape with a long temporal period (dynamic zoning). Dynamic zoning has...

  18. Collective learning dynamics in behavioral crowds. Comment on "Human behaviours in evacuation crowd dynamics: From modeling to "big data" toward crisis management" by Nicola Bellomo et al.

    NASA Astrophysics Data System (ADS)

    Burini, D.

    2016-09-01

    Recent literature on crowd dynamics [9,10] has highlighted that the management of crisis situations needs models able to depict social behaviors and, in particular, the spread of emotional states such as stress induced by panic.

  19. Accelerating a MPEG-4 video decoder through custom software/hardware co-design

    NASA Astrophysics Data System (ADS)

    Díaz, Jorge L.; Barreto, Dacil; García, Luz; Marrero, Gustavo; Carballo, Pedro P.; Núñez, Antonio

    2007-05-01

    In this paper we present a novel methodology to accelerate an MPEG-4 video decoder using software/hardware co-design for wireless DAB/DMB networks. Software support includes the services provided by the embedded kernel μC/OS-II and the application tasks mapped to software. Hardware support includes several custom co-processors and a communication architecture with bridges to the main system bus and to a dual-port SRAM. Synchronization among tasks is achieved at two levels, by a hardware protocol and by kernel-level scheduling services. Our reference application is an MPEG-4 video decoder composed of several software functions and written using a special C++ library named CASSE. Profiling and design-space exploration techniques were previously applied to the Advanced Simple Profile (ASP) MPEG-4 decoder to determine the best HW/SW partition, which is developed here. This research is part of the ARTEMI project; its main goal is the establishment of methodologies for the design of real-time complex digital systems using Programmable Logic Devices with embedded microprocessors as the target technology, with multimedia systems for broadcasting networks as the reference application.

  20. Comparative Performance Analysis of Coarse Solvers for Algebraic Multigrid on Multicore and Manycore Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Druinsky, Alex; Ghysels, Pieter; Li, Xiaoye S.

    In this paper, we study the performance of a two-level algebraic-multigrid algorithm, with a focus on the impact of the coarse-grid solver on performance. We consider two algorithms for solving the coarse-space systems: the preconditioned conjugate gradient method and a new robust HSS-embedded low-rank sparse-factorization algorithm. Our test data comes from the SPE Comparative Solution Project for oil-reservoir simulations. We contrast the performance of our code on one 12-core socket of a Cray XC30 machine with performance on a 60-core Intel Xeon Phi coprocessor. To obtain top performance, we optimized the code to take full advantage of fine-grained parallelism and made it thread-friendly for high thread counts. We also developed a bounds-and-bottlenecks performance model of the solver, which we used to guide us through the optimization effort, and carried out performance tuning in the solver’s large parameter space. As a result, significant speedups were obtained on both machines.

  1. An efficient MPI/OpenMP parallelization of the Hartree–Fock–Roothaan method for the first generation of Intel® Xeon Phi™ processor architecture

    DOE PAGES

    Mironov, Vladimir; Moskovsky, Alexander; D’Mello, Michael; ...

    2017-10-04

    The Hartree-Fock (HF) method in the quantum chemistry package GAMESS represents one of the most irregular algorithms in computation today. Major steps in the calculation are the irregular computation of electron repulsion integrals (ERIs) and the building of the Fock matrix. These are the central components of the main Self Consistent Field (SCF) loop, the key hotspot in Electronic Structure (ES) codes. By threading the MPI ranks in the official release of the GAMESS code, we not only speed up the main SCF loop (4x to 6x for large systems), but also achieve a significant (>2x) reduction in the overall memory footprint. These improvements are a direct consequence of memory access optimizations within the MPI ranks. We benchmark our implementation against the official release of the GAMESS code on the Intel® Xeon Phi™ supercomputer. Here, scaling numbers are reported on up to 7,680 cores on Intel Xeon Phi coprocessors.
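
    "Threading the MPI ranks" means each rank spawns OpenMP threads over its share of the work, so per-rank data structures are shared rather than replicated, which is where the memory-footprint reduction comes from. A generic hybrid MPI+OpenMP reduction sketch (GAMESS itself is Fortran and far more involved):

        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            int provided, rank;
            /* Threads call MPI only outside parallel regions, so FUNNELED suffices */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            double local = 0.0;
            /* Each rank covers its slice of (mock) Fock-like work with all its threads */
            #pragma omp parallel for reduction(+ : local)
            for (int i = 0; i < 1000000; ++i)
                local += 1.0 / (1.0 + i + rank);

            double global;
            MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0) printf("sum = %f\n", global);
            MPI_Finalize();
            return 0;
        }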

  2. Application of Intel Many Integrated Core (MIC) accelerators to the Pleim-Xiu land surface scheme

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2015-10-01

    The land-surface model (LSM) is one physics process in the Weather Research and Forecasting (WRF) model. The LSM includes atmospheric information from the surface layer scheme, radiative forcing from the radiation scheme, and precipitation forcing from the microphysics and convective schemes, together with internal information on the land's state variables and land-surface properties. The LSM provides heat and moisture fluxes over land points and sea-ice points. The Pleim-Xiu (PX) scheme is one such LSM. The PX LSM features three pathways for moisture fluxes: evapotranspiration, soil evaporation, and evaporation from wet canopies. To accelerate this scheme we employ the Intel Xeon Phi Many Integrated Core (MIC) architecture, a many-core design whose merits are efficient parallelization and vectorization. Our results show that the MIC-based optimization of this scheme running on a Xeon Phi coprocessor 7120P improves performance by 2.3x and 11.7x compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, respectively.
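
    The parallelization and vectorization essentials mentioned above amount to threading across the Phi's cores and vectorizing within each core's wide SIMD lanes. A generic OpenMP sketch of the kind of pointwise flux loop an LSM reduces to (illustrative only; the PX scheme's actual loops live in WRF's Fortran source):

        #include <stdio.h>
        #define N 4096

        int main(void) {
            static float qs[N], rho[N], flux[N];
            for (int i = 0; i < N; ++i) { qs[i] = 0.01f; rho[i] = 1.2f; }

            /* Thread across cores, vectorize within each core's SIMD lanes */
            #pragma omp parallel for simd
            for (int i = 0; i < N; ++i)
                flux[i] = rho[i] * qs[i] * 2.5e6f;   /* mock latent-heat flux */

            printf("flux[0] = %g\n", flux[0]);
            return 0;
        }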

  3. Research on control law accelerator of digital signal process chip TMS320F28035 for real-time data acquisition and processing

    NASA Astrophysics Data System (ADS)

    Zhao, Shuangle; Zhang, Xueyi; Sun, Shengli; Wang, Xudong

    2017-08-01

    TI's C2000 series digital signal processing (DSP) chips have been widely used in electrical engineering, measurement and control, communications and other professional fields; the TMS320F28035 is one of the most representative of them. DSP applications require both data acquisition and data processing, but with conventional sequential C or assembly programming the analog-to-digital (AD) converter cannot acquire data in real time, and many samples are lost. The control law accelerator (CLA) coprocessor runs in parallel with the main central processing unit (CPU) at the same clock frequency and supports floating-point operations. Therefore, the CLA coprocessor is used in the program: the CLA kernel is responsible for data processing, while the main CPU is responsible for the AD conversion. The advantage of this method is that it reduces data-processing time and achieves real-time data acquisition.

  4. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors

    PubMed Central

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-01-01

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on low-power field-programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application; as such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements, and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms. PMID:27240382

  5. Conference on Real-Time Computer Applications in Nuclear, Particle and Plasma Physics, 6th, Williamsburg, VA, May 15-19, 1989, Proceedings

    NASA Technical Reports Server (NTRS)

    Pordes, Ruth (Editor)

    1989-01-01

    Papers on real-time computer applications in nuclear, particle, and plasma physics are presented, covering topics such as expert systems tactics in testing FASTBUS segment interconnect modules, trigger control in a high energy physics experiment, the FASTBUS read-out system for the Aleph time projection chamber, multiprocessor data acquisition systems, DAQ software architecture for Aleph, a VME multiprocessor system for plasma control at the JT-60 upgrade, and a multitasking, multisinked, multiprocessor data acquisition front end. Other topics include real-time data reduction using a microVAX processor, a transputer based coprocessor for VEDAS, simulation of a macropipelined multi-CPU event processor for use in FASTBUS, a distributed VME control system for the LISA superconducting Linac, and a distributed system for laboratory process automation. Additional topics include a structure macro assembler for the event handler, a data acquisition and control system for Thomson scattering on ATF, remote procedure execution software for distributed systems, and a PC-based graphic display of real-time particle beam uniformity.

  6. Low Cost SoC Design of H.264/AVC Decoder for Handheld Video Player

    NASA Astrophysics Data System (ADS)

    Wisayataksin, Sumek; Li, Dongju; Isshiki, Tsuyoshi; Kunieda, Hiroaki

    We propose a low cost and stand-alone platform-based SoC for an H.264/AVC decoder, targeted at practical mobile applications such as a handheld video player. Both low cost and stand-alone operation are particularly emphasized. The SoC, consisting of a RISC core and a decoder core, has advantages in terms of flexibility, testability and various I/O interfaces. For the decoder core design, the proposed H.264/AVC coprocessor in the SoC employs a new block pipelining scheme instead of a conventional macroblock or hybrid one, which drastically reduces the size of the core and its pipelining buffer. In addition, the decoder schedule is optimized at the block level, which is easy to program. The core size is reduced to 138 Kgates with 3.5 Kbytes of memory. In our practical development, a single external SDRAM is sufficient for both the reference frame buffer and the display buffer. Various peripheral interfaces such as a compact flash, a digital broadcast receiver and an LCD driver are also provided on the chip.

  7. Efficient irregular wavefront propagation algorithms on Intel® Xeon Phi™

    PubMed Central

    Gomes, Jeremias M.; Teodoro, George; de Melo, Alba; Kong, Jun; Kurc, Tahsin; Saltz, Joel H.

    2016-01-01

    We investigate the execution of the Irregular Wavefront Propagation Pattern (IWPP), a fundamental computing structure used in several image analysis operations, on the Intel® Xeon Phi™ co-processor. An efficient implementation of IWPP on the Xeon Phi is a challenging problem because of IWPP’s irregularity and the use of atomic instructions in the original IWPP algorithm to resolve race conditions. On the Xeon Phi, the use of SIMD and vectorization instructions is critical to attain high performance. However, SIMD atomic instructions are not supported. Therefore, we propose a new IWPP algorithm that can take advantage of the supported SIMD instruction set. We also evaluate an alternate storage container (priority queue) to track active elements in the wavefront in an effort to improve the parallel algorithm efficiency. The new IWPP algorithm is evaluated with Morphological Reconstruction and Imfill operations as use cases. Our results show performance improvements of up to 5.63× on top of the original IWPP due to vectorization. Moreover, the new IWPP achieves speedups of 45.7× and 1.62×, respectively, as compared to efficient CPU and GPU implementations. PMID:27298591
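
    The IWPP skeleton itself is small: a queue of active elements, a propagation condition, and re-queuing of any neighbor whose value changes. A scalar sketch for grayscale morphological reconstruction, one of the two use cases (the pattern only, without the SIMD and priority-queue refinements the paper evaluates):

        #include <stdio.h>
        #define W 6
        #define H 4
        #define QMAX (W * H * 8)

        static int q[QMAX], qh, qt;
        static void push(int p) { q[qt++ % QMAX] = p; }

        int main(void) {
            /* mask bounds the propagation; out (the marker) is raised toward it */
            int mask[H][W] = {
                {0, 0, 0, 0, 0, 0},
                {0, 9, 9, 9, 9, 0},
                {0, 9, 9, 9, 9, 0},
                {0, 0, 0, 0, 0, 0},
            };
            int out[H][W] = {{0}};
            out[1][1] = 9;              /* seed element of the initial wavefront */
            push(1 * W + 1);

            const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
            while (qh != qt) {          /* only active elements are ever visited */
                int p = q[qh++ % QMAX], x = p % W, y = p / W;
                for (int k = 0; k < 4; ++k) {
                    int nx = x + dx[k], ny = y + dy[k];
                    if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
                    int v = out[y][x] < mask[ny][nx] ? out[y][x] : mask[ny][nx];
                    if (v > out[ny][nx]) { out[ny][nx] = v; push(ny * W + nx); }
                }
            }
            for (int y = 0; y < H; ++y, printf("\n"))
                for (int x = 0; x < W; ++x) printf("%2d", out[y][x]);
            return 0;
        }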

  8. Efficient irregular wavefront propagation algorithms on Intel® Xeon Phi™.

    PubMed

    Gomes, Jeremias M; Teodoro, George; de Melo, Alba; Kong, Jun; Kurc, Tahsin; Saltz, Joel H

    2015-10-01

    We investigate the execution of the Irregular Wavefront Propagation Pattern (IWPP), a fundamental computing structure used in several image analysis operations, on the Intel® Xeon Phi™ co-processor. An efficient implementation of IWPP on the Xeon Phi is a challenging problem because of IWPP's irregularity and the use of atomic instructions in the original IWPP algorithm to resolve race conditions. On the Xeon Phi, the use of SIMD and vectorization instructions is critical to attain high performance. However, SIMD atomic instructions are not supported. Therefore, we propose a new IWPP algorithm that can take advantage of the supported SIMD instruction set. We also evaluate an alternate storage container (priority queue) to track active elements in the wavefront in an effort to improve the parallel algorithm efficiency. The new IWPP algorithm is evaluated with Morphological Reconstruction and Imfill operations as use cases. Our results show performance improvements of up to 5.63× on top of the original IWPP due to vectorization. Moreover, the new IWPP achieves speedups of 45.7× and 1.62×, respectively, as compared to efficient CPU and GPU implementations.

  9. Communication Studies of DMP and SMP Machines

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    Understanding the interplay between machines and problems is key to obtaining high performance on parallel machines. This paper investigates the interplay between programming paradigms and the communication capabilities of parallel machines. In particular, we explicate the communication capabilities of the IBM SP-2 distributed-memory multiprocessor and the SGI PowerCHALLENGEarray symmetric multiprocessor. Two benchmark problems, bitonic sorting and the Fast Fourier Transform, are selected for experiments. Communication-efficient algorithms are developed to exploit the overlapping capabilities of the machines. Programs are written in the Message-Passing Interface for portability, and identical codes are used for both machines. Various data sizes and message sizes are used to test the machines' communication capabilities. Experimental results indicate that the communication performance of the multiprocessors is consistent with the size of messages. The SP-2 is sensitive to message size but yields much higher communication overlap because of its communication co-processor. The PowerCHALLENGEarray is not highly sensitive to message size and yields low communication overlap. Bitonic sorting yields lower performance compared to FFT due to a smaller computation-to-communication ratio.
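
    The overlap being measured follows the classic post-early/wait-late pattern with nonblocking MPI: post the exchange, do independent work, then wait. A minimal sketch of that pattern (a generic illustration, not the benchmark codes themselves):

        #include <mpi.h>
        #include <stdio.h>
        #define N 1024

        int main(int argc, char **argv) {
            int rank, size;
            double send[N], recv[N], acc = 0.0;
            MPI_Request reqs[2];
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            int peer = rank ^ 1;                 /* pair up neighboring ranks */
            for (int i = 0; i < N; ++i) send[i] = rank + i;

            if (peer < size) {
                /* Post the exchange first ... */
                MPI_Irecv(recv, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
                MPI_Isend(send, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

                /* ... then do independent work while the co-processor moves bytes */
                for (int i = 0; i < N; ++i) acc += send[i] * 0.5;

                MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
            }
            if (rank == 0 && size > 1)
                printf("acc = %f, recv[0] = %f\n", acc, recv[0]);
            MPI_Finalize();
            return 0;
        }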

  10. A GPU-Based Architecture for Real-Time Data Assessment at Synchrotron Experiments

    NASA Astrophysics Data System (ADS)

    Chilingaryan, Suren; Mirone, Alessandro; Hammersley, Andrew; Ferrero, Claudio; Helfen, Lukas; Kopmann, Andreas; Rolo, Tomy dos Santos; Vagovic, Patrik

    2011-08-01

    Advances in digital detector technology presently lead to rapidly increasing data rates in imaging experiments. Using fast two-dimensional detectors in computed tomography, data acquisition can be much faster than reconstruction if no adequate measures are taken, especially when the high photon flux of synchrotron sources is used. We have optimized the reconstruction software employed at the micro-tomography beamlines of our synchrotron facilities to use the computational power of modern graphics cards. The main paradigm of our approach is the full utilization of all system resources. We use a pipelined architecture, where the GPUs are used as compute coprocessors to reconstruct slices while the CPUs are preparing the next ones. Special attention is devoted to minimizing data transfers between host and GPU memory and to executing memory transfers in parallel with the computations. We were able to reduce the reconstruction time by a factor of 30 and process a typical data set of 20 GB in 40 seconds. The time needed for the first evaluation of the reconstructed sample is reduced significantly, and quasi real-time visualization is now possible.

  11. Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units.

    PubMed

    Li, Jian; Bloch, Pavel; Xu, Jing; Sarunic, Marinko V; Shannon, Lesley

    2011-05-01

    Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.

  12. The research and application of multi-biometric acquisition embedded system

    NASA Astrophysics Data System (ADS)

    Deng, Shichao; Liu, Tiegen; Guo, Jingjing; Li, Xiuyan

    2009-11-01

    Identification technology based on multiple biometrics can greatly improve applicability, reliability, and resistance to falsification. This paper presents a multi-biometric acquisition system based on an embedded platform, which includes: three capture daughter boards that obtain different biometrics, one each for fingerprint, iris, and back-of-hand vein; an FPGA (Field Programmable Gate Array) designed as a coprocessor, which configures the three daughter boards on request and provides the data path between the DSP (digital signal processor) and the daughter boards; and a DSP as the master processor, which controls biometric information acquisition, extracts features as required, and compares the results with the local database or a data server through network communication. The advantages of this system are that it can acquire three different biometrics in real time, flexibly extract complex features from the raw data of the different biometrics according to purpose and algorithm, and scale to large data volumes through the arithmetic and network interfaces on the core board. Because this embedded system has high stability, reliability, and flexibility and fits different data scales, it can satisfy the demands of multi-biometric recognition.

  13. Research a Novel Integrated and Dynamic Multi-object Trade-Off Mechanism in Software Project

    NASA Astrophysics Data System (ADS)

    Jiang, Weijin; Xu, Yuhui

    Aiming at the practical requirements of present software project management and control, this paper constructs an integrated multi-object trade-off model based on software project process management, so as to realize integrated and dynamic trade-offs across the project's multi-object system. Based on an analysis of the basic principles of dynamic control and of the integrated multi-object trade-off process, the paper integrates methods from cybernetics and network technology and, by monitoring critical reference points according to the control objects, discusses the integrated and dynamic multi-object trade-off model and the corresponding rules and mechanism needed to combine process management with trade-offs across the multi-object system.

  14. The impact of chief executive officer personality on top management team dynamics: one mechanism by which leadership affects organizational performance.

    PubMed

    Peterson, Randall S; Smith, D Brent; Martorana, Paul V; Owens, Pamela D

    2003-10-01

    This article explores 1 mechanism by which leader personality affects organizational performance. The authors hypothesized and tested the effects of leader personality on the group dynamics of the top management team (TMT) and of TMT dynamics on organizational performance. To test their hypotheses, the authors used the group dynamics q-sort method, which is designed to permit rigorous, quantitative comparisons of data derived from qualitative sources. Results from independent observations of chief executive officer (CEO) personality and TMT dynamics for 17 CEOs supported the authors' hypothesized relationships both between CEO personality and TMT group dynamics and between TMT dynamics and organizational performance.

  15. Dynamics of safety performance and culture: a group model building approach.

    PubMed

    Goh, Yang Miang; Love, Peter E D; Stagbouer, Greg; Annesley, Chris

    2012-09-01

    The management of occupational health and safety (OHS), including safety culture interventions, involves complex problems that are often hard to scope and define. Due to the dynamic nature and complexity of OHS management, the concept of system dynamics (SD) is used to analyze accident prevention. In this paper, a system dynamics group model building (GMB) approach is used to create a causal loop diagram of the underlying factors influencing the OHS performance of a major drilling and mining contractor in Australia. While the organization has invested considerable resources into OHS, their disabling injury frequency rate (DIFR) has not been decreasing. With this in mind, rich individualistic knowledge about the dynamics influencing the DIFR was acquired from experienced employees with operations, health and safety, and training backgrounds using a GMB workshop. Findings derived from the workshop were used to develop a series of causal loop diagrams that include a wide range of dynamics that can assist in better understanding the causal influences on OHS performance. The causal loop diagram provides a tool for organizations to hypothesize the dynamics influencing the effectiveness of OHS management, particularly the impact on DIFR. In addition, the paper demonstrates that the SD GMB approach has significant potential in understanding and improving OHS management. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Sensitivity of emergent sociohydrologic dynamics to internal system properties and external sociopolitical factors: Implications for water management

    NASA Astrophysics Data System (ADS)

    Elshafei, Y.; Tonts, M.; Sivapalan, M.; Hipsey, M. R.

    2016-06-01

    It is increasingly acknowledged that effective management of water resources requires a holistic understanding of the coevolving dynamics inherent in the coupled human-hydrology system. One of the fundamental information gaps concerns the sensitivity of coupled system feedbacks to various endogenous system properties and exogenous societal contexts. This paper takes a previously calibrated sociohydrology model and applies an idealized implementation, in order to: (i) explore the sensitivity of emergent dynamics resulting from bidirectional feedbacks to assumptions regarding (a) internal system properties that control the internal dynamics of the coupled system and (b) the external sociopolitical context; and (ii) interpret the results within the context of water resource management decision making. The analysis investigates feedback behavior in three ways, (a) via a global sensitivity analysis on key parameters and assessment of relevant model outputs, (b) through a comparative analysis based on hypothetical placement of the catchment along various points on the international sociopolitical gradient, and (c) by assessing the effects of various direct management intervention scenarios. Results indicate the presence of optimum windows that might offer the greatest positive impact per unit of management effort. Results further advocate management tools that encourage an adaptive learning, community-based approach with respect to water management, which are found to enhance centralized policy measures. This paper demonstrates that it is possible to use a place-based sociohydrology model to make abstractions as to the dynamics of bidirectional feedback behavior, and provide insights as to the efficacy of water management tools under different circumstances.

  17. Improving Pedagogy through the Use of Dynamic Excel Presentations in Financial Management Courses

    ERIC Educational Resources Information Center

    Mangiero, George A.; Manley, John; Mollica, J. T.

    2010-01-01

    This paper discusses and illustrates the use of dynamic Excel presentations to improve learning in Financial Management courses. Through the use of such presentations, multiple and varied examples of important principles in Financial Management, which would ordinarily take an excessive amount of time to cover, can be considered within the time…

  18. Proceedings: population dynamics, impacts, and integrated management of forest defoliating insects

    Treesearch

M.L. McManus; A.M. Liebhold, eds.

    1998-01-01

    This publication contains 52 research papers about the population ecology and management of forest insect defoliators. These papers were presented at a joint meeting of working parties S7.03.06, "Integrated Management of Forest Defoliating Insects", and S7.03.07, "Population dynamics of forest insects", of the International Union of Forestry...

  19. Comparison of Controller and Flight Deck Algorithm Performance During Interval Management with Dynamic Arrival Trees (STARS)

    NASA Technical Reports Server (NTRS)

    Battiste, Vernol; Lawton, George; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Johnson, Walter W.

    2012-01-01

Managing the interval between arrival aircraft is a major part of the en route and TRACON controller's job. In an effort to reduce controller workload and low-altitude vectoring, algorithms have been developed to allow pilots to take responsibility for achieving and maintaining proper spacing. Additionally, algorithms have been developed to create dynamic weather-free arrival routes in the presence of convective weather. In a recent study we examined an algorithm to handle dynamic re-routing in the presence of convective weather and two distinct spacing algorithms. The spacing algorithms originated from different core algorithms; both were enhanced with trajectory intent data for the study. These two algorithms were used simultaneously in a human-in-the-loop (HITL) simulation in which pilots performed weather-impacted arrival operations into Louisville International Airport while also performing interval management (IM) on some trials. The controllers retained responsibility for separation, for managing the en route airspace, and, on some trials, for managing IM. The goal was a stress test of dynamic arrival algorithms with ground and airborne spacing concepts. The flight deck spacing algorithms and controller-managed spacing not only had to be robust to the dynamic nature of aircraft re-routing around weather but also had to be compatible with two alternative algorithms for achieving the spacing goal. Flight deck interval management spacing in this simulation provided a clear reduction in controller workload relative to when controllers were responsible for spacing the aircraft. At the same time, spacing was much less variable with the flight deck automated spacing. Even though the two spacing algorithms took slightly different approaches, both achieved the interval management goal of 130 s by the TRACON boundary.

  20. Resource Management in Constrained Dynamic Situations

    NASA Astrophysics Data System (ADS)

    Seok, Jinwoo

Resource management is considered in this dissertation for systems with limited resources, possibly combined with other system constraints, in unpredictably dynamic environments. Resources may represent fuel, power, capabilities, energy, and so on. Resource management is important for many practical systems; usually, resources are limited, and their use must be optimized. Furthermore, systems are often constrained, and constraints must be satisfied for safe operation. Simplistic resource management can result in poor use of resources and failure of the system. Furthermore, many real-world situations involve dynamic environments. Many traditional problems are formulated based on the assumptions of given probabilities or perfect knowledge of future events. However, in many cases, the future is completely unknown, and information on or probabilities about future events are not available. In other words, we operate in unpredictably dynamic situations. Thus, a method is needed to handle dynamic situations without knowledge of the future, but few formal methods have been developed to address them. Hence, the goal is to design resource management methods for constrained systems, with limited resources, in unpredictably dynamic environments. To this end, resource management is organized hierarchically into two levels: 1) planning, and 2) control. At the planning level, the set of tasks to be performed is scheduled based on limited resources to maximize resource usage in unpredictably dynamic environments. At the control level, the system controller is designed to follow the schedule by considering all the system constraints for safe and efficient operation. Consequently, this dissertation is mainly divided into two parts: 1) planning-level design, based on finite state machines, and 2) control-level methods, based on model predictive control. We define a recomposable restricted finite state machine to handle limited-resource situations and unpredictably dynamic environments at the planning level. To obtain a policy, dynamic programming is applied, and to obtain a solution, limited breadth-first search is applied to the recomposable restricted finite state machine. A multi-function phased array radar resource management problem and an unmanned aerial vehicle patrolling problem are treated using recomposable restricted finite state machines. Then, we use model predictive control for the control level, because it allows constraint handling and setpoint tracking for the schedule. An aircraft power system management problem is treated that aims to develop an integrated control system for an aircraft gas turbine engine and electrical power system using rate-based model predictive control. Our results indicate that, at the planning level, limited breadth-first search for recomposable restricted finite state machines generates good scheduling solutions in limited-resource situations and unpredictably dynamic environments. The importance of cooperation at the planning level is also verified. At the control level, a rate-based model predictive controller allows good schedule tracking and safe operation. The importance of considering the system constraints and the interactions between the subsystems is indicated. For the best resource management in constrained dynamic situations, the planning level and the control level need to be considered together.
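
    As an illustration of the planning-level idea, the sketch below implements a limited breadth-first search over a toy task-scheduling state machine in Python. The beam width, the fuel-scored states, and the patrolling example are hypothetical stand-ins, not the dissertation's actual recomposable restricted finite state machine formulation.

        def limited_bfs(start, successors, is_goal, score, beam_width=100, max_depth=50):
            # Breadth-first expansion, but only the `beam_width` best-scoring
            # states survive at each depth ("limited" breadth-first search).
            frontier = [(start, [])]
            for _ in range(max_depth):
                expanded = []
                for state, plan in frontier:
                    if is_goal(state):
                        return plan
                    for action, nxt in successors(state):
                        expanded.append((nxt, plan + [action]))
                expanded.sort(key=lambda sp: score(sp[0]), reverse=True)
                frontier = expanded[:beam_width]
            return None

        # Toy usage: states are (fuel, visited-sites) pairs for a patrolling vehicle.
        SITES = {"A": 3, "B": 2, "C": 4}                      # fuel cost per site

        def successors(state):
            fuel, visited = state
            for site, cost in SITES.items():
                if site not in visited and cost <= fuel:
                    yield site, (fuel - cost, visited | {site})

        plan = limited_bfs(start=(6, frozenset()),
                           successors=successors,
                           is_goal=lambda s: len(s[1]) >= 2,  # visit any two sites
                           score=lambda s: s[0])              # keep fuel-rich states
        print(plan)                                           # e.g. ['B', 'A']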

  1. Modeling wildfire incident complexity dynamics.

    PubMed

    Thompson, Matthew P

    2013-01-01

Wildfire management in the United States and elsewhere is challenged by substantial uncertainty regarding the location and timing of fire events, the socioeconomic and ecological consequences of these events, and the costs of suppression. Escalating U.S. Forest Service suppression expenditures are of particular concern at a time of fiscal austerity, as swelling fire management budgets lead to decreases for non-fire programs, and as the likelihood of disruptive within-season borrowing potentially increases. Thus, there is strong interest in better understanding the factors influencing suppression decisions and, in turn, their influence on suppression costs. As a step in that direction, this paper presents a probabilistic analysis of geographic and temporal variation in incident management team response to wildfires. The specific focus is incident complexity dynamics through time for fires managed by the U.S. Forest Service. The modeling framework is based on the recognition that large wildfire management entails recurrent decisions across time in response to changing conditions, which can be represented as a stochastic dynamic system. Daily incident complexity dynamics are modeled according to a first-order Markov chain, with containment represented as an absorbing state. A statistically significant difference in complexity dynamics between Forest Service Regions is demonstrated. Incident complexity probability transition matrices and expected times until containment are presented at national and regional levels. Results of this analysis can help improve understanding of geographic variation in incident management and associated cost structures, and can be incorporated into future analyses examining the economic efficiency of wildfire management.
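
    For readers unfamiliar with absorbing Markov chains, the fragment below shows how expected days to containment follow from a daily transition matrix via the fundamental matrix; the complexity states and probabilities are invented for illustration and are not the paper's estimates.

        import numpy as np

        # Hypothetical daily transition matrix over incident-complexity states
        # (Type 3, Type 2, Type 1 teams) plus an absorbing "contained" state.
        P = np.array([
            # T3    T2    T1    contained
            [0.70, 0.10, 0.02, 0.18],   # Type 3
            [0.05, 0.75, 0.10, 0.10],   # Type 2
            [0.01, 0.08, 0.86, 0.05],   # Type 1
            [0.00, 0.00, 0.00, 1.00],   # contained (absorbing)
        ])

        Q = P[:3, :3]                       # transitions among transient states
        N = np.linalg.inv(np.eye(3) - Q)    # fundamental matrix
        days_to_containment = N @ np.ones(3)
        print(days_to_containment)          # expected days until containment, by starting state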

  2. Crowd dynamics and safety. Reply to comments on "Human behaviours in evacuation crowd dynamics: From modelling to "big data" toward crisis management"

    NASA Astrophysics Data System (ADS)

    Bellomo, N.; Clarke, D.; Gibelli, L.; Townsend, P.; Vreugdenhil, B. J.

    2016-09-01

The survey [13] presents an overview and critical analysis of the existing literature on the modeling of crowd dynamics related to crisis management and the search for safety conditions. Out of this general review, some rationales on research perspectives are brought to the attention of the reader.

  3. Comparative analysis of dynamic pricing strategies for managed lanes.

    DOT National Transportation Integrated Search

    2015-06-01

The objective of this research is to investigate and compare the performances of different dynamic pricing strategies for managed lanes facilities. These pricing strategies include real-time traffic responsive methods, as well as refund options a...

  4. Integral control for population management.

    PubMed

    Guiver, Chris; Logemann, Hartmut; Rebarber, Richard; Bill, Adam; Tenhumberg, Brigitte; Hodgson, Dave; Townley, Stuart

    2015-04-01

We present a novel management methodology for restocking a declining population. The strategy uses integral control, a concept ubiquitous in control theory that has not previously been applied to population dynamics. Integral control is based on dynamic feedback: measurements of the population inform management strategies, and the approach is robust to model uncertainty, an important consideration for ecological models. We demonstrate from first principles, via theory and examples, why such an approach to population management is suitable.
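
    A minimal sketch of the idea, assuming a hypothetical two-stage (juvenile/adult) declining population and an invented integrator gain: the controller accumulates the gap between the observed adult count and the target, and restocks juveniles accordingly.

        import numpy as np

        A = np.array([[0.0, 1.0],      # adult fecundity
                      [0.3, 0.5]])     # juvenile survival, adult survival
        b = np.array([1.0, 0.0])       # restocking adds juveniles
        c = np.array([0.0, 1.0])       # we measure adults only

        target, gain = 100.0, 0.05     # desired adult abundance; small gain for stability
        x = np.array([10.0, 5.0])      # initial (juveniles, adults)
        u = 0.0                        # integrator state = current restocking rate

        for t in range(200):
            error = target - c @ x
            u = max(0.0, u + gain * error)   # integral action; stocking cannot be negative
            x = A @ x + b * u

        print(c @ x)   # adult count settles near the 100-individual target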

  5. Multi-criteria dynamic decision under uncertainty: a stochastic viability analysis and an application to sustainable fishery management.

    PubMed

    De Lara, M; Martinet, V

    2009-02-01

Managing natural resources in a sustainable way is a hard task, due to uncertainties, dynamics, and conflicting objectives (ecological, social, and economic). We propose a stochastic viability approach to address such problems. We consider a discrete-time control dynamical model with uncertainties, representing a bioeconomic system. The sustainability of this system is described by a set of constraints, defined in practice by indicators (namely state, control, and uncertainty functions) together with thresholds. This approach aims at identifying decision rules such that a set of constraints, representing various objectives, is respected with maximal probability. Under appropriate monotonicity properties of the dynamics and constraints, having economic and biological content, we characterize an optimal feedback. The connection is made between this approach and the so-called Management Strategy Evaluation for fisheries. A numerical application to the sustainable management of the Bay of Biscay nephrops-hake mixed fishery is given.
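
    The stochastic viability criterion (maximize the probability that all constraints hold over the horizon) can be estimated by Monte Carlo simulation. The sketch below does this for an invented logistic stock with a multiplicative environmental shock and a hypothetical feedback harvesting rule; none of the numbers come from the Bay of Biscay application.

        import numpy as np

        rng = np.random.default_rng(0)

        def viability_probability(policy, n_runs=10_000, horizon=20,
                                  b_min=200.0, h_min=5.0, b0=400.0):
            # Estimate P(biomass >= b_min and harvest >= h_min at every step).
            ok = 0
            for _ in range(n_runs):
                b, viable = b0, True
                for _ in range(horizon):
                    h = policy(b)
                    shock = rng.uniform(0.8, 1.2)               # environmental noise
                    b = max(0.0, shock * (b + 0.5 * b * (1 - b / 1000.0)) - h)
                    if b < b_min or h < h_min:
                        viable = False
                        break
                ok += viable
            return ok / n_runs

        # Hypothetical rule: harvest 20% of the biomass above the viability floor.
        print(viability_probability(lambda b: 0.2 * max(0.0, b - 200.0)))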

  6. Management of complex dynamical systems

    NASA Astrophysics Data System (ADS)

    MacKay, R. S.

    2018-02-01

    Complex dynamical systems are systems with many interdependent components which evolve in time. One might wish to control their trajectories, but a more practical alternative is to control just their statistical behaviour. In many contexts this would be both sufficient and a more realistic goal, e.g. climate and socio-economic systems. I refer to it as ‘management’ of complex dynamical systems. In this paper, some mathematics for management of complex dynamical systems is developed in the weakly dependent regime, and questions are posed for the strongly dependent regime.

  7. NASA Tech Briefs, May 2010

    NASA Technical Reports Server (NTRS)

    2010-01-01

Topics covered include: Instrument for Analysis of Greenland's Glacier Mills; Cryogenic Moisture Apparatus; A Transportable Gravity Gradiometer Based on Atom Interferometry; Three Methods of Detection of Hydrazines; Crossed, Small-Deflection Energy Analyzer for Wind/Temperature Spectrometer; Wavefront Correction for Large, Flexible Antenna Reflector; Novel Micro Strip-to-Waveguide Feed Employing a Double-Y Junction; Thin-Film Ferroelectric-Coupled Microstripline Phase Shifters With Reduced Device Hysteresis; Two-Stage, 90-GHz, Low-Noise Amplifier; A 311-GHz Fundamental Oscillator Using InP HBT Technology; FPGA Coprocessor Design for an Onboard Multi-Angle Spectro-Polarimetric Imager; Serrating Nozzle Surfaces for Complete Transfer of Droplets; Turbomolecular Pumps for Holding Gases in Open Containers; Triaxial Swirl Injector Element for Liquid-Fueled Engines; Integrated Budget Office Toolbox; PLOT3D Export Tool for Tecplot; Math Description Engine Software Development Kit; Astronaut Office Scheduling System Software; ISS Solar Array Management; Probabilistic Structural Analysis Program; SPOT Program; Integrated Hybrid System Architecture for Risk Analysis; System for Packaging Planetary Samples for Return to Earth; Offset Compound Gear Drive; Low-Dead-Volume Inlet for Vacuum Chamber; Simple Check Valves for Microfluidic Devices; A Capillary-Based Static Phase Separator for Highly Variable Wetting Conditions; Gimballing Spacecraft Thruster; Finned Carbon-Carbon Heat Pipe with Potassium Working Fluid; Lightweight Heat Pipes Made from Magnesium; Ceramic Rail-Race Ball Bearings; Improved OTEC System for a Submarine Robot; Reflector Surface Error Compensation in Dual-Reflector Antennas; Enriched Storable Oxidizers for Rocket Engines; Planar Submillimeter-Wave Mixer Technology with Integrated Antenna; Widely Tunable Mode-Hop-Free External-Cavity Quantum Cascade Laser; Non-Geiger-Mode Single-Photon Avalanche Detector with Low Excess Noise; Using Whispering-Gallery-Mode Resonators for Refractometry; RF Device for Acquiring Images of the Human Body; Reactive Collision Avoidance Algorithm; Fast Solution in Sparse LDA for Binary Classification; Modeling Common-Sense Decisions in Artificial Intelligence; Graph-Based Path-Planning for Titan Balloons; Nanolaminate Membranes as Cylindrical Telescope Reflectors; Air-Sea Spray Airborne Radar Profiler Characterizes Energy Fluxes in Hurricanes; Large Telescope Segmented Primary Mirror Alignment; and Simplified Night Sky Display System.

  8. ALPS - A LINEAR PROGRAM SOLVER

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
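
    ALPS itself is menu-driven APL2 software, but the class of problems it solves is easy to state programmatically. As a present-day illustration (not part of ALPS), the snippet below solves a small production-planning LP with SciPy; the products and resource limits are made up.

        from scipy.optimize import linprog

        # Maximize profit 3x + 5y subject to resource limits (linprog minimizes,
        # so the objective is negated).
        c = [-3.0, -5.0]
        A_ub = [[1.0, 2.0],    # machine hours:  x + 2y <= 14
                [3.0, 1.0]]    # labor hours:   3x +  y <= 12
        b_ub = [14.0, 12.0]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        print(res.x, -res.fun)   # optimal plan (2, 6) and profit 36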

  9. Design and Analysis of a Dynamic Mobility Management Scheme for Wireless Mesh Network

    PubMed Central

    Roy, Sudipta

    2013-01-01

    Seamless mobility management of the mesh clients (MCs) in wireless mesh network (WMN) has drawn a lot of attention from the research community. A number of mobility management schemes such as mesh network with mobility management (MEMO), mesh mobility management (M3), and wireless mesh mobility management (WMM) have been proposed. The common problem with these schemes is that they impose uniform criteria on all the MCs for sending route update message irrespective of their distinct characteristics. This paper proposes a session-to-mobility ratio (SMR) based dynamic mobility management scheme for handling both internet and intranet traffic. To reduce the total communication cost, this scheme considers each MC's session and mobility characteristics by dynamically determining optimal threshold SMR value for each MC. A numerical analysis of the proposed scheme has been carried out. Comparison with other schemes shows that the proposed scheme outperforms MEMO, M3, and WMM with respect to total cost. PMID:24311982
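
    To make the SMR idea concrete, here is a toy decision rule in Python. The direction of the comparison and the per-client threshold are assumptions for illustration: a client that starts sessions often relative to how often it moves benefits from eager route updates, while a highly mobile, rarely communicating client does not.

        def should_send_route_update(session_rate, handoff_rate, threshold):
            # session-to-mobility ratio: sessions started per handoff
            smr = session_rate / max(handoff_rate, 1e-9)
            return smr >= threshold    # update eagerly only when sessions dominate

        # Chatty, mostly static mesh client: updates pay off.
        print(should_send_route_update(session_rate=2.0, handoff_rate=0.1, threshold=1.0))  # True
        # Fast-moving, rarely communicating client: skip costly updates.
        print(should_send_route_update(session_rate=0.2, handoff_rate=1.0, threshold=1.0))  # False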

  10. Application of dynamic traffic assignment to advanced managed lane modeling.

    DOT National Transportation Integrated Search

    2013-11-01

In this study, a demand estimation framework is developed for assessing managed lane (ML) strategies by utilizing dynamic traffic assignment (DTA) modeling, instead of the traditional approaches that are based on the static traffic assignment...

  11. Investigating the Use of the Intel Xeon Phi for Event Reconstruction

    NASA Astrophysics Data System (ADS)

    Sherman, Keegan; Gilfoyle, Gerard

    2014-09-01

The physics goal of Jefferson Lab is to understand how quarks and gluons form nuclei, and it is being upgraded to a higher, 12-GeV beam energy. The new CLAS12 detector in Hall B will collect 5-10 terabytes of data per day and will require considerable computing resources. We are investigating tools, such as the Intel Xeon Phi, to speed up the event reconstruction. The Kalman Filter is one of the methods being studied. It is a linear algebra algorithm that estimates the state of a system by combining existing data and predictions of those measurements. The tools required to apply this technique (i.e., matrix multiplication, matrix inversion) are being written using C++ intrinsics for Intel's Xeon Phi Coprocessor, which uses the Many Integrated Cores (MIC) architecture. The Intel MIC is a new high-performance chip that connects to a host machine through the PCIe bus and is built to run highly vectorized and parallelized code, making it a well-suited device for applications such as the Kalman Filter. Our tests of the MIC-optimized algorithms needed for the filter show significant increases in speed. For example, multiplication of 5x5 matrices on the MIC ran up to 69 times faster than on the host core. Work supported by the University of Richmond and the US Department of Energy.
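
    The linear algebra involved is compact. A plain NumPy version of one Kalman measurement update is sketched below; the 5-parameter track state and the 2D hit are invented, and the MIC-intrinsics versions described above vectorize exactly these small matrix products and the inversion.

        import numpy as np

        def kalman_update(x, P, z, H, R):
            # One Kalman measurement update; the 5x5 products and the inversion
            # are the kernels worth vectorizing on a many-core coprocessor.
            S = H @ P @ H.T + R                 # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            x = x + K @ (z - H @ x)             # corrected state
            P = (np.eye(len(x)) - K @ H) @ P    # corrected covariance
            return x, P

        # Hypothetical 5-parameter track state (x, y, tx, ty, q/p) and a 2D hit.
        x = np.zeros(5)
        P = np.eye(5)
        H = np.zeros((2, 5)); H[0, 0] = H[1, 1] = 1.0   # measure x and y only
        R = 0.01 * np.eye(2)
        x, P = kalman_update(x, P, np.array([0.1, -0.2]), H, R)
        print(x)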

  12. Breaking down barriers in cooperative fault management: Temporal and functional information displays

    NASA Technical Reports Server (NTRS)

    Potter, Scott S.; Woods, David D.

    1994-01-01

At the highest level, the fundamental question addressed by this research is how to aid human operators engaged in dynamic fault management. In dynamic fault management there is some underlying dynamic process (an engineered or physiological process, referred to as the monitored process, MP) whose state changes over time and whose behavior must be monitored and controlled. In these types of applications (dynamic, real-time systems), a vast array of sensor data is available to provide information on the state of the MP. Faults disturb the MP, and diagnosis must be performed in parallel with responses that maintain process integrity and correct the underlying problem. These situations frequently involve time pressure, multiple interacting goals, high consequences of failure, and multiple interleaved tasks.

  13. Multiple scales modelling approaches to social interaction in crowd dynamics and crisis management. Comment on "Human behaviours in evacuation crowd dynamics: From modelling to "big data" toward crisis management" by Nicola Bellomo et al.

    NASA Astrophysics Data System (ADS)

    Trucu, Dumitru

    2016-09-01

In this comprehensive review concerning the modelling of human behaviours in crowd dynamics [3], the authors explore a wide range of mathematical approaches, spanning multiple scales, that are suitable for describing emerging crowd behaviours in extreme situations. Focused on deciphering the key aspects leading to emerging crowd-pattern evolution in challenging times, such as those requiring the evacuation of a complex venue, the authors address this complex dynamics at the microscale (individual level), mesoscale (probability distributions of interacting individuals), and macroscale (population level), ultimately aiming to gain valuable understanding and knowledge that would inform decision making in managing crisis situations.

  14. Spatiotemporal dynamics of simulated wildfire, forest management, and forest succession in central Oregon, USA

    Treesearch

    Ana M. G. Barros; Alan A. Ager; Michelle A. Day; Haiganoush K. Preisler; Thomas A. Spies; Eric White; Robert J. Pabst; Keith A. Olsen; Emily Platt; John D. Bailey; John P. Bolte

    2017-01-01

    We use the simulation model Envision to analyze long-term wildfire dynamics and the effects of different fuel management scenarios in central Oregon, USA. We simulated a 50-year future where fuel management activities were increased by doubling and tripling the current area treated while retaining existing treatment strategies in terms of spatial distribution and...

  15. School Crisis Management: A Model of Dynamic Responsiveness to Crisis Life Cycle

    ERIC Educational Resources Information Center

    Liou, Yi-Hwa

    2015-01-01

    Purpose: This study aims to analyze a school's crisis management and explore emerging aspects of its response to a school crisis. Traditional linear modes of analysis often fail to address complex crisis situations. The present study applied a dynamic crisis life cycle model that draws on chaos and complexity theory to a crisis management case,…

  16. Rethinking Traffic Management: Design of Optimizable Networks

    DTIC Science & Technology

    2008-06-01

Though this paper used optimization theory to design and analyze DaVinci, optimization theory is one of many possible tools to enable a grounded... dynamically allocate bandwidth shares. The distributed protocols can be implemented using DaVinci: Dynamically Adaptive VIrtual Networks for a Customized... Internet. In DaVinci, each virtual network runs traffic-management protocols optimized for a traffic class, and link bandwidth is dynamically allocated

  17. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    NASA Astrophysics Data System (ADS)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of program states that included dynamically allocated memory (to be spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors that are mainly due to the lack of fine-grained protections and the massive use of fault-insensitive data. This dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built using only commercial off-the-shelf hardware components. This dissertation shows that by developing understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that can quickly detect and recover from hardware faults with low performance and hardware overheads.

  18. Connected vehicle data capture and management (DCM) and dynamic mobility applications (DMA) : focused standards coordination plan.

    DOT National Transportation Integrated Search

    2012-11-01

    The Connected Vehicle Mobility Standards Coordination Plan project links activities in three programs (Data Capture and Management, Dynamic Mobility Applications, and ITS Standards). The plan coordinates the timing, intent and relationship of activit...

  19. The dual impact of ecology and management on social incentives in marine common-pool resource systems.

    PubMed

    Klein, E S; Barbier, M R; Watson, J R

    2017-08-01

    Understanding how and when cooperative human behaviour forms in common-pool resource systems is critical to illuminating social-ecological systems and designing governance institutions that promote sustainable resource use. Before assessing the full complexity of social dynamics, it is essential to understand, concretely and mechanistically, how resource dynamics and human actions interact to create incentives and pay-offs for social behaviours. Here, we investigated how such incentives for information sharing are affected by spatial dynamics and management in a common-pool resource system. Using interviews with fishermen to inform an agent-based model, we reveal generic mechanisms through which, for a given ecological setting characterized by the spatial dynamics of the resource, the two 'human factors' of information sharing and management may heterogeneously impact various members of a group for whom theory would otherwise predict the same strategy. When users can deplete the resource, these interactions are further affected by the management approach. Finally, we discuss the implications of alternative motivations, such as equity among fishermen and consistency of the fleet's output. Our results indicate that resource spatial dynamics, form of management and level of depletion can interact to alter the sociality of people in common-pool resource systems, providing necessary insight for future study of strategic decision processes.

  20. Implementation of Advanced Inventory Management Functionality in Automated Dispensing Cabinets

    PubMed Central

    Webb, Aaron; Lund, Jim

    2015-01-01

    Background: Automated dispensing cabinets (ADCs) are an integral component of distribution models in pharmacy departments across the country. There are significant challenges to optimizing ADC inventory management while minimizing use of labor and capital resources. The role of enhanced inventory control functionality is not fully defined. Objective: The aim of this project is to improve ADC inventory management by leveraging dynamic inventory standards and a low inventory alert platform. Methods: Two interventional groups and 1 historical control were included in the study. Each intervention group consisted of 6 ADCs that tested enhanced inventory management functionality. Interventions included dynamic inventory standards and a low inventory alert messaging system. Following separate implementation of each platform, dynamic inventory and low inventory alert systems were applied concurrently to all 12 ADCs. Outcome measures included number and duration of daily stockouts, ADC inventory turns, and number of phone calls related to stockouts received by pharmacy staff. Results: Low inventory alerts reduced both the number and duration of stockouts. Dynamic inventory standards reduced the number of daily stockouts without changing the inventory turns and duration of stockouts. No change was observed in number of calls related to stockouts made to pharmacy staff. Conclusions: Low inventory alerts and dynamic inventory standards are feasible mechanisms to help optimize ADC inventory management while minimizing labor and capital resources. PMID:26448672

  1. Water resources management in a homogenizing world: Averting the Growth and Underinvestment trajectory

    NASA Astrophysics Data System (ADS)

    Mirchi, Ali; Watkins, David W.; Huckins, Casey J.; Madani, Kaveh; Hjorth, Peder

    2014-09-01

    Biotic homogenization, a de facto symptom of a global biodiversity crisis, underscores the urgency of reforming water resources management to focus on the health and viability of ecosystems. Global population and economic growth, coupled with inadequate investment in maintenance of ecological systems, threaten to degrade environmental integrity and ecosystem services that support the global socioeconomic system, indicative of a system governed by the Growth and Underinvestment (G&U) archetype. Water resources management is linked to biotic homogenization and degradation of system integrity through alteration of water systems, ecosystem dynamics, and composition of the biota. Consistent with the G&U archetype, water resources planning primarily treats ecological considerations as exogenous constraints rather than integral, dynamic, and responsive parts of the system. It is essential that the ecological considerations be made objectives of water resources development plans to facilitate the analysis of feedbacks and potential trade-offs between socioeconomic gains and ecological losses. We call for expediting a shift to ecosystem-based management of water resources, which requires a better understanding of the dynamics and links between water resources management actions, ecological side-effects, and associated long-term ramifications for sustainability. To address existing knowledge gaps, models that include dynamics and estimated thresholds for regime shifts or ecosystem degradation need to be developed. Policy levers for implementation of ecosystem-based water resources management include shifting away from growth-oriented supply management, better demand management, increased public awareness, and institutional reform that promotes adaptive and transdisciplinary management approaches.

  2. YOUNG ADULT DATING RELATIONSHIPS AND THE MANAGEMENT OF SEXUAL RISK.

    PubMed

    Manning, Wendy D; Giordano, Peggy C; Longmore, Monica A; Flanigan, Christine M

    2012-04-01

    Young adult involvement in sexual behavior typically occurs within a relationship context, but we know little about the ways in which specific features of romantic relationships influence sexual decision-making. Prior work on sexual risk taking focuses attention on health issues rather than relationship dynamics. We draw on data from the Toledo Adolescent Relationships Study (TARS) (n = 475) to examine the association between qualities and dynamics of current/most recent romantic relationships such as communication and emotional processes, conflict, demographic asymmetries, and duration and the management of sexual risk. We conceptualize 'risk management' as encompassing multiple domains, including (1) questioning the partner about previous sexual behaviors/risks, (2) using condoms consistently, and (3) maintaining sexual exclusivity within the relationship. We identify distinct patterns of risk management among dating young adults and find that specific qualities and dynamics of these relationships are linked to variations in risk management. Results from this paper suggest the need to consider relational dynamics in efforts to target and influence young adult sexual risk-taking and reduce STIs, including HIV.

  3. A proposed concept for a crustal dynamics information management network

    NASA Technical Reports Server (NTRS)

    Lohman, G. M.; Renfrow, J. T.

    1980-01-01

    The findings of a requirements and feasibility analysis of the present and potential producers, users, and repositories of space-derived geodetic information are summarized. A proposed concept is presented for a crustal dynamics information management network that would apply state of the art concepts of information management technology to meet the expanding needs of the producers, users, and archivists of this geodetic information.

  4. Unified sensor management in unknown dynamic clutter

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald; El-Fallah, Adel

    2010-04-01

    In recent years the first author has developed a unified, computationally tractable approach to multisensor-multitarget sensor management. This approach consists of closed-loop recursion of a PHD or CPHD filter with maximization of a "natural" sensor management objective function called PENT (posterior expected number of targets). In this paper we extend this approach so that it can be used in unknown, dynamic clutter backgrounds.

  5. A Dynamic Simulation Model of the Management Accounting Information Systems (MAIS)

    NASA Astrophysics Data System (ADS)

    Konstantopoulos, Nikolaos; Bekiaris, Michail G.; Zounta, Stella

    2007-12-01

    The aim of this paper is to examine the factors which determine the problems and the advantages on the design of management accounting information systems (MAIS). A simulation is carried out with a dynamic model of the MAIS design.

  6. A Method for Dynamic Risk Assessment and Management of Rockbursts in Drill and Blast Tunnels

    NASA Astrophysics Data System (ADS)

    Liu, Guo-Feng; Feng, Xia-Ting; Feng, Guang-Liang; Chen, Bing-Rui; Chen, Dong-Fang; Duan, Shu-Qian

    2016-08-01

Focusing on the problems caused by rockburst hazards in deep tunnels, such as casualties, damage to construction equipment and facilities, construction schedule delays, and project cost increases, this research attempts to present a methodology for dynamic risk assessment and management of rockbursts in drill-and-blast (D&B) tunnels. The basic idea of dynamic risk assessment and management of rockbursts is determined, and methods associated with each step in the rockburst risk assessment and management process are given. The main parts include a microseismic method for early warning of the occurrence probability of rockburst risk, an estimation method that aims to assess the potential consequences of rockburst risk, an evaluation method that utilizes a new quantitative index considering both occurrence probability and consequences for determining the level of rockburst risk, and the dynamic updating of these assessments. Specifically, this research briefly describes the referenced microseismic method for warning of rockbursts, but focuses on the analysis of consequences and the associated risk assessment and management of rockbursts. Using the proposed method, the occurrence probability, potential consequences, and level of rockburst risk can be obtained in real time during tunnel excavation, which contributes to the dynamic optimisation of risk mitigation measures and their application. The applicability of the proposed method has been verified with cases from the Jinping II deep headrace and water drainage tunnels at depths of 1900-2525 m (with a total D&B tunnel length of 11.6 km).
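
    The structure of such a quantitative index can be illustrated in a few lines: risk is the warned occurrence probability multiplied by a weighted consequence score, then binned into levels. The weights, scores, and thresholds below are invented for illustration and are not the paper's calibration.

        # Hypothetical quantitative index: risk = warned probability x consequence,
        # with consequence scored from cost, delay, and casualty exposure.
        def rockburst_risk(p_occurrence, cost_score, delay_score, casualty_score,
                           weights=(0.3, 0.2, 0.5)):
            consequence = sum(w * s for w, s in
                              zip(weights, (cost_score, delay_score, casualty_score)))
            risk = p_occurrence * consequence            # both scaled to [0, 1]
            for level, cut in (("high", 0.5), ("moderate", 0.2), ("low", 0.0)):
                if risk >= cut:
                    return risk, level

        print(rockburst_risk(0.7, cost_score=0.4, delay_score=0.6, casualty_score=0.8))
        # -> (0.448, 'moderate') with these illustrative inputs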

  7. Stream dynamics: An overview for land managers

    Treesearch

    Burchard H. Heede

    1980-01-01

    Concepts of stream dynamics are demonstrated through discussion of processes and process indicators; theory is included only where helpful to explain concepts. Present knowledge allows only qualitative prediction of stream behavior. However, such predictions show how management actions will affect the stream and its environment.

  8. Buffer Management Simulation in ATM Networks

    NASA Technical Reports Server (NTRS)

    Yaprak, E.; Xiao, Y.; Chronopoulos, A.; Chow, E.; Anneberg, L.

    1998-01-01

    This paper presents a simulation of a new dynamic buffer allocation management scheme in ATM networks. To achieve this objective, an algorithm that detects congestion and updates the dynamic buffer allocation scheme was developed for the OPNET simulation package via the creation of a new ATM module.

  9. System Dynamics Modeling of Transboundary Systems: The Bear River Basin Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerald Sehlke; Jake Jacobson

    2005-09-01

System dynamics is a computer-aided approach to evaluating the interrelationships of different components and activities within complex systems. Recently, system dynamics models have been developed in areas such as policy design, biological and medical modeling, energy and environmental analysis, and various other areas in the natural and social sciences. The Idaho National Engineering and Environmental Laboratory, a multi-purpose national laboratory managed by the Department of Energy, has developed a system dynamics model in order to evaluate its utility for modeling large, complex hydrological systems. We modeled the Bear River Basin, a transboundary basin that includes portions of Idaho, Utah and Wyoming. We found that system dynamics modeling is very useful for integrating surface water and groundwater data and for simulating the interactions between these sources within a given basin. In addition, we also found system dynamics modeling is useful for integrating complex hydrologic data with other information (e.g., policy, regulatory and management criteria) to produce a decision support system. Such decision support systems can allow managers and stakeholders to better visualize the key hydrologic elements and management constraints in the basin, which enables them to better understand the system via the simulation of multiple “what-if” scenarios. Although system dynamics models can be developed to conduct traditional hydraulic/hydrologic surface water or groundwater modeling, we believe that their strength lies in their ability to quickly evaluate trends and cause–effect relationships in large-scale hydrological systems; to integrate disparate data; to incorporate output from traditional hydraulic/hydrologic models; and to integrate interdisciplinary data, information and criteria to support better management decisions.

  10. System Dynamics Modeling of Transboundary Systems: the Bear River Basin Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerald Sehlke; Jacob J. Jacobson

    2005-09-01

System dynamics is a computer-aided approach to evaluating the interrelationships of different components and activities within complex systems. Recently, system dynamics models have been developed in areas such as policy design, biological and medical modeling, energy and environmental analysis, and various other areas in the natural and social sciences. The Idaho National Engineering and Environmental Laboratory, a multi-purpose national laboratory managed by the Department of Energy, has developed a system dynamics model in order to evaluate its utility for modeling large, complex hydrological systems. We modeled the Bear River Basin, a transboundary basin that includes portions of Idaho, Utah and Wyoming. We found that system dynamics modeling is very useful for integrating surface water and ground water data and for simulating the interactions between these sources within a given basin. In addition, we also found system dynamics modeling is useful for integrating complex hydrologic data with other information (e.g., policy, regulatory and management criteria) to produce a decision support system. Such decision support systems can allow managers and stakeholders to better visualize the key hydrologic elements and management constraints in the basin, which enables them to better understand the system via the simulation of multiple “what-if” scenarios. Although system dynamics models can be developed to conduct traditional hydraulic/hydrologic surface water or ground water modeling, we believe that their strength lies in their ability to quickly evaluate trends and cause–effect relationships in large-scale hydrological systems; to integrate disparate data; to incorporate output from traditional hydraulic/hydrologic models; and to integrate interdisciplinary data, information and criteria to support better management decisions.

  11. Ranking landscape development scenarios affecting natterjack toad (Bufo calamita) population dynamics in Central Poland.

    PubMed

    Franz, Kamila W; Romanowski, Jerzy; Johst, Karin; Grimm, Volker

    2013-01-01

    When data are limited it is difficult for conservation managers to assess alternative management scenarios and make decisions. The natterjack toad (Bufo calamita) is declining at the edges of its distribution range in Europe and little is known about its current distribution and abundance in Poland. Although different landscape management plans for central Poland exist, it is unclear to what extent they impact this species. Based on these plans, we investigated how four alternative landscape development scenarios would affect the total carrying capacity and population dynamics of the natterjack toad. To facilitate decision-making, we first ranked the scenarios according to their total carrying capacity. We used the software RAMAS GIS to determine the size and location of habitat patches in the landscape. The estimated carrying capacities were very similar for each scenario, and clear ranking was not possible. Only the reforestation scenario showed a marked loss in carrying capacity. We therefore simulated metapopulation dynamics with RAMAS taking into account dynamical processes such as reproduction and dispersal and ranked the scenarios according to the resulting species abundance. In this case, we could clearly rank the development scenarios. We identified road mortality of adults as a key process governing the dynamics and separating the different scenarios. The renaturalisation scenario clearly ranked highest due to its decreased road mortality. Taken together our results suggest that road infrastructure development might be much more important for natterjack toad conservation than changes in the amount of habitat in the semi-natural river valley. We gained these insights by considering both the resulting metapopulation structure and dynamics in the form of a PVA. We conclude that the consideration of dynamic processes in amphibian conservation management may be indispensable for ranking management scenarios.

  12. A system dynamic modeling approach for evaluating municipal solid waste generation, landfill capacity and related cost management issues.

    PubMed

    Kollikkathara, Naushad; Feng, Huan; Yu, Danlin

    2010-11-01

    As planning for sustainable municipal solid waste management has to address several inter-connected issues such as landfill capacity, environmental impacts and financial expenditure, it becomes increasingly necessary to understand the dynamic nature of their interactions. A system dynamics approach designed here attempts to address some of these issues by fitting a model framework for Newark urban region in the US, and running a forecast simulation. The dynamic system developed in this study incorporates the complexity of the waste generation and management process to some extent which is achieved through a combination of simpler sub-processes that are linked together to form a whole. The impact of decision options on the generation of waste in the city, on the remaining landfill capacity of the state, and on the economic cost or benefit actualized by different waste processing options are explored through this approach, providing valuable insights into the urban waste-management process. Copyright © 2010 Elsevier Ltd. All rights reserved.
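
    A stripped-down stock-and-flow version of such a model fits in a few lines; the coefficients below (population, per-capita generation, diversion rate, remaining capacity) are illustrative placeholders, not the Newark calibration.

        # Minimal stock-and-flow sketch: population drives waste generation, which
        # draws down the remaining landfill capacity stock year by year.
        def simulate(years=25, population=280_000, pop_growth=0.01,
                     waste_per_capita=0.9,          # tonnes/person/year
                     diversion_rate=0.25,           # recycled or composted fraction
                     landfill_capacity=4_000_000):  # tonnes remaining
            for year in range(years):
                generated = population * waste_per_capita
                landfilled = generated * (1.0 - diversion_rate)
                landfill_capacity -= landfilled
                if landfill_capacity <= 0:
                    return year                     # capacity exhausted this year
                population *= 1.0 + pop_growth
            return None                             # capacity outlasts the horizon

        print("Years until landfill exhaustion:", simulate())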

  13. An energy-efficient MAC protocol using dynamic queue management for delay-tolerant mobile sensor networks.

    PubMed

    Li, Jie; Li, Qiyue; Qu, Yugui; Zhao, Baohua

    2011-01-01

Conventional MAC protocols for wireless sensor networks perform poorly when faced with a delay-tolerant mobile network environment. Characterized by a highly dynamic and sparse topology, poor network connectivity, and data delay-tolerance, delay-tolerant mobile sensor networks exacerbate the severe power constraints and memory limitations of nodes. This paper proposes an energy-efficient MAC protocol using dynamic queue management (EQ-MAC) for power saving and data queue management. Via data transfers initiated by the target sink and the use of a dynamic queue management strategy based on priority, EQ-MAC effectively avoids untargeted transfers, increases the chance of successful data transmission, and makes useful data reach the target terminal in a timely manner. Experimental results show that EQ-MAC has high energy efficiency in comparison with a conventional MAC protocol. It also achieves a 46% decrease in packet drop probability, 79% increase in system throughput, and 25% decrease in mean packet delay.
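
    A toy version of the priority-based queue management idea is sketched below; the drop-lowest-priority-on-overflow policy and the packet examples are illustrative assumptions rather than the EQ-MAC specification.

        import bisect

        class PriorityDataQueue:
            # Sketch of priority-based queue management: when the buffer is
            # full, the lowest-priority packet is dropped first.
            def __init__(self, capacity):
                self.capacity = capacity
                self.items = []          # kept sorted, lowest priority first

            def enqueue(self, packet, priority):
                bisect.insort(self.items, (priority, packet))
                if len(self.items) > self.capacity:
                    self.items.pop(0)    # buffer full: drop lowest-priority packet

            def dequeue(self):
                # transmit the highest-priority packet first
                return self.items.pop()[1] if self.items else None

        q = PriorityDataQueue(capacity=3)
        for pkt, prio in [("hello", 0), ("temp=21C", 1), ("alarm!", 2), ("hi", 0)]:
            q.enqueue(pkt, prio)
        print(q.dequeue())   # 'alarm!' -- urgent data leaves the node first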

  14. An Energy-Efficient MAC Protocol Using Dynamic Queue Management for Delay-Tolerant Mobile Sensor Networks

    PubMed Central

    Li, Jie; Li, Qiyue; Qu, Yugui; Zhao, Baohua

    2011-01-01

Conventional MAC protocols for wireless sensor networks perform poorly when faced with a delay-tolerant mobile network environment. Characterized by a highly dynamic and sparse topology, poor network connectivity, and data delay-tolerance, delay-tolerant mobile sensor networks exacerbate the severe power constraints and memory limitations of nodes. This paper proposes an energy-efficient MAC protocol using dynamic queue management (EQ-MAC) for power saving and data queue management. Via data transfers initiated by the target sink and the use of a dynamic queue management strategy based on priority, EQ-MAC effectively avoids untargeted transfers, increases the chance of successful data transmission, and makes useful data reach the target terminal in a timely manner. Experimental results show that EQ-MAC has high energy efficiency in comparison with a conventional MAC protocol. It also achieves a 46% decrease in packet drop probability, 79% increase in system throughput, and 25% decrease in mean packet delay. PMID:22319385

  15. Grounding explanations in evolving, diagnostic situations

    NASA Technical Reports Server (NTRS)

    Johannesen, Leila J.; Cook, Richard I.; Woods, David D.

    1994-01-01

Certain fields of practice involve the management and control of complex dynamic systems. These include flight deck operations in commercial aviation, control of space systems, anesthetic management during surgery, and chemical or nuclear process control. Fault diagnosis of these dynamic systems generally must occur with the monitored process on-line and in conjunction with maintaining system integrity. This research seeks to understand in more detail what it means for an intelligent system to function cooperatively, or as a 'team player', in complex, dynamic environments. The approach taken was to study human practitioners engaged in the management of a complex, dynamic process: anesthesiologists during neurosurgical operations. The investigation focused on understanding how team members cooperate in management and fault diagnosis, and on comparing this interaction to the situation with an Artificial Intelligence (AI) system that provides diagnoses and explanations. Of particular concern was to study the ways in which practitioners support one another in keeping aware of relevant information concerning the state of the monitored process and of the problem-solving process.

  16. A system dynamic modeling approach for evaluating municipal solid waste generation, landfill capacity and related cost management issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollikkathara, Naushad, E-mail: naushadkp@gmail.co; Feng Huan; Yu Danlin

    2010-11-15

As planning for sustainable municipal solid waste management has to address several inter-connected issues such as landfill capacity, environmental impacts and financial expenditure, it becomes increasingly necessary to understand the dynamic nature of their interactions. A system dynamics approach designed here attempts to address some of these issues by fitting a model framework for Newark urban region in the US, and running a forecast simulation. The dynamic system developed in this study incorporates the complexity of the waste generation and management process to some extent which is achieved through a combination of simpler sub-processes that are linked together to form a whole. The impact of decision options on the generation of waste in the city, on the remaining landfill capacity of the state, and on the economic cost or benefit actualized by different waste processing options are explored through this approach, providing valuable insights into the urban waste-management process.

  17. A dynamic ocean management tool to reduce bycatch and support sustainable fisheries.

    PubMed

    Hazen, Elliott L; Scales, Kylie L; Maxwell, Sara M; Briscoe, Dana K; Welch, Heather; Bograd, Steven J; Bailey, Helen; Benson, Scott R; Eguchi, Tomo; Dewar, Heidi; Kohin, Suzy; Costa, Daniel P; Crowder, Larry B; Lewison, Rebecca L

    2018-05-01

    Seafood is an essential source of protein for more than 3 billion people worldwide, yet bycatch of threatened species in capture fisheries remains a major impediment to fisheries sustainability. Management measures designed to reduce bycatch often result in significant economic losses and even fisheries closures. Static spatial management approaches can also be rendered ineffective by environmental variability and climate change, as productive habitats shift and introduce new interactions between human activities and protected species. We introduce a new multispecies and dynamic approach that uses daily satellite data to track ocean features and aligns scales of management, species movement, and fisheries. To accomplish this, we create species distribution models for one target species and three bycatch-sensitive species using both satellite telemetry and fisheries observer data. We then integrate species-specific probabilities of occurrence into a single predictive surface, weighing the contribution of each species by management concern. We find that dynamic closures could be 2 to 10 times smaller than existing static closures while still providing adequate protection of endangered nontarget species. Our results highlight the opportunity to implement near real-time management strategies that would both support economically viable fisheries and meet mandated conservation objectives in the face of changing ocean conditions. With recent advances in eco-informatics, dynamic management provides a new climate-ready approach to support sustainable fisheries.
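
    The integration step can be pictured as weighted raster algebra. In the sketch below, random grids stand in for the daily species distribution model outputs, and the species names and weights are hypothetical; the paper's actual weightings reflect the management concern attached to each species.

        import numpy as np

        # Hypothetical daily probability-of-occurrence grids (lat x lon) from
        # species distribution models, all on the same raster.
        rng = np.random.default_rng(1)
        target    = rng.random((50, 50))     # target species
        bycatch_1 = rng.random((50, 50))     # e.g. a sea turtle
        bycatch_2 = rng.random((50, 50))     # e.g. a shark
        bycatch_3 = rng.random((50, 50))     # e.g. a pinniped

        # Weigh each species by management concern; bycatch enters negatively.
        weights = {"target": 1.0, "b1": -2.0, "b2": -1.0, "b3": -1.5}
        surface = (weights["target"] * target + weights["b1"] * bycatch_1 +
                   weights["b2"] * bycatch_2 + weights["b3"] * bycatch_3)

        # Close (mask out) the worst decile of cells for the day.
        closure = surface < np.quantile(surface, 0.10)
        print(f"{closure.mean():.0%} of cells closed today")   # ~10%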

  18. The dynamic model of enterprise revenue management

    NASA Astrophysics Data System (ADS)

    Mitsel, A. A.; Kataev, M. Yu; Kozlov, S. V.; Korepanov, K. V.

    2017-01-01

The article presents a dynamic model of enterprise revenue management. The model is based on a quadratic criterion and a linear control law, and is founded on multiple regression that links revenues with the financial performance of the enterprise. As a result, optimal management is obtained that provides the given enterprise revenue; namely, the values of the financial indicators that ensure the planned profit of the organization are determined.

  19. DYNAMIC MANAGEMENT EDUCATION, AN INTRODUCTION TO THE SELECTION, CREATION AND USE OF CASES, IN-BASKET EXERCISES, THE ACTION MAZE, BUSINESS GAMES AND OTHER DYNAMIC TECHNIQUES.

    ERIC Educational Resources Information Center

    ZOLL, ALLEN A., III

Management teachers in business, government, or colleges can be more creative in their teaching methods by thinking about educational methods, creating materials better suited to educational purposes, and experimenting in the classroom with the goal of making education more exciting. Management educators at the Boeing Company found that the key to…

  20. Decision Support Model for Municipal Solid Waste Management at Department of Defense Installations.

    DTIC Science & Technology

    1995-12-01

Huang uses "Grey Dynamic Programming for Waste Management Planning Under Uncertainty." Fuzzy Dynamic Programming (FDP) is usually designed to...and Composting Programs. Washington: Island Press, 1991. Junio, D.F. Development of an Analytical Hierarchy Process (AHP) Model for Siting of

  1. Application of the Rangeland Hydrology and Erosion Model to Ecological Site Descriptions and Management

    USDA-ARS?s Scientific Manuscript database

    The utility of Ecological Site Descriptions (ESDs) and State-and-Transition Models (STMs) concepts in guiding rangeland management hinges on their ability to accurately describe and predict community dynamics and the associated consequences. For many rangeland ecosystems, plant community dynamics ar...

  2. Connected vehicle Data Capture and Management (DCM) and dynamic mobility applications (DMA) : assessment of relevant standards and gaps for candidate applications.

    DOT National Transportation Integrated Search

    2012-10-01

    The Connected Vehicle Mobility Standards Coordination Plan project links activities in three programs (Data Capture and Management, Dynamic Mobility Applications, and ITS Standards). The plan coordinates the timing, intent and relationship of activit...

  3. Spatial Patterns in Alternative States and Thresholds: A Missing Link for Management of Landscapes?

    USDA-ARS's Scientific Manuscript database

    The detection of threshold dynamics (and other dynamics of interest) would benefit from explicit representations of spatial patterns of disturbance, spatial dependence in responses to disturbance, and the spatial structure of feedbacks in the design of monitoring and management strategies. Spatially...

  4. Climate effects and feedback structure determining weed population dynamics in a long-term experiment.

    PubMed

    Lima, Mauricio; Navarrete, Luis; González-Andujar, José Luis

    2012-01-01

    Pest control is one of the areas in which population dynamic theory has been successfully applied to solve practical problems. However, the links between population dynamic theory and model construction have been less emphasized in the management and control of weed populations. Most management models of weed population dynamics have emphasized the role of endogenous processes, but the role of exogenous variables such as climate has been ignored in the study of weed populations and their management. Here, we use long-term data (22 years) on two annual weed species from a locality in Central Spain to determine the importance of endogenous and exogenous processes (local and large-scale climate factors). Our modeling study determined two different feedback structures and climate effects in the two weed species analyzed. While Descurainia sophia exhibited a second-order feedback and low climate influence, Veronica hederifolia was characterized by a first-order feedback structure and important effects from temperature and rainfall. Our results strongly suggest the importance of theoretical population dynamics in understanding plant population systems. Moreover, the use of this approach, discerning between the effects of exogenous and endogenous factors, can be fundamental to applying weed management practices in agricultural systems and to controlling invasive weedy species. This is a radical change from most approaches currently used to guide weed and invasive weedy species management.
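
    For readers unfamiliar with the terminology, first- versus second-order feedback refers to whether the per-capita growth rate depends only on last year's (log) density or also on the density two years back. A toy log-linear sketch, with invented coefficients rather than the paper's fitted values:

        def step_first_order(x, a0=0.5, a1=-0.3, climate=0.0):
            # Veronica-like: R_t = a0 + a1 * X_{t-1} + climate effect
            return x + a0 + a1 * x + climate

        def step_second_order(x, x_prev, a0=0.5, a1=-0.1, a2=-0.4):
            # Descurainia-like: R_t depends on X_{t-1} and X_{t-2}
            return x + a0 + a1 * x + a2 * x_prev

        x = x_prev = 0.0
        for t in range(30):                 # iterate the delayed feedback
            x, x_prev = step_second_order(x, x_prev), x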

  5. Design an optimum safety policy for personnel safety management - A system dynamic approach

    NASA Astrophysics Data System (ADS)

    Balaji, P.

    2014-10-01

    Personnel safety management (PSM) ensures that employees' work conditions are healthy and safe through various proactive and reactive approaches. It is now a complex phenomenon because of the increasingly dynamic nature of organisations, which results in an increase in accidents. An important part of accident prevention is to understand the existing system properly and to devise safety strategies for that system. System dynamics modelling appears to be an appropriate methodology for exploring and forming strategy for PSM. Many system dynamics models of industrial systems have been built entirely for specific host firms. This thesis illustrates an alternative approach: a generic system dynamics model of personnel safety management was developed and tested in a host firm, where it underwent various structural, behavioural and policy tests. The utility and effectiveness of the model were further explored by modelling a safety scenario. To create an effective safety policy under resource constraints, design of experiments (DOE) was used. The DOE employed classic designs, namely fractional factorials and central composite designs, to fit a second-order regression equation that serves as an objective function. That function was optimized under a budget constraint, and the optimum was used for the safety policy that showed the greatest improvement in overall PSM. The outcome of this research indicates that the personnel safety management model can act both as an instruction tool to improve understanding of safety management and as an aid to policy making.
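
    The optimization step the abstract describes (a fitted second-order regression maximized under a budget constraint) can be sketched as below; the response-surface coefficients, the two policy levers and the unit budget are all assumptions for illustration.

        import numpy as np
        from scipy.optimize import minimize

        def safety(x):
            # assumed quadratic response surface from the designed experiment;
            # x = (training spend, equipment spend)
            b0, b = 1.0, np.array([0.8, 0.6])
            B = np.array([[-0.5, 0.1],
                          [0.1, -0.4]])     # curvature terms (concave surface)
            return b0 + b @ x + x @ B @ x

        res = minimize(lambda x: -safety(x), x0=[0.5, 0.5],
                       bounds=[(0, 1), (0, 1)],
                       constraints=[{"type": "ineq",
                                     "fun": lambda x: 1.0 - x.sum()}])  # budget
        print(res.x)  # optimal split of the safety budget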

  6. Climate Effects and Feedback Structure Determining Weed Population Dynamics in a Long-Term Experiment

    PubMed Central

    Lima, Mauricio; Navarrete, Luis; González-Andujar, José Luis

    2012-01-01

    Pest control is one of the areas in which population dynamic theory has been successfully applied to solve practical problems. However, the links between population dynamic theory and model construction have been less emphasized in the management and control of weed populations. Most management models of weed population dynamics have emphasized the role of endogenous processes, but the role of exogenous variables such as climate has been ignored in the study of weed populations and their management. Here, we use long-term data (22 years) on two annual weed species from a locality in Central Spain to determine the importance of endogenous and exogenous processes (local and large-scale climate factors). Our modeling study determined two different feedback structures and climate effects in the two weed species analyzed. While Descurainia sophia exhibited a second-order feedback and low climate influence, Veronica hederifolia was characterized by a first-order feedback structure and important effects from temperature and rainfall. Our results strongly suggest the importance of theoretical population dynamics in understanding plant population systems. Moreover, the use of this approach, discerning between the effects of exogenous and endogenous factors, can be fundamental to applying weed management practices in agricultural systems and to controlling invasive weedy species. This is a radical change from most approaches currently used to guide weed and invasive weedy species management. PMID:22272362

  7. Measuring, managing and maximizing refinery performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bascur, O.A.; Kennedy, J.P.

    1996-01-01

    Implementing continuous quality improvement is a confluence of total quality management, people empowerment, performance indicators and information engineering. Supporting information technologies allow a refiner to narrow the gap between management objectives and the process control level. Dynamic performance monitoring benefits come from production cost savings, improved communications and enhanced decision making. A refinery workgroup information flow model helps automate continuous improvement of processes, performance and the organization. The paper discusses the rethinking of refinery operations, dynamic performance monitoring, continuous process improvement, the knowledge coordinator and repository manager, an integrated plant operations workflow, and successful implementation.

  8. Development of working hypotheses linking management of the Missouri River to population dynamics of Scaphirhynchus albus (pallid sturgeon)

    USGS Publications Warehouse

    Jacobson, Robert B.; Parsley, Michael J.; Annis, Mandy L.; Colvin, Michael E.; Welker, Timothy L.; James, Daniel A.

    2016-01-20

    The initial set of candidate hypotheses provides a useful starting point for quantitative modeling and adaptive management of the river and species. We anticipate that the set of working management hypotheses will change as adaptive management progresses. Importantly, hypotheses that have been filtered out of our multistep process are not discarded: they are archived, and if the existing hypotheses are determined to be inadequate to explain observed population dynamics, new hypotheses can be created or filtered hypotheses can be reinstated.

  9. DYNAMIC ELECTRICITY GENERATION FOR ADDRESSING DAILY AIR QUALITY EXCEEDANCES IN THE US

    EPA Science Inventory

    We will design, demonstrate, and evaluate a dynamic management system for managing daily air quality, exploring different elements of the design of this system such as how air quality forecasts can best be used, and decision rules for the electrical dispatch model. We will ...

  10. Strategic planning: health plan perspective.

    PubMed

    Mills, P S

    1990-01-01

    The managed care industry is one of the most dynamic industries in the health care business. The development of new products, formation of alliances, changes in legislation and other types of changes are regular occurrences. This kind of dynamic environment makes it more important than ever to use strategic planning to guide management decisions.

  11. The Immuno-Dynamics of Conflict Intervention in Social Systems

    PubMed Central

    Krakauer, David C.; Page, Karen; Flack, Jessica

    2011-01-01

    We present statistical evidence and dynamical models for the management of conflict and a division of labor (task specialization) in a primate society. Two broad classes of intervention strategy are observed: a dyadic strategy (pacifying interventions) and a triadic strategy (policing interventions). These strategies, their respective degrees of specialization, and their consequences for conflict dynamics can be captured through empirically-grounded mathematical models inspired by immuno-dynamics. The spread of aggression, analogous to the proliferation of pathogens, is an epidemiological problem. We show analytically and computationally that policing is an efficient strategy as it requires only a small proportion of a population to police to reduce conflict contagion. Policing, but not pacifying, is capable of effectively eliminating conflict. These results suggest that despite implementation differences there might be universal features of conflict management mechanisms for reducing contagion-like dynamics that apply across biological and social levels. Our analyses further suggest that it can be profitable to conceive of conflict management strategies at the behavioral level as mechanisms of social immunity. PMID:21887221
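
    The epidemiological analogy can be made concrete with a toy contagion model in which a policing fraction raises the rate at which active conflicts are resolved; all parameters here are invented, not the paper's fitted immuno-dynamic model.

        def simulate(p_police, beta=0.3, gamma=0.1, k=5.0, steps=500):
            i = 0.01                               # fraction of pairs in conflict
            for _ in range(steps):
                resolution = gamma + k * p_police  # policing speeds resolution
                i += beta * i * (1 - i) - resolution * i
                i = max(i, 0.0)
            return i

        print(simulate(0.00))  # conflict remains endemic
        print(simulate(0.05))  # a small policing fraction extinguishes it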

  12. The immuno-dynamics of conflict intervention in social systems.

    PubMed

    Krakauer, David C; Page, Karen; Flack, Jessica

    2011-01-01

    We present statistical evidence and dynamical models for the management of conflict and a division of labor (task specialization) in a primate society. Two broad classes of intervention strategy are observed: a dyadic strategy (pacifying interventions) and a triadic strategy (policing interventions). These strategies, their respective degrees of specialization, and their consequences for conflict dynamics can be captured through empirically-grounded mathematical models inspired by immuno-dynamics. The spread of aggression, analogous to the proliferation of pathogens, is an epidemiological problem. We show analytically and computationally that policing is an efficient strategy as it requires only a small proportion of a population to police to reduce conflict contagion. Policing, but not pacifying, is capable of effectively eliminating conflict. These results suggest that despite implementation differences there might be universal features of conflict management mechanisms for reducing contagion-like dynamics that apply across biological and social levels. Our analyses further suggest that it can be profitable to conceive of conflict management strategies at the behavioral level as mechanisms of social immunity.

  13. A distributed scheme to manage the dynamic coexistence of IEEE 802.15.4-based health-monitoring WBANs.

    PubMed

    Deylami, Mohammad N; Jovanov, Emil

    2014-01-01

    The overlap of transmission ranges between wireless networks as a result of mobility is referred to as dynamic coexistence. The interference caused by coexistence may significantly affect the performance of wireless body area networks (WBANs) where reliability is particularly critical for health monitoring applications. In this paper, we analytically study the effects of dynamic coexistence on the operation of IEEE 802.15.4-based health monitoring WBANs. The current IEEE 802.15.4 standard lacks mechanisms for effectively managing the coexistence of mobile WBANs. Considering the specific characteristics and requirements of health monitoring WBANs, we propose the dynamic coexistence management (DCM) mechanism to make IEEE 802.15.4-based WBANs able to detect and mitigate the harmful effects of coexistence. We assess the effectiveness of this scheme using extensive OPNET simulations. Our results indicate that DCM improves the successful transmission rates of dynamically coexisting WBANs by 20%-25% for typical medical monitoring applications.

  14. Software-Engineering Process Simulation (SEPS) model

    NASA Technical Reports Server (NTRS)

    Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.

    1992-01-01

    The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life cycle development activities and management decision making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and perform postmortem assessments.

  15. Dynamic Resource Allocation in Disaster Response: Tradeoffs in Wildfire Suppression

    DTIC Science & Technology

    2012-04-13

    S, Martínez-Falero E, Pérez-González JM (2002) Optimization of the resources management in fighting wildfires. Environmental Management 30: 352...Dynamic Resource Allocation in Disaster Response: Tradeoffs in Wildfire Suppression. Nada Petrovic, David L. Alderson, Jean M. Carlson, Center for...inspire fundamentally new theoretical questions for dynamic decision making in coupled human and natural systems. Wildfires are one of several types of

  16. A system management methodology for building successful resource management systems

    NASA Technical Reports Server (NTRS)

    Hornstein, Rhoda Shaller; Willoughby, John K.

    1989-01-01

    This paper presents a system management methodology for building successful resource management systems that possess lifecycle effectiveness. This methodology is based on an analysis of the traditional practice of Systems Engineering Management as it applies to the development of resource management systems. The analysis produced fifteen significant findings presented as recommended adaptations to the traditional practice of Systems Engineering Management to accommodate system development when the requirements are incomplete, unquantifiable, ambiguous and dynamic. Ten recommended adaptations to achieve operational effectiveness when requirements are incomplete, unquantifiable or ambiguous are presented and discussed. Five recommended adaptations to achieve system extensibility when requirements are dynamic are also presented and discussed. The authors conclude that the recommended adaptations to the traditional practice of Systems Engineering Management should be implemented for future resource management systems and that the technology exists to build these systems extensibly.

  17. Design of a system based on DSP and FPGA for video recording and replaying

    NASA Astrophysics Data System (ADS)

    Kang, Yan; Wang, Heng

    2013-08-01

    This paper presents a video recording and replaying system built around a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA). The system performs encoding, recording, decoding and replaying of Video Graphics Array (VGA) signals displayed on a monitor during aircraft and ship navigation. In this architecture, the DSP is the main processor, handling the large amount of complicated calculation required for digital signal processing, while the FPGA is a coprocessor that preprocesses the video signals and implements logic control in the system. In the hardware design, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) is used to implement a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM) and the First-In-First-Out (FIFO) buffer. This transfer mode avoids a data-transfer bottleneck and simplifies the circuitry between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the interface to an Integrated Drive Electronics (IDE) hard disk, which provides high-speed data access without relying on a computer. The main functions of the FPGA logic are described, and screenshots of the behavioral simulation are provided. In the DSP software design, Enhanced Direct Memory Access (EDMA) channels transfer data between the FIFO and the SDRAM without CPU intervention, preserving the CPU's computing throughput. JPEG2000 is implemented to obtain high fidelity in video recording and replaying, and ways of achieving high-performance code are briefly presented. The data-processing capability of the system is satisfactory, and the smoothness of the replayed video is acceptable. Owing to its design flexibility and reliable operation, this DSP- and FPGA-based video recording and replaying system has considerable prospects in after-the-event analysis, simulation exercises and related applications.

  18. The dynamics of software development project management: An integrative systems dynamic perspective

    NASA Technical Reports Server (NTRS)

    Vandervelde, W. E.; Abdel-Hamid, T.

    1984-01-01

    Rather than continuing to focus on software development projects per se, the system dynamics modeling approach outlined is extended to investigate a broader set of issues pertaining to the software development organization. Rather than trace the life cycle(s) of one or more software projects, the focus is on the operations of a software development department as a continuous stream of software products is developed, placed into operation, and maintained. A number of research questions are "ripe" for investigation, including: (1) the efficacy of different organizational structures in different software development environments, (2) personnel turnover, (3) the impact of management approaches such as management by objectives, and (4) the organizational/environmental determinants of productivity.

  19. Dynamical stabilization of grazing systems: An interplay among plant-water interaction, overgrazing and a threshold management policy.

    PubMed

    Costa, Michel Iskin da Silveira; Meza, Magno Enrique Mendoza

    2006-12-01

    In a plant-herbivore system, a management strategy called a threshold policy is proposed to control grazing intensity, where the vegetation dynamics is described by a plant-water interaction model. It is shown that this policy can lead the vegetation density to a previously chosen level under an overgrazing regime. This result is obtained despite both the potential occurrence of vegetation collapse due to overgrazing and the possibility of complex dynamics sensitive to initial vegetation densities and parameter uncertainties.
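
    A minimal sketch of such a threshold policy on a logistic stand-in for the plant-water model (growth rate, grazing pressure and threshold all invented): grazing is switched off whenever vegetation density falls to the threshold, which then becomes the stabilized level.

        def step(v, dt=0.01, r=1.0, K=1.0, grazing=0.6, threshold=0.4):
            harvest = grazing * v if v > threshold else 0.0  # threshold policy
            return v + dt * (r * v * (1 - v / K) - harvest)

        v = 0.9
        for _ in range(5000):
            v = step(v)
        print(v)  # settles near the chosen threshold despite overgrazing pressure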

  20. Optimizing the Betts-Miller-Janjic cumulus parameterization with Intel Many Integrated Core (MIC) architecture

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.-L.

    2015-10-01

    The schemes of cumulus parameterization are responsible for the sub-grid-scale effects of convective and/or shallow clouds, and are intended to represent vertical fluxes due to unresolved updrafts and downdrafts and compensating motion outside the clouds. Some schemes additionally provide cloud and precipitation field tendencies in the convective column, and momentum tendencies due to convective transport of momentum. All the schemes provide the convective component of surface rainfall. Betts-Miller-Janjic (BMJ) is one scheme that fulfills these purposes in the Weather Research and Forecasting (WRF) model. The National Centers for Environmental Prediction (NCEP) has worked to optimize the BMJ scheme for operational application. Because there are no interactions among horizontal grid points, the scheme is well suited to parallel computation. The Intel Xeon Phi Many Integrated Core (MIC) architecture, with its efficient parallelization and vectorization support, allows us to optimize the BMJ scheme. Compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, the MIC-based optimization of this scheme running on a Xeon Phi 7120P coprocessor improves performance by 2.4x and 17.0x, respectively.
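
    Because the columns are independent, the parallel structure can be mimicked in a few lines; this is a hedged sketch only (the operational code is Fortran with OpenMP and vectorization on the Phi, and adjust_column below is a stand-in, not the BMJ adjustment).

        import numpy as np
        from multiprocessing import Pool

        def adjust_column(column):
            # stand-in for the convective adjustment of one vertical column
            return column - 0.1 * (column - column.mean())

        if __name__ == "__main__":
            grid = np.random.rand(1024, 64)   # (horizontal points, levels)
            with Pool() as pool:              # no horizontal coupling, so each
                out = np.array(pool.map(adjust_column, grid))  # column maps freely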

  1. High-speed assembly language (80386/80387) programming for laser spectra scan control and data acquisition providing improved resolution water vapor spectroscopy

    NASA Technical Reports Server (NTRS)

    Allen, Robert J.

    1988-01-01

    An assembly language program using the Intel 80386 CPU and 80387 math co-processor chips was written to increase the speed of data gathering and processing, and to provide control of a scanning CW ring dye laser system. This laser system is used in high-resolution (better than 0.001 cm-1) water vapor spectroscopy experiments. Laser beam power is sensed at the input and output of White cells and at the output of a Fabry-Perot. The assembly language subroutine is called from Basic; it acquires the data and performs various calculations at rates more than 150 times faster than the higher-level language. The widths of output control pulses generated in assembly language are 3 to 4 microseconds, compared to 2 to 3.7 milliseconds for those generated in Basic (about 500 to 1000 times faster). Included are a block diagram and brief description of the spectroscopy experiment, a flow diagram of the Basic and assembly language programs, listings of the programs, scope photographs of the computer-generated 5-volt pulses used for control and timing analysis, and representative water spectrum curves obtained using these programs.

  2. An ultra-compact processor module based on the R3000

    NASA Astrophysics Data System (ADS)

    Mullenhoff, D. J.; Kaschmitter, J. L.; Lyke, J. C.; Forman, G. A.

    1992-08-01

    Viable high-density packaging is of critical importance for future military systems, particularly spaceborne systems, which require minimum weight and size and high mechanical integrity. A leading emerging technology for high-density packaging is the multi-chip module (MCM). During the 1980s, a number of different MCM technologies emerged. In support of Strategic Defense Initiative Organization (SDIO) programs, Lawrence Livermore National Laboratory (LLNL) has developed, utilized, and evaluated several different MCM technologies. Prior LLNL efforts include modules developed in 1986, using hybrid wafer scale packaging, which are still operational in an Air Force satellite mission. More recent efforts have included very high density cache memory modules developed using laser pantography. As part of the demonstration effort, LLNL and Phillips Laboratory began collaborating in 1990 on the Phase 3 Multi-Chip Module (MCM) technology demonstration project. The goal of this program was to demonstrate the feasibility of General Electric's (GE) High Density Interconnect (HDI) MCM technology. The design chosen for this demonstration was the processor core for a MIPS R3000-based reduced instruction set computer (RISC), which has been described previously. It consists of the R3000 microprocessor, the R3010 floating point coprocessor and 128 Kbytes of cache memory.

  3. Three-dimensional object recognition based on planar images

    NASA Astrophysics Data System (ADS)

    Mital, Dinesh P.; Teoh, Eam-Khwang; Au, K. C.; Chng, E. K.

    1993-01-01

    This paper presents the development and realization of a robotic vision system for the recognition of 3-dimensional (3-D) objects. The system can recognize a single object from among a group of known regular convex polyhedral objects constrained to lie on a calibrated flat platform. The approach adopted comprises a series of image processing operations on a single 2-dimensional (2-D) intensity image to derive an image line drawing. Subsequently, a feature matching technique is employed to determine 2-D spatial correspondences of the image line drawing with the model in the database. Besides its identification ability, the system can also provide important position and orientation information about the recognized object. The system was implemented on an IBM PC AT machine executing at 8 MHz without the 80287 Maths Co-processor. In an overall performance evaluation based on a test of 600 recognition cycles, the system demonstrated an accuracy above 80%, with recognition time well within 10 seconds. The recognition time is, however, indirectly dependent on the number of models in the database. The reliability of the system is also affected by illumination conditions, which must be clinically controlled as in any industrial robotic vision system.

  4. System-on-chip architecture and validation for real-time transceiver optimization: APC implementation on FPGA

    NASA Astrophysics Data System (ADS)

    Suarez, Hernan; Zhang, Yan R.

    2015-05-01

    New radar applications need to perform complex algorithms and process large quantities of data to generate useful information for users. This situation has motivated the search for better processing solutions that include low-power, high-performance processors, efficient algorithms, and high-speed interfaces. In this work, hardware implementations of adaptive pulse compression for real-time transceiver optimization are presented, based on a System-on-Chip architecture for Xilinx devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential for solving the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through high-performance AXI buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operation using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.
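
    The covariance-matrix kernel such coprocessors accelerate reduces to dense linear algebra of the following shape; this is a hedged numpy sketch with invented data, not the paper's adaptive pulse compression algorithm itself.

        import numpy as np

        rng = np.random.default_rng(1)
        snap = rng.standard_normal((64, 100)) + 1j * rng.standard_normal((64, 100))
        R = snap @ snap.conj().T / snap.shape[1]  # sample covariance (matrix multiply)
        s = np.ones(64, dtype=complex)            # assumed waveform/steering vector
        w = np.linalg.solve(R, s)                 # matrix-inversion step: w = R^-1 s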

  5. BLAS- BASIC LINEAR ALGEBRA SUBPROGRAMS

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.

    1994-01-01

    The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN callable routines for employing standard techniques in performing the basic operations of numerical linear algebra. The BLAS library was developed to provide a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The subprograms available in the library cover the operations of dot product, multiplication of a scalar and a vector, vector plus a scalar times a vector, Givens transformation, modified Givens transformation, copy, swap, Euclidean norm, sum of magnitudes, and location of the largest magnitude element. Since these subprograms are to be used in an ANSI FORTRAN context, the cases of single precision, double precision, and complex data are provided for. All of the subprograms have been thoroughly tested and produce consistent results even when transported from machine to machine. BLAS contains Assembler versions and FORTRAN test code for any of the following compilers: Lahey F77L, Microsoft FORTRAN, or IBM Professional FORTRAN. It requires the Microsoft Macro Assembler and a math co-processor. The PC implementation allows individual arrays of over 64K. The BLAS library was developed in 1979. The PC version was made available in 1986 and updated in 1988.
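
    The same calling-sequence flavor survives in modern BLAS bindings; for instance, a few of the listed operations can be exercised through SciPy's wrappers (a convenient stand-in here, not the 1980s PC library itself):

        import numpy as np
        from scipy.linalg import blas

        x = np.array([1.0, -4.0, 2.0])
        y = np.array([0.5, 1.0, -1.0])
        print(blas.ddot(x, y))           # dot product
        print(blas.daxpy(x, y, a=2.0))   # y := a*x + y
        print(blas.dnrm2(x))             # Euclidean norm
        print(blas.idamax(x))            # index of the largest-magnitude element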

  6. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - AMS Testbed Detailed Requirements

    DOT National Transportation Integrated Search

    2016-04-20

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  7. Global sensitivity and uncertainty analysis of the nitrate leaching and crop yield simulation under different water and nitrogen management practices

    USDA-ARS's Scientific Manuscript database

    Agricultural system models have become important tools in studying water and nitrogen (N) dynamics, as well as crop growth, under different management practices. Complexity in input parameters often leads to significant uncertainty when simulating dynamic processes such as nitrate leaching or crop y...

  8. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs - Chicago testbed analysis plan.

    DOT National Transportation Integrated Search

    2016-10-01

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  9. Stand dynamics in 60-year-old Allegheny hardwoods after thinning

    Treesearch

    Gary W. Miller

    1997-01-01

    Stand dynamics and tree growth in even-aged hardwood stands can be influenced by manipulating relative stand density, species composition, and stand structure. Land managers need quantitative information on the effect of vegetation manipulation to prescribe stand treatments that are appropriate for specific management objectives. Sixty-year-old stands composed of black...

  10. 77 FR 44684 - General Dynamics Itronix Corporation; A Subsidiary of General Dynamics Corporation, Including...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-30

    ... program management services for rugged laptop computers and rugged mobile devices. The worker group... of program management services for rugged laptop computers and rugged mobile devices, meet the worker... threatened to become totally or partially separated. Section 222(a)(2)(A)(i) has been met because the sales...

  11. Ponderosa pine forest structure and northern goshawk reproduction: Response to Beier et al

    Treesearch

    Richard T. Reynolds; Douglas A. Boyce; Russell T. Graham

    2012-01-01

    Ecosystem-based forest management requires long planning horizons to incorporate forest dynamics - changes resulting from vegetation growth and succession and the periodic resetting of these by natural and anthropogenic disturbances such as fire, wind, insects, and timber harvests. Given these dynamics, ecosystem-based forest management plans should specify desired...

  12. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs - evaluation plan : draft report.

    DOT National Transportation Integrated Search

    2016-07-13

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  13. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs — evaluation summary for DMA program.

    DOT National Transportation Integrated Search

    2017-07-04

    The primary objective of this project is to develop multiple simulation testbeds/transportation models to evaluate the impacts of Dynamic Mobility Application (DMA) connected vehicle applications and Active Transportation and Demand management (ATDM)...

  14. Assessing the dynamics of the upper soil layer relative to soil management practices

    USDA-ARS's Scientific Manuscript database

    The upper layer of the soil is the critical interface between the soil and the atmosphere and is the most dynamic in response to management practices. One of the soil properties is the stability of the aggregates because this property controls infiltration of water and exchange of gases. An aggregat...

  15. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs - San Diego calibration report.

    DOT National Transportation Integrated Search

    2016-10-01

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  16. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs : summary report for the Chicago testbed.

    DOT National Transportation Integrated Search

    2017-04-01

    The primary objective of this project is to develop multiple simulation testbeds and transportation models to evaluate the impacts of Connected Vehicle Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) strateg...

  17. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs : Evaluation Report for the Chicago Testbed

    DOT National Transportation Integrated Search

    2017-04-01

    The primary objective of this project is to develop multiple simulation testbeds and transportation models to evaluate the impacts of Connected Vehicle Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) strateg...

  18. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs — Chicago calibration report.

    DOT National Transportation Integrated Search

    2016-10-01

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  19. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs - Pasadena testbed analysis plan : final report.

    DOT National Transportation Integrated Search

    2016-06-30

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  20. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs — evaluation report for ATDM program.

    DOT National Transportation Integrated Search

    2017-07-16

    The primary objective of this project is to develop multiple simulation testbeds/transportation models to evaluate the impacts of Dynamic Mobility Applications (DMA) and the Active Transportation and Demand Management (ATDM) strategies. Specifically,...

  1. Learning to Manage Intergroup Dynamics in Changing Task Environments: An Experiential Exercise

    ERIC Educational Resources Information Center

    Hunsaker, Phillip L.

    2004-01-01

    This article describes an exercise that allows participants to experience the challenges of managing intergroup behavior as an organization's task environment grows and becomes more complex. The article begins with a brief review of models and concepts relating to intergroup dynamics, intergroup conflict, and interventions for effectively managing…

  2. Debriefing Can Reduce Misperceptions of Feedback: The Case of Renewable Resource Management

    ERIC Educational Resources Information Center

    Qudrat-Ullah, Hassan

    2007-01-01

    According to the hypothesis of misperception of feedback, people's poor performance in renewable resource management tasks can be attributed to their general tendency to systematically misperceive the dynamics of bioeconomic systems. The thesis of this article is that dynamic decision performance can be improved by helping individuals develop more…

  3. Incentives and Their Dynamics in Public Sector Performance Management Systems

    ERIC Educational Resources Information Center

    Heinrich, Carolyn J.; Marschke, Gerald

    2010-01-01

    We use the principal-agent model as a focal theoretical frame for synthesizing what we know, both theoretically and empirically, about the design and dynamics of the implementation of performance management systems in the public sector. In this context, we review the growing body of evidence about how performance measurement and incentive systems…

  4. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs - San Diego testbed analysis plan.

    DOT National Transportation Integrated Search

    2016-10-01

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  5. UAB UTC domain 2 : development of a dynamic traffic assignment and simulation model for incident and emergency management applications in the Birmingham Region (Aim 1).

    DOT National Transportation Integrated Search

    2010-12-01

    A number of initiatives were undertaken to support education, training, and technology transfer objectives related to UAB UTC Domain 2 Project: Development of a Dynamic Traffic Assignment and Simulation Model for Incident and Emergency Management App...

  6. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - AMS Testbed Selection Criteria

    DOT National Transportation Integrated Search

    2016-06-16

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  7. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs : Dallas testbed analysis plan.

    DOT National Transportation Integrated Search

    2016-06-16

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (mo...

  8. Conducting Qualitative Data Analysis: Managing Dynamic Tensions within

    ERIC Educational Resources Information Center

    Chenail, Ronald J.

    2012-01-01

    In the third of a series of "how-to" essays on conducting qualitative data analysis, Ron Chenail examines the dynamic tensions within the process of qualitative data analysis that qualitative researchers must manage in order to produce credible and creative results. These tensions include (a) the qualities of the data and the qualitative data…

  9. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs — evaluation report for DMA program.

    DOT National Transportation Integrated Search

    2017-02-02

    The primary objective of this project is to develop multiple simulation testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  10. YOUNG ADULT DATING RELATIONSHIPS AND THE MANAGEMENT OF SEXUAL RISK

    PubMed Central

    Manning, Wendy D.; Giordano, Peggy C.; Longmore, Monica A.; Flanigan, Christine M.

    2012-01-01

    Young adult involvement in sexual behavior typically occurs within a relationship context, but we know little about the ways in which specific features of romantic relationships influence sexual decision-making. Prior work on sexual risk taking focuses attention on health issues rather than relationship dynamics. We draw on data from the Toledo Adolescent Relationships Study (TARS) (n = 475) to examine the association between the qualities and dynamics of current/most recent romantic relationships (such as communication and emotional processes, conflict, demographic asymmetries, and duration) and the management of sexual risk. We conceptualize 'risk management' as encompassing multiple domains, including (1) questioning the partner about previous sexual behaviors/risks, (2) using condoms consistently, and (3) maintaining sexual exclusivity within the relationship. We identify distinct patterns of risk management among dating young adults and find that specific qualities and dynamics of these relationships are linked to variations in risk management. Results from this paper suggest the need to consider relational dynamics in efforts to target and influence young adult sexual risk-taking and reduce STIs, including HIV. PMID:23805015

  11. Assessing the role of informal sector in WEEE management systems: A System Dynamics approach.

    PubMed

    Ardi, Romadhani; Leisten, Rainer

    2016-11-01

    Generally being ignored by academia and regulators, the informal sector plays important roles in Waste Electrical and Electronic Equipment (WEEE) management systems, especially in developing countries. This study aims: (1) to capture and model the variety of informal operations in WEEE management systems, (2) to capture the dynamics existing within the informal sector, and (3) to assess the role of the informal sector as the key player in the WEEE management systems, influencing both its future operations and its counterpart, the formal sector. By using System Dynamics as the methodology and India as the reference system, this study is able to explain the reasons behind, on the one hand, the superiority of the informal sector in WEEE management systems and, on the other hand, the failure of the formal systems. Additionally, this study reveals the important role of the second-hand market as the determinant of the rise and fall of the informal sector in the future. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Integrated wetland management: an analysis with group model building based on system dynamics model.

    PubMed

    Chen, Hsin; Chang, Yang-Chi; Chen, Kung-Chen

    2014-12-15

    The wetland system possesses diverse functions such as preserving water sources, mediating flooding, providing habitats for wildlife and stabilizing coastlines. Nonetheless, rapid economic growth and the increasing population have significantly deteriorated the wetland environment. To secure the sustainability of the wetland, it is essential to introduce integrated and systematic management. This paper examines the resource management of the Jiading Wetland by applying group model building (GMB) and system dynamics (SD). We systematically identify local stakeholders' mental model regarding the impact brought by the yacht industry, and further establish a SD model to simulate the dynamic wetland environment. The GMB process improves the stakeholders' understanding about the interaction between the wetland environment and management policies. Differences between the stakeholders' perceptions and the behaviors shown by the SD model also suggest that our analysis would facilitate the stakeholders to broaden their horizons and achieve consensus on the wetland resource management. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. A Computerized Asthma Outcomes Measure Is Feasible for Disease Management.

    PubMed

    Turner-Bowker, Diane M; Saris-Baglama, Renee N; Anatchkova, Milena; Mosen, David M

    2010-04-01

    OBJECTIVE: To develop and test an online assessment referred to as the ASTHMA-CAT (computerized adaptive testing), a patient-based asthma impact, control, and generic health-related quality of life (HRQOL) measure. STUDY DESIGN: Cross-sectional pilot study of the ASTHMA-CAT's administrative feasibility in a disease management population. METHODS: The ASTHMA-CAT included a dynamic or static Asthma Impact Survey (AIS), Asthma Control Test, and SF-8 Health Survey. A sample of clinician-diagnosed adult asthmatic patients (N = 114) completed the ASTHMA-CAT. Results were used to evaluate administrative feasibility of the instrument and psychometric performance of the dynamic AIS relative to the static AIS. A prototype aggregate (group-level) report was developed and reviewed by care providers. RESULTS: Online administration of the ASTHMA-CAT was feasible for patients in disease management. The dynamic AIS functioned well compared with the static AIS in preliminary studies evaluating response burden, precision, and validity. Providers found reports to be relevant, useful, and applicable for care management. CONCLUSION: The ASTHMA-CAT may facilitate asthma care management.
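
    The "dynamic" (CAT) administration works by re-estimating the respondent's score after each answer and picking the next most informative item. Below is a toy Rasch-style loop with invented item difficulties and a crude score update, not the actual AIS item bank or scoring:

        import numpy as np

        items = np.linspace(-2, 2, 9)            # assumed item difficulties
        def info(theta, b):                      # Fisher information, Rasch model
            p = 1 / (1 + np.exp(-(theta - b)))
            return p * (1 - p)

        theta, asked = 0.0, []
        for _ in range(4):                       # administer four items
            remaining = [i for i in range(len(items)) if i not in asked]
            nxt = max(remaining, key=lambda i: info(theta, items[i]))
            asked.append(nxt)
            response = 1                         # stand-in for the answer
            p = 1 / (1 + np.exp(-(theta - items[nxt])))
            theta += 0.5 * (response - p)        # move estimate toward the evidence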

  14. From network heterogeneities to familiarity detection and hippocampal memory management

    PubMed Central

    Wang, Jane X.; Poe, Gina; Zochowski, Michal

    2009-01-01

    Hippocampal-neocortical interactions are key to the rapid formation of novel associative memories in the hippocampus and consolidation to long term storage sites in the neocortex. We investigated the role of network correlates during information processing in hippocampal-cortical networks. We found that changes in the intrinsic network dynamics due to the formation of structural network heterogeneities alone act as a dynamical and regulatory mechanism for stimulus novelty and familiarity detection, thereby controlling memory management in the context of memory consolidation. This network dynamic, coupled with an anatomically established feedback between the hippocampus and the neocortex, recovered heretofore unexplained properties of neural activity patterns during memory management tasks which we observed during sleep in multiunit recordings from behaving animals. Our simple dynamical mechanism shows an experimentally matched progressive shift of memory activation from the hippocampus to the neocortex and thus provides the means to achieve an autonomous off-line progression of memory consolidation. PMID:18999453

  15. Research on Occupational Safety, Health Management and Risk Control Technology in Coal Mines.

    PubMed

    Zhou, Lu-Jie; Cao, Qing-Gui; Yu, Kai; Wang, Lin-Lin; Wang, Hai-Bin

    2018-04-26

    This paper studies the occupational safety and health management methods and risk control technology associated with the coal mining industry, including daily management of occupational safety and health, identification and assessment of risks, and early warning and dynamic monitoring of risks. A browser/server (B/S) mode software package (Geting Coal Mine, Jining, Shandong, China), the Coal Mine Occupational Safety and Health Management and Risk Control System, is developed to attain these objectives, namely promoting coal mine occupational safety and health management based on early warning and dynamic monitoring of risks. Furthermore, the practical effectiveness of this software package and the pattern for applying it to coal mining are analyzed. The study indicates that the developed occupational safety and health management and risk control technology and the associated software can support occupational safety and health management efforts in coal mines in a standardized and effective manner, and can control accident risks scientifically and effectively. Effective implementation can further improve the coal mine occupational safety and health management mechanism and enhance risk management approaches. Moreover, its implementation indicates that the occupational safety and health management and risk control technology has been established on a benign cycle involving dynamic feedback and scientific development, which can provide reliable assurance for the safe operation of coal mines.

  16. Research on Occupational Safety, Health Management and Risk Control Technology in Coal Mines

    PubMed Central

    Zhou, Lu-jie; Cao, Qing-gui; Yu, Kai; Wang, Lin-lin; Wang, Hai-bin

    2018-01-01

    This paper studies the occupational safety and health management methods and risk control technology associated with the coal mining industry, including daily management of occupational safety and health, identification and assessment of risks, and early warning and dynamic monitoring of risks. A browser/server (B/S) mode software package (Geting Coal Mine, Jining, Shandong, China), the Coal Mine Occupational Safety and Health Management and Risk Control System, is developed to attain these objectives, namely promoting coal mine occupational safety and health management based on early warning and dynamic monitoring of risks. Furthermore, the practical effectiveness of this software package and the pattern for applying it to coal mining are analyzed. The study indicates that the developed occupational safety and health management and risk control technology and the associated software can support occupational safety and health management efforts in coal mines in a standardized and effective manner, and can control accident risks scientifically and effectively. Effective implementation can further improve the coal mine occupational safety and health management mechanism and enhance risk management approaches. Moreover, its implementation indicates that the occupational safety and health management and risk control technology has been established on a benign cycle involving dynamic feedback and scientific development, which can provide reliable assurance for the safe operation of coal mines. PMID:29701715

  17. Linking river management to species conservation using dynamic landscape scale models

    USGS Publications Warehouse

    Freeman, Mary C.; Buell, Gary R.; Hay, Lauren E.; Hughes, W. Brian; Jacobson, Robert B.; Jones, John W.; Jones, S.A.; LaFontaine, Jacob H.; Odom, Kenneth R.; Peterson, James T.; Riley, Jeffrey W.; Schindler, J. Stephen; Shea, C.; Weaver, J.D.

    2013-01-01

    Efforts to conserve stream and river biota could benefit from tools that allow managers to evaluate landscape-scale changes in species distributions in response to water management decisions. We present a framework and methods for integrating hydrology, geographic context and metapopulation processes to simulate effects of changes in streamflow on fish occupancy dynamics across a landscape of interconnected stream segments. We illustrate this approach using a 482 km2 catchment in the southeastern US supporting 50 or more stream fish species. A spatially distributed, deterministic and physically based hydrologic model is used to simulate daily streamflow for sub-basins composing the catchment. We use geographic data to characterize stream segments with respect to channel size, confinement, position and connectedness within the stream network. Simulated streamflow dynamics are then applied to model fish metapopulation dynamics in stream segments, using hypothesized effects of streamflow magnitude and variability on population processes, conditioned by channel characteristics. The resulting time series simulate spatially explicit, annual changes in species occurrences or assemblage metrics (e.g. species richness) across the catchment as outcomes of management scenarios. Sensitivity analyses using alternative, plausible links between streamflow components and metapopulation processes, or allowing for alternative modes of fish dispersal, demonstrate large effects of ecological uncertainty on model outcomes and highlight needed research and monitoring. Nonetheless, with uncertainties explicitly acknowledged, dynamic, landscape-scale simulations may prove useful for quantitatively comparing river management alternatives with respect to species conservation.
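
    The segment-level simulation logic can be caricatured as an annual occupancy update whose persistence and colonization probabilities are driven by a streamflow covariate; the structure and numbers below are illustrative assumptions, not the study's fitted model.

        import numpy as np

        def annual_update(occupied, flow, rng):
            persist = max(0.0, 0.9 - 0.4 * abs(flow))  # flow extremes cut persistence
            colonize = 0.3 if occupied.any() else 0.0  # colonists need occupied sources
            stay = rng.random(occupied.size) < persist
            arrive = rng.random(occupied.size) < colonize
            return (occupied & stay) | (~occupied & arrive)

        rng = np.random.default_rng(42)
        occ = np.ones(50, dtype=bool)                  # 50 stream segments
        for year in range(20):
            occ = annual_update(occ, rng.normal(0.0, 0.5), rng)
        print(int(occ.sum()), "of 50 segments occupied")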

  18. System approach to distributed sensor management

    NASA Astrophysics Data System (ADS)

    Mayott, Gregory; Miller, Gordon; Harrell, John; Hepp, Jared; Self, Mid

    2010-04-01

    Since 2003, the US Army's RDECOM CERDEC Night Vision Electronic Sensor Directorate (NVESD) has been developing a distributed Sensor Management System (SMS) that utilizes a framework which demonstrates application layer, net-centric sensor management. The core principles of the design support distributed and dynamic discovery of sensing devices and processes through a multi-layered implementation. This results in a sensor management layer that acts as a system with defined interfaces for which the characteristics, parameters, and behaviors can be described. Within the framework, the definition of a protocol is required to establish the rules for how distributed sensors should operate. The protocol defines the behaviors, capabilities, and message structures needed to operate within the functional design boundaries. The protocol definition addresses the requirements for a device (sensor or process) to dynamically join or leave a sensor network, dynamically describe device control and data capabilities, and allow dynamic addressing of publish and subscribe functionality. The message structure is a multi-tiered definition that identifies standard, extended, and payload representations, specifically designed to accommodate the need for standard representations of common functions while supporting the feature-based functions that are typically vendor specific. The dynamic qualities of the protocol give a user GUI application the flexibility to map widget-level controls to each device based on capabilities reported in real time. The SMS approach is designed to accommodate scalability and flexibility within a defined architecture. The distributed sensor management framework and its application to a tactical sensor network are described in this paper.
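
    The multi-tiered message idea (standard, extended, and payload representations) might be sketched as below; the field names are hypothetical, not the NVESD protocol's actual wire format.

        from dataclasses import dataclass, field
        from typing import Any

        @dataclass
        class SensorMessage:
            device_id: str                     # standard tier: common fields
            msg_type: str                      # e.g. "JOIN", "DESCRIBE", "DATA"
            capabilities: dict = field(default_factory=dict)  # extended tier
            payload: Any = None                # vendor/feature-specific tier

        join = SensorMessage("eo-cam-7", "JOIN",
                             capabilities={"pan": True, "zoom": "20x"})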

  19. A dynamic model for assessing the effects of management strategies on the reduction of construction and demolition waste.

    PubMed

    Yuan, Hongping; Chini, Abdol R; Lu, Yujie; Shen, Liyin

    2012-03-01

    During the past few decades, construction and demolition (C&D) waste has received increasing attention from construction practitioners and researchers worldwide. A plethora of research regarding C&D waste management has been published in various academic journals. However, it has been determined that existing studies with respect to C&D waste reduction are mainly carried out from a static perspective, without considering the dynamic and interdependent nature of the whole waste reduction system. This might lead to misunderstanding about the actual effect of implementing any waste reduction strategies. Therefore, this research proposes a model that can serve as a decision support tool for projecting C&D waste reduction in line with the waste management situation of a given construction project, and more importantly, as a platform for simulating effects of various management strategies on C&D waste reduction. The research is conducted using system dynamics methodology, which is a systematic approach that deals with the complexity - interrelationships and dynamics - of any social, economic and managerial system. The dynamic model integrates major variables that affect C&D waste reduction. In this paper, seven causal loop diagrams that can deepen understanding about the feedback relationships underlying C&D waste reduction system are firstly presented. Then a stock-flow diagram is formulated by using software for system dynamics modeling. Finally, a case study is used to illustrate the validation and application of the proposed model. Results of the case study not only built confidence in the model so that it can be used for quantitative analysis, but also assessed and compared the effect of three designed policy scenarios on C&D waste reduction. One major contribution of this study is the development of a dynamic model for evaluating C&D waste reduction strategies under various scenarios, so that best management strategies could be identified before being implemented in practice. Copyright © 2011 Elsevier Ltd. All rights reserved.
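
    The stock-flow core of such a model reduces to numerically integrating inflows and outflows over time; here is a minimal sketch with invented rates (not the paper's calibrated diagram), where the sorting rate acts as the policy lever:

        def simulate(sorting_rate, weeks=52, dt=1.0):
            onsite = landfilled = 0.0                 # stocks
            for _ in range(weeks):
                generated = 10.0                      # inflow, tonnes/week
                recycled = sorting_rate * onsite      # policy-dependent outflow
                disposed = 0.3 * onsite               # residual outflow to landfill
                onsite += dt * (generated - recycled - disposed)
                landfilled += dt * disposed
            return landfilled

        print(simulate(0.2), simulate(0.6))  # stronger sorting, less landfill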

  20. Using a full annual cycle model to evaluate long-term population viability of the conservation-reliant Kirtland's warbler after successful recovery

    USGS Publications Warehouse

    Brown, Donald J.; Ribic, Christine; Donner, Deahn M.; Nelson, Mark D.; Bocetti, Carol I.; Deloria-Sheffield, Christie M.

    2017-01-01

    Long-term management planning for conservation-reliant migratory songbirds is particularly challenging because habitat quality in different stages and geographic locations of the annual cycle can have direct and carry-over effects that influence the population dynamics. The Neotropical migratory songbird Kirtland's warbler Setophaga kirtlandii (Baird 1852) is listed as endangered under the U.S. Endangered Species Act and Near Threatened under the IUCN Red List. This conservation-reliant species is being considered for U.S. federal delisting because the species has surpassed the designated 1000 breeding pairs recovery threshold since 2001. To help inform the delisting decision and long-term management efforts, we developed a population simulation model for the Kirtland's warbler that incorporated both breeding and wintering grounds habitat dynamics, and projected population viability based on current environmental conditions and potential future management scenarios. Future management scenarios included the continuation of current management conditions, reduced productivity and carrying capacity due to the changes in habitat suitability from the creation of experimental jack pine Pinus banksiana (Lamb.) plantations, and reduced productivity from alteration of the brown-headed cowbird Molothrus ater (Boddaert 1783) removal programme. Linking wintering grounds precipitation to productivity improved the accuracy of the model for replicating past observed population dynamics. Our future simulations indicate that the Kirtland's warbler population is stable under two potential future management scenarios: (i) continuation of current management practices and (ii) spatially restricting cowbird removal to the core breeding area, assuming that cowbirds reduce productivity in the remaining patches by ≤41%. The additional future management scenarios we assessed resulted in population declines. Synthesis and applications. Our study indicates that the Kirtland's warbler population is stable under current management conditions and that the jack pine plantation and cowbird removal programmes continue to be necessary for the long-term persistence of the species. This study represents one of the first attempts to incorporate full annual cycle dynamics into a population viability analysis for a migratory bird, and our results indicate that incorporating wintering grounds dynamics improved the model performance.
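
    A toy projection in the spirit of such a full-annual-cycle model is sketched below in Python: winter rainfall scales productivity, and the breeding grounds impose a carrying capacity. All demographic rates and the rainfall range are illustrative assumptions, not the paper's fitted values.

      import random

      def project(years=50, n0=2000, k=5000, adult_survival=0.55,
                  fecundity=1.8, firstyear_survival=0.3, seed=1):
          """Project abundance with rainfall-dependent recruitment."""
          rng = random.Random(seed)
          n = n0
          for _ in range(years):
              rain = rng.uniform(0.6, 1.2)   # wintering-grounds rainfall index
              recruits = (n / 2) * fecundity * rain * firstyear_survival
              n = min(k, n * adult_survival + recruits)  # capped by habitat
          return n

      print(f"projected population after 50 years: {project():.0f}")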

  1. The Seasonal Dynamics of Artificial Nest Predation Rates along Edges in a Mosaic Managed Reedbed.

    PubMed

    Malzer, Iain; Helm, Barbara

    2015-01-01

    Boundaries between different habitats can be responsible for changes in species interactions, including modified rates of encounter between predators and prey. Such 'edge effects' have been reported in nesting birds, where nest predation rates can be increased at habitat edges. The literature concerning edge effects on nest predation rates reveals a wide variation in results, even within single habitats, suggesting edge effects are not fixed, but dynamic throughout space and time. This study demonstrates the importance of considering dynamic mechanisms underlying edge effects and their relevance when undertaking habitat management. In reedbed habitats, management in the form of mosaic winter reed cutting can create extensive edges which change rapidly with reed regrowth during spring. We investigate the seasonal dynamics of reedbed edges using an artificial nest experiment based on the breeding biology of a reedbed specialist. We first demonstrate that nest predation decreases with increasing distance from the edge of cut reed blocks, suggesting edge effects have a pivotal role in this system. Using repeats throughout the breeding season we then confirm that nest predation rates are temporally dynamic and decline with the regrowth of reed. However, effects of edges on nest predation were consistent throughout the season. These results are of practical importance when considering appropriate habitat management, suggesting that reed cutting may heighten nest predation, especially before new growth matures. They also contribute directly to an overall understanding of the dynamic processes underlying edge effects and their potential role as drivers of time-dependent habitat use.

  2. A dynamic traction splint for the management of extrinsic tendon tightness.

    PubMed

    Dovelle, S; Heeter, P K; Phillips, P D

    1987-02-01

    The dynamic traction splint designed by therapists at Walter Reed Army Medical Center is used for the management of extrinsic extensor tendon tightness commonly seen in brachial plexus injuries and traumatic soft tissue injuries of the upper extremity. The two components of the splint allow for simultaneous maximum flexion of the MCP and IP joints. This simple and economical splint provides an additional modality to any occupational therapy service involved in the management of upper extremity disorders.

  3. Exploring the Dynamics and Modeling National Budget as a Supply Chain System: A Proposal for Reengineering the Budgeting Process and for Developing a Management Flight Simulator

    DTIC Science & Technology

    2012-09-01

    Elmendorf, D. W., & Mankiw, N. G. (1999). Government debt. Handbook of Macroeconomics, 1, 1615-1669. European Union. European financial stability... budget process, based on the supply chain demand management process principles of operations, and the idea is introduced of developing a Budget... principles of system dynamics, a proposal for the development of a Budget Management Flight Simulator that will operate as a learning and educational

  4. A dynamical framework for integrated corridor management.

    DOT National Transportation Integrated Search

    2016-01-11

    We develop analysis and control synthesis tools for dynamic traffic flow over networks. Our analysis relies on exploiting monotonicity properties of the dynamics, and on adapting relevant tools from stochastic queuing networks. We develop proport...

  5. A biologically-based individual tree model for managing the longleaf pine ecosystem

    Treesearch

    Rick Smith; Greg Somers

    1998-01-01

    Duration: 1995-present Objective: Develop a longleaf pine dynamics model and simulation system to define desirable ecosystem management practices in existing and future longleaf pine stands. Methods: Naturally-regenerated longleaf pine trees are being destructively sampled to measure their recent growth and dynamics. Soils and climate data will be combined with the...

  6. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs — Calibration Report for San Mateo Testbed.

    DOT National Transportation Integrated Search

    2016-08-22

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  7. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs - calibration report for Dallas testbed : final report.

    DOT National Transportation Integrated Search

    2016-10-01

    The primary objective of this project is to develop multiple simulation testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  8. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs — gaps, challenges and future research.

    DOT National Transportation Integrated Search

    2017-05-01

    The primary objective of the AMS project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. Through this p...

  9. Algorithms for Data Sharing, Coordination, and Communication in Dynamic Network Settings

    DTIC Science & Technology

    2007-12-03

    problems in dynamic networks, focusing on mobile networks with wireless communication. Problems studied include data management, time synchronization... The discovery of a fundamental limitation in capabilities for time synchronization in large networks. (2) The identification and development of the... Problems studied include data management, time synchronization, communication problems (broadcast, geocast, and point-to-point routing), distributed

  10. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - Evaluation Report for the San Diego Testbed

    DOT National Transportation Integrated Search

    2017-07-01

    The primary objective of this project is to develop multiple simulation testbeds and transportation models to evaluate the impacts of Connected Vehicle Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) strateg...

  11. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs : Evaluation Report for the San Diego Testbed : Draft Report.

    DOT National Transportation Integrated Search

    2017-07-01

    The primary objective of this project is to develop multiple simulation testbeds and transportation models to evaluate the impacts of Connected Vehicle Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) strateg...

  12. Managed lane operations--adjusted time of day pricing vs. near-real time dynamic pricing : volume I, dynamic pricing and operations of managed lanes.

    DOT National Transportation Integrated Search

    2012-02-12

    In 2008, the Florida Department of Transportation began implementing the 95 Express, a segment of I-95 in Miami with high occupancy toll (HOT) lanes. Some vehicles use HOT lanes free, but most vehicles pay a toll based on real-time traffic conditions...

  13. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - calibration Report for Phoenix Testbed : Final Report.

    DOT National Transportation Integrated Search

    2016-10-01

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  14. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs - evaluation summary for the San Diego testbed

    DOT National Transportation Integrated Search

    2017-08-01

    The primary objective of this project is to develop multiple simulation testbeds and transportation models to evaluate the impacts of Connected Vehicle Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) strateg...

  15. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs — leveraging AMS testbed outputs for ATDM analysis – a primer.

    DOT National Transportation Integrated Search

    2017-08-01

    The primary objective of the AMS Testbed project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. Throug...

  16. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - San Mateo Testbed Analysis Plan : Final Report.

    DOT National Transportation Integrated Search

    2016-06-29

    The primary objective of this project is to develop multiple simulation testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  17. Can market-based policies accomplish the optimal floodplain management? A gap between static and dynamic models.

    PubMed

    Mori, Koichiro

    2009-02-01

    The purpose of this short article is to set out static and dynamic models for optimal floodplain management and to compare the policy implications drawn from them. River floodplains are important multi-purpose resources in that they provide various ecosystem services, so it is fundamentally important to consider the environmental externalities that accrue from the ecosystem services of natural floodplains. There is an interesting gap between static and dynamic models in their policy implications for floodplain management, even though they rest on the same assumptions. Essentially, we can derive the same optimal conditions, which imply that the marginal benefits must equal the sum of the marginal costs and the social external costs related to ecosystem services. Thus, we have to internalise the external costs through market-based policies. In this respect, market-based policies appear effective in a static model. However, they are not sufficient in the context of a dynamic model, because the optimal steady state turns out to be unstable. Under a dynamic model, more coercive regulatory policies are needed.
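
    For concreteness, the static first-order condition paraphrased above can be written in generic notation; the display below is a hedged reconstruction, and the article's own symbols and functional forms are not reproduced. With floodplain development level d, gross benefits B(d), private costs C(d), and external costs E(d) of forgone ecosystem services:

      \[
        \max_{d}\; B(d) - C(d) - E(d)
        \quad\Longrightarrow\quad
        B'(d^{*}) = C'(d^{*}) + E'(d^{*}),
      \]

    so a Pigouvian instrument priced at E'(d*) internalises the externality in the static setting. The dynamic instability result concerns the steady state of the corresponding optimal-control problem and is not captured by this first-order condition.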

  18. DELPHI AND VALUES,

    DTIC Science & Technology

    (*MANAGEMENT PLANNING AND CONTROL, DECISION MAKING), (*DECISION MAKING, GROUP DYNAMICS), (*GROUP DYNAMICS, ATTITUDES(PSYCHOLOGY)), REASONING, REACTION(PSYCHOLOGY), PUBLIC OPINION, PERFORMANCE(HUMAN), QUESTIONNAIRES, FEEDBACK

  19. Estimating European soil organic carbon mitigation potential in a global integrated land use model

    NASA Astrophysics Data System (ADS)

    Frank, Stefan; Böttcher, Hannes; Schneider, Uwe; Schmid, Erwin; Havlík, Petr

    2013-04-01

    Several studies have shown the dynamic interaction between soil organic carbon (SOC) sequestration rates, soil management decisions and SOC levels. Management practices such as reduced and no-tillage, improved residue management and crop rotations, as well as the conversion of marginal cropland to native vegetation or of cultivated land to permanent grassland, offer the potential to increase SOC content. Even though these dynamic interactions are widely acknowledged in the literature, they have not been implemented in most existing land use decision models. A major obstacle is the high data and computing requirements for an explicit representation of alternative land use sequences, since a model has to be able to track all different management decision paths. To our knowledge, no study has so far accounted explicitly for SOC dynamics in a global integrated land use model. To overcome the conceptual difficulties described above, we apply an approach capable of accounting for SOC dynamics in GLOBIOM (Global Biosphere Management Model), a global recursive dynamic partial equilibrium bottom-up model integrating the agricultural, bioenergy and forestry sectors. GLOBIOM represents all major land based sectors and is therefore able to account implicitly for direct and indirect effects of land use change as well as leakage effects (e.g. through trade). Together with the detailed representation of technologies (e.g. tillage and fertilizer management systems), these characteristics make the model a highly valuable tool for assessing European SOC emissions and mitigation potential. Demand and international trade are represented in this version of the model at the level of 27 EU member states and 23 aggregated world regions outside Europe. Changes in demand on the one hand, and the profitability of the different land based activities on the other, are the major determinants of land use change in GLOBIOM. In this paper we estimate SOC emissions from cropland for the EU until 2050, explicitly considering SOC dynamics due to land use and land management in a global integrated land use model. Moreover, we calculate the EU SOC mitigation potential taking into account leakage effects outside Europe as well as related feedbacks from other sectors. In a sensitivity analysis, we disaggregate the SOC mitigation potential, i.e. we quantify the impact of different management systems and crop rotations, to identify the most promising mitigation strategies.

  20. Space station dynamics, attitude control and momentum management

    NASA Technical Reports Server (NTRS)

    Sunkel, John W.; Singh, Ramen P.; Vengopal, Ravi

    1989-01-01

    The Space Station Attitude Control System software test-bed provides a rigorous environment for the design, development and functional verification of GN&C algorithms and software. The approach taken for the simulation of the vehicle dynamics and environmental models using a computationally efficient algorithm is discussed. The simulation includes capabilities for docking/berthing dynamics, prescribed motion dynamics associated with the Mobile Remote Manipulator System (MRMS) and microgravity disturbances. The vehicle dynamics module interfaces with the test-bed through the central Communicator facility, which is in turn driven by the Station Control Simulator (SCS) Executive. The Communicator addresses issues such as the interface between the discrete flight software and the continuous vehicle dynamics, and multi-programming aspects such as the complex flow of control in real-time programs. Combined with the flight software and redundancy management modules, the facility provides a flexible, user-oriented simulation platform.

  1. Forest fire management to avoid unintended consequences: a case study of Portugal using system dynamics.

    PubMed

    Collins, Ross D; de Neufville, Richard; Claro, João; Oliveira, Tiago; Pacheco, Abílio P

    2013-11-30

    Forest fires are a serious management challenge in many regions, complicating the appropriate allocation of resources between suppression and prevention efforts. Using a System Dynamics (SD) model, this paper explores how interactions between physical and political systems in forest fire management affect the effectiveness of different allocations. A core issue is that apparently sound management can have unintended consequences. An instinctive management response to periods of worsening fire severity is to increase fire suppression capacity, an approach with immediate appeal as it directly treats the symptom of devastating fires and appeases the public. However, the SD analysis indicates that a policy emphasizing suppression can degrade the long-run effectiveness of forest fire management. By crowding out preventative fuel-removal efforts, it exacerbates fuel loads and leads to greater fires, which further balloon suppression budgets. The business management literature refers to this problem as the firefighting trap, wherein a focus on fixing problems diverts attention from preventing them and thus leads to inferior outcomes. The paper illustrates these phenomena through a case study of Portugal, showing that a balanced approach to suppression and prevention efforts can mitigate the self-reinforcing consequences of this trap and better manage long-term fire damages. These insights can help policymakers and fire managers better appreciate the interconnected systems in which their authorities reside and the dynamics that may undermine seemingly rational management decisions. Copyright © 2013 Elsevier Ltd. All rights reserved.
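
    The reinforcing loop at the heart of the firefighting trap can be caricatured in a few lines of Python. The sketch below is a deliberately crude illustration, not the paper's calibrated Portugal model; all parameters and functional forms are invented.

      def run(years=30, budget=100.0, suppression_share=0.8):
          """Long-run damages when a fixed budget is split between
          suppression (treats the symptom) and prevention (removes fuel)."""
          fuel, damages = 50.0, 0.0
          for _ in range(years):
              prevention = budget * (1 - suppression_share)
              suppression = budget * suppression_share
              fuel = max(fuel + 10.0 - 0.2 * prevention, 0.0)  # growth minus removal
              damages = fuel * 2.0 / (1.0 + 0.05 * suppression)
          return damages

      for share in (0.5, 0.8, 0.95):
          print(f"suppression share {share:.2f} -> long-run damages {run(suppression_share=share):.1f}")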

  2. An energy management for series hybrid electric vehicle using improved dynamic programming

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Yang, Yaoquan; Liu, Chunyu

    2018-02-01

    With the increasing number of hybrid electric vehicles (HEVs), managing the two energy sources, engine and battery, is increasingly important for achieving minimum fuel consumption. This paper first introduces several working modes of the series hybrid electric vehicle (SHEV) and then describes a mathematical model of the main components of an SHEV. On the foundation of this model, dynamic programming is applied on the MATLAB platform to distribute energy between the engine and battery, achieving lower fuel consumption than a traditional control strategy. In addition, a control rule for recovering energy during braking is added to the dynamic programming, and algorithmic optimization gives the improved dynamic programming a shorter computing time.
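
    The backbone of such an energy-management optimization is backward induction over a discretised battery state of charge. The Python sketch below shows that structure under invented assumptions: a short power-demand profile, a coarse SoC grid, and a convex fuel-rate model; the paper's brake-energy-recovery rule and runtime optimizations are not modelled.

      def solve(demand, n=21, soc_step=0.05):
          """Backward induction over a discretised state-of-charge grid."""
          INF = float("inf")
          cost_to_go = [0.0] * n                   # terminal cost: any final SoC allowed
          for p_dem in reversed(demand):
              new_cost = [INF] * n
              for i in range(n):                   # current SoC index
                  for j in range(n):               # candidate next SoC index
                      p_batt = (i - j) * soc_step * 50.0   # >0 when discharging (kW)
                      p_eng = p_dem - p_batt
                      if p_eng < 0:
                          continue                 # engine cannot absorb surplus power
                      fuel = 0.2 * p_eng + 0.001 * p_eng ** 2  # convex fuel-rate model
                      new_cost[i] = min(new_cost[i], fuel + cost_to_go[j])
              cost_to_go = new_cost
          return cost_to_go

      demand = [30.0, 45.0, 60.0, 20.0, 10.0]      # traction power demand per step (kW)
      print(f"minimum fuel cost from 50% SoC: {solve(demand)[10]:.2f}")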

  3. Interventions and Interactions: Understanding Coupled Human-Water Dynamics for Improved Water Resources Management in the Himalayas

    NASA Astrophysics Data System (ADS)

    Crootof, A.

    2017-12-01

    Understanding coupled human-water dynamics offers valuable insights for addressing fundamental water resources challenges posed by environmental change. With hydropower reshaping human-water interactions in mountain river basins, there is a need for a socio-hydrology framework, which examines two-way feedback loops between human and water systems, to manage water resources more effectively. This paper explores the cross-scalar interactions and feedback loops between human and water systems in river basins affected by run-of-the-river hydropower and highlights the utility of a socio-hydrology perspective for enhancing water management in the face of environmental change. In the Himalayas, the rapid expansion of run-of-the-river hydropower, which diverts streamflow for energy generation, is reconfiguring the availability, location, and timing of water resources. This technological intervention in the river basin not only alters hydrologic dynamics but also shapes social outcomes. Using hydropower development in the highlands of Uttarakhand, India as a case study, I first illustrate how run-of-the-river projects transform human-water dynamics by reshaping the social and physical landscape of a river basin. Second, I emphasize how examining cross-scalar feedbacks among structural dynamics, social outcomes, and values and norms in this coupled human-water system can inform water management. Third, I draw on hydrological and social literatures, previously developed separately, to indicate collaborative research needs and knowledge gaps for coupled human-water systems affected by run-of-the-river hydropower. The results underscore the need to understand coupled human-water dynamics to improve water resources management in the face of environmental change.

  4. Simulation of Tasks Distribution in Horizontally Scalable Management System

    NASA Astrophysics Data System (ADS)

    Kustov, D.; Sherstneva, A.; Botygin, I.

    2016-08-01

    This paper presents a simulation model of task distribution among the components of a territorially distributed automated management system with a dynamically changing topology. Each resource of the distributed automated management system is represented by an agent, which makes it possible to specify the behaviour of every resource appropriately and to ensure their interaction. Agent workload was simulated by generating service queries in a system dynamics style using a flow diagram. The queries were generated in an abstract centre and then sent to a dispatcher to be distributed to the management system resources according to a ranking table.

  5. To Exist as a Case Manager Is to Constantly Change; to Be Successful, You Must Constantly Adapt.

    PubMed

    Tahan, Hussein M

    Change is inevitable, whether in our personal or professional lives. Case management practice is always evolving because of the dynamic nature of the U.S. health care environment. Effective case managers are those who possess an adaptive mind-set, recognize the importance of changing in order to maintain success, and remain relevant. They also demonstrate a sense of accountability and responsibility for their own learning, professional development, and acquisition of new skills and knowledge. This editorial discusses the nature of change and adaptation and presents key strategies for case managers to remain relevant and effective in dynamic practice environments.

  6. Identifying behaviour patterns of construction safety using system archetypes.

    PubMed

    Guo, Brian H W; Yiu, Tak Wing; González, Vicente A

    2015-07-01

    Construction safety management involves complex issues (e.g., different trades, multi-organizational project structure, constantly changing work environments, and a transient workforce). Systems thinking is widely considered an effective approach to understanding and managing this complexity. This paper aims to better understand the dynamic complexity of construction safety management by exploring archetypes of construction safety. To achieve this, the paper adopted the grounded theory method (GTM), and 22 interviews were conducted with participants in various positions (government safety inspector, client, health and safety manager, safety consultant, safety auditor, and safety researcher). Eight archetypes emerged from the collected data: (1) safety regulations, (2) incentive programs, (3) procurement and safety, (4) safety management in small businesses, (5) production and safety, (6) workers' conflicting goals, (7) blame on workers, and (8) reactive and proactive learning. These archetypes capture the interactions between a wide range of factors within various hierarchical levels and subsystems. As a free-standing tool, they advance the understanding of the dynamic complexity of construction safety management and provide systemic insights into dealing with that complexity. They can also facilitate system dynamics modelling of the construction safety process. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. An Evaluation of the Applicability of Damage Tolerance to Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Le, Dy; Turnberg, Jay

    2005-01-01

    The Federal Aviation Administration, the National Aeronautics and Space Administration and the aircraft industry have teamed together to develop methods and guidance for the safe life-cycle management of dynamic systems. Based on the success of the United States Air Force damage tolerance initiative for airframe structure, a crack growth based damage tolerance approach is being examined for implementation into the design and management of dynamic systems. However, dynamic systems accumulate millions of vibratory cycles per flight hour, more than 12,000 times faster than an airframe system. If a detectable crack develops in a dynamic system, the time to failure is extremely short, less than 100 flight hours in most cases, leaving little room for error in the material characterization, life cycle analysis, nondestructive inspection and maintenance processes. In this paper, the authors review the damage tolerant design process focusing on uncertainties that affect dynamic systems and evaluate the applicability of damage tolerance on dynamic systems.

  8. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - calibration Report for Phoenix Testbed : Final Report. [supporting datasets - Phoenix Testbed

    DOT National Transportation Integrated Search

    2017-07-26

    The datasets in this zip file are in support of FHWA-JPO-16-379, Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Program...

  9. Information Dynamics as Foundation for Network Management

    DTIC Science & Technology

    2014-12-04

    developed to adapt to channel dynamics in a mobile network environment. We devise a low-complexity online scheduling algorithm integrated with the... has been accepted for the Journal of Network and Systems Management in 2014. - RINC programmable platform for Infrastructure-as-a-Service public... backend servers. Rather than implementing load balancing in dedicated appliances, commodity SDN switches can perform this function. We design

  10. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - San Mateo Testbed Analysis Plan [supporting datasets - San Mateo Testbed

    DOT National Transportation Integrated Search

    2017-06-26

    This zip file contains files of data to support FHWA-JPO-16-370, Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Program...

  11. An ecoinformatics application for forest dynamics plot data management and sharing

    Treesearch

    Chau-Chin Lin; Abd Rahman Kassim; Kristin Vanderbilt; Donald Henshaw; Eda C. Melendez-Colom; John H. Porter; Kaoru Niiyama; Tsutomu Yagihashi; Sek Aun Tan; Sheng-Shan Lu; Chi-Wen Hsiao; Li-Wan Chang; Meei-Ru Jeng

    2011-01-01

    Several forest dynamics plot research projects in the East-Asia Pacific region of the International Long-Term Ecological Research network actively collect long-term data, and some of these large plots are members of the Center for Tropical Forest Science network. The wealth of forest plot data presents challenges in information management to researchers. In order to...

  12. Spatial analysis of longleaf pine stand dynamics after 60 years of management

    Treesearch

    John C. Gilbert; John S. Kush; Rebecca J. Barlow

    2012-01-01

    There are still many questions and misconceptions about the stand dynamics of naturally-regenerated longleaf pine (Pinus palustris Mill.). Since 1948, the “Farm Forty,” a forty-acre tract located on the USDA Forest Service Escambia Experimental Forest near Brewton, Alabama, has been managed to create high quality wood products, to successfully...

  13. Endangered Butterflies as a Model System for Managing Source Sink Dynamics on Department of Defense Lands

    DTIC Science & Technology

    patches to cycle from sink to source status and back. Objective: Through a combination of field studies and state-of-the-art quantitative models, we... landscapes with dynamic changes in habitat quality due to management. We also validated our general approach by comparing patterns in our focal species to general, cross-taxa patterns.

  14. A Computerized Asthma Outcomes Measure Is Feasible for Disease Management

    PubMed Central

    Turner-Bowker, Diane M.; Saris-Baglama, Renee N.; Anatchkova, Milena; Mosen, David M.

    2010-01-01

    Objective: To develop and test an online assessment referred to as the ASTHMA-CAT (computerized adaptive testing), a patient-based asthma impact, control, and generic health-related quality of life (HRQOL) measure. Study Design: Cross-sectional pilot study of the ASTHMA-CAT's administrative feasibility in a disease management population. Methods: The ASTHMA-CAT included a dynamic or static Asthma Impact Survey (AIS), Asthma Control Test, and SF-8 Health Survey. A sample of clinician-diagnosed adult asthmatic patients (N = 114) completed the ASTHMA-CAT. Results were used to evaluate administrative feasibility of the instrument and psychometric performance of the dynamic AIS relative to the static AIS. A prototype aggregate (group-level) report was developed and reviewed by care providers. Results: Online administration of the ASTHMA-CAT was feasible for patients in disease management. The dynamic AIS functioned well compared with the static AIS in preliminary studies evaluating response burden, precision, and validity. Providers found reports to be relevant, useful, and applicable for care management. Conclusion: The ASTHMA-CAT may facilitate asthma care management. PMID:20852675
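
    What distinguishes the dynamic AIS from the static form is that each next question is chosen adaptively. The Python toy below sketches such a loop under a Rasch response model; the item bank, difficulties, update rule, and simulated responses are all invented for illustration and are not the AIS item parameters.

      import math

      def prob(theta, b):
          """Rasch model: probability of endorsing an item of difficulty b."""
          return 1.0 / (1.0 + math.exp(-(theta - b)))

      bank = {"q1": -1.0, "q2": -0.3, "q3": 0.0, "q4": 0.6, "q5": 1.2}
      answers = {"q3": 1, "q4": 1, "q5": 0}          # simulated respondent

      theta, asked = 0.0, []
      for _ in range(3):
          # For a Rasch model, the most informative item is the one whose
          # difficulty is closest to the current score estimate.
          item = min((q for q in bank if q not in asked),
                     key=lambda q: abs(bank[q] - theta))
          asked.append(item)
          u = answers.get(item, 0)
          theta += 0.5 * (u - prob(theta, bank[item]))  # stochastic-approximation step
          print(f"asked {item}, response {u}, theta -> {theta:+.2f}")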

  15. A spatial-temporal system for dynamic cadastral management.

    PubMed

    Nan, Liu; Renyi, Liu; Guangliang, Zhu; Jiong, Xie

    2006-03-01

    A practical spatio-temporal database (STDB) technique for dynamic urban land management is presented. One of the STDB models, an expanded model of Base State with Amendments (BSA), is selected as the basis for developing the dynamic cadastral management technique. Two approaches, Section Fast Indexing (SFI) and Storage Factors of Variable Granularity (SFVG), are used to improve the efficiency of the BSA model. Both spatial graphic data and attribute data are stored, through a succinct engine, in a standard relational database management system (RDBMS) for the actual implementation of the BSA model. The spatio-temporal database is divided into three interdependent sub-databases: the present DB, the history DB and the procedures-tracing DB. The efficiency of database operation is improved by the database connection in the bottom layer of Microsoft SQL Server. The spatio-temporal system can be provided at low cost while satisfying the basic needs of urban land management in China. The approaches presented in this paper may also be of significance to countries where land patterns change frequently or to agencies whose financial resources are limited.
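
    The core BSA idea, a base snapshot plus time-ordered amendments replayed up to a query date, is easy to sketch. The minimal Python below illustrates only that retrieval principle; the parcel fields, dates, and flat in-memory layout are assumptions, and the paper's SFI/SFVG indexing and RDBMS engine are not reproduced.

      from bisect import bisect_right

      base_state = {"parcel_1": {"owner": "A", "area": 500}}
      # (date, parcel_id, new_attributes) amendments, kept sorted by date.
      amendments = [
          (20190101, "parcel_1", {"owner": "B", "area": 500}),
          (20200615, "parcel_2", {"owner": "C", "area": 320}),
      ]

      def state_at(date):
          """Replay amendments up to `date` onto a copy of the base state."""
          state = {pid: dict(attrs) for pid, attrs in base_state.items()}
          cutoff = bisect_right([a[0] for a in amendments], date)
          for _, pid, attrs in amendments[:cutoff]:
              state[pid] = dict(attrs)
          return state

      print(state_at(20191231))   # owner B; parcel_2 not yet registered
      print(state_at(20210101))   # both parcels present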

  16. System dynamics model of taxi management in metropolises: Economic and environmental implications for Beijing.

    PubMed

    Wang, Hao; Zhang, Kai; Chen, Junhua; Wang, Zhifeng; Li, Guijun; Yang, Yuqi

    2018-05-01

    Taxis are an important component of urban passenger transport. Research on the daily dispatching of taxis and the utility of governmental management is important for improving passenger travel, taxi driver income and environmental outcomes. However, urban taxi management is a complex and dynamic system that is affected by many factors, with positive and negative feedback relationships and nonlinear interactions among its subsystems and variables. Conventional research methods can therefore hardly depict its characteristics comprehensively. To bridge this gap, this paper develops a system dynamics model of urban taxi management in which the empty-loaded rate and total demand are selected as key factors affecting taxi dispatching, and the impacts of taxi fares on driver income and travel demand are taken into account. After validation of the model, taxi operations data derived from a prior analysis of origin-destination data of Beijing taxis are used as input for the model to simulate the taxi market in Beijing. Finally, economic and environmental implications are provided for the government to optimise policies on taxi management. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Vegetation management and protection research: Disturbance processes and ecosystem management

    Treesearch

    Robert D. Averill; Louise Larson; Jim Saveland; Philip Wargo; Jerry Williams; Melvin Bellinger

    1994-01-01

    This paper is intended to broaden awareness and help develop consensus among USDA Forest Service scientists and resource managers about the role and significance of disturbance in ecosystem dynamics and, hence, resource management. To have an effective ecosystem management policy, resource managers and the public must understand the nature of ecological resiliency and...

  18. A self-cognizant dynamic system approach for prognostics and health management

    NASA Astrophysics Data System (ADS)

    Bai, Guangxing; Wang, Pingfeng; Hu, Chao

    2015-03-01

    Prognostics and health management (PHM) is an emerging engineering discipline that diagnoses and predicts how and when a system will degrade in performance and lose part or all of its functionality. Due to the complexity and invisibility of the rules and states of most dynamic systems, developing an effective approach to track evolving system states is a major challenge. This paper presents a new self-cognizant dynamic system (SCDS) approach that incorporates artificial intelligence into dynamic system modeling for PHM. A feed-forward neural network (FFNN) is selected to approximate the complex system response, a challenging task in general because the underlying system physics is inaccessible. The trained FFNN model is then embedded into a dual extended Kalman filter algorithm to track the system dynamics. A recursive computation technique for updating the FFNN model from online measurements is also derived. To validate the proposed SCDS approach, a battery dynamic system is considered as an experimental application. After modeling the battery system with an FFNN model and a state-space model, the state-of-charge (SoC) and state-of-health (SoH) are estimated by updating the FFNN model using the proposed approach. Experimental results suggest that the proposed approach improves the efficiency and accuracy of battery health management.

  19. Prediction-based Dynamic Energy Management in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei

    2007-01-01

    Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter is introduced to predict the target state, and the prediction is used to awaken wireless sensor nodes so that their sleep time is prolonged. Exploiting the distributed computing capability of the nodes, an optimization approach combining a distributed genetic algorithm with simulated annealing is proposed to minimize the energy consumption of measurement. For the application of target tracking, we implement target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme for forwarding nodes is presented to achieve extra energy conservation. Experimental results on target tracking verify that energy efficiency is enhanced by prediction-based dynamic energy management.
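
    A bootstrap particle filter of the kind used for the position-prediction step can be sketched compactly. The Python below assumes a one-dimensional constant-velocity target and Gaussian measurement noise purely for illustration; the paper's motion and measurement models, and its wake-up rule, are not reproduced.

      import math
      import random

      def predict(particles, dt=1.0, rng=random):
          """Propagate (position, velocity) particles through a noisy
          constant-velocity motion model."""
          return [(x + v * dt + rng.gauss(0, 0.3), v + rng.gauss(0, 0.1))
                  for x, v in particles]

      rng = random.Random(0)
      particles = [(rng.gauss(0, 1.0), 1.0) for _ in range(500)]
      for z in [1.1, 2.0, 2.8, 4.2]:                 # noisy position measurements
          particles = predict(particles, rng=rng)
          weights = [math.exp(-0.5 * ((x - z) / 0.8) ** 2) for x, _ in particles]
          particles = rng.choices(particles, weights=weights, k=len(particles))

      # The one-step-ahead position decides which sleeping nodes to awaken.
      x_next = sum(x + v for x, v in particles) / len(particles)
      print(f"awaken nodes near predicted position x = {x_next:.2f}")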

  20. Capacity planning for waste management systems: an interval fuzzy robust dynamic programming approach.

    PubMed

    Nie, Xianghui; Huang, Guo H; Li, Yongping

    2009-11-01

    This study integrates the concepts of interval numbers and fuzzy sets into optimization analysis by dynamic programming as a means of accounting for system uncertainty. The developed interval fuzzy robust dynamic programming (IFRDP) model improves upon previous interval dynamic programming methods. It allows highly uncertain information to be effectively communicated into the optimization process through introducing the concept of fuzzy boundary interval and providing an interval-parameter fuzzy robust programming method for an embedded linear programming problem. Consequently, robustness of the optimization process and solution can be enhanced. The modeling approach is applied to a hypothetical problem for the planning of waste-flow allocation and treatment/disposal facility expansion within a municipal solid waste (MSW) management system. Interval solutions for capacity expansion of waste management facilities and relevant waste-flow allocation are generated and interpreted to provide useful decision alternatives. The results indicate that robust and useful solutions can be obtained, and the proposed IFRDP approach is applicable to practical problems that are associated with highly complex and uncertain information.

  1. Information-based management mode based on value network analysis for livestock enterprises

    NASA Astrophysics Data System (ADS)

    Liu, Haoqi; Lee, Changhoon; Han, Mingming; Su, Zhongbin; Padigala, Varshinee Anu; Shen, Weizheng

    2018-01-01

    With the development of computer and IT technologies, enterprise management has gradually become information-based. However, due to poor technical competence and non-uniform management, most breeding enterprises lack organisation in data collection and management, and low efficiency results in increasing production costs. This paper adopts the Struts2 framework to construct an information-based management system for standardised and normalised management of the production process in beef cattle breeding enterprises. We present a radio-frequency identification (RFID) system that addresses multiple-tag anti-collision via a dynamic grouping ALOHA algorithm. The algorithm builds on the existing ALOHA algorithm with an improved dynamic grouping scheme and is characterised by a high throughput rate, reaching a throughput 42% higher than that of the general ALOHA algorithm. As the number of tags changes, the system throughput remains relatively stable.
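
    For context, the sketch below implements dynamic framed-slotted ALOHA, a standard relative of the dynamic grouping algorithm described above (the paper's specific grouping rule is not reproduced). Each tag answers in one random slot per frame, and the reader resizes the next frame from a Schoute-style estimate of the tags left unresolved, which keeps per-frame throughput near the ALOHA optimum.

      import random

      def inventory(n_tags, frame=16, rng=random.Random(0)):
          """Count reader frames until every tag is identified."""
          rounds = 0
          while n_tags > 0:
              slots = [0] * frame
              for _ in range(n_tags):
                  slots[rng.randrange(frame)] += 1       # each tag picks a random slot
              singles = sum(1 for s in slots if s == 1)  # slots with exactly one reply
              collisions = sum(1 for s in slots if s > 1)
              n_tags -= singles                          # singleton slots are read out
              # Schoute's estimate: about 2.39 unresolved tags per colliding slot.
              frame = max(4, int(2.39 * collisions))
              rounds += 1
          return rounds

      print(f"200 tags identified after {inventory(200)} frames")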

  2. The Seasonal Dynamics of Artificial Nest Predation Rates along Edges in a Mosaic Managed Reedbed

    PubMed Central

    Malzer, Iain; Helm, Barbara

    2015-01-01

    Boundaries between different habitats can be responsible for changes in species interactions, including modified rates of encounter between predators and prey. Such ‘edge effects’ have been reported in nesting birds, where nest predation rates can be increased at habitat edges. The literature concerning edge effects on nest predation rates reveals a wide variation in results, even within single habitats, suggesting edge effects are not fixed, but dynamic throughout space and time. This study demonstrates the importance of considering dynamic mechanisms underlying edge effects and their relevance when undertaking habitat management. In reedbed habitats, management in the form of mosaic winter reed cutting can create extensive edges which change rapidly with reed regrowth during spring. We investigate the seasonal dynamics of reedbed edges using an artificial nest experiment based on the breeding biology of a reedbed specialist. We first demonstrate that nest predation decreases with increasing distance from the edge of cut reed blocks, suggesting edge effects have a pivotal role in this system. Using repeats throughout the breeding season we then confirm that nest predation rates are temporally dynamic and decline with the regrowth of reed. However, effects of edges on nest predation were consistent throughout the season. These results are of practical importance when considering appropriate habitat management, suggesting that reed cutting may heighten nest predation, especially before new growth matures. They also contribute directly to an overall understanding of the dynamic processes underlying edge effects and their potential role as drivers of time-dependent habitat use. PMID:26448338

  3. The effects of harvest on waterfowl populations

    USGS Publications Warehouse

    Cooch, Evan G.; Guillemain, Matthieu; Boomer, G Scott; Lebreton, Jean-Dominique; Nichols, James D.

    2014-01-01

    Overall, there is substantial uncertainty about system dynamics, about the impacts of potential management and conservation decisions on those dynamics, and about how to optimise management decisions in the presence of such uncertainties. The relevant relationships are unlikely to be stationary over space or time, and selective harvest of some individuals can potentially alter life-history allocation of resources over time; both considerations will potentially influence optimal harvest strategies. These sources of variation and uncertainty argue for the use of adaptive approaches to waterfowl harvest management.

  4. Multiagent Systems Based Modeling and Implementation of Dynamic Energy Management of Smart Microgrid Using MACSimJX.

    PubMed

    Raju, Leo; Milton, R S; Mahadevan, Senthilkumaran

    The objective of this paper is the implementation of a multiagent system (MAS) for advanced distributed energy management and demand side management of a solar microgrid. Initially, the Java Agent Development Framework (JADE) is used to implement MAS based dynamic energy management of the solar microgrid. Because MATLAB is unstable when dealing with multithreaded environments, the MAS operating in JADE is linked with MATLAB using a middleware called Multiagent Control Using Simulink with Jade Extension (MACSimJX). MACSimJX allows the solar microgrid components designed with MATLAB to be controlled by the corresponding agents of the MAS. The microgrid environment variables are captured through sensors and given to the agents through MATLAB/Simulink; after the agent operations in JADE, the results are given to the actuators through MATLAB for the implementation of dynamic operation in the solar microgrid. The MAS operating in JADE maximizes the operational efficiency of the solar microgrid through a decentralized approach and increases runtime efficiency. Autonomous demand side management is implemented to optimize the power exchange between the main grid and the microgrid under the intermittency of solar power, randomness of load, and variation of noncritical load and grid price. These dynamics are considered at every time step, and a complex environment simulation is designed to emulate the distributed microgrid operations and evaluate the impact of agent operations.

  5. Multiagent Systems Based Modeling and Implementation of Dynamic Energy Management of Smart Microgrid Using MACSimJX

    PubMed Central

    Raju, Leo; Milton, R. S.; Mahadevan, Senthilkumaran

    2016-01-01

    The objective of this paper is the implementation of a multiagent system (MAS) for advanced distributed energy management and demand side management of a solar microgrid. Initially, the Java Agent Development Framework (JADE) is used to implement MAS based dynamic energy management of the solar microgrid. Because MATLAB is unstable when dealing with multithreaded environments, the MAS operating in JADE is linked with MATLAB using a middleware called Multiagent Control Using Simulink with Jade Extension (MACSimJX). MACSimJX allows the solar microgrid components designed with MATLAB to be controlled by the corresponding agents of the MAS. The microgrid environment variables are captured through sensors and given to the agents through MATLAB/Simulink; after the agent operations in JADE, the results are given to the actuators through MATLAB for the implementation of dynamic operation in the solar microgrid. The MAS operating in JADE maximizes the operational efficiency of the solar microgrid through a decentralized approach and increases runtime efficiency. Autonomous demand side management is implemented to optimize the power exchange between the main grid and the microgrid under the intermittency of solar power, randomness of load, and variation of noncritical load and grid price. These dynamics are considered at every time step, and a complex environment simulation is designed to emulate the distributed microgrid operations and evaluate the impact of agent operations. PMID:27127802

  6. Disturbance processes and ecosystem management

    Treesearch

    Robert D. Averill; Louise Larson; Jim Saveland; Philip Wargo; Jerry Williams; Melvin Bellinger

    1994-01-01

    This paper is intended to broaden awareness and help develop consensus among USDA Forest Service scientists and resource managers about the role and significance of disturbance in ecosystem dynamics and, hence, resource management. To have an effective ecosystem management policy, resource managers and the public must understand the nature of ecological resiliency and...

  7. Dynamic and accretive composition of patient engagement instruments for personalized plan generation.

    PubMed

    Hsueh, Pei-Yun S; Zhu, Xinxin; Deng, Vincent; Ramakrishnan, Sreeram; Ball, Marion

    2014-01-01

    Patient engagement is important for helping patients become more informed and active in managing their health. Effective patient engagement demands short yet valid instruments for measuring self-efficacy in various care dimensions. However, static instruments are often too lengthy to be effective for assessment purposes. Furthermore, they can neither account for changes in measurements over time nor differentiate the care dimensions that are most critical to particular sub-populations. To remedy these disadvantages, we devise a dynamic instrument composition approach that models the measurement of patient self-efficacy over time and iteratively selects critical care dimensions and appropriate assessment questions based on dynamic user categorization. The dynamically composed instruments are expected to guide patients through self-management reinforcement cycles within or across care dimensions, while being tightly integrated into clinical workflow and standard care processes.

  8. CARES/PC - CERAMICS ANALYSIS AND RELIABILITY EVALUATION OF STRUCTURES

    NASA Technical Reports Server (NTRS)

    Szatmary, S. A.

    1994-01-01

    The beneficial properties of structural ceramics include their high-temperature strength, light weight, hardness, and corrosion and oxidation resistance. For advanced heat engines, ceramics have demonstrated functional abilities at temperatures well beyond the operational limits of metals. This is offset by the fact that ceramic materials tend to be brittle. When a load is applied, their lack of significant plastic deformation causes the material to crack at microscopic flaws, destroying the component. CARES/PC performs statistical analysis of data obtained from the fracture of simple, uniaxial tensile or flexural specimens and estimates the Weibull and Batdorf material parameters from this data. CARES/PC is a subset of the program CARES (COSMIC program number LEW-15168) which calculates the fast-fracture reliability or failure probability of ceramic components utilizing the Batdorf and Weibull models to describe the effects of multi-axial stress states on material strength. CARES additionally requires that the ceramic structure be modeled by a finite element program such as MSC/NASTRAN or ANSYS. The more limited CARES/PC does not perform fast-fracture reliability estimation of components. CARES/PC estimates ceramic material properties from uniaxial tensile or from three- and four-point bend bar data. In general, the parameters are obtained from the fracture stresses of many specimens (30 or more are recommended) whose geometry and loading configurations are held constant. Parameter estimation can be performed for single or multiple failure modes by using the least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests measure the accuracy of the hypothesis that the fracture data comes from a population with a distribution specified by the estimated Weibull parameters. Ninety-percent confidence intervals on the Weibull parameters and the unbiased value of the shape parameter for complete samples are provided when the maximum likelihood technique is used. CARES/PC is written and compiled with the Microsoft FORTRAN v5.0 compiler using the VAX FORTRAN extensions and dynamic array allocation supported by this compiler for the IBM/MS-DOS or OS/2 operating systems. The dynamic array allocation routines allow the user to match the number of fracture sets and test specimens to the memory available. Machine requirements include IBM PC compatibles with optional math coprocessor. Program output is designed to fit 80-column format printers. Executables for both DOS and OS/2 are provided. CARES/PC is distributed on one 5.25 inch 360K MS-DOS format diskette in compressed format. The expansion tool PKUNZIP.EXE is supplied on the diskette. CARES/PC was developed in 1990. IBM PC and OS/2 are trademarks of International Business Machines. MS-DOS and MS OS/2 are trademarks of Microsoft Corporation. VAX is a trademark of Digital Equipment Corporation.
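
    The parameter-estimation step CARES/PC performs can be approximated with modern tools. The Python sketch below fits a two-parameter Weibull distribution to synthetic fracture stresses by maximum likelihood using SciPy; it stands in for, and is not, the CARES/PC implementation, and the "true" parameters are invented.

      from scipy.stats import weibull_min

      # Draw 30 synthetic fracture stresses from a known two-parameter Weibull
      # (30 or more specimens is the recommendation quoted above).
      shape_true, scale_true = 10.0, 400.0   # Weibull modulus, characteristic strength (MPa)
      stresses = weibull_min.rvs(shape_true, scale=scale_true, size=30, random_state=0)

      # Maximum-likelihood fit with the location parameter fixed at zero,
      # as in the usual two-parameter strength model.
      shape_hat, _, scale_hat = weibull_min.fit(stresses, floc=0)
      print(f"estimated Weibull modulus m = {shape_hat:.2f}, "
            f"characteristic strength = {scale_hat:.1f} MPa")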

  9. [Application of an information management system for medical equipment].

    PubMed

    Hang, Jianjin; Zhang, Chaoqun; Wu, Xiang-Yang

    2011-05-01

    Based on workflow practice, an information management system for medical equipment was developed, and its functions, such as data gathering, browsing, querying and statistical reporting, are introduced. With dynamic and complete case management of medical equipment, the system improved the management of medical equipment.

  10. Short-term effects of reduced-impact logging on Copaifera spp. (Fabaceae) regeneration in eastern Amazon

    Treesearch

    Carine Klauberg; Edson Vidal; Carlos Alberto Silva; Andrew Thomas Hudak; Manuela Oliveira; Pedro Higuchi

    2017-01-01

    Timber management directly influences the population dynamics of tree species such as Copaifera spp. (copaíba), which provide an oil-resin of ecological and economic importance. The aim of this study was to evaluate the structure and population dynamics of Copaifera in unmanaged stands and in stands managed by reduced-impact logging (RIL) in the eastern Amazon, in Pará state, Brazil....

  11. Production dynamics of intensively managed loblolly pine stands in the southern United States: a synthesis of seven long-term experiments

    Treesearch

    Eric J. Jokela; Philip M. Dougherty; Timothy A. Martin

    2004-01-01

    Results from seven long-term experiments in the southern US were summarized to understand the production dynamics of intensively managed loblolly pine plantations. Replicated studies spanning a wide range of soil and climatic conditions were established (North Carolina-NC; Georgia-GA, three sites; Florida-FL; Louisiana-LA; Oklahoma-OK). All experiments received some...

  12. An Approximate Dynamic Programming Model for Optimal MEDEVAC Dispatching

    DTIC Science & Technology

    2015-03-26

    over the myopic policy. This indicates the ADP policy is efficiently managing resources by not immediately sending the nearest available MEDEVAC... DISPATCHING. Thesis presented to the Faculty, Department of Operational Sciences, Graduate School of Engineering and Management, Air Force Institute of Technology... medical evacuation (MEDEVAC) dispatch policies. To solve the MDP, we apply an approximate dynamic programming (ADP) technique. The problem of deciding

  13. Multi-period response management to contaminated water distribution networks: dynamic programming versus genetic algorithms

    NASA Astrophysics Data System (ADS)

    Bashi-Azghadi, Seyyed Nasser; Afshar, Abbas; Afshar, Mohammad Hadi

    2018-03-01

    Previous studies on consequence management assume that the selected response action, including valve closure and/or hydrant opening, remains unchanged during the entire management period. This study presents a new embedded simulation-optimization methodology for deriving time-varying operational response actions in which the network topology may change from one stage to another. Dynamic programming (DP) and a genetic algorithm (GA) are used to minimize the selected objective functions. Two networks, one small and one large, are used to illustrate the performance of the proposed modelling schemes when a time-dependent consequence management strategy is implemented. The results show that for a small number of decision variables, even in large-scale networks, DP is superior in terms of accuracy and computer runtime. However, as the number of potential actions grows, DP loses its merit over the GA approach. This study clearly demonstrates the advantage of the proposed dynamic operation strategy over the commonly used static strategy.

  14. Nitrogen dynamics in managed boreal forests: Recent advances and future research directions.

    PubMed

    Sponseller, Ryan A; Gundale, Michael J; Futter, Martyn; Ring, Eva; Nordin, Annika; Näsholm, Torgny; Laudon, Hjalmar

    2016-02-01

    Nitrogen (N) availability plays multiple roles in the boreal landscape, as a limiting nutrient to forest growth, determinant of terrestrial biodiversity, and agent of eutrophication in aquatic ecosystems. We review existing research on forest N dynamics in northern landscapes and address the effects of management and environmental change on internal cycling and export. Current research foci include resolving the nutritional importance of different N forms to trees and establishing how tree-mycorrhizal relationships influence N limitation. In addition, understanding how forest responses to external N inputs are mediated by above- and belowground ecosystem compartments remains an important challenge. Finally, forestry generates a mosaic of successional patches in managed forest landscapes, with differing levels of N input, biological demand, and hydrological loss. The balance among these processes influences the temporal patterns of stream water chemistry and the long-term viability of forest growth. Ultimately, managing forests to keep pace with increasing demands for biomass production, while minimizing environmental degradation, will require multi-scale and interdisciplinary perspectives on landscape N dynamics.

  15. Dynamics of Driver Distraction: The process of engaging and disengaging

    PubMed Central

    Lee, John D.

    2014-01-01

    Driver distraction research has a long history, spanning nearly 50 years, but it has intensified over the last decade. The dominant paradigm guiding this research defines distraction in terms of excessive workload and limited attentional resources. This approach largely ignores how drivers come to engage in these tasks and under what conditions they engage in and disengage from driving: the dynamics of distraction. The dynamics of distraction identifies breakdowns of interruption management as an important contributor to distraction, leading to a description of distraction in terms of failures of task timing, switching, and prioritization. The dynamics of distraction also identifies disengagement from driving (e.g., mind wandering) as a substantial challenge that secondary tasks might exacerbate or mitigate. Increasing vehicle automation accentuates the need to consider these dynamics of distraction. Automation offers drivers more opportunity to engage in distractions and disengage from driving, and can surprise drivers by unexpectedly requiring them to quickly re-engage in driving, placing greater importance on interruption management expertise. This review describes distraction in terms of breakdowns in interruption management and problems of engagement, and summarizes how contingency, conditioning, and consequence traps lead to problems of engaging and disengaging in driving and distractions. PMID:24776224

  16. Designing an architectural style for dynamic medical Cross-Organizational Workflow management system: an approach based on agents and web services.

    PubMed

    Bouzguenda, Lotfi; Turki, Manel

    2014-04-01

    This paper shows how the combined use of agent and web services technologies can help to design an architectural style for a dynamic medical Cross-Organizational Workflow (COW) management system. Medical COW aims at supporting the collaboration between several autonomous and possibly heterogeneous medical processes distributed over different organizations (hospitals, clinics or laboratories). Dynamic medical COW refers to occasional cooperation between these health organizations, free of structural constraints, where the medical partners involved and their number are not pre-defined. More precisely, this paper proposes a new architectural style based on agent and web services technologies to deal with two key coordination issues of dynamic COW: finding medical partners and negotiating between them. It also shows how the proposed architecture for a dynamic medical COW management system can connect to a multi-agent system coupling a Clinical Decision Support System (CDSS) with Computerized Prescriber Order Entry (CPOE). The idea is to assist health professionals such as doctors, nurses and pharmacists with decision-making tasks, such as determining diagnoses or analyzing patient data, without stopping their clinical processes, so that they can act coherently and give care to the patient.

  17. 4D Dynamic Required Navigation Performance Final Report

    NASA Technical Reports Server (NTRS)

    Finkelsztein, Daniel M.; Sturdy, James L.; Alaverdi, Omeed; Hochwarth, Joachim K.

    2011-01-01

    New advanced four dimensional trajectory (4DT) procedures under consideration for the Next Generation Air Transportation System (NextGen) require an aircraft to precisely navigate relative to a moving reference such as another aircraft. Examples are Self-Separation for enroute operations and Interval Management for in-trail and merging operations. The current construct of Required Navigation Performance (RNP), defined for fixed-reference-frame navigation, is not sufficiently specified to be applicable to defining performance levels of such air-to-air procedures. An extension of RNP to air-to-air navigation would enable these advanced procedures to be implemented with a specified level of performance. The objective of this research effort was to propose new 4D Dynamic RNP constructs that account for the dynamic spatial and temporal nature of Interval Management and Self-Separation, develop mathematical models of the Dynamic RNP constructs, "Required Self-Separation Performance" and "Required Interval Management Performance," and to analyze the performance characteristics of these air-to-air procedures using the newly developed models. This final report summarizes the activities led by Raytheon, in collaboration with GE Aviation and SAIC, and presents the results from this research effort to expand the RNP concept to a dynamic 4D frame of reference.

  18. SPATIAL SCALE OF AUTOCORRELATION IN WISCONSIN FROG AND TOAD SURVEY DATA

    EPA Science Inventory

    The degree to which local population dynamics are correlated with nearby sites has important implications for metapopulation dynamics and landscape management. Spatially extensive monitoring data can be used to evaluate large-scale population dynamic processes. Our goals in this ...

  19. Complexities of Organization Dynamics and Development: Leaders and Managers

    ERIC Educational Resources Information Center

    Nderu-Boddington, Eulalee

    2008-01-01

    This article shows the theoretical framework for understanding organizational dynamics and development - the change theory and subordinate relationships within contemporary organizations. The emphasis is on power strategies and the relationship to organizational dynamics and development. The integrative process broadens the understanding of…

  20. [Development of Monitoring System for Infant Incubator Based on IOT Technology].

    PubMed

    Wang, Wenfeng; Peng, Dunlu; Gu, Nan

    2017-05-30

    The IoT (Internet of Things) is a relatively new technology that is increasingly integrated into our lives. In this paper we take the infant incubator as an example to introduce the application of IoT technology for reducing the risks associated with medical device use, and for improving management quality and efficiency through dynamic management. We put forward a method for networking medical equipment. Combining IoT and sensor technology, we identify the actual needs in the management and use of infant incubators. For the dynamic management of medical equipment, we use sensors to monitor and control risk points. The system meets the needs of the hospital and patients in many areas.

  1. Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants. Report 2. First Year Poststocking Results. Volume IV. Nitrogen and Phosphorus Dynamics of the Lake Conway Ecosystem: Loading Budgets and a Dynamic Hydrologic Phosphorus Model.

    DTIC Science & Technology

    1982-08-01

    AD-AIA 700 FLORIDA UNIV GAINESVILLE DEPT OF ENVIRONMENTAL ENGIN -ETC F/G 6/6 LARGE-SCALE OPERATIONS MANAGEMENT TEST OF USE OF THE WHITE AMUR--ENL...Conway ecosystem and is part of the Large-Scale Operations Management Test (LSOMT) of the Aquatic Plant Control Research Program (APCRP) at the WES...should be cited as follows: Blancher, E. C., II, and Fellows, C. R. 1982. "Large-Scale Operations Management Test of Use of the White Amur for Control

  2. Developing Knowledge and Value in Management Consulting. Research in Management Consulting.

    ERIC Educational Resources Information Center

    Buono, Anthony F., Ed.

    This document contains 11 papers that explore knowledge and value development in the field of management consulting, with particular emphasis on trends and techniques in the practice of management consulting and the current theory and dynamics of management consulting. The following papers are included: "Introduction" (Anthony F. Buono);…

  3. From terrestrial to aquatic fluxes: Integrating stream dynamics within a dynamic global vegetation modeling framework

    NASA Astrophysics Data System (ADS)

    Hoy, Jerad; Poulter, Benjamin; Emmett, Kristen; Cross, Molly; Al-Chokhachy, Robert; Maneta, Marco

    2016-04-01

    Integrated terrestrial ecosystem models simulate the dynamics and feedbacks between climate, vegetation, disturbance, and hydrology and are used to better understand biogeography and biogeochemical cycles. Extending dynamic vegetation models to the aquatic interface requires coupling surface and sub-surface runoff to catchment routing schemes and has the potential to enhance how researchers and managers investigate how changes in the environment might impact the availability of water resources for human and natural systems. In an effort towards creating such a coupled model, we developed a catchment-based hydrologic routing and stream temperature model to pair with LPJ-GUESS, a dynamic global vegetation model. LPJ-GUESS simulates detailed stand-level vegetation dynamics such as growth, carbon allocation, and mortality, as well as various physical and hydrologic processes such as canopy interception and through-fall, and can be applied at small spatial scales, i.e., 1 km. We demonstrate how the coupled model can be used to investigate the effects of transient vegetation dynamics and CO2 on seasonal and annual stream discharge and temperature regimes. As a direct management application, we extend the modeling framework to predict habitat suitability for fish within the Greater Yellowstone Ecosystem, a 200,000 km2 region that provides critical habitat for a range of aquatic species. The model is used to evaluate, quantitatively, the effects of management practices aimed to enhance hydrologic resilience to climate change, and benefits for water storage and fish habitat in the coming century.

  4. Design and implementation of the flight dynamics system for COMS satellite mission operations

    NASA Astrophysics Data System (ADS)

    Lee, Byoung-Sun; Hwang, Yoola; Kim, Hae-Yeon; Kim, Jaehoon

    2011-04-01

    The first Korean multi-mission geostationary Earth orbit satellite, the Communications, Ocean, and Meteorological Satellite (COMS), was launched by an Ariane 5 launch vehicle on June 26, 2010. The COMS satellite has three payloads: Ka-band communications, the Geostationary Ocean Color Imager, and the Meteorological Imager. Although the COMS spacecraft bus is based on the Astrium Eurostar 3000 series, it has only one solar array, on the south panel, because all of the imaging sensors are located on the north panel. In order to maintain the spacecraft attitude with 5 wheels and 7 thrusters, COMS performs wheel off-loading thruster firing operations twice a day, which affect the satellite orbit. The COMS flight dynamics system provides the general on-station functions such as orbit determination, orbit prediction, event prediction, station-keeping maneuver planning, station-relocation maneuver planning, and fuel accounting. All orbit-related functions in the flight dynamics system consider the orbital perturbations due to wheel off-loading operations. There are some specific flight dynamics functions to operate the spacecraft bus, such as wheel off-loading management, oscillator updating management, and on-station attitude reacquisition management. In this paper, the design and implementation of the COMS flight dynamics system is presented. An object-oriented analysis and design methodology is applied to the flight dynamics system design. The programming language C# within the Microsoft .NET framework is used for the implementation of the COMS flight dynamics system on a Windows-based personal computer.

  5. An Evaluation of Controller and Pilot Performance, Workload and Acceptability under a NextGen Concept for Dynamic Weather Adapted Arrival Routing

    NASA Technical Reports Server (NTRS)

    Johnson, Walter W.; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Battiste, Vernol

    2012-01-01

    In today's terminal operations, controller workload increases and throughput decreases when fixed standard terminal arrival routes (STARs) are impacted by storms. To circumvent this operational constraint, Prete, Krozel, Mitchell, Kim and Zou (2008) proposed to use automation to dynamically adapt arrival and departure routing based on weather predictions. The present study examined this proposal in the context of a NextGen trajectory-based operation concept, focusing on its acceptability and its effect on the controllers' ability to manage traffic flows. Six controllers and twelve transport pilots participated in a human-in-the-loop simulation of arrival operations into Louisville International Airport with interval management requirements. Three types of routing structures were used: Static STARs (similar to current routing, which require the trajectories of individual aircraft to be modified to avoid the weather), Dynamic routing (automated adaptive routing around weather), and Dynamic Adjusted routing (automated adaptive routing around weather with aircraft entry time adjusted to account for differences in route length). Spacing responsibility, whether responsibility for interval management resided with the controllers (as today) or with the pilots (who used a flight-deck-based automated spacing algorithm), was also manipulated. Dynamic routing as a whole was rated superior to static routing, especially by pilots, both in terms of workload reduction and flight path safety. A downside of using dynamic routing was that the paths flown in the dynamic conditions tended to be somewhat longer than the paths flown in the static condition.

  6. Freshwater for resilience: a shift in thinking.

    PubMed Central

    Folke, Carl

    2003-01-01

    Humanity shapes freshwater flows and biosphere dynamics from a local to a global scale. Successful management of target resources in the short term tends to alienate the social and economic development process from its ultimate dependence on the life-supporting environment. Freshwater becomes transformed into a resource for optimal management in development, neglecting the multiple functions of freshwater in dynamic landscapes and its fundamental role as the bloodstream of the biosphere. The current tension of these differences in worldview is exemplified through the recent development of modern aquaculture contrasted with examples of catchment-based stewardship of freshwater flows in dynamic landscapes. In particular, the social and institutional dimension of catchment management is highlighted and features of social-ecological systems for resilience building are presented. It is concluded that this broader view of freshwater provides the foundation for hydrosolidarity. PMID:14728796

  7. From dynamic ocean management to climate-ready management: a case study using blue whales in the northeast Pacific.

    NASA Astrophysics Data System (ADS)

    Hazen, E. L.

    2016-02-01

    Highly migratory species regularly traverse human-imposed boundaries including exclusive economic zones and marine protected areas, thus are difficult to manage using traditional spatial approaches. Blue whales (Balaenoptera musculus) are seasonal visitors to the California Current System that target a single prey resource, krill (Euphausia pacifica, Thysanoessa spinifera), and migrate large distances to find and exploit ephemeral prey patches. Successful management of blue whales requires improved understanding of how fine-scale foraging ecology translates to population abundances. Specifically, sub-lethal factors such as anthropogenic noise and climate change, and lethal factors such as ship strikes may be limiting recovery and can be difficult to account for in current management strategies. Here we use an extensive dataset of fine-scale accelerometers (55) and broad-scale satellite tags (104) deployed on Northeast Pacific blue whales to examine the energetics of foraging, overlap with human risk, and projections of future habitat with climate change. We quantify the importance of dense prey patches (> 100 krill per cubic meter) for blue whale energetics and fitness. Distribution models can be used in concert with industry and regional offices to produce dynamic rules to reduce vessel interactions. We propose telemetry data are ripe for use in establishing dynamic management approaches that account for daily to seasonal management areas to minimize anthropogenic risks, and are also adaptable to long-term climate-driven changes in habitat.

  8. From dynamic ocean management to climate-ready management: a case study using blue whales in the northeast Pacific.

    NASA Astrophysics Data System (ADS)

    Hazen, E. L.

    2016-12-01

    Highly migratory species regularly traverse human-imposed boundaries including exclusive economic zones and marine protected areas, thus are difficult to manage using traditional spatial approaches. Blue whales (Balaenoptera musculus) are seasonal visitors to the California Current System that target a single prey resource, krill (Euphausia pacifica, Thysanoessa spinifera), and migrate large distances to find and exploit ephemeral prey patches. Successful management of blue whales requires improved understanding of how fine-scale foraging ecology translates to population abundances. Specifically, sub-lethal factors such as anthropogenic noise and climate change, and lethal factors such as ship strikes may be limiting recovery and can be difficult to account for in current management strategies. Here we use an extensive dataset of fine-scale accelerometers (55) and broad-scale satellite tags (104) deployed on Northeast Pacific blue whales to examine the energetics of foraging, overlap with human risk, and projections of future habitat with climate change. We quantify the importance of dense prey patches (> 100 krill per cubic meter) for blue whale energetics and fitness. Distribution models can be used in concert with industry and regional offices to produce dynamic rules to reduce vessel interactions. We propose telemetry data are ripe for use in establishing dynamic management approaches that account for daily to seasonal management areas to minimize anthropogenic risks, and are also adaptable to long-term climate-driven changes in habitat.

  9. Entropy for the Complexity of Physiological Signal Dynamics.

    PubMed

    Zhang, Xiaohua Douglas

    2017-01-01

    Recently, the rapid development of large data storage technologies, mobile network technology, and portable medical devices has made it possible to measure, record, store, and analyze biological dynamics. Portable noninvasive medical devices are crucial to capture individual characteristics of biological dynamics. Wearable noninvasive medical devices and the analysis/management of the related digital medical data will revolutionize the management and treatment of diseases, subsequently resulting in the establishment of a new healthcare system. One of the key features that can be extracted from the data obtained by wearable noninvasive medical devices is the complexity of physiological signals, which can be represented by the entropy of the biological dynamics contained in the physiological signals measured by these continuous monitoring medical devices. Thus, in this chapter I present the major concepts of entropy that are commonly used to measure the complexity of biological dynamics. The concepts include Shannon entropy, Kolmogorov entropy, Rényi entropy, approximate entropy, sample entropy, and multiscale entropy. I also demonstrate an example of using entropy for the complexity of glucose dynamics.
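
    Of the entropy measures listed, sample entropy is the one most commonly applied to physiological series. A minimal NumPy sketch of a common simplified implementation is given below (the standard definition, not code from the chapter): B counts matching template pairs of length m under a Chebyshev-distance tolerance, A counts the same for length m+1, with self-matches excluded.

        import numpy as np

        def sample_entropy(x, m=2, r=0.2):
            """SampEn = -ln(A/B) with tolerance r * std(x), Chebyshev distance."""
            x = np.asarray(x, dtype=float)
            tol = r * np.std(x)

            def matches(length):
                tmpl = np.array([x[i:i + length]
                                 for i in range(len(x) - length + 1)])
                count = 0
                for i in range(len(tmpl) - 1):
                    # Distance from template i to every later template.
                    d = np.max(np.abs(tmpl[i + 1:] - tmpl[i]), axis=1)
                    count += int(np.sum(d <= tol))
                return count

            B, A = matches(m), matches(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else np.inf

        rng = np.random.default_rng(1)
        print(sample_entropy(rng.standard_normal(300)))  # white noise: high SampEn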

  10. Disturbance dynamics and ecosystem-based forest management

    Treesearch

    Kalev Jogiste; W. Keith Moser; Malle Mandre

    2005-01-01

    Ecosystem-based management is intended to balance ecological, social and economic values of sustainable resource management. The desired future state of forest ecosystem is usually defined through productivity, biodiversity, stability or other terms. However, ecosystem-based management may produce an unbalanced emphasis on different components. Although ecosystem-based...

  11. Adaptive Workflows for Diabetes Management: Self-Management Assistant and Remote Treatment for Diabetes.

    PubMed

    Contreras, Iván; Kiefer, Stephan; Vehi, Josep

    2017-01-01

    Diabetes self-management is a crucial element for all people with diabetes and those at risk for developing the disease. Diabetic patients should be empowered to increase their self-management skills in order to prevent or delay the complications of diabetes. This work presents the proposal and first development stages of a smartphone application focused on the empowerment of patients with diabetes. The concept of this interventional tool is based on the personalization of the user experience from an adaptive and dynamic perspective. The segmentation of the population and the dynamic treatment of user profiles across the different experience levels is the main challenge of the implementation. The self-management assistant and remote treatment for diabetes aims to develop a platform that integrates a series of innovative models and tools, rigorously tested and supported by the research literature in diabetes, together with a proven engine for managing healthcare workflows.

  12. A Distributed Dynamic Programming-Based Solution for Load Management in Smart Grids

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Xu, Yinliang; Li, Sisi; Zhou, MengChu; Liu, Wenxin; Xu, Ying

    2018-03-01

    Load management is being recognized as an important option for active user participation in the energy market. Traditional load management methods usually require a centralized, powerful control center and a two-way communication network between the system operators and energy end-users. The increasing user participation in smart grids may limit their applications. In this paper, a distributed solution for load management in emerging smart grids is proposed. The load management problem is formulated as a constrained optimization problem aiming at maximizing the overall utility of users while meeting the requirement for load reduction requested by the system operator, and is solved by using a distributed dynamic programming algorithm. The algorithm is implemented via a distributed framework and thus delivers a highly desirable distributed solution. It avoids the need for a centralized coordinator or control center, and can achieve satisfactory outcomes for load management. Simulation results with various test systems demonstrate its effectiveness.
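
    As a concrete, much-simplified, and centralized illustration of the optimization being solved: assume each user offers a discrete menu of load-reduction levels with associated utilities, and the operator needs a total reduction R. The dynamic program below maximizes total utility subject to meeting R. The paper's contribution is to solve this kind of problem distributively, which this sketch does not attempt; all numbers are hypothetical.

        # Each DP stage is one user; the state is the total reduction achieved,
        # capped at the requested amount (extra reduction is acceptable).
        def allocate(options, required):
            NEG = float("-inf")
            best = [NEG] * (required + 1)   # best[r] = max utility at reduction r
            best[0] = 0.0
            for choices in options:          # one stage per user
                nxt = [NEG] * (required + 1)
                for met, util in enumerate(best):
                    if util == NEG:
                        continue
                    for red, u in choices:
                        j = min(required, met + red)
                        nxt[j] = max(nxt[j], util + u)
                best = nxt
            return best[required]

        # Each user: (reduction_kW, resulting_utility) alternatives (hypothetical).
        users = [
            [(0, 5.0), (2, 4.0), (4, 2.5)],
            [(0, 6.0), (3, 4.5)],
            [(0, 4.0), (2, 3.5), (5, 1.0)],
        ]
        print(allocate(users, required=6))   # max utility meeting a 6 kW reduction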

  13. Intelligent data management for real-time spacecraft monitoring

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M.; Gasser, Les; Abramson, Bruce

    1992-01-01

    Real-time AI systems have begun to address the challenge of restructuring problem solving to meet real-time constraints by making key trade-offs that pursue less-than-optimal strategies with minimal impact on system goals. Several approaches for adapting to dynamic changes in system operating conditions are known. However, simultaneously adapting system decision criteria in a principled way has been difficult. Towards this end, a general technique for dynamically making such trade-offs using a combination of decision theory and domain knowledge has been developed. Multi-attribute utility theory (MAUT), a decision-theoretic approach for making one-time decisions, is discussed, and dynamic trade-off evaluation is described as a knowledge-based extension of MAUT that is suitable for highly dynamic real-time environments. An example of dynamic trade-off evaluation applied to a specific data management trade-off in a real-world spacecraft monitoring application is provided.
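
    As a reference point, a plain additive MAUT score is a weighted sum of per-attribute utilities; the dynamic trade-off evaluation described above adapts the weights as operating conditions change. The sketch below, with invented attributes, weights, and options (not those of the spacecraft application), shows how re-weighting flips the preferred data-management option under overload.

        # Additive multi-attribute utility: U(x) = sum_i w_i * u_i(x_i).
        def maut(weights, utilities, outcome):
            return sum(weights[k] * utilities[k](outcome[k]) for k in weights)

        utilities = {
            "timeliness": lambda x: 1.0 - x,   # x = normalized processing delay
            "completeness": lambda x: x,       # x = fraction of telemetry kept
        }
        options = {
            "process_all": {"timeliness": 0.8, "completeness": 1.0},
            "drop_low_priority": {"timeliness": 0.2, "completeness": 0.6},
        }

        def best_option(weights):
            return max(options, key=lambda o: maut(weights, utilities, options[o]))

        print(best_option({"timeliness": 0.3, "completeness": 0.7}))  # nominal load
        print(best_option({"timeliness": 0.8, "completeness": 0.2}))  # overload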

  14. Analysis of structural dynamic data from Skylab. Volume 1: Technical discussion

    NASA Technical Reports Server (NTRS)

    Demchak, L.; Harcrow, H.

    1976-01-01

    A compendium of Skylab structural dynamics analytical and test programs is presented. These programs are assessed to identify lessons learned from the structural dynamic prediction effort and to provide guidelines for future analysts and program managers of complex spacecraft systems. It is a synopsis of the structural dynamic effort performed under the Skylab Integration contract and specifically covers the development, utilization, and correlation of Skylab Dynamic Orbital Models.

  15. Including policy and management in socio-hydrology models: initial conceptualizations

    NASA Astrophysics Data System (ADS)

    Hermans, Leon; Korbee, Dorien

    2017-04-01

    Socio-hydrology studies the interactions in coupled human-water systems. So far, the use of dynamic models that capture the direct feedback between societal and hydrological systems has been dominant. What has not yet been included with any particular emphasis is the policy or management layer, which is a central element in, for instance, integrated water resources management (IWRM) and adaptive delta management (ADM). Studying the direct interactions between human and water systems generates knowledge that eventually helps influence these interactions in ways that may ensure better outcomes, for society and for the health and sustainability of water systems. This influence sometimes occurs through spontaneous emergence, uncoordinated by societal agents such as the private sector, citizens, consumers, and water users. However, the term 'management' in IWRM and ADM also implies an additional coordinated attempt through various public actors. This contribution is a call to include the policy and management dimension more prominently in the research focus of the socio-hydrology field, and it offers first conceptual variables that should be considered in attempts to include this policy or management layer in socio-hydrology models. This is done by drawing on existing frameworks for studying policy processes throughout both planning and implementation phases, such as the advocacy coalition framework, collective learning, and policy arrangements, which all emphasize longer-term dynamics and feedbacks between actor coalitions in strategic planning and implementation processes. A case about longer-term dynamics in the management of the Haringvliet in the Netherlands is used to illustrate the paper.

  16. Engine management during NTRE start up

    NASA Technical Reports Server (NTRS)

    Bulman, Mel; Saltzman, Dave

    1993-01-01

    The topics are presented in viewgraph form and include the following: total engine system management critical to successful nuclear thermal rocket engine (NTRE) start up; NERVA type engine start windows; reactor power control; heterogeneous reactor cooling; propellant feed system dynamics; integrated NTRE start sequence; moderator cooling loop and efficient NTRE starting; analytical simulation and low risk engine development; accurate simulation through dynamic coupling of physical processes; and integrated NTRE and mission performance.

  17. Dynamics of Interorganizational Coordination.

    DTIC Science & Technology

    1984-11-01

    AD-A152 613 DYNAMICS OF INTERORGANIZATIONAL COORDINATION(U) MINNESOTA UNIV MINNEAPOLIS STRATEGIC MANAGEMENT RESEARCH CENTER A H VAN DE VEN ET AL. NOV 84...Andrew H. Van de Ven, University of Minnesota; Gordon Walker, Massachusetts Institute of Technology. THE STRATEGIC...1984. Strategic Management Research Center, University of Minnesota. Forthcoming in Administrative Science Quarterly, December, 1984. We

  18. Modeling water, carbon, and nitrogen dynamics for two drained pine plantations under intensive management practices

    Treesearch

    Shiying Tian; Mohamed A. Youssef; R. Wayne Skaggs; Devendra Amatya; George M. Chescheir

    2012-01-01

    This paper reports the results of a study to test the reliability of the DRAINMOD-FOREST model for predicting water, soil carbon (C) and nitrogen (N) dynamics in intensively managed forests. The study site, two adjacent loblolly pine (Pinus taeda L.) plantations (referred to as D2 and D3), is located in the coastal plain of North Carolina, USA. Controlled drainage (with weir...

  19. Group dynamics challenges: Insights from Biosphere 2 experiments

    NASA Astrophysics Data System (ADS)

    Nelson, Mark; Gray, Kathelin; Allen, John P.

    2015-07-01

    Successfully managing group dynamics of small, physically isolated groups is vital for long duration space exploration/habitation and for terrestrial CELSS (Controlled Environmental Life Support System) facilities with human participants. Biosphere 2 had important differences and shares some key commonalities with both Antarctic and space environments. There were a multitude of stress factors during the first two year closure experiment as well as mitigating factors. A helpful tool used at Biosphere 2 was the work of W.R. Bion who identified two competing modalities of behavior in small groups. Task-oriented groups are governed by conscious acceptance of goals, reality-thinking in relation to time and resources, and intelligent management of challenges. The opposing unconscious mode, the "basic-assumption" ("group animal") group, manifests through Dependency/Kill the Leader, Fight/Flight and Pairing. These unconscious dynamics undermine and can defeat the task group's goal. The biospherians experienced some dynamics seen in other isolated teams: factions developing reflecting personal chemistry and disagreements on overall mission procedures. These conflicts were exacerbated by external power struggles which enlisted support of those inside. Nevertheless, the crew evolved a coherent, creative life style to deal with some of the deprivations of isolation. The experience of the first two year closure of Biosphere 2 vividly illustrates both vicissitudes and management of group dynamics. The crew overrode inevitable frictions to creatively manage both operational and research demands and opportunities of the facility, thus staying 'on task' in Bion's group dynamics terminology. The understanding that Biosphere 2 was their life support system may also have helped the mission to succeed. Insights from the Biosphere 2 experience can help space and remote missions cope successfully with the inherent challenges of small, isolated crews.

  20. Multistate modeling of habitat dynamics: Factors affecting Florida scrub transition probabilities

    USGS Publications Warehouse

    Breininger, D.R.; Nichols, J.D.; Duncan, B.W.; Stolen, Eric D.; Carter, G.M.; Hunt, D.K.; Drese, J.H.

    2010-01-01

    Many ecosystems are influenced by disturbances that create specific successional states and habitat structures that species need to persist. Estimating transition probabilities between habitat states and modeling the factors that influence such transitions have many applications for investigating and managing disturbance-prone ecosystems. We identify the correspondence between multistate capture-recapture models and Markov models of habitat dynamics. We exploit this correspondence by fitting and comparing competing models of different ecological covariates affecting habitat transition probabilities in Florida scrub and flatwoods, a habitat important to many unique plants and animals. We subdivided a large scrub and flatwoods ecosystem along central Florida's Atlantic coast into 10-ha grid cells, which approximated average territory size of the threatened Florida Scrub-Jay (Aphelocoma coerulescens), a management indicator species. We used 1.0-m resolution aerial imagery for 1994, 1999, and 2004 to classify grid cells into four habitat quality states that were directly related to Florida Scrub-Jay source-sink dynamics and management decision making. Results showed that static site features related to fire propagation (vegetation type, edges) and temporally varying disturbances (fires, mechanical cutting) best explained transition probabilities. Results indicated that much of the scrub and flatwoods ecosystem was resistant to moving from a degraded state to a desired state without mechanical cutting, an expensive restoration tool. We used habitat models parameterized with the estimated transition probabilities to investigate the consequences of alternative management scenarios on future habitat dynamics. We recommend this multistate modeling approach as being broadly applicable for studying ecosystem, land cover, or habitat dynamics. The approach provides maximum-likelihood estimates of transition parameters, including precision measures, and can be used to assess evidence among competing ecological models that describe system dynamics. © 2010 by the Ecological Society of America.
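
    The habitat model in this record is a discrete-time Markov chain over habitat states; once the transition probabilities are estimated, scenario projection is a repeated vector-matrix product. The sketch below uses four hypothetical states and invented probabilities, not the paper's estimates.

        import numpy as np

        # Rows: current state; columns: next state. Each row sums to 1.
        P = np.array([
            [0.70, 0.20, 0.08, 0.02],
            [0.10, 0.60, 0.25, 0.05],
            [0.02, 0.15, 0.63, 0.20],
            [0.01, 0.04, 0.25, 0.70],
        ])
        state = np.array([0.25, 0.25, 0.25, 0.25])  # initial mix of grid cells
        for _ in range(2):                          # two 5-year transition steps
            state = state @ P
        print(state)  # expected state distribution after ~10 years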

  1. Incorporating Social System Dynamics into the Food-Energy-Water System Resilience-Sustainability Modeling Process

    NASA Astrophysics Data System (ADS)

    Givens, J.; Padowski, J.; Malek, K.; Guzman, C.; Boll, J.; Adam, J. C.; Witinok-Huber, R.

    2017-12-01

    In the face of climate change and multi-scalar governance objectives, achieving resilience of food-energy-water (FEW) systems requires interdisciplinary approaches. Through coordinated modeling and management efforts, we study "Innovations in the Food-Energy-Water Nexus (INFEWS)" via a case study in the Columbia River Basin. Previous research on FEW system management and resilience includes some attention to social dynamics (e.g., economic, governance); however, more research is needed to better address social science perspectives. Decisions ultimately taken in this river basin would occur among stakeholders encompassing various institutional power structures, including multiple U.S. states, tribal lands, and sovereign nations. The social science lens draws attention to the incompatibility between the engineering definition of resilience (i.e., return to equilibrium or a singular stable state) and the ecological and social system realities, which are more explicit in the ecological interpretation of resilience (i.e., the ability of a system to move into a different, possibly more resilient state). Social science perspectives include but are not limited to differing views on resilience as normative, system persistence versus transformation, and system boundary issues. To expand understanding of resilience and objectives for complex and dynamic systems, concepts related to inequality, heterogeneity, power, agency, trust, values, culture, history, conflict, and system feedbacks must be more tightly integrated into FEW research. We identify gaps in knowledge and data, and the value and complexity of incorporating social components and processes into systems models. We posit that socio-biophysical system resilience modeling would address important complex, dynamic social relationships, including non-linear dynamics of social interactions, to offer an improved understanding of sustainable management in FEW systems. The conceptual modeling presented in our study represents a starting point for a continued research agenda that incorporates social dynamics into FEW system resilience and management.

  2. Group dynamics challenges: Insights from Biosphere 2 experiments.

    PubMed

    Nelson, Mark; Gray, Kathelin; Allen, John P

    2015-07-01

    Successfully managing group dynamics of small, physically isolated groups is vital for long duration space exploration/habitation and for terrestrial CELSS (Controlled Environmental Life Support System) facilities with human participants. Biosphere 2 had important differences and shares some key commonalities with both Antarctic and space environments. There were a multitude of stress factors during the first two year closure experiment as well as mitigating factors. A helpful tool used at Biosphere 2 was the work of W.R. Bion who identified two competing modalities of behavior in small groups. Task-oriented groups are governed by conscious acceptance of goals, reality-thinking in relation to time and resources, and intelligent management of challenges. The opposing unconscious mode, the "basic-assumption" ("group animal") group, manifests through Dependency/Kill the Leader, Fight/Flight and Pairing. These unconscious dynamics undermine and can defeat the task group's goal. The biospherians experienced some dynamics seen in other isolated teams: factions developing reflecting personal chemistry and disagreements on overall mission procedures. These conflicts were exacerbated by external power struggles which enlisted support of those inside. Nevertheless, the crew evolved a coherent, creative life style to deal with some of the deprivations of isolation. The experience of the first two year closure of Biosphere 2 vividly illustrates both vicissitudes and management of group dynamics. The crew overrode inevitable frictions to creatively manage both operational and research demands and opportunities of the facility, thus staying 'on task' in Bion's group dynamics terminology. The understanding that Biosphere 2 was their life support system may also have helped the mission to succeed. Insights from the Biosphere 2 experience can help space and remote missions cope successfully with the inherent challenges of small, isolated crews. Copyright © 2015 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.

  3. Control of water distribution networks with dynamic DMA topology using strictly feasible sequential convex programming

    NASA Astrophysics Data System (ADS)

    Wright, Robert; Abraham, Edo; Parpas, Panos; Stoianov, Ivan

    2015-12-01

    The operation of water distribution networks (WDN) with a dynamic topology is a recently pioneered approach for the advanced management of District Metered Areas (DMAs) that integrates novel developments in hydraulic modeling, monitoring, optimization, and control. A common practice for leakage management is the sectorization of WDNs into small zones, called DMAs, by permanently closing isolation valves. This facilitates water companies to identify bursts and estimate leakage levels by measuring the inlet flow for each DMA. However, by permanently closing valves, a number of problems have been created including reduced resilience to failure and suboptimal pressure management. By introducing a dynamic topology to these zones, these disadvantages can be eliminated while still retaining the DMA structure for leakage monitoring. In this paper, a novel optimization method based on sequential convex programming (SCP) is outlined for the control of a dynamic topology with the objective of reducing average zone pressure (AZP). A key attribute for control optimization is reliable convergence. To achieve this, the SCP method we propose guarantees that each optimization step is strictly feasible, resulting in improved convergence properties. By using a null space algorithm for hydraulic analyses, the computations required are also significantly reduced. The optimized control is actuated on a real WDN operated with a dynamic topology. This unique experimental program incorporates a number of technologies set up with the objective of investigating pioneering developments in WDN management. Preliminary results indicate AZP reductions for a dynamic topology of up to 6.5% over optimally controlled fixed topology DMAs. This article was corrected on 12 JAN 2016. See the end of the full text for details.

  4. Development of dynamic Bayesian models for web application test management

    NASA Astrophysics Data System (ADS)

    Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.

    2018-03-01

    The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool that can be used to model complex stochastic dynamic processes. According to the results of the research, the mathematical models and methods of dynamic Bayesian networks provide high coverage of the stochastic tasks associated with error testing in multiuser software products operated in a dynamically changing environment. A formalized representation of the discrete test process as a dynamic Bayesian model allows us to organize the logical connection between individual test assets across multiple time slices. This approach makes it possible to present testing as a discrete process with defined structural components responsible for the generation of test assets. Dynamic Bayesian network-based models allow us to combine, in one management area, individual units and testing components that have different functionalities and a direct influence on each other in the process of comprehensive testing of various groups of software bugs. The application of the proposed models provides an opportunity to use a consistent approach to formalize test principles and procedures, the methods used to treat situational error signs, and the methods used to produce analytical conclusions based on test results.
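
    The core computation in a dynamic Bayesian network of this kind is forward filtering across time slices: predict the hidden state with the transition model, then condition on the evidence observed in that slice. The two-state sketch below (an HMM, the simplest DBN; all probabilities hypothetical) tracks a latent "defect present" state from a sequence of test outcomes.

        import numpy as np

        T = np.array([[0.9, 0.1],        # P(state_t | state_{t-1})
                      [0.3, 0.7]])
        E = np.array([[0.8, 0.2],        # P(observation | state)
                      [0.4, 0.6]])
        belief = np.array([0.5, 0.5])    # prior over the hidden state

        for obs in [1, 1, 0]:            # observed test outcomes per time slice
            belief = belief @ T          # predict across the slice boundary
            belief = belief * E[:, obs]  # weight by evidence likelihood
            belief = belief / belief.sum()  # renormalize
        print(belief)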

  5. Tether dynamics and control results for tethered satellite system's initial flight

    NASA Astrophysics Data System (ADS)

    Chapel, Jim D.; Flanders, Howard

    The recent Tethered Satellite System-1 (TSS-1) mission has provided a wealth of data concerning the dynamics of tethered systems in space and has demonstrated the effectiveness of operational techniques designed to control these dynamics. In this paper, we review control techniques developed for managing tether dynamics, and discuss the results of using these techniques for the Tethered Satellite System's maiden flight on STS-46. In particular, the flight results of controlling libration dynamics, string dynamics, and slack tether are presented. These results show that tether dynamics can be safely managed. The overall stability of the system was found to be surprisingly good even at relatively short tether lengths. In fact, the system operated in passive mode at a tether length of 256 meters for over 9 hours. Only monitoring of the system was required during this time. Although flight anomalies prevented the planned deployment to 20 km, the extended operations at shorter tether lengths have proven the viability of using tethers in space. These results should prove invaluable in preparing for future missions with tethered objects in space.

  6. Tether dynamics and control results for tethered satellite system's initial flight

    NASA Technical Reports Server (NTRS)

    Chapel, Jim D.; Flanders, Howard

    1993-01-01

    The recent Tethered Satellite System-1 (TSS-1) mission has provided a wealth of data concerning the dynamics of tethered systems in space and has demonstrated the effectiveness of operational techniques designed to control these dynamics. In this paper, we review control techniques developed for managing tether dynamics, and discuss the results of using these techniques for the Tethered Satellite System's maiden flight on STS-46. In particular, the flight results of controlling libration dynamics, string dynamics, and slack tether are presented. These results show that tether dynamics can be safely managed. The overall stability of the system was found to be surprisingly good even at relatively short tether lengths. In fact, the system operated in passive mode at a tether length of 256 meters for over 9 hours. Only monitoring of the system was required during this time. Although flight anomalies prevented the planned deployment to 20 km, the extended operations at shorter tether lengths have proven the viability of using tethers in space. These results should prove invaluable in preparing for future missions with tethered objects in space.

  7. System dynamic modeling on construction waste management in Shenzhen, China.

    PubMed

    Tam, Vivian W Y; Li, Jingru; Cai, Hong

    2014-05-01

    This article examines the complexity of construction waste management in Shenzhen, Mainland China. An in-depth analysis of waste generation, transportation, recycling, landfill and illegal dumping across the inherent management phases is presented. A system dynamics model, implemented in Stella, is developed. The effects of landfill charges and of penalties for illegal dumping are also simulated. The results show that the implementation of a comprehensive policy on both landfill charges and illegal dumping can effectively control illegal dumping behavior and achieve comprehensive construction waste minimization. This article provides important recommendations for effective policy implementation and explores new perspectives for Shenzhen policy makers.
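
    A Stella-style system dynamics model of this kind reduces to stocks updated by policy-dependent flows. The toy sketch below integrates three waste stocks, with the illegal-dumping share assumed to fall as penalties rise; all rates and functional forms are invented for illustration, not taken from the Shenzhen model.

        # Minimal stock-and-flow simulation (Euler integration, hypothetical units).
        dt, years = 0.25, 10.0
        landfilled = recycled = dumped = 0.0
        generation = 100.0                       # kt/year, assumed constant
        landfill_charge, penalty = 50.0, 200.0   # policy levers, assumed values

        t = 0.0
        while t < years:
            # Assumed behavioral responses: penalties suppress illegal dumping,
            # landfill charges push material toward recycling.
            dump_frac = 0.2 * 100.0 / (100.0 + penalty)
            recycle_frac = 0.3 + 0.2 * landfill_charge / (landfill_charge + 100.0)
            landfill_frac = 1.0 - dump_frac - recycle_frac
            dumped += generation * dump_frac * dt
            recycled += generation * recycle_frac * dt
            landfilled += generation * landfill_frac * dt
            t += dt

        print(round(landfilled, 1), round(recycled, 1), round(dumped, 1))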

  8. Some design constraints required for the use of generic software in embedded systems: Packages which manage abstract dynamic structures without the need for garbage collection

    NASA Technical Reports Server (NTRS)

    Johnson, Charles S.

    1986-01-01

    The embedded systems running real-time applications, for which Ada was designed, require their own mechanisms for the management of dynamically allocated storage. There is a need for packages which manage their own internal structures to control their deallocation as well, due to the performance implications of garbage collection by the KAPSE. This places a requirement upon the design of generic packages which manage generically structured private types built up from application-defined input types. These kinds of generic packages should figure greatly in the development of lower-level software such as operating systems, schedulers, controllers, and device drivers, and will manage structures such as queues, stacks, linked lists, files, and binary and multary (hierarchical) trees. These structures must be controlled to prevent the inadvertent de-designation of dynamic elements, which is implicit in the assignment operation. A study was made of the use of limited private types in solving the problems of controlling the accumulation of anonymous, detached objects in running systems, and of the use of deallocator procedures for the run-down of application-defined input types during deallocation operations.

  9. Predictive Habitat Use of California Sea Lions and Its Implications for Fisheries Management

    NASA Astrophysics Data System (ADS)

    Briscoe, D.

    2016-02-01

    Advancements in satellite telemetry and remotely-sensed oceanography have shown that species and the environment they utilize are highly dynamic in space and time. However, biophysical features often overlap with human use. For this reason, spatially-explicit management approaches may only provide a snapshot of protection for a highly mobile species throughout its range. As a migratory species, California sea lions (Zalophus californianus) utilize dynamic oceanographic features that overlap with the California swordfish fishery, and are subject to incidental catch. The development of near-real-time tools can assist management efforts to mitigate human impacts, such as fisheries interactions, on dynamic marine species. Here, we combine near-real-time remotely-sensed satellite oceanography, animal tracking data, and Generalized Additive Mixed Models (GAMMs) to: a) determine suitable habitat for 75 female California sea lions throughout their range, b) forecast when and where these non-target interactions are likely to occur, and c) validate these models against observed data on such interactions. Model results can be used to support resource management measures that are highly responsive to the movement of managed species, ocean users, and underlying ocean features.

  10. Predictive Habitat Use of California Sea Lions and Its Implications for Fisheries Management

    NASA Astrophysics Data System (ADS)

    Briscoe, D.

    2016-12-01

    Advancements in satellite telemetry and remotely-sensed oceanography have shown that species and the environment they utilize are highly dynamic in space and time. However, biophysical features often overlap with human use. For this reason, spatially-explicit management approaches may only provide a snapshot of protection for a highly mobile species throughout its range. As a migratory species, California sea lions (Zalophus californianus) utilize dynamic oceanographic features that overlap with the California swordfish fishery, and are subject to incidental catch. The development of near-real-time tools can assist management efforts to mitigate human impacts, such as fisheries interactions, on dynamic marine species. Here, we combine near-real-time remotely-sensed satellite oceanography, animal tracking data, and Generalized Additive Mixed Models (GAMMs) to: a) determine suitable habitat for 75 female California sea lions throughout their range, b) forecast when and where these non-target interactions are likely to occur, and c) validate these models against observed data on such interactions. Model results can be used to support resource management measures that are highly responsive to the movement of managed species, ocean users, and underlying ocean features.

  11. Physicians' entrepreneurship explained: a case study of intra-organizational dynamics in Dutch hospitals and specialty clinics

    PubMed Central

    2014-01-01

    Background: Challenges brought about by developments such as continuing market reforms and budget reductions have strained the relation between managers and physicians in hospitals. By applying neo-institutional theory, we research how intra-organizational dynamics between physicians and managers induce physicians to become entrepreneurs by starting a specialty clinic. In addition, we determine the nature of this change by analyzing the intra-organizational dynamics in both hospitals and clinics. Methods: For our research, we interviewed a total of fifteen physicians and eight managers in four hospitals and twelve physicians and seven managers in twelve specialty clinics. Results: We found evidence that in becoming entrepreneurs, physicians are influenced by intra-organizational dynamics, including power dependence, interest dissatisfaction, and value commitments, between physicians and managers as well as among physicians' groups. The precise motivation for starting a new clinic can vary depending on the medical or business logic in which the entrepreneurs are embedded, but also on the presence of an entrepreneurial nature or nurture. Finally, we found that the entrepreneurial process of starting a specialty clinic is a process of sedimented change or hybridized professionalism, in which elements of the business logic are added to the existing logic of medical professionalism, leading to a hybrid logic. Conclusions: These findings have implications for policy at both the national and hospital level. Shared ownership and aligned incentives may provide the additional cement in which the developing entrepreneurial values are 'glued' to the central medical logic. PMID:24885912

  12. Physicians' entrepreneurship explained: a case study of intra-organizational dynamics in Dutch hospitals and specialty clinics.

    PubMed

    Koelewijn, Wout T; de Rover, Matthijs; Ehrenhard, Michel L; van Harten, Wim H

    2014-05-19

    Challenges brought about by developments such as continuing market reforms and budget reductions have strained the relation between managers and physicians in hospitals. By applying neo-institutional theory, we research how intra-organizational dynamics between physicians and managers induce physicians to become entrepreneurs by starting a specialty clinic. In addition, we determine the nature of this change by analyzing the intra-organizational dynamics in both hospitals and clinics. For our research, we interviewed a total of fifteen physicians and eight managers in four hospitals and twelve physicians and seven managers in twelve specialty clinics. We found evidence that in becoming entrepreneurs, physicians are influenced by intra-organizational dynamics, including power dependence, interest dissatisfaction, and value commitments, between physicians and managers as well as among physicians' groups. The precise motivation for starting a new clinic can vary depending on the medical or business logic in which the entrepreneurs are embedded, but also on the presence of an entrepreneurial nature or nurture. Finally, we found that the entrepreneurial process of starting a specialty clinic is a process of sedimented change or hybridized professionalism, in which elements of the business logic are added to the existing logic of medical professionalism, leading to a hybrid logic. These findings have implications for policy at both the national and hospital level. Shared ownership and aligned incentives may provide the additional cement in which the developing entrepreneurial values are 'glued' to the central medical logic.

  13. Design an optimum safety policy for personnel safety management - A system dynamic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balaji, P.

    2014-10-06

    Personnel safety management (PSM) ensures that employees' work conditions are healthy and safe through various proactive and reactive approaches. Nowadays it is a complex phenomenon because of the increasingly dynamic nature of organisations, which results in an increase in accidents. An important part of accident prevention is to understand the existing system properly and make safety strategies for that system. System dynamics modelling appears to be an appropriate methodology for exploring and making strategy for PSM. Many system dynamics models of industrial systems have been built entirely for specific host firms. This thesis illustrates an alternative approach. A generic system dynamics model of personnel safety management was developed and tested in a host firm. The model underwent various structural, behavioural and policy tests. The utility and effectiveness of the model were further explored by modelling a safety scenario. In order to create an effective safety policy under resource constraints, DOE (design of experiments) was used. The DOE used classic designs, namely fractional factorials and central composite designs, to fit a second-order regression equation that served as the objective function. That function was optimized under a budget constraint, and the optimum was used for the safety policy that showed the greatest improvement in overall PSM. The outcome of this research indicates that the personnel safety management model has the capability to act as an instruction tool to improve understanding of safety management and also as an aid to policy making.
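
    The DOE step described above amounts to fitting a second-order response surface to designed runs and then optimizing it under the budget constraint. The sketch below does this for two hypothetical safety-investment factors with invented data, using least squares for the quadratic fit and a constrained optimizer; it illustrates the technique, not the thesis' actual design or model.

        import numpy as np
        from scipy.optimize import minimize

        # Invented DOE runs: columns are two factor settings; y is observed
        # safety performance (all numbers hypothetical).
        X = np.array([[0., 0.], [4., 0.], [0., 4.], [4., 4.],
                      [2., 2.], [2., 0.], [0., 2.], [4., 2.], [2., 4.]])
        y = np.array([1.0, 4.8, 4.1, 6.5, 5.9, 4.0, 3.6, 6.2, 6.0])

        def design(X):
            x1, x2 = X[:, 0], X[:, 1]
            return np.column_stack([np.ones(len(X)), x1, x2,
                                    x1**2, x2**2, x1 * x2])

        b, *_ = np.linalg.lstsq(design(X), y, rcond=None)   # quadratic fit

        def neg_perf(x):
            return -design(np.array([x]))[0] @ b            # maximize performance

        budget = 5.0   # total spend across both measures (assumed constraint)
        res = minimize(neg_perf, x0=[1.0, 1.0],
                       bounds=[(0, 4), (0, 4)],
                       constraints=[{"type": "ineq",
                                     "fun": lambda x: budget - x[0] - x[1]}])
        print(res.x, -res.fun)   # optimal allocation and predicted performance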

  14. Simulating forest fuel and fire risk dynamics across landscapes--LANDIS fuel module design

    Treesearch

    Hong S. He; Bo Z. Shang; Thomas R. Crow; Eric J. Gustafson; Stephen R. Shifley

    2004-01-01

    Understanding fuel dynamics over large spatial (10^3-10^6 ha) and temporal scales (10^1-10^3 years) is important in comprehensive wildfire management. We present a modeling approach to simulate fuel and fire risk dynamics as well as impacts of alternative fuel treatments. The...

  15. Disturbance, scale, and boundary in wilderness management

    Treesearch

    Peter S. White; Jonathan Harrod; Joan L. Walker; Anke Jentsch

    2000-01-01

    Natural disturbances are critical to wilderness management. This paper reviews recent research on natural disturbance and addresses the problem of managing for disturbances in a world of human-imposed scales and boundaries. The dominant scale issue in disturbance management is the question of patch dynamic equilibrium. The dominant boundary issue in disturbance...

  16. Disturbance, Scale, and Boundary in Wilderness Management

    Treesearch

    Peter S. White; Jonathan Harrod; Joan L. Walker; Anke Jentsch

    2000-01-01

    Natural disturbances are critical to wilderness management. This paper reviews recent research on natural disturbance and addresses the problem of managing for disturbances in a world of human-imposed scales and boundaries. The dominant scale issue in disturbance management is the question of patch dynamic equilibrium. The dominant boundary issue in disturbance...

  17. SUSTAINABLE MSW MANAGEMENT STRATEGIES IN THE UNITED STATES

    EPA Science Inventory

    Under increasing pressure to minimize potential environmental burdens and costs for municipal solid waste (MSW) management, state and local governments often must modify programs and adopt more efficient integrated MSW management strategies that reflect dynamic shifts in MSW mana...

  18. Microworlds of the dynamic balanced scorecard for university (DBSC-UNI)

    NASA Astrophysics Data System (ADS)

    Hawari, Nurul Nazihah; Tahar, Razman Mat

    2015-12-01

    This research focuses on the development of a Microworld of the dynamic balanced scorecard for university in order to enhance the university strategic planning process. To develop the model, we integrated the balanced scorecard method and the system dynamics modelling method. In contrast to traditional university planning tools, the developed model addresses university management problems holistically and dynamically. Using the system dynamics modelling method, the cause-and-effect relationships among variables related to the four conventional balanced scorecard perspectives are better understood, as are the dynamic processes that give rise to differences between targeted and actual performance. The quality of the decisions taken is therefore expected to improve, because decision makers are better informed. The developed Microworld can be exploited by university management to design policies that positively influence the future in the direction of desired goals with minimal side effects. This paper integrates the balanced scorecard and system dynamics modelling methods in analyzing university performance, and thereby demonstrates the effectiveness and strength of the system dynamics modelling method in solving strategic planning problems, particularly in the higher education sector.

  19. A crowd of pedestrian dynamics - The perspective of physics. Comment on "Human behaviours in evacuation crowd dynamics: From modelling to "big data" toward crisis management" by Nicola Bellomo et al.

    NASA Astrophysics Data System (ADS)

    Miguel, António F.

    2016-09-01

    Walking is the most basic form of transportation. A good understanding of pedestrian dynamics is essential to meeting the mobility and accessibility needs of people by providing a safe and quick walking flow [1]. Advances in the dynamics of pedestrians in crowds are of great theoretical and practical interest, as they lead to new insights regarding the planning of pedestrian facilities, crowd management, and evacuation analysis. Nicola Bellomo et al.'s article [2] is a very timely review of the related research on modelling approaches, computational simulations, decision-making and crisis response. It also includes an attempt to accurately define commonly used terms, as well as a critical analysis of crowd dynamics and safety problems. As noted by the authors, "models and simulations offer a virtual representation of real dynamics" that is essential to understand and predict the "behavioural dynamics of crowds" [2]. As a physicist, I would like to put forward some additional theoretical and practical contributions that could be interesting to explore regarding the perspective of physics on human crowd dynamics (panic, as a specific form of behaviour, excluded).

  20. Rapid geodesic mapping of brain functional connectivity: implementation of a dedicated co-processor in a field-programmable gate array (FPGA) and application to resting state functional MRI.

    PubMed

    Minati, Ludovico; Cercignani, Mara; Chan, Dennis

    2013-10-01

    Graph theory-based analyses of brain network topology can be used to model the spatiotemporal correlations in neural activity detected through fMRI, and such approaches have wide-ranging potential, from detection of alterations in preclinical Alzheimer's disease through to command identification in brain-machine interfaces. However, due to prohibitive computational costs, graph-based analyses to date have principally focused on measuring connection density rather than mapping the topological architecture in full by exhaustive shortest-path determination. This paper outlines a solution to this problem through parallel implementation of Dijkstra's algorithm in programmable logic. The processor design is optimized for large, sparse graphs and provided in full as synthesizable VHDL code. An acceleration factor between 15 and 18 is obtained on a representative resting-state fMRI dataset, and maps of Euclidean path length reveal the anticipated heterogeneous cortical involvement in long-range integrative processing. These results enable high-resolution geodesic connectivity mapping for resting-state fMRI in patient populations and real-time geodesic mapping to support identification of imagined actions for fMRI-based brain-machine interfaces. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
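
    For a software point of reference, the following sketch performs the same exhaustive shortest-path (geodesic) computation in SciPy that the paper accelerates in programmable logic. The graph construction and correlation threshold are illustrative assumptions, not the paper's pipeline.

    ```python
    # Software sketch: exhaustive shortest-path (geodesic) mapping on a
    # sparse connectivity graph via Dijkstra's algorithm. The FPGA design
    # parallelizes this; SciPy serves here as a reference implementation.
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import dijkstra

    rng = np.random.default_rng(0)
    n = 200                                             # toy number of fMRI nodes
    corr = np.corrcoef(rng.standard_normal((n, 500)))   # toy correlation matrix

    # Edge weight = 1 - |r| for supra-threshold correlations (a common choice).
    w = 1.0 - np.abs(corr)
    w[np.abs(corr) < 0.05] = 0.0      # sparsify: drop weak correlations
    np.fill_diagonal(w, 0.0)
    graph = csr_matrix(w)             # zeros are treated as absent edges

    # All-pairs geodesics; each row is a shortest-path map from one node.
    dist = dijkstra(graph, directed=False)
    print("mean finite path length:", dist[np.isfinite(dist)].mean())
    ```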

  1. Initial results on computational performance of Intel Many Integrated Core (MIC) architecture: implementation of the Weather and Research Forecasting (WRF) Purdue-Lin microphysics scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow and graupel. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi is a high-performance coprocessor consisting of up to 61 cores, connected to a CPU via the PCI Express (PCIe) bus. We discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for the Xeon Phi. In particular, obtaining good performance required utilizing multiple cores, exploiting the wide vector operations, and making efficient use of memory. The results show that the optimizations improved performance of the original code on the Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.
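
    The central optimization named in the abstract, that independence among horizontal grid points permits wide vector operations, can be illustrated in a few lines. The numpy sketch below stands in for the Fortran/SIMD tuning; the saturation-adjustment formula and grid are schematic stand-ins, not WRF code.

    ```python
    # Illustration of the optimization principle: because the microphysics has
    # no interaction among horizontal grid points, a per-point scalar loop can
    # be replaced by wide vector operations over the whole grid at once.
    import numpy as np

    def saturation_adjust_loop(T, q):
        # Scalar loop: one horizontal point at a time (poorly vectorized).
        out = np.empty_like(q)
        for i in range(T.size):
            qsat = 0.622 * 610.78 * np.exp(17.27 * (T.flat[i] - 273.15) /
                                           (T.flat[i] - 35.86)) / 1e5
            out.flat[i] = max(q.flat[i] - qsat, 0.0)  # excess vapor condenses
        return out

    def saturation_adjust_vec(T, q):
        # Same arithmetic applied to the whole grid in one vector operation.
        qsat = 0.622 * 610.78 * np.exp(17.27 * (T - 273.15) / (T - 35.86)) / 1e5
        return np.maximum(q - qsat, 0.0)

    T = 250.0 + 50.0 * np.random.rand(512, 512)   # toy temperature field [K]
    q = 0.02 * np.random.rand(512, 512)           # toy vapor mixing ratio
    assert np.allclose(saturation_adjust_loop(T, q), saturation_adjust_vec(T, q))
    ```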

  2. The LHCb software and computing upgrade for Run 3: opportunities and challenges

    NASA Astrophysics Data System (ADS)

    Bozzi, C.; Roiser, S.; LHCb Collaboration

    2017-10-01

    The LHCb detector will be upgraded for LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications for the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description and the conditions database, is needed to fully exploit the computing power of multi- and many-core architectures and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for data analysis workflows. Fast simulation options will make it possible to obtain a reasonable parameterization of the detector response in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, testing and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.

  3. Multimedia architectures: from desktop systems to portable appliances

    NASA Astrophysics Data System (ADS)

    Bhaskaran, Vasudev; Konstantinides, Konstantinos; Natarajan, Balas R.

    1997-01-01

    Future desktop and portable computing systems will have as their core an integrated multimedia system. Such a system will seamlessly combine digital video, digital audio, computer animation, text, and graphics. Furthermore, such a system will allow for mixed-media creation, dissemination, and interactive access in real time. Multimedia architectures that need to support these functions have traditionally required special display and processing units for the different media types. This approach tends to be expensive and is inefficient in its use of silicon. Furthermore, such media-specific processing units are unable to cope with the fluid nature of the multimedia market wherein the needs and standards are changing and system manufacturers may demand a single component media engine across a range of products. This constraint has led to a shift towards providing a single-component multimedia specific computing engine that can be integrated easily within desktop systems, tethered consumer appliances, or portable appliances. In this paper, we review some of the recent architectural efforts in developing integrated media systems. We primarily focus on two efforts, namely the evolution of multimedia-capable general purpose processors and a more recent effort in developing single component mixed media co-processors. Design considerations that could facilitate the migration of these technologies to a portable integrated media system also are presented.

  4. Time-efficient simulations of tight-binding electronic structures with Intel Xeon Phi™ many-core processors

    NASA Astrophysics Data System (ADS)

    Ryu, Hoon; Jeong, Yosang; Kang, Ji-Hoon; Cho, Kyu Nam

    2016-12-01

    Modelling of multi-million-atom semiconductor structures is important, as it not only predicts the properties of physically realizable novel materials but can also accelerate advanced device designs. This work elaborates a new Technology Computer-Aided Design (TCAD) tool for nanoelectronics modelling, which uses a sp3d5s∗ tight-binding approach to describe multi-million-atom structures and simulates their electronic structures with high performance computing (HPC), including atomistic effects such as alloy and dopant disorders. Named the Quantum simulation tool for Advanced Nanoscale Devices (Q-AND), the tool shows good scalability on traditional multi-core HPC clusters, implying a strong capability for large-scale electronic structure simulations, with remarkable performance enhancement on recent clusters of Intel Xeon Phi™ coprocessors. A review of a recent modelling study, conducted to understand an experimental work on highly phosphorus-doped silicon nanowires, is presented to demonstrate the utility of Q-AND. Having been developed through an Intel Parallel Computing Center project, Q-AND will be opened to the public to establish a sound framework for nanoelectronics modelling with advanced many-core HPC clusters. With details of the development methodology and an exemplary study of dopant electronics, this work presents a practical guideline for TCAD development to researchers in the field of computational nanoelectronics.

  5. Dynamic Transportation Navigation

    NASA Astrophysics Data System (ADS)

    Meng, Xiaofeng; Chen, Jidong

    Miniaturization of computing devices, and advances in wireless communication and sensor technology are some of the forces that are propagating computing from the stationary desktop to the mobile outdoors. Some important classes of new applications that will be enabled by this revolutionary development include intelligent traffic management, location-based services, tourist services, mobile electronic commerce, and digital battlefield. Some existing application classes that will benefit from the development include transportation and air traffic control, weather forecasting, emergency response, mobile resource management, and mobile workforce. Location management, i.e., the management of transient location information, is an enabling technology for all these applications. In this chapter, we present the applications of moving objects management and their functionalities, in particular, the application of dynamic traffic navigation, which is a challenge due to the highly variable traffic state and the requirement of fast, on-line computations.

  6. Dynamic sensor management of dispersed and disparate sensors for tracking resident space objects

    NASA Astrophysics Data System (ADS)

    El-Fallah, A.; Zatezalo, A.; Mahler, R.; Mehra, R. K.; Donatelli, D.

    2008-04-01

    Dynamic sensor management of dispersed and disparate sensors for space situational awareness presents daunting scientific and practical challenges as it requires optimal and accurate maintenance of all Resident Space Objects (RSOs) of interest. We demonstrate an approach to the space-based sensor management problem by extending a previously developed and tested sensor management objective function, the Posterior Expected Number of Targets (PENT), to disparate and dispersed sensors. This PENT extension together with observation models for various sensor platforms, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker provide a powerful tool for tackling this challenging problem. We demonstrate the approach using simulations for tracking RSOs by a Space Based Visible (SBV) sensor and ground based radars.

  7. Influence of Different Forest System Management Practices on Leaf Litter Decomposition Rates, Nutrient Dynamics and the Activity of Ligninolytic Enzymes: A Case Study from Central European Forests

    PubMed Central

    Schulz, Elke; Schloter, Michael; Buscot, François; Hofrichter, Martin; Krüger, Dirk

    2014-01-01

    Leaf litter decomposition is the key ecological process that determines the sustainability of managed forest ecosystems, however very few studies hitherto have investigated this process with respect to silvicultural management practices. The aims of the present study were to investigate the effects of forest management practices on leaf litter decomposition rates, nutrient dynamics (C, N, Mg, K, Ca, P) and the activity of ligninolytic enzymes. We approached these questions using a 473 day long litterbag experiment. We found that age-class beech and spruce forests (high forest management intensity) had significantly higher decomposition rates and nutrient release (most nutrients) than unmanaged deciduous forest reserves (P<0.05). The site with near-to-nature forest management (low forest management intensity) exhibited no significant differences in litter decomposition rate, C release, lignin decomposition, and C/N, lignin/N and ligninolytic enzyme patterns compared to the unmanaged deciduous forest reserves, but most nutrient dynamics examined in this study were significantly faster under such near-to-nature forest management practices. Analyzing the activities of ligninolytic enzymes provided evidence that different forest system management practices affect litter decomposition by changing microbial enzyme activities, at least over the investigated time frame of 473 days (laccase, P<0.0001; manganese peroxidase (MnP), P = 0.0260). Our results also indicate that lignin decomposition is the rate limiting step in leaf litter decomposition and that MnP is one of the key oxidative enzymes of litter degradation. We demonstrate here that forest system management practices can significantly affect important ecological processes and services such as decomposition and nutrient cycling. PMID:24699676

  8. Influence of different forest system management practices on leaf litter decomposition rates, nutrient dynamics and the activity of ligninolytic enzymes: a case study from central European forests.

    PubMed

    Purahong, Witoon; Kapturska, Danuta; Pecyna, Marek J; Schulz, Elke; Schloter, Michael; Buscot, François; Hofrichter, Martin; Krüger, Dirk

    2014-01-01

    Leaf litter decomposition is the key ecological process that determines the sustainability of managed forest ecosystems, however very few studies hitherto have investigated this process with respect to silvicultural management practices. The aims of the present study were to investigate the effects of forest management practices on leaf litter decomposition rates, nutrient dynamics (C, N, Mg, K, Ca, P) and the activity of ligninolytic enzymes. We approached these questions using a 473 day long litterbag experiment. We found that age-class beech and spruce forests (high forest management intensity) had significantly higher decomposition rates and nutrient release (most nutrients) than unmanaged deciduous forest reserves (P<0.05). The site with near-to-nature forest management (low forest management intensity) exhibited no significant differences in litter decomposition rate, C release, lignin decomposition, and C/N, lignin/N and ligninolytic enzyme patterns compared to the unmanaged deciduous forest reserves, but most nutrient dynamics examined in this study were significantly faster under such near-to-nature forest management practices. Analyzing the activities of ligninolytic enzymes provided evidence that different forest system management practices affect litter decomposition by changing microbial enzyme activities, at least over the investigated time frame of 473 days (laccase, P<0.0001; manganese peroxidase (MnP), P = 0.0260). Our results also indicate that lignin decomposition is the rate limiting step in leaf litter decomposition and that MnP is one of the key oxidative enzymes of litter degradation. We demonstrate here that forest system management practices can significantly affect important ecological processes and services such as decomposition and nutrient cycling.

  9. Computer-aided software development process design

    NASA Technical Reports Server (NTRS)

    Lin, Chi Y.; Levary, Reuven R.

    1989-01-01

    The authors describe an intelligent tool designed to aid managers of software development projects in planning, managing, and controlling the development process of medium- to large-scale software projects. Its purpose is to reduce uncertainties in the budget, personnel, and schedule planning of software development projects. It is based on a dynamic model of the software development and maintenance life-cycle process. This dynamic process is composed of a number of time-varying, interacting developmental phases, each characterized by its intended functions and requirements. System dynamics is used as the modeling methodology. The resulting Software LIfe-Cycle Simulator (SLICS) and the hybrid expert simulation system of which it is a subsystem are described.

  10. Simulating forest management and its effect on landscape pattern

    Treesearch

    Eric J. Gustafson

    2017-01-01

    Landscapes are characterized by their structure (the spatial arrangement of landscape elements), their ecological function (how ecological processes operate within that structure), and the dynamics of change (disturbance and recovery). Understanding the dynamic nature of landscapes and predicting their future dynamics are therefore of particular importance. Landscape change...

  11. Using Approximate Dynamic Programming to Solve the Military Inventory Routing Problem with Direct Delivery

    DTIC Science & Technology

    2015-03-26

    benefit by no longer having to allocate resources to inventory management. When the inventory routing problem is solved, three key decisions are made at... industries rely on the transportation and management of goods. To aid in understanding the formulation and techniques for solving the military inventory... Using Approximate Dynamic Programming to Solve the Military Inventory Routing Problem with Direct Delivery THESIS MARCH 2015 Rebekah S. McKenna

  12. An Application of System Dynamics Analysis to Ecosystem Management at the Poinsett Weapons Range, Shaw AFB SC

    DTIC Science & Technology

    1997-12-01

    younger trees with thinner sapwood and greater heartwood diameter than other trees in the area. The RCW requires approximately 6 inches in diameter... and management areas and have explored applications of system dynamics modeling at the graduate level. The attached application addresses specific... Population Sensitivity to Birth Rate 62 10. RCW Population with Variation of Relatedness Entity 64 11. RCW Population with Variation of Foraging Area

  13. Dynamic Trust Management for Delay Tolerant Networks and Its Application to Secure Routing

    DTIC Science & Technology

    2012-09-28

    population of misbehaving nodes or evolving hostility or social relations such that an application (e.g., secure routing) built on top of trust... optimization in DTNs in response to dynamically changing conditions such as an increasing population of misbehaving nodes. The design part addresses the... The rest of the paper is organized as follows. In Section 2, we survey existing trust management protocols and approaches to deal with misbehaved

  14. Software life cycle dynamic simulation model: The organizational performance submodel

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1985-01-01

    The submodel structure of a software life cycle dynamic simulation model is described. The software process is divided into seven phases, each with product, staff, and funding flows. The model is subdivided into an organizational response submodel, a management submodel, a management influence interface, and a model analyst interface. The concentration here is on the organizational response model, which simulates the performance characteristics of a software development subject to external and internal influences. These influences emanate from two sources: the model analyst interface, which configures the model to simulate the response of an implementing organization subject to its own internal influences, and the management submodel that exerts external dynamic control over the production process. A complete characterization is given of the organizational response submodel in the form of parameterized differential equations governing product, staffing, and funding levels. The parameter values and functions are allocated to the two interfaces.
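
    A generic sketch of such a submodel, with coupled differential equations for product, staffing, and funding levels, is given below. All rate forms and parameter values are hypothetical, not those of the cited simulator.

    ```python
    # Generic sketch of a system-dynamics life-cycle submodel: coupled ODEs
    # for product completed, staff level, and funds remaining. Parameters
    # and rate forms are invented, not those of the cited model.
    import numpy as np
    from scipy.integrate import solve_ivp

    PRODUCTIVITY = 0.8     # product units per staff-month
    HIRE_RATE = 0.1        # fraction of the staffing gap closed per month
    TARGET_STAFF = 20.0
    COST_PER_STAFF = 1.0   # funds consumed per staff-month

    def flows(t, state):
        product, staff, funds = state
        d_product = PRODUCTIVITY * staff if funds > 0 else 0.0
        d_staff = HIRE_RATE * (TARGET_STAFF - staff)   # delayed staffing response
        d_funds = -COST_PER_STAFF * staff if funds > 0 else 0.0
        return [d_product, d_staff, d_funds]

    sol = solve_ivp(flows, t_span=(0, 36), y0=[0.0, 5.0, 500.0],
                    t_eval=np.linspace(0, 36, 37))
    print("product completed at month 36:", sol.y[0, -1])
    ```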

  15. Dynamic Airspace Configuration

    NASA Technical Reports Server (NTRS)

    Bloem, Michael J.

    2014-01-01

    In air traffic management systems, airspace is partitioned into regions in part to distribute the tasks associated with managing air traffic among different systems and people. These regions, as well as the systems and people allocated to each, are changed dynamically so that air traffic can be safely and efficiently managed. It is expected that new air traffic control systems will enable greater flexibility in how airspace is partitioned and how resources are allocated to airspace regions. In this talk, I will begin by providing an overview of some previous work and open questions in Dynamic Airspace Configuration research, which is concerned with how to partition airspace and assign resources to regions of airspace. For example, I will introduce airspace partitioning algorithms based on clustering, integer programming optimization, and computational geometry. I will conclude by discussing the development of a tablet-based tool that is intended to help air traffic controller supervisors configure airspace and controllers in current operations.

  16. Enterprise-wide worklist management.

    PubMed

    Locko, Roberta C; Blume, Hartwig; Goble, John C

    2002-01-01

    Radiologists in multi-facility health care delivery networks must serve not only their own departments but also departments of associated clinical facilities. We describe our experience with a picture archiving and communication system (PACS) implementation that provides a dynamic view of relevant radiological workload across multiple facilities. We implemented a distributed query system that permits management of enterprise worklists based on modality, body part, exam status, and other criteria that span multiple compatible PACSs. Dynamic worklists, with less flexibility, can be constructed if the incompatible PACSs support specific DICOM functionality. Enterprise-wide worklists were implemented across the Generations Plus/Northern Manhattan Health Network, linking the radiology departments of three hospitals (Harlem, Lincoln, and Metropolitan) with 1465 beds and 4260 ambulatory patients per day. Enterprise-wide, dynamic worklist management improves utilization of radiologists and enhances the quality of care across large multi-facility health care delivery organizations. Integration of other workflow-related components remains a significant challenge.

  17. Dynamic analysis for solid waste management systems: an inexact multistage integer programming approach.

    PubMed

    Li, Yongping; Huang, Guohe

    2009-03-01

    In this study, a dynamic analysis approach based on an inexact multistage integer programming (IMIP) model is developed for supporting municipal solid waste (MSW) management under uncertainty. Techniques of interval-parameter programming and multistage stochastic programming are incorporated within an integer-programming framework. The developed IMIP can deal with uncertainties expressed as probability distributions and interval numbers, and can reflect the dynamics in terms of decisions for waste-flow allocation and facility-capacity expansion over a multistage context. Moreover, the IMIP can be used for analyzing various policy scenarios that are associated with different levels of economic consequences. The developed method is applied to a case study of long-term waste-management planning. The results indicate that reasonable solutions have been generated for binary and continuous variables. They can help generate desired decisions of system-capacity expansion and waste-flow allocation with a minimized system cost and maximized system reliability.
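
    Stripped of the interval-parameter and multistage stochastic machinery, the deterministic core of such a model is a mixed-integer program coupling waste-flow allocation to a binary capacity-expansion decision. The toy sketch below, with invented numbers, shows only that core.

    ```python
    # Toy deterministic core of the waste-allocation / capacity-expansion
    # decision (the paper's interval and multistage stochastic elements are
    # omitted; all numbers are invented for illustration).
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Variables: [x_landfill, x_incinerator, y_expand]; y_expand is binary.
    c = np.array([30.0, 55.0, 400.0])   # $/tonne, $/tonne, fixed expansion cost

    W, CAP_LF, CAP_INC, EXP_INC = 100.0, 70.0, 40.0, 30.0

    constraints = [
        LinearConstraint([1, 1, 0], lb=W, ub=W),         # all waste allocated
        LinearConstraint([0, 1, -EXP_INC], ub=CAP_INC),  # incinerator capacity
    ]
    res = milp(c=c,
               constraints=constraints,
               integrality=[0, 0, 1],                    # y is integer (binary)
               bounds=Bounds(lb=[0, 0, 0], ub=[CAP_LF, np.inf, 1]))
    print("flows and expansion decision:", res.x, "cost:", res.fun)
    ```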

  18. Disease effects on lobster fisheries, ecology, and culture: overview of DAO Special 6.

    PubMed

    Behringer, Donald C; Butler, Mark J; Stentiford, Grant D

    2012-08-27

    Lobsters are prized by commercial and recreational fishermen worldwide, and their populations are therefore buffeted by fishery practices. But lobsters also remain integral members of their benthic communities where predator-prey relationships, competitive interactions, and host-pathogen dynamics push and pull at their population dynamics. Although lobsters have few reported pathogens and parasites relative to other decapod crustaceans, the rise of diseases with consequences for lobster fisheries and aquaculture has spotlighted the importance of disease for lobster biology, population dynamics and ecology. Researchers, managers, and fishers thus increasingly recognize the need to understand lobster pathogens and parasites so they can be managed proactively and their impacts minimized where possible. At the 2011 International Conference and Workshop on Lobster Biology and Management a special session on lobster diseases was convened and this special issue of Diseases of Aquatic Organisms highlights those proceedings with a suite of articles focused on diseases discussed during that session.

  19. BEEHAVE: a systems model of honeybee colony dynamics and foraging to explore multifactorial causes of colony failure

    PubMed Central

    Becher, Matthias A; Grimm, Volker; Thorbek, Pernille; Horn, Juliane; Kennedy, Peter J; Osborne, Juliet L

    2014-01-01

    A notable increase in failure of managed European honeybee Apis mellifera L. colonies has been reported in various regions in recent years. Although the underlying causes remain unclear, it is likely that a combination of stressors act together, particularly varroa mites and other pathogens, forage availability and potentially pesticides. It is experimentally challenging to address causality at the colony scale when multiple factors interact. In silico experiments offer a fast and cost-effective way to begin to address these challenges and inform experiments. However, none of the published bee models combine colony dynamics with foraging patterns and varroa dynamics. We have developed a honeybee model, BEEHAVE, which integrates colony dynamics, population dynamics of the varroa mite, epidemiology of varroa-transmitted viruses and allows foragers in an agent-based foraging model to collect food from a representation of a spatially explicit landscape. We describe the model, which is freely available online (www.beehave-model.net). Extensive sensitivity analyses and tests illustrate the model's robustness and realism. Simulation experiments with various combinations of stressors demonstrate, in simplified landscape settings, the model's potential: predicting colony dynamics and potential losses with and without varroa mites under different foraging conditions and under pesticide application. We also show how mitigation measures can be tested. Synthesis and applications. BEEHAVE offers a valuable tool for researchers to design and focus field experiments, for regulators to explore the relative importance of stressors to devise management and policy advice and for beekeepers to understand and predict varroa dynamics and effects of management interventions. We expect that scientists and stakeholders will find a variety of applications for BEEHAVE, stimulating further model development and the possible inclusion of other stressors of potential importance to honeybee colony dynamics. PMID:25598549

  20. BEEHAVE: a systems model of honeybee colony dynamics and foraging to explore multifactorial causes of colony failure.

    PubMed

    Becher, Matthias A; Grimm, Volker; Thorbek, Pernille; Horn, Juliane; Kennedy, Peter J; Osborne, Juliet L

    2014-04-01

    A notable increase in failure of managed European honeybee Apis mellifera L. colonies has been reported in various regions in recent years. Although the underlying causes remain unclear, it is likely that a combination of stressors act together, particularly varroa mites and other pathogens, forage availability and potentially pesticides. It is experimentally challenging to address causality at the colony scale when multiple factors interact. In silico experiments offer a fast and cost-effective way to begin to address these challenges and inform experiments. However, none of the published bee models combine colony dynamics with foraging patterns and varroa dynamics. We have developed a honeybee model, BEEHAVE, which integrates colony dynamics, population dynamics of the varroa mite, epidemiology of varroa-transmitted viruses and allows foragers in an agent-based foraging model to collect food from a representation of a spatially explicit landscape. We describe the model, which is freely available online (www.beehave-model.net). Extensive sensitivity analyses and tests illustrate the model's robustness and realism. Simulation experiments with various combinations of stressors demonstrate, in simplified landscape settings, the model's potential: predicting colony dynamics and potential losses with and without varroa mites under different foraging conditions and under pesticide application. We also show how mitigation measures can be tested. Synthesis and applications. BEEHAVE offers a valuable tool for researchers to design and focus field experiments, for regulators to explore the relative importance of stressors to devise management and policy advice and for beekeepers to understand and predict varroa dynamics and effects of management interventions. We expect that scientists and stakeholders will find a variety of applications for BEEHAVE, stimulating further model development and the possible inclusion of other stressors of potential importance to honeybee colony dynamics.

  1. Dynamic Energy Balance: An Integrated Framework for Discussing Diet and Physical Activity in Obesity Prevention—Is it More than Eating Less and Exercising More?

    PubMed Central

    Manore, Melinda M.; Larson-Meyer, D. Enette; Lindsay, Anne R.; Hongu, Nobuko; Houtkooper, Linda

    2017-01-01

    Understanding the dynamic nature of energy balance, and the interrelated and synergistic roles of diet and physical activity (PA) on body weight, will enable nutrition educators to be more effective in implementing obesity prevention education. Although most educators recognize that diet and PA are important for weight management, they may not fully understand their impact on energy flux and how diet alters energy expenditure and energy expenditure alters diet. Many nutrition educators have little training in exercise science; thus, they may not have the knowledge essential to understanding the benefits of PA for health or weight management beyond burning calories. This paper highlights the importance of advancing nutrition educators’ understanding about PA, and its synergistic role with diet, and the value of incorporating a dynamic energy balance approach into obesity-prevention programs. Five key points are highlighted: (1) the concept of dynamic vs. static energy balance; (2) the role of PA in weight management; (3) the role of PA in appetite regulation; (4) the concept of energy flux; and (5) the integration of dynamic energy balance into obesity prevention programs. The rationale for the importance of understanding the physiological relationship between PA and diet for effective obesity prevention programming is also reviewed. PMID:28825615

  2. Dynamic Energy Balance: An Integrated Framework for Discussing Diet and Physical Activity in Obesity Prevention-Is it More than Eating Less and Exercising More?

    PubMed

    Manore, Melinda M; Larson-Meyer, D Enette; Lindsay, Anne R; Hongu, Nobuko; Houtkooper, Linda

    2017-08-19

    Understanding the dynamic nature of energy balance, and the interrelated and synergistic roles of diet and physical activity (PA) on body weight, will enable nutrition educators to be more effective in implementing obesity prevention education. Although most educators recognize that diet and PA are important for weight management, they may not fully understand their impact on energy flux and how diet alters energy expenditure and energy expenditure alters diet. Many nutrition educators have little training in exercise science; thus, they may not have the knowledge essential to understanding the benefits of PA for health or weight management beyond burning calories. This paper highlights the importance of advancing nutrition educators' understanding about PA, and its synergistic role with diet, and the value of incorporating a dynamic energy balance approach into obesity-prevention programs. Five key points are highlighted: (1) the concept of dynamic vs. static energy balance; (2) the role of PA in weight management; (3) the role of PA in appetite regulation; (4) the concept of energy flux; and (5) the integration of dynamic energy balance into obesity prevention programs. The rationale for the importance of understanding the physiological relationship between PA and diet for effective obesity prevention programming is also reviewed.

  3. Spatial operator algebra for flexible multibody dynamics

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1993-01-01

    This paper presents an approach to modeling the dynamics of flexible multibody systems such as flexible spacecraft and limber space robotic systems. A large number of degrees of freedom and complex dynamic interactions are typical in these systems. This paper uses spatial operators to develop efficient recursive algorithms for the dynamics of these systems. This approach very efficiently manages complexity by means of a hierarchy of mathematical operations.

  4. Managers Handbook for Software Development

    NASA Technical Reports Server (NTRS)

    Agresti, W.; Mcgarry, F.; Card, D.; Page, J.; Church, V.; Werking, R.

    1984-01-01

    Methods and aids for the management of software development projects are presented. The recommendations are based on analyses of and experience with flight dynamics software development. The management aspects of organizing the project, producing a development plan, estimating costs, scheduling, staffing, preparing deliverable documents, using management tools, monitoring the project, conducting reviews, auditing, testing, and certifying are described.

  5. Economic optimisation of wildfire intervention activities

    Treesearch

    David T. Butry; Jeffrey P. Prestemon; Karen L. Abt; Ronda Sutphen

    2010-01-01

    We describe how two important tools of wildfire management, wildfire prevention education and prescribed fire for fuels management, can be coordinated to minimise the combination of management costs and expected societal losses resulting from wildland fire. We present a long-run model that accounts for the dynamics of wildfire, the effects of fuels management on...

  6. Messy world: managing dynamic landscape.

    Treesearch

    Sally Duncan

    1999-01-01

    What lessons does historical disturbance hold for the management of future landscapes? Fred Swanson, a researcher at the Pacific Northwest Research Station, and John Cissel, research liaison for the Willamette NF, are members of a team of scientists and land managers who are examining the way we think about and manage landscapes. The team found that past...

  7. Knowledge management is new competitive edge.

    PubMed

    Johnson, D E

    1998-07-01

    Managing knowledge is emerging as the latest business strategy to get ahead of the competition. In the process of developing knowledge management systems, executives are increasing their awareness and understanding of organizational dynamics, collaboration, corporate learning and knowledge management technology. But Donald E.L. Johnson writes that health care executives must buy into and understand collaboration and corporate learning before they tackle knowledge management.

  8. Heterogeneous Concurrent Modeling and Design in Java (Volume 1: Introduction to Ptolemy II)

    DTIC Science & Technology

    2008-04-01

    Code 79 2.8.4. Lifecycle Management Actors 79 2.9. Domains 80 2.9.1. SDF and Multirate Systems 81 2.9.2. Data-Dependent Rates 82 2.9.3. Discrete-Event... and we added modeling capabilities for wireless systems. We also introduced lifecycle management actors and dynamically evaluated higher-order... top.setName("DiningPhilosophers"); Manager manager = new Manager("Manager"); top.setManager(manager); new CSPDirector(top

  9. Dynamic resource allocation in a hierarchical multiprocessor system: A preliminary study

    NASA Technical Reports Server (NTRS)

    Ngai, Tin-Fook

    1986-01-01

    An integrated system approach to dynamic resource allocation is proposed. Some of the problems in dynamic resource allocation and the relationship of these problems to system structures are examined. A general dynamic resource allocation scheme is presented. A hierarchical system architecture which dynamically maps between processor structure and programs at multiple levels of instantiation is described. Simulation experiments were conducted to study dynamic resource allocation on the proposed system. Preliminary evaluation based on simple dynamic resource allocation algorithms indicates that with the proposed system approach, the complexity of dynamic resource management could be significantly reduced while achieving reasonably effective dynamic resource allocation.

  10. Effects of dynamic agricultural decision making in an ecohydrological model

    NASA Astrophysics Data System (ADS)

    Reichenau, T. G.; Krimly, T.; Schneider, K.

    2012-04-01

    Due to various interdependencies between the cycles of water, carbon, nitrogen, and energy, the impacts of climate change on ecohydrological systems can only be investigated in an integrative way. Furthermore, human intervention in environmental processes makes the system even more complex. On the one hand, human impact affects natural systems; on the other hand, the changing natural systems feed back on human decision making. One of the most important examples of this kind of interaction can be found in the agricultural sector. Management dates (planting, fertilization, harvesting) are chosen based on meteorological conditions and yield expectations. Faster development of crops under a warmer climate causes shorter cropping seasons. The choice of crops depends on their profitability, which is mainly determined by market prices, the agro-political framework, and the (climate-dependent) crop yield. This study investigates these relations for the district of Günzburg, located in the Upper Danube catchment in southern Germany. The modeling system DANUBIA was used to perform dynamically coupled simulations of plant growth, surface and soil hydrological processes, soil nitrogen transformations, and agricultural decision making. The agro-economic model simulates decisions on management dates (based on meteorological conditions and the crops' development state), on fertilization intensities (based on yield expectations), and on the choice of crops (based on profitability). The environmental models included in DANUBIA are to a great extent process based, to enable their use in a climate change scenario context. Scenario model runs until 2058 were performed using an IPCC A1B forcing. In consecutive runs, dynamic crop management, dynamic crop selection, and a changing agro-political framework were activated. The effects of these model features on hydrological and ecological variables were analyzed separately by comparing the results to a model run with constant crop distribution and constant management. Results show that the influence of the modeled dynamic management adaptation on variables like transpiration, carbon uptake, or nitrate leaching from the vadose zone is stronger than the influence of a dynamic choice of crops. Climate change was found to have a stronger impact on the modeled choice of crops than the agro-political framework. These results suggest that scenario studies in areas with a large share of arable land should take into account management adaptations to a changing climate.

  11. Automatic Management of Parallel and Distributed System Resources

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Ngai, Tin Fook; Lundstrom, Stephen F.

    1990-01-01

    Viewgraphs on automatic management of parallel and distributed system resources are presented. Topics covered include: parallel applications; intelligent management of multiprocessing systems; performance evaluation of parallel architecture; dynamic concurrent programs; compiler-directed system approach; lattice gaseous cellular automata; and sparse matrix Cholesky factorization.

  12. Prototype road weather performance management (RW-PM) tool and Minnesota Department of Transportation (MnDOT) field evaluation.

    DOT National Transportation Integrated Search

    2017-01-01

    FHWA's Road Weather Management Program developed a Prototype Road Weather Management (RW-PM) Tool to help DOTs maximize the effectiveness of their maintenance resources and efficiently adjust deployments dynamically, as road conditions and traffic ...

  13. From crowd modeling to safety problems. Comment on "Human behaviours in evacuation crowd dynamics: From modelling to "big data" toward crisis management" by Nicola Bellomo et al.

    NASA Astrophysics Data System (ADS)

    Elaiw, Ahmed

    2016-09-01

    Paper [3] presents a survey and a critical analysis of models of crowd dynamics derived to support crisis management related to safety problems. This is an important topic which can have an important impact on the wellbeing of our society. We are very interested in this topic as we operate in a country, Saudi Arabia, where huge crowds can be present and where stress conditions can occasionally be induced by unpredictable events. In these situations the problem of crisis management is of fundamental importance.

  14. Protocol and practice in the adaptive management of waterfowl harvests

    USGS Publications Warehouse

    Johnson, F.; Williams, K.

    1999-01-01

    Waterfowl harvest management in North America, for all its success, historically has had several shortcomings, including a lack of well-defined objectives, a failure to account for uncertain management outcomes, and inefficient use of harvest regulations to understand the effects of management. To address these and other concerns, the U.S. Fish and Wildlife Service began implementation of adaptive harvest management in 1995. Harvest policies are now developed using a Markov decision process in which there is an explicit accounting for uncontrolled environmental variation, partial controllability of harvest, and structural uncertainty in waterfowl population dynamics. Current policies are passively adaptive, in the sense that any reduction in structural uncertainty is an unplanned by-product of the regulatory process. A generalization of the Markov decision process permits the calculation of optimal actively adaptive policies, but it is not yet clear how state-specific harvest actions differ between passive and active approaches. The Markov decision process also provides managers the ability to explore optimal levels of aggregation or "management scale" for regulating harvests in a system that exhibits high temporal, spatial, and organizational variability. Progress in institutionalizing adaptive harvest management has been remarkable, but some managers still perceive the process as a panacea, while failing to appreciate the challenges presented by this more explicit and methodical approach to harvest regulation. Technical hurdles include the need to develop better linkages between population processes and the dynamics of landscapes, and to model the dynamics of structural uncertainty in a more comprehensive fashion. From an institutional perspective, agreement on how to value and allocate harvests continues to be elusive, and there is some evidence that waterfowl managers have overestimated the importance of achievement-oriented factors in setting hunting regulations. Indeed, it is these unresolved value judgements, and the lack of an effective structure for organizing debate, that present the greatest threat to adaptive harvest management as a viable means for coping with management uncertainty. Copyright © 1999 by The Resilience Alliance.
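
    The Markov decision process machinery referred to above can be illustrated with a toy value-iteration example: a small population model with stochastic recruitment and state-specific harvest actions. The states, dynamics, and rewards below are invented for illustration and ignore partial controllability.

    ```python
    # Minimal sketch of the MDP machinery behind adaptive harvest management:
    # value iteration over a toy population model with stochastic recruitment.
    import numpy as np

    states = np.arange(0, 11)     # population index (e.g., millions of birds)
    actions = np.arange(0, 4)     # harvest levels
    GAMMA = 0.95

    def transition(s, a):
        # Post-harvest population, then uncertain recruitment of -1/0/+1
        # with equal probability (environmental variation, simplified).
        base = max(s - a, 0)
        nxt = np.clip([base - 1, base, base + 1], 0, 10)
        return nxt, np.full(3, 1 / 3)

    V = np.zeros_like(states, dtype=float)
    for _ in range(500):                          # value iteration
        Q = np.zeros((len(states), len(actions)))
        for s in states:
            for a in actions:
                nxt, p = transition(s, a)
                reward = min(a, s)                # harvest actually taken
                Q[s, a] = reward + GAMMA * (p @ V[nxt])
        V = Q.max(axis=1)

    policy = Q.argmax(axis=1)
    print("state-specific optimal harvest:", dict(zip(states.tolist(), policy.tolist())))
    ```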

  15. Strategies for sustainable management of renewable resources during environmental change.

    PubMed

    Lindkvist, Emilie; Ekeberg, Örjan; Norberg, Jon

    2017-03-15

    As a consequence of global environmental change, management strategies that can deal with unexpected change in resource dynamics are becoming increasingly important. In this paper we undertake a novel approach to studying resource growth problems, using a computational form of adaptive management to find optimal strategies for prevalent natural resource management dilemmas. We scrutinize adaptive management, or learning-by-doing, to better understand how to simultaneously manage and learn about a system when its dynamics are unknown. We study important trade-offs in decision-making with respect to choosing optimal actions (harvest efforts) for sustainable management during change. This is operationalized through an artificially intelligent model in which we analyze how different trends and fluctuations in the growth rate of a renewable resource affect the performance of different management strategies. Our results show that the optimal strategy for managing resources with declining growth is capable of managing resources with fluctuating or increasing growth at a negligible cost, resulting in a management strategy that is both efficient and robust towards future unknown changes. To obtain this strategy, adaptive management should strive for high learning rates from new knowledge, high valuation of future outcomes, and modest exploration around what is perceived as the optimal action. © 2017 The Author(s).

  16. Charting Multidisciplinary Team External Dynamics Using a Systems Thinking Approach

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois; Waszak, Martin R.; Jones, Kenneth M.; Silcox, Richard J.; Silva, Walter A.; Nowaczyk, Ronald H.

    1998-01-01

    Using the formalism provided by the Systems Thinking approach, the dynamics present when operating multidisciplinary teams are examined in the context of the NASA Langley Research and Technology Group, an R&D organization organized along functional lines. The paper focuses on external dynamics and examines how an organization creates and nurtures the teams and how it disseminates and retains the lessons and expertise created by the multidisciplinary activities. Key variables are selected and the causal relationships between the variables are identified. Five "stories" are told, each of which touches on a different aspect of the dynamics. The Systems Thinking Approach provides recommendations as to interventions that will facilitate the introduction of multidisciplinary teams and that therefore will increase the likelihood of performing successful multidisciplinary developments. These interventions can be carried out either by individual researchers, line management or program management.

  17. Improving DLA Supply Chain Agility: Lead Times, Order Quantities, and Information Flow

    DTIC Science & Technology

    2015-01-01

    effective inventory management. The dynamism of supply appears to be less than demand, with major problems from supply-side volatility not apparent... presents a challenge for efficient and effective inventory management. The dynamism of supply appears to be much less, with major problems from... a vehicle safety problem that newly appears. Or the lead time for the customer change of plan may be less than the lead time to procure the item. So

  18. Review of dynamic optimization methods in renewable natural resource management

    USGS Publications Warehouse

    Williams, B.K.

    1989-01-01

    In recent years, the applications of dynamic optimization procedures in natural resource management have proliferated. A systematic review of these applications is given in terms of a number of optimization methodologies and natural resource systems. The applicability of the methods to renewable natural resource systems are compared in terms of system complexity, system size, and precision of the optimal solutions. Recommendations are made concerning the appropriate methods for certain kinds of biological resource problems.

  19. Proceedings: Shrubland ecosystem dynamics in a changing environment

    Treesearch

    Jerry R. Barrow; E. Durant McArthur; Ronald E. Sosebee; Robin J. Tausch

    1996-01-01

    This proceedings contains 50 papers including an overview of shrubland ecosystem dynamics in a changing environment and several papers each on vegetation dynamics, management concerns and options, and plant ecophysiology as well as an account of a Jornada Basin field trip. Contributions emphasize the impact of changing environmental conditions on vegetative composition...

  20. Role Management, Educational Satisfaction, and Role Dynamics in Post-Secondary, Re-entry Women.

    ERIC Educational Resources Information Center

    Edmondon, Mary Ellen; And Others

    1986-01-01

    A sample of 42 post-secondary, educational re-entry women completed questionnaires focusing on background status, role dynamics, and satisfaction with their re-entry experience. Results showed no differences between students in a vocational program and those in a traditional, academic program. Role-dynamic variables--but not background-status…

  1. Dynamic Density: An Air Traffic Management Metric

    NASA Technical Reports Server (NTRS)

    Laudeman, I. V.; Shelden, S. G.; Branstrom, R.; Brasil, C. L.

    1998-01-01

    The definition of a metric of air traffic controller workload based on air traffic characteristics is essential to the development of both air traffic management automation and air traffic procedures. Dynamic density is a proposed concept for a metric that includes both traffic density (a count of aircraft in a volume of airspace) and traffic complexity (a measure of the complexity of the air traffic in a volume of airspace). It was hypothesized that a metric that includes terms that capture air traffic complexity will be a better measure of air traffic controller workload than current measures based only on traffic density. A weighted linear dynamic density function was developed and validated operationally. The proposed dynamic density function includes a traffic density term and eight traffic complexity terms. A unit-weighted dynamic density function was able to account for an average of 22% of the variance in observed controller activity not accounted for by traffic density alone. A comparative analysis of unit weights, subjective weights, and regression weights for the terms in the dynamic density equation was conducted. The best predictor of controller activity was the dynamic density equation with regression-weighted complexity terms.
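
    A minimal sketch of the weighted linear form described above follows: a density term plus complexity terms, with weights fitted by regression against observed controller activity. The term names and synthetic data are illustrative, not the validated operational terms.

    ```python
    # Sketch of a weighted linear dynamic-density function: traffic density
    # plus complexity terms, weights fitted by least squares against a proxy
    # for controller activity (all data and term names are illustrative).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 300                                    # observation intervals
    terms = {                                  # density term + complexity terms
        "aircraft_count":  rng.poisson(12, n).astype(float),
        "heading_changes": rng.poisson(3, n).astype(float),
        "speed_changes":   rng.poisson(2, n).astype(float),
        "min_separation":  rng.uniform(3, 15, n),
    }
    X = np.column_stack(list(terms.values()))
    activity = X @ np.array([1.0, 2.0, 1.5, -0.3]) + rng.normal(0, 2, n)  # proxy

    # Fit regression weights (with an intercept) and evaluate the metric.
    w, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(n)]), activity, rcond=None)
    dynamic_density = X @ w[:4] + w[4]
    print("fitted weights per term:", dict(zip(terms, np.round(w[:4], 2))))
    print("dynamic density (first 5 intervals):", np.round(dynamic_density[:5], 1))
    ```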

  2. Time is money: Rational life cycle inertia and the delegation of investment management.

    PubMed

    Kim, Hugh Hoikwang; Maurer, Raimond; Mitchell, Olivia S

    2016-08-01

    Many households display inertia in investment management over their life cycles. Our calibrated dynamic life cycle portfolio choice model can account for such an apparently 'irrational' outcome, by incorporating the fact that investors must forgo acquiring job-specific skills when they spend time managing their money, and their efficiency in financial decision making varies with age. Resulting inertia patterns mesh well with findings from prior studies and our own empirical results from Panel Study of Income Dynamics (PSID) data. We also analyze how people optimally choose between actively managing their assets versus delegating the task to financial advisors. Delegation proves valuable to both the young and the old. Our calibrated model quantifies welfare gains from including investment time and money costs as well as delegation in a life cycle setting.

  3. Impact of transportation demand management (TDM) elements on managed lanes toll prices : [summary].

    DOT National Transportation Integrated Search

    2015-04-01

    The 95 Express in Miami, Florida, is a set of dynamically tolled, managed lanes on I-95. : Single occupant vehicles must pay a toll to use 95 Express, but registered carpools, vanpools, : motorcycles, inherently low emission vehicles (ILEV; generally...

  4. Active Traffic Management: Comprehension, Legibility, Distance, and Motorist Behavior In Response to Selected Variable Speed Limit and Lane Control Signing

    DOT National Transportation Integrated Search

    2016-06-01

    Active traffic management (ATM) incorporates a collection of strategies allowing the dynamic management of recurrent and nonrecurrent congestion based on prevailing traffic conditions. These strategies help to increase peak capacity, smooth traffic f...

  5. 49 CFR 238.403 - Crash energy management.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... crash energy management system to dissipate kinetic energy during a collision. The crash energy management system shall provide a controlled deformation and collapse of designated sections within the... resulting from dynamic forces transmitted to occupied volumes. (b) The design of each unit shall consist of...

  6. Implementing optimal thinning strategies

    Treesearch

    Kurt H. Riitters; J. Douglas Brodie

    1984-01-01

    Optimal thinning regimes for achieving several management objectives were derived from two stand-growth simulators by dynamic programming. Residual mean tree volumes were then plotted against stand density management diagrams. The results supported the use of density management diagrams for comparing, checking, and implementing the results of optimization analyses....
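
    The kind of dynamic program involved can be sketched in a few lines: backward induction over a stand-density grid, choosing a thinning fraction each period to maximize harvested volume plus residual stand value. The growth model and all numbers below are invented for illustration, not taken from the cited growth simulators.

    ```python
    # Toy dynamic program in the spirit described: backward induction over a
    # density grid to choose per-period thinning fractions (numbers invented).
    import numpy as np

    DENSITIES = np.linspace(100, 1000, 46)   # stems/ha grid
    THINNINGS = [0.0, 0.1, 0.2, 0.3]         # fraction removed per period
    PERIODS = 5

    def grow(d):                             # simple density-dependent growth
        return np.clip(d * (1 + 0.3 * (1 - d / 1000)), DENSITIES[0], DENSITIES[-1])

    def nearest(d):                          # index of the closest grid density
        return int(np.abs(DENSITIES - d).argmin())

    V = DENSITIES * 0.05                     # terminal value of residual stand
    policy = []
    for _ in range(PERIODS):                 # backward induction over periods
        Q = np.empty((len(DENSITIES), len(THINNINGS)))
        for i, d in enumerate(DENSITIES):
            for j, u in enumerate(THINNINGS):
                harvested = d * u            # immediate thinning yield
                Q[i, j] = harvested + V[nearest(grow(d * (1 - u)))]
        V, policy = Q.max(axis=1), [Q.argmax(axis=1)] + policy

    print("first-period thinning choice by density index:", policy[0][:10])
    ```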

  7. Organizational Learning and Crisis Management

    ERIC Educational Resources Information Center

    Wang, Jia

    2007-01-01

    The impact of crises on organizations has been stronger than ever. This article explores the role of organizational learning in crisis management, an area that has received little attention from the HRD community. Recognizing the dynamics and interconnectedness of crisis management, organizational learning, and organizational change, the article…

  8. Climate change and long-term fire management impacts on Australian savannas.

    PubMed

    Scheiter, Simon; Higgins, Steven I; Beringer, Jason; Hutley, Lindsay B

    2015-02-01

    Tropical savannas cover a large proportion of the Earth's land surface and many people are dependent on the ecosystem services that savannas supply. Their sustainable management is crucial. Owing to the complexity of savanna vegetation dynamics, climate change and land use impacts on savannas are highly uncertain. We used a dynamic vegetation model, the adaptive dynamic global vegetation model (aDGVM), to project how climate change and fire management might influence future vegetation in northern Australian savannas. Under future climate conditions, vegetation can store more carbon than under ambient conditions. Changes in rainfall seasonality influence future carbon storage but do not turn vegetation into a carbon source, suggesting that CO₂ fertilization is the main driver of vegetation change. The application of prescribed fires with varying return intervals and burning season influences vegetation and fire impacts. Carbon sequestration is maximized with early dry season fires and long fire return intervals, while grass productivity is maximized with late dry season fires and intermediate fire return intervals. The study has implications for management policy across Australian savannas because it identifies how fire management strategies may influence grazing yield, carbon sequestration and greenhouse gas emissions. This knowledge is crucial to maintaining important ecosystem services of Australian savannas. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  9. System Dynamics

    NASA Astrophysics Data System (ADS)

    Morecroft, John

    System dynamics is an approach for thinking about and simulating situations and organisations of all kinds and sizes by visualising how the elements fit together, interact and change over time. This chapter, written by John Morecroft, describes modern system dynamics which retains the fundamentals developed in the 1950s by Jay W. Forrester of the MIT Sloan School of Management. It looks at feedback loops and time delays that affect system behaviour in a non-linear way, and illustrates how dynamic behaviour depends upon feedback loop structures. It also recognises improvements as part of the ongoing process of managing a situation in order to achieve goals. Significantly it recognises the importance of context, and practitioner skills. Feedback systems thinking views problems and solutions as being intertwined. The main concepts and tools: feedback structure and behaviour, causal loop diagrams, dynamics, are practically illustrated in a wide variety of contexts from a hot water shower through to a symphony orchestra and the practical application of the approach is described through several real examples of its use for strategic planning and evaluation.

  10. Research in Structures and Dynamics, 1984

    NASA Technical Reports Server (NTRS)

    Hayduk, R. J. (Compiler); Noor, A. K. (Compiler)

    1984-01-01

    A symposium on advances and trends in structures and dynamics was held to communicate new insights into physical behavior and to identify trends in the solution procedures for structures and dynamics problems. Pertinent areas of concern were (1) multiprocessors, parallel computation, and database management systems; (2) advances in finite element technology; (3) interactive computing and optimization; (4) mechanics of materials; (5) structural stability; (6) dynamic response of structures; and (7) advanced computer applications.

  11. Managing Tipping Point Dynamics in Single Development Projects

    DTIC Science & Technology

    2006-04-30

    Journal of Product Innovation Management, 22(2005), 177-192. Lyneis, F., Cooper, K., & Els, S. (2001). Strategic management of complex projects: A case...new product development. Journal of Product Innovation Management, 18(2001), 265-300. Richardson, G.P., & Pugh, A.L. (1981). Introduction to... Product Innovation Management, 17(2000), 128-142. United States General Accounting Office (USGAO). (1996). Department of Energy: Opportunity to improve

  12. Do impression management and self-deception distort self-report measures with content of dynamic risk factors in offender samples? A meta-analytic review.

    PubMed

    Hildebrand, Martin; Wibbelink, Carlijn J M; Verschuere, Bruno

    Self-report measures provide an important source of information in correctional/forensic settings, yet at the same time the validity of that information is often questioned because self-reports are thought to be highly vulnerable to self-presentation biases. Primary studies in offender samples have provided mixed results with regard to the impact of socially desirable responding on self-reports. The main aim of the current study was therefore to investigate, via a meta-analytic review of published studies, the association between the two dimensions of socially desirable responding, impression management and self-deceptive enhancement, and self-report measures with content of dynamic risk factors using the Balanced Inventory of Desirable Responding (BIDR) in offender samples. These self-report measures were significantly and negatively related to self-deception (r = -0.120, p < 0.001; k = 170 effect sizes) and impression management (r = -0.158, p < 0.001; k = 157 effect sizes), yet there was evidence of publication bias for the impression management effect, with the trim-and-fill method indicating that the relation is probably even smaller (r = -0.07). The magnitude of the effect sizes was small. Moderation analyses suggested that type of dynamic risk factor (e.g., antisocial cognition versus antisocial personality), incentives, and publication year affected the relationship between impression management and self-report measures with content of dynamic risk factors, whereas sample size, setting (e.g., incarcerated, community), and publication year influenced the relation between self-deception and these self-report measures. The results indicate that the use of self-report measures to assess dynamic risk factors in correctional/forensic settings is not inevitably compromised by socially desirable responding, yet caution is warranted for some risk factors (antisocial personality traits), particularly when incentives are at play. Copyright © 2018 Elsevier Ltd. All rights reserved.
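
    The pooling step implied by such a meta-analysis of correlations can be sketched with the standard Fisher z approach; the correlations and sample sizes below are hypothetical stand-ins, not the study's data:

```python
# Fixed-effect pooling of correlation coefficients: transform each r to
# Fisher z, weight by inverse variance (n - 3), average, back-transform.

import math

def pooled_r(effects):
    """effects: list of (r, n) pairs; returns the weighted mean correlation."""
    num, den = 0.0, 0.0
    for r, n in effects:
        z = math.atanh(r)        # Fisher r-to-z transform
        w = n - 3                # inverse of var(z) = 1 / (n - 3)
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform the weighted mean z

# hypothetical impression-management effect sizes from five samples
print(round(pooled_r([(-0.21, 120), (-0.10, 300), (-0.18, 85),
                      (-0.05, 450), (-0.25, 60)]), 3))
```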

  13. Ecosystem Management With Multiple Owners: Landscape Dynamics in a Southern Appalachian Watershed

    Treesearch

    David N. Wear; Monica G. Turner; Richard O. Flamm

    1996-01-01

    Ecosystem management is emerging as an organizing theme for land use and resource management in the United States. However, while this subject is dominating professional and policy discourse, little research has examined how such system-level goals might be formulated and implemented. Effective ecosystem management will require insights into the functioning of...

  14. Model for multi-stand management based on structural attributes of individual stands

    Treesearch

    G.W. Miller; J. Sullivan

    1997-01-01

    A growing interest in managing forest ecosystems calls for decision models that take into account attribute goals for large forest areas while continuing to recognize the individual stand as a basic unit of forest management. A dynamic, nonlinear forest management model is described that schedules silvicultural treatments for individual stands that are linked by multi-...

  15. Linking Governance to Sustainable Management Outcomes: Applying Dynamic Indicator Profiles to River Basin Organization Case Studies around the World.

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Bouckaert, F. W.

    2017-12-01

    Institutional best practice for integrated river basin management advocates the river basin organisation (RBO) model as pivotal to achieving sustainable management outcomes and stakeholder engagement. The model has been widely practiced in transboundary settings and is increasingly adopted at national scales, though its effectiveness remains poorly studied. A meta-analysis of four river basins was conducted to assess governance models and link them to evaluations of biophysical management outcomes. The analysis is based on a Theory of Change framework and includes functional dynamic governance indicator profiles coupled to sustainable ecosystem management outcome profiles. The governance and outcome profiles, informed by context-specific indicators, indicate that targets must be set in multiple dimensions, and trajectory outlines are a useful tool to track progress along the journey mapped out by the Theory of Change framework. Priorities, trade-offs and objectives vary in each basin, but the diagnostic tool allows comparison between basins in their capacity to reach targets through successive evaluations. The distance between capacity and target scores determines how program planning should be prioritized and resources allocated for implementation; this is a dynamic process requiring regular evaluations and adaptive management. The findings of this study provide a conceptual framework for combining dimensions of integrated water management principles that bridge tensions between (i) stakeholder engagement and participatory management (bottom-up approach) using localized knowledge and (ii) decision-making, command-and-control, system-scale, accountable and equitable management (top-down approach). The notion of adaptive management is broadened to include whole-of-program learnings, rather than single hypothesis-based learning adjustments. This triple-loop learning combines exploitative methods refinement with explorative evaluation of underlying paradigms. These findings suggest that in order to achieve effective management outcomes, a framework is required that combines governance performance with evaluations of biophysical outcomes.

  16. Reconstructing past fire regimes: methods, applications, and relevance to fire management and conservation

    NASA Astrophysics Data System (ADS)

    Conedera, Marco; Tinner, Willy; Neff, Christophe; Meurer, Manfred; Dickens, Angela F.; Krebs, Patrik

    2009-03-01

    Biomass burning and resulting fire regimes are major drivers of vegetation changes and of ecosystem dynamics. Understanding past fire dynamics and their relationship to these factors is thus a key factor in preserving and managing present biodiversity and ecosystem functions. Unfortunately, our understanding of the disturbance dynamics of past fires is incomplete, and many open questions exist relevant to these concepts and the related methods. In this paper we describe the present status of the fire-regime concept, discuss the notion of the fire continuum and related proxies, and review the most important existing approaches for reconstructing fire history at centennial to millennial scales. We conclude with a short discussion of selected directions for future research that may lead to a better understanding of past fire-regime dynamics. In particular, we suggest that emphasis should be laid on (1) discriminating natural from anthropogenic fire-regime types, (2) improving combined analysis of fire and vegetation reconstructions to study long-term fire ecology, and (3) overcoming problems in defining temporal and spatial scales of reference, which would allow better use of past records to gain important insights for landscape, fire and forest management.

  17. Technology Management within Product Lines in High Technology Markets

    ERIC Educational Resources Information Center

    Sarangee, Kumar R.

    2009-01-01

    Understanding the nuances of product line management has been of great interest to business scholars and practitioners. This assumes greater significance for firms conducting business in technologically dynamic industries, where they face certain challenges regarding the management of multiple, overlapping technologies within their product lines.…

  18. Managing for forage and grazingland resilience to maintain enterprise resilience in the Northern Great Plains of the US

    USDA-ARS?s Scientific Manuscript database

    Maintaining grazingland and enterprise resilience under changing climatic and economic conditions requires novel, resilience-based management strategies. State-and-transition models provide a solid foundation and framework for management of grazinglands using non-equilibrium dynamics. These models ...

  19. Data driven weed management: Tracking herbicide resistance at the landscape scale

    USDA-ARS?s Scientific Manuscript database

    Limiting the prevalence of herbicide resistant (HR) weeds requires consistent management implementation across space and time. Although weed population dynamics operate at scales above farm-level, the emergent effect of neighboring management decisions on in-field weed densities and the spread of re...

  20. Urban Pest Management of Ants in California

    USDA-ARS?s Scientific Manuscript database

    Keeping pace with the dynamic and evolving landscape of invasive ants in California presents a formidable challenge to the pest management industry. Pest management professionals (PMPs) are on the frontlines when it comes to battling these exotic ant pests, and are often the first ones to intercept ...

  1. Governance and management dynamics of landscape restoration at multiple scales: Learning from successful environmental managers in Sweden.

    PubMed

    Dawson, Lucas; Elbakidze, Marine; Angelstam, Per; Gordon, Johanna

    2017-07-15

    Due to a long history of intensive land and water use, habitat networks for biodiversity conservation are generally degraded in Sweden. Landscape restoration (LR) is an important strategy for achieving representative and functional green infrastructures. However, outcomes of LR efforts are poorly studied, particularly the dynamics of LR governance and management. We apply systems thinking methods to a series of LR case studies to analyse the causal structures underlying LR governance and management in Sweden. We show that these structures appear to comprise an interlinked system of at least three sets of drivers and four core processes. This system exhibits many characteristics of a transformative change towards an integrated, adaptive approach to governance and management. Key challenges for Swedish LR projects relate to institutional and regulatory flexibility, the timely availability of sufficient funds, and the management of learning and knowledge production processes. In response, successful project leaders develop several key strategies to manage complexity and risk, and enhance perceptions of the attractiveness of LR projects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Dynamic visualisation of municipal waste management performance in the EU using Ternary Diagram method.

    PubMed

    Pomberger, R; Sarc, R; Lorber, K E

    2017-03-01

    This contribution describes the dynamic visualisation of European (EU 28) municipal waste management performance using the Ternary Diagram Method. Municipal waste management performance depends primarily on three treatment categories: recycling & composting, incineration and landfilling. The framework of current municipal waste management, including recycling targets, etc., is given by the Waste Framework Directive - 2008/98/EC. The proposed Circular Economy Package should stimulate Europe's transition towards more sustainable, resource- and energy-oriented waste management. The Package also includes a revised legislative proposal on waste that sets ambitious recycling rates for municipal waste for 2025 (60%) and 2030 (65%). Additionally, a new calculation method for monitoring the attainment of the targets should be applied. In 2014, ca. 240 million tonnes of municipal waste were generated in the EU. While in 1995, 17% were recycled and composted, 14% incinerated and 64% landfilled, in 2014 ca. 71% were recovered and only 28% landfilled. Considering the treatment performance of the individual EU member states, the EU 28 can be divided into three groups, namely "Recovery Countries", "Transition Countries" and "Landfilling Countries". Using the Ternary Diagram Method, three types of visualisation of municipal waste management performance have been investigated and extensively described. For a better understanding of municipal waste management performance over the last 20 years, dynamic visualisation of the Eurostat table-form data on all 28 member states of the EU has been carried out in three different ways: 1. "Performance positioning" of waste management unit(s) at a specific date; 2. "Performance dynamics" over a certain time period; and 3. "Performance development" expressed as track(s). Results obtained show that the Ternary Diagram Method is very well suited for understanding past developments and coherences, monitoring current situations and forecasting future paths. One of the interesting coherences shown by the method is the linked development of recycling & composting (60-65%) with incineration (40-35%) performance over the last 20 years in the EU 28. Copyright © 2017 Elsevier Ltd. All rights reserved.
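
    At its core, the Ternary Diagram Method maps three shares that sum to 100% onto a point in a triangle, so a country's track over time becomes a curve. A minimal sketch of that coordinate mapping (the 2014 recycling/incineration split below is an assumed, illustrative breakdown of the quoted 71% recovered):

```python
# Barycentric-to-Cartesian mapping for a ternary diagram: the corners of
# the triangle are pure landfilling (0,0), pure incineration (1,0), and
# pure recycling & composting (0.5, sqrt(3)/2).

import math

def ternary_xy(recycle, incinerate, landfill):
    """Map three shares to x, y inside a unit-side triangle."""
    total = recycle + incinerate + landfill   # normalize so shares sum to 1
    r, i, _ = recycle / total, incinerate / total, landfill / total
    x = 0.5 * r + i
    y = (math.sqrt(3) / 2) * r
    return x, y

# EU 28 "performance development" track. The 1995 shares are quoted in
# the abstract; the 2014 recycling vs. incineration split is assumed.
track = [(1995, 0.17, 0.14, 0.64), (2014, 0.44, 0.27, 0.28)]
for year, rec, inc, lnd in track:
    x, y = ternary_xy(rec, inc, lnd)
    print(year, round(x, 3), round(y, 3))
```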

  3. Modeling social crowds. Comment on "Human behaviours in evacuation crowd dynamics: From modelling to "big data" toward crisis management" by Nicola Bellomo et al.

    NASA Astrophysics Data System (ADS)

    Poyato, David; Soler, Juan

    2016-09-01

    The study of human behavior is a complex task, but modeling some aspects of this behavior is an even more complicated and exciting idea. From crisis management to decision making in evacuation protocols, understanding the complexity of humans in stress situations is more and more demanded in our society, for obvious reasons [5,6,8,12]. In this context, [4] deals with crowd dynamics with special attention to evacuation.

  4. Dynamic Control of Radiative Heat Transfer with Tunable Materials for Thermal Management in Both Far and Near Fields

    NASA Astrophysics Data System (ADS)

    Yang, Yue

    The proposed research mainly focuses on employing tunable materials to achieve dynamic control of radiative heat transfer in both far and near fields for thermal management. Vanadium dioxide (VO2), which undergoes a phase transition from insulator to metal at the temperature of 341 K, is one tunable material being applied. The other one is graphene, whose optical properties can be tuned by chemical potential through external bias or chemical doping. (Abstract shortened by ProQuest.)

  5. The use of hydro-dynamic models in the practice-oriented education of engineering students

    NASA Astrophysics Data System (ADS)

    Sziebert, J.; Zellei, L.; Tamás, E. A.

    2009-04-01

    Management tasks related to open channel flows have become rather comprehensive and multi-disciplinary, particularly with the predominance of nature management aspects. The water regime of our rivers has reached extremes more and more frequently in the past decades. In order to develop and analyse alternative solutions and to handle and resolve conflicts of interest, we apply 1D hydro-dynamic models in education to explain processes and to improve the practical skills of our students.

  6. Comparing models of Red Knot population dynamics

    USGS Publications Warehouse

    McGowan, Conor P.

    2015-01-01

    Predictive population modeling contributes to our basic scientific understanding of population dynamics, but can also inform management decisions by evaluating alternative actions in virtual environments. Quantitative models mathematically reflect scientific hypotheses about how a system functions. In Delaware Bay, mid-Atlantic Coast, USA, to more effectively manage horseshoe crab (Limulus polyphemus) harvests and protect Red Knot (Calidris canutus rufa) populations, models are used to compare harvest actions and predict the impacts on crab and knot populations. Management has been chiefly driven by the core hypothesis that horseshoe crab egg abundance governs the survival and reproduction of migrating Red Knots that stopover in the Bay during spring migration. However, recently, hypotheses proposing that knot dynamics are governed by cyclical lemming dynamics garnered some support in data analyses. In this paper, I present alternative models of Red Knot population dynamics to reflect alternative hypotheses. Using 2 models with different lemming population cycle lengths and 2 models with different horseshoe crab effects, I project the knot population into the future under environmental stochasticity and parametric uncertainty with each model. I then compare each model's predictions to 10 yr of population monitoring from Delaware Bay. Using Bayes' theorem and model weight updating, models can accrue weight or support for one or another hypothesis of population dynamics. With 4 models of Red Knot population dynamics and only 10 yr of data, no hypothesis clearly predicted population count data better than another. The collapsed lemming cycle model performed best, accruing ~35% of the model weight, followed closely by the horseshoe crab egg abundance model, which accrued ~30% of the weight. The models that predicted no decline or stable populations (i.e. the 4-yr lemming cycle model and the weak horseshoe crab effect model) were the most weakly supported.
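
    The Bayes-theorem model-weight updating described here can be sketched in a few lines. The model names echo the four hypotheses above, but all counts, predictions, and the assumed observation error below are hypothetical:

```python
# Competing models start with equal weights; each year's observed count
# multiplies each weight by that model's predictive likelihood, and the
# weights are renormalized (Bayes' theorem over a discrete model set).

import math

def normal_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

models = ["lemming-4yr", "lemming-collapsed", "crab-strong", "crab-weak"]
weights = {m: 0.25 for m in models}   # equal prior weights

# hypothetical yearly (observed count, per-model predicted count) pairs
data = [
    (45000, {"lemming-4yr": 50000, "lemming-collapsed": 46000,
             "crab-strong": 43000, "crab-weak": 49000}),
    (42000, {"lemming-4yr": 50500, "lemming-collapsed": 43500,
             "crab-strong": 40000, "crab-weak": 49500}),
]

SD = 4000  # assumed observation error (birds)
for observed, predictions in data:
    for m in models:
        weights[m] *= normal_pdf(observed, predictions[m], SD)
    total = sum(weights.values())
    weights = {m: w / total for m, w in weights.items()}

print({m: round(w, 3) for m, w in weights.items()})
```

    With few data, weights separate slowly, which matches the abstract's finding that 10 years of counts could not clearly favor one hypothesis.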

  7. Games of corruption in preventing the overuse of common-pool resources.

    PubMed

    Lee, Joung-Hun; Jusup, Marko; Iwasa, Yoh

    2017-09-07

    Maintaining human cooperation in the context of common-pool resource management is extremely important because otherwise we risk overuse and corruption. To analyse the interplay between economic and ecological factors leading to corruption, we couple the resource dynamics and the evolutionary dynamics of strategic decision making into a powerful analytical framework. The traits of this framework are: (i) an arbitrary number of harvesters share the responsibility to sustainably exploit a specific part of an ecosystem, (ii) harvesters face three strategic choices for exploiting the resource, (iii) a delegated enforcement system is available if called upon, (iv) enforcers are either honest or corrupt, and (v) the resource abundance reflects the choice of harvesting strategies. The resulting dynamical system is bistable; depending on the initial conditions, it evolves either to cooperative (sustainable exploitation) or defecting (overexploitation) equilibria. Using the domain of attraction to cooperative equilibria as an indicator of successful management, we find that the more resilient the resource (as implied by a high growth rate), the more likely the dominance of corruption which, in turn, suppresses the cooperative outcome. A qualitatively similar result arises when slow resource dynamics relative to the dynamics of decision making mask the benefit of cooperation. We discuss the implications of these results in the context of managing common-pool resources. Copyright © 2017 Elsevier Ltd. All rights reserved.
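
    A minimal sketch in the spirit of this framework: logistic resource dynamics coupled to replicator dynamics for the cooperator share, with a delegated enforcement penalty on defectors. The functional forms and parameters are illustrative, not the authors' model, and corruption is omitted for brevity:

```python
# Coupled resource-strategy dynamics. Defectors harvest harder but pay an
# enforcement fine that scales with cooperator support, which makes the
# system bistable: initial conditions pick the equilibrium.

R_GROWTH, K = 0.6, 1.0          # resource growth rate, carrying capacity
H_COOP, H_DEF = 0.2, 0.9        # harvest pressure: cooperators vs. defectors
FINE = 1.0                      # enforcement penalty, scaled by cooperator share
DT, STEPS = 0.01, 200000

def simulate(resource, coop_frac):
    for _ in range(STEPS):
        harvest = coop_frac * H_COOP + (1 - coop_frac) * H_DEF
        d_res = R_GROWTH * resource * (1 - resource / K) - harvest * resource
        payoff_coop = H_COOP * resource
        payoff_def = H_DEF * resource - FINE * coop_frac
        # replicator equation: strategies above the mean payoff spread
        mean = coop_frac * payoff_coop + (1 - coop_frac) * payoff_def
        d_coop = coop_frac * (payoff_coop - mean)
        resource = max(resource + DT * d_res, 0.0)
        coop_frac = min(max(coop_frac + DT * d_coop, 0.0), 1.0)
    return round(resource, 3), round(coop_frac, 3)

# Bistability: different initial conditions reach different equilibria.
print(simulate(0.9, 0.8))   # -> sustainable exploitation
print(simulate(0.3, 0.2))   # -> overexploitation (resource collapse)
```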

  8. Combined effects of climate and land management on watershed vegetation dynamics in an arid environment

    Treesearch

    Peilong Liu; Lu Hao; Cen Pan; Decheng Zhou; Yongqiang Liu; Ge Sun

    2017-01-01

    Leaf area index (LAI) is a key parameter to characterize vegetation dynamics and ecosystem structure that determines the ecosystem functions and services such as clean water supply and carbon sequestration in a watershed. However, linking LAI dynamics and environmental controls (i.e., coupling biosphere, atmosphere, and anthroposphere) remains challenging and such type of...

  9. Wetland fire scar monitoring and analysis using archival Landsat data for the Everglades

    USGS Publications Warehouse

    Jones, John W.; Hall, Annette E.; Foster, Ann M.; Smith, Thomas J.

    2013-01-01

    The ability to document the frequency, extent, and severity of fires in wetlands, as well as the dynamics of post-fire wetland land cover, informs fire and wetland science, resource management, and ecosystem protection. Available information on Everglades burn history has been based on field data collection methods that evolved through time and differ by land management unit. Our objectives were to (1) design and test broadly applicable and repeatable metrics of not only fire scar delineation but also post-fire land cover dynamics through exhaustive use of the Landsat satellite data archives, and then (2) explore how those metrics relate to various hydrologic and anthropogenic factors that may influence post-fire land cover dynamics. Visual interpretation of every Landsat scene collected over the study region during the study time frame produced a new, detailed database of burn scars greater than 1.6 ha in size in the Water Conservation Areas and post-fire land cover dynamics for Everglades National Park fires greater than 1.6 ha in area. Median burn areas were compared across several landscape units of the Greater Everglades and found to differ as a function of administrative unit and fire history. Some burned areas transitioned to open water, exhibiting water depths and dynamics that support transition mechanisms proposed in the literature. Classification tree techniques showed that time to green-up and return to pre-burn character were largely explained by fire management practices and hydrology. Broadly applicable as they use data from the global, nearly 30-year-old Landsat archive, these methods for documenting wetland burn extent and post-fire land cover change enable cost-effective collection of new data on wetland fire ecology and independent assessment of fire management practice effectiveness.

  10. Simulating soil phosphorus dynamics for a phosphorus loss quantification tool.

    PubMed

    Vadas, Peter A; Joern, Brad C; Moore, Philip A

    2012-01-01

    Pollution of fresh waters by agricultural phosphorus (P) is a water quality concern. Because soils can contribute significantly to P loss in runoff, it is important to assess how management affects soil P status over time, which is often done with models. Our objective was to describe and validate soil P dynamics in the Annual P Loss Estimator (APLE) model. APLE is a user-friendly spreadsheet model that simulates P loss in runoff and soil P dynamics over 10 yr for a given set of runoff, erosion, and management conditions. For soil P dynamics, APLE simulates two layers in the topsoil, each with three inorganic P pools and one organic P pool. It simulates P additions to soil from manure and fertilizer, distribution among pools, mixing between layers due to tillage and bioturbation, leaching between and out of layers, crop P removal, and loss by surface runoff and erosion. We used soil P data from 25 published studies to validate APLE's soil P processes. Our results show that APLE reliably simulated soil P dynamics for a wide range of soil properties, soil depths, P application sources and rates, durations, soil P contents, and management practices. We validated APLE specifically for situations where soil P was increasing from excessive P inputs, where soil P was decreasing due to greater outputs than inputs, and where soil P stratification occurred in no-till and pasture soils. Successful simulations demonstrate APLE's potential to be applied to major management scenarios related to soil P loss in runoff and erosion. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
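
    A toy annual mass balance in the spirit of the pool structure described here (two layers; additions, crop removal, leaching, mixing, and runoff loss). The equations and rates below are illustrative stand-ins, not APLE's calibrated relationships:

```python
# Each year: add manure P, remove crop P, lose a fraction to runoff and
# erosion, leach a fraction downward, and mix layers via tillage and
# bioturbation. All rates are hypothetical.

YEARS = 10
p_top, p_sub = 800.0, 400.0   # kg P/ha in topsoil layers 1 and 2, assumed

MANURE_P = 40.0      # annual P addition to layer 1 (kg/ha)
CROP_REMOVAL = 25.0  # annual crop P uptake from layer 1 (kg/ha)
RUNOFF_FRAC = 0.01   # fraction of layer-1 P lost to runoff + erosion
LEACH_FRAC = 0.02    # fraction of each layer's P leached downward
MIX_FRAC = 0.05      # tillage/bioturbation mixing between layers

for year in range(1, YEARS + 1):
    p_top += MANURE_P - CROP_REMOVAL
    runoff = RUNOFF_FRAC * p_top
    leach = LEACH_FRAC * p_top
    p_top -= runoff + leach
    p_sub += leach - LEACH_FRAC * p_sub   # layer 2 also leaches out the bottom
    mix = MIX_FRAC * (p_top - p_sub)      # mixing moves P down the gradient
    p_top -= mix
    p_sub += mix
    print(year, round(p_top, 1), round(p_sub, 1), round(runoff, 2))
```

    Even this toy shows the stratification behavior the paper validates: with mixing turned down (no-till), layer 1 accumulates P faster than layer 2.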

  11. Is Earth F**ked? Dynamical Futility of Global Environmental Management and Possibilities for Sustainability via Direct Action Activism

    NASA Astrophysics Data System (ADS)

    Werner, B.

    2012-12-01

    Environmental challenges are dynamically generated within the dominant global culture principally by the mismatch between short-time-scale market and political forces driving resource extraction/use and longer-time-scale accommodations of the Earth system to these changes. Increasing resource demand is leading to the development of two-way, nonlinear interactions between human societies and environmental systems that are becoming global in extent, either through globalized markets and other institutions or through coupling to global environmental systems such as climate. These trends are further intensified by dissipation-reducing technological advances in transactions, communication and transport, which suppress emergence of longer-time-scale economic and political levels of description and facilitate long-distance connections, and by predictive environmental modeling, which strengthens human connections to a short-time-scale virtual Earth, and weakens connections to the longer time scales of the actual Earth. Environmental management seeks to steer fast scale economic and political interests of a coupled human-environmental system towards longer-time-scale consideration of benefits and costs by operating within the confines of the dominant culture using a linear, engineering-type connection to the system. Perhaps as evidenced by widespread inability to meaningfully address such global environmental challenges as climate change and soil degradation, nonlinear connections reduce the ability of managers to operate outside coupled human-environmental systems, decreasing their effectiveness in steering towards sustainable interactions and resulting in managers slaved to short-to-intermediate-term interests. In sum, the dynamics of the global coupled human-environmental system within the dominant culture precludes management for stable, sustainable pathways and promotes instability. Environmental direct action, resistance taken from outside the dominant culture, as in protests, blockades and sabotage by indigenous peoples, workers, anarchists and other activist groups, increases dissipation within the coupled system over fast to intermediate scales and pushes for changes in the dominant culture that favor transition to a stable, sustainable attractor. These dynamical relationships are illustrated and explored using a numerical model that simulates the short-, intermediate- and long-time-scale dynamics of the coupled human-environmental system. At fast scales, economic and political interests exploit environmental resources through a maze of environmental management and resistance, guided by virtual Earth predictions. At intermediate scales, managers become slaved to economic and political interests, which adapt to and repress resistance, and resistance is guided by patterns of environmental destruction. At slow scales, resistance interacts with the cultural context, which co-evolves with the environment. The transition from unstable dynamics to sustainability is sensitively dependent on the level of participation in and repression of resistance. Because of their differing impact inside and outside the dominant culture, virtual Earth predictions can either promote or oppose sustainability. Supported by the National Science Foundation, Geomorphology and Land Use Dynamics Program.

  12. THE DYNAMIC REGIME CONCEPT FOR ECOSYSTEM MANAGEMENT AND RESTORATION

    EPA Science Inventory

    Dynamic regimes of ecosystems are multidimensional basins of attraction, characterized by particular species communities and ecosystem processes. Ecosystem patterns and processes rarely respond linearly to disturbances, and the nonlinear dynamic regime concept offers a more real...

  13. Resilience, Integrity and Ecosystem Dynamics: Bridging Ecosystem Theory and Management

    NASA Astrophysics Data System (ADS)

    Müller, Felix; Burkhard, Benjamin; Kroll, Franziska

    In this paper different approaches to elucidate ecosystem dynamics are described, illustrated and interrelated. Ecosystem development is distinguished into two separate sequences: a complexifying phase, which is characterized by orientor optimization, and a destruction-based phase, which follows disturbances. The two developmental pathways are integrated in a modified illustration of the "adaptive cycle". Based on these fundamentals, the recent definitions of resilience, adaptability and vulnerability are discussed and a modified comprehension is proposed. Thereafter, two case studies about wetland dynamics are presented to demonstrate both the consequences of disturbance and the potential of ecosystem recovery. In both examples ecosystem integrity is used as a key indicator variable. Based on the presented results, the relativity and the normative loading of resilience quantification are worked out. The paper ends with the suggestion that the features of adaptability could be used as an integrative guideline for the analysis of ecosystem dynamics and as a well-suited concept for ecosystem management.

  14. Modelling the dynamics of feral alfalfa populations and its management implications.

    PubMed

    Bagavathiannan, Muthukumar V; Begg, Graham S; Gulden, Robert H; Van Acker, Rene C

    2012-01-01

    Feral populations of cultivated crops can pose challenges to novel trait confinement within agricultural landscapes. Simulation models can be helpful in investigating the underlying dynamics of feral populations and determining suitable management options. We developed a stage-structured matrix population model for roadside feral alfalfa populations occurring in southern Manitoba, Canada. The model accounted for the existence of density dependence and recruitment subsidy in feral populations. We used the model to investigate the long-term dynamics of feral alfalfa populations, and to evaluate the effectiveness of simulated management strategies such as herbicide application and mowing in controlling feral alfalfa. Results suggest that alfalfa populations occurring in roadside habitats can be persistent and are unlikely to go extinct under current road-verge management scenarios. Management attempts focused on controlling adult plants alone can be counterproductive due to the presence of density-dependent effects. Targeted herbicide application, which can achieve complete control of seedlings, rosettes and established plants, will be an effective strategy, but the seedbank population may contribute new recruits. In regions where roadside mowing is regularly practiced, devising a timely mowing strategy (early- to mid-August for southern Manitoba), one that can totally prevent seed production, will be a feasible option for managing feral alfalfa populations. Feral alfalfa populations can be persistent in roadside habitats. Timely mowing or regular targeted herbicide application will be effective in managing feral alfalfa populations and limiting feral-population-mediated gene flow in alfalfa. However, in the context of novel trait confinement, the extent to which feral alfalfa populations need to be managed will be dictated by the tolerance levels established by specific production systems for specific traits. The modelling framework outlined in this paper could be applied to other perennial herbaceous plants with similar life-history characteristics.
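
    A stage-structured matrix projection with density-dependent recruitment, of the general kind described here, can be sketched as follows; the stages, rates, and density-dependence form are illustrative, not the paper's parameterization:

```python
# Lefkovitch-style projection: n(t+1) = A(t) n(t), where recruitment from
# the seedbank is suppressed as adult density rises (density dependence).

import numpy as np

# stages: [seedbank, rosette, established adult]; all rates hypothetical
A = np.array([
    [0.40, 0.0,  120.0],   # seed survival in the bank; adult seed production
    [0.05, 0.20, 0.0],     # recruitment from seedbank; rosette persistence
    [0.0,  0.30, 0.85],    # rosette maturation; adult survival
])

K_ADULT = 50.0   # assumed half-saturation density for recruitment

n = np.array([1000.0, 20.0, 5.0])
for year in range(30):
    dd = 1.0 / (1.0 + n[2] / K_ADULT)   # crowding suppresses new recruits
    At = A.copy()
    At[1, 0] *= dd                      # density-dependent recruitment
    n = At @ n
print(np.round(n, 1))   # heads toward a density-regulated equilibrium
```

    Because recruitment rebounds when adults are removed, culling adults alone raises dd and can be counterproductive, which is the density-dependent effect the abstract warns about.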

  15. The dynamics of social-ecological systems in urban landscapes: Stockholm and the National Urban Park, Sweden.

    PubMed

    Elmqvist, T; Colding, J; Barthel, S; Borgstrom, S; Duit, A; Lundberg, J; Andersson, E; Ahrné, K; Ernstson, H; Folke, C; Bengtsson, J

    2004-06-01

    This study addresses social-ecological dynamics in the greater metropolitan area of Stockholm County, Sweden, with special focus on the National Urban Park (NUP). It is part of the Millennium Ecosystem Assessment (MA) and has the following specific objectives: (1) to provide scientific information on biodiversity patterns, ecosystem dynamics, and ecosystem services generated; (2) to map interplay between actors and institutions involved in management of ecosystem services; and (3) to identify strategies for strengthening social-ecological resilience. The green areas in Stockholm County deliver numerous ecosystem services, for example, air filtration, regulation of microclimate, noise reduction, surface water drainage, recreational and cultural values, nutrient retention, and pollination and seed dispersal. Recreation is among the most important services and NUP, for example, has more than 15 million visitors per year. More than 65 organizations representing 175,000 members are involved in management of ecosystem services. However, because of population increase and urban growth during the last three decades, the region displays a quite dramatic loss of green areas and biodiversity. An important future focus is how management may reduce increasing isolation of urban green areas and enhance connectivity. Comanagement should be considered where locally managed green space may function as buffer zones and for management of weak links that connect larger green areas; for example, there are three such areas around NUP identified. Preliminary results indicate that areas of informal management represent centers on which to base adaptive comanagement, with the potential to strengthen biodiversity management and resilience in the landscape.

  16. [Design of medical devices management system supporting full life-cycle process management].

    PubMed

    Su, Peng; Zhong, Jianping

    2014-03-01

    Based on an analysis of the present status of medical devices management, this paper optimized the management process and developed a medical devices management system using Web technologies. Information technology is used to dynamically track the state of use of medical devices across their entire life cycle. Through closed-loop management, with pre-event budgeting, mid-event control and after-event analysis, the system improved the fine-grained management of medical devices, optimized asset allocation, and promoted the effective operation of devices.

  17. Classroom Management Challenges in the Dance Class

    ERIC Educational Resources Information Center

    Clark, Dawn

    2007-01-01

    Teaching dance can be a challenge because of the unique classroom-management situations that arise from the dynamic nature of the class content. Management is a delicate navigation of advance planning; rule setting; the establishment and implementation of daily protocols, routines, and interventions; and the teacher's own presentation. This…

  18. Saving Face: Managing Rapport in a Problem-Based Learning Group

    ERIC Educational Resources Information Center

    Robinson, Leslie; Harris, Ann; Burton, Rob

    2015-01-01

    This qualitative study investigated the complex social aspects of communication required for students to participate effectively in Problem-Based Learning and explored how these dynamics are managed. The longitudinal study of a group of first-year undergraduates examined interactions using Rapport Management as a framework to analyse communication…

  19. Extend Instruction outside the Classroom: Take Advantage of Your Learning Management System

    ERIC Educational Resources Information Center

    Jensen, Lauren A.

    2010-01-01

    Numerous institutions of higher education have implemented a learning management system (LMS) or are considering doing so. This web-based software package provides self-service and quick (often personalized) access to content in a dynamic environment. Learning management systems support administrative, reporting, and documentation activities. LMSs…

  20. From Discipline to Dynamic Pedagogy: A Re-Conceptualization of Classroom Management

    ERIC Educational Resources Information Center

    Davis, Jonathan Ryan

    2017-01-01

    The purpose of this article is to re-conceptualize the definition of classroom management, moving away from its traditional definition rooted in discipline and control toward a definition that focuses on the creation of a positive learning environment. Integrating innovative, culturally responsive classroom management theories, frameworks, and…
