Sample records for efficient dynamic programming

  1. The application of dynamic programming in production planning

    NASA Astrophysics Data System (ADS)

    Wu, Run

    2017-05-01

    Nowadays, with the popularity of computers, various industries and fields widely apply computer information technology, which creates a huge demand for a variety of application software. In order to develop software that meets various needs at the lowest cost and with the best quality, programmers must design efficient algorithms. A superior algorithm not only solves the problem at hand, but also maximizes the benefits and incurs the smallest overhead. As one of the common algorithmic techniques, dynamic programming is used to solve problems that exhibit some form of optimal substructure. When a problem contains a large number of sub-problems that require repeated calculation, the ordinary recursive method consumes exponential time, whereas a dynamic programming algorithm can reduce the time complexity to the polynomial level; dynamic programming is therefore very efficient compared with other approaches, reducing the computational complexity while enriching the computational results. In this paper, we expound the concept, basic elements, properties, core ideas, solving steps and difficulties of the dynamic programming algorithm, and establish a dynamic programming model of the production planning problem.
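
    The abstract contrasts exponential naive recursion with polynomial-time dynamic programming and mentions a production-planning model. The sketch below is a hypothetical illustration of that idea, not the paper's model: a small lot-sizing problem in which each period's demand must be met, production incurs a setup plus per-unit cost, leftover inventory incurs a holding cost, and memoization keeps the recursion polynomial in the number of (period, inventory) states.

```python
# Hypothetical illustration (not the paper's model): a small production-planning
# dynamic program. best_cost(t, inv) = minimum cost of meeting demand for periods
# t..T-1 when starting period t with `inv` units on hand.

from functools import lru_cache

demand = [3, 2, 4, 1]          # assumed demand per period
setup, unit_cost, hold = 5.0, 1.0, 0.5
max_inv = sum(demand)          # inventory never needs to exceed remaining demand

@lru_cache(maxsize=None)
def best_cost(t: int, inv: int) -> float:
    """Minimum cost to satisfy demand from period t onward with `inv` on hand."""
    if t == len(demand):
        return 0.0
    options = []
    for produce in range(0, max_inv - inv + 1):
        on_hand = inv + produce
        if on_hand < demand[t]:
            continue                       # demand must be met every period
        prod_cost = (setup + unit_cost * produce) if produce > 0 else 0.0
        carry = hold * (on_hand - demand[t])
        options.append(prod_cost + carry + best_cost(t + 1, on_hand - demand[t]))
    return min(options)

print(best_cost(0, 0))   # optimal total cost over the whole horizon
```

    Without the memoization provided by lru_cache, the same recursion revisits identical (period, inventory) states exponentially often, which is exactly the contrast the abstract draws.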

  2. Optimal Least-Squares Unidimensional Scaling: Improved Branch-and-Bound Procedures and Comparison to Dynamic Programming

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Stahl, Stephanie

    2005-01-01

    There are two well-known methods for obtaining a guaranteed globally optimal solution to the problem of least-squares unidimensional scaling of a symmetric dissimilarity matrix: (a) dynamic programming, and (b) branch-and-bound. Dynamic programming is generally more efficient than branch-and-bound, but the former is limited to matrices with…

  3. Dynamic Programming for Structured Continuous Markov Decision Problems

    NASA Technical Reports Server (NTRS)

    Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu

    2004-01-01

    We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.
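
    As a rough, hedged illustration of dynamic programming over a partitioned continuous state space (using a fixed uniform partition rather than the adaptive, value-based partitioning the abstract describes), the following sketch runs value iteration for an assumed 1-D problem with a piecewise-constant value function; the reward, dynamics, and action set are invented for the example.

```python
# Hedged sketch (not the paper's algorithm): value iteration for a continuous
# 1-D state in [0, 1] with a fixed piecewise-constant value representation.
# The paper repartitions the state space dynamically at each backup; here the
# regions are simply uniform intervals.

import numpy as np

n_regions, gamma, n_iters = 20, 0.95, 200
centers = (np.arange(n_regions) + 0.5) / n_regions     # representative state per region
actions = [-0.1, 0.0, 0.1]                              # assumed action set

def reward(s, a):
    return 1.0 if abs(s - 0.5) < 0.1 else 0.0           # assumed reward: stay near 0.5

def next_state(s, a):
    return np.clip(s + a, 0.0, 1.0)                     # assumed deterministic dynamics

def region(s):
    return min(int(s * n_regions), n_regions - 1)

value = np.zeros(n_regions)                             # one value per region
for _ in range(n_iters):                                # dynamic-programming backups
    new_value = np.empty(n_regions)
    for i, s in enumerate(centers):
        new_value[i] = max(reward(s, a) + gamma * value[region(next_state(s, a))]
                           for a in actions)
    value = new_value

print(value.round(2))
```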

  4. Final Report from The University of Texas at Austin for DEGAS: Dynamic Global Address Space programming environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erez, Mattan; Yelick, Katherine; Sarkar, Vivek

    The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. Our approach is to provide an efficient and scalable programming model that can be adapted to application needs through the use of dynamic runtime features and domain-specific languages for computational kernels. We address the following technical challenges: Programmability: a rich set of programming constructs based on a Hierarchical Partitioned Global Address Space (HPGAS) model, demonstrated in UPC++. Scalability: hierarchical locality control, lightweight communication (extended GASNet), and efficient synchronization mechanisms (Phasers). Performance portability: just-in-time specialization (SEJITS) for generating hardware-specific code and scheduling libraries for domain-specific adaptive runtimes (Habanero). Energy efficiency: communication-optimal code generation to optimize energy efficiency by reducing data movement. Resilience: Containment Domains for flexible, domain-specific resilience, using state capture mechanisms and lightweight, asynchronous recovery mechanisms. Interoperability: runtime and language interoperability with MPI and OpenMP to encourage broad adoption.

  5. Brayton advanced heat receiver development program

    NASA Technical Reports Server (NTRS)

    Heidenreich, G. R.; Downing, R. S.; Lacey, Dovie E.

    1989-01-01

    NASA Lewis Research Center is managing an advanced solar dynamic (ASD) space power program. The objective of the ASD program is to develop small and lightweight solar dynamic systems which show significant improvement in efficiency and specific mass over the baseline design derived from the Space Station Freedom technology. The advanced heat receiver development program is a phased program to design, fabricate and test elements of a 7-kWe heat-receiver/thermal-energy-storage subsystem. Receivers for both Brayton and Stirling heat engines are being developed under separate contracts. Phase I, described here, is the current eighteen month effort to design and perform critical technology experiments on innovative concepts designed to reduce mass without compromising thermal efficiency and reliability.

  6. Lean and Efficient Software: Whole-Program Optimization of Executables

    DTIC Science & Technology

    2015-09-30

    libraries. Many levels of library interfaces—where some libraries are dynamically linked and some are provided in binary form only—significantly limit ... software at build time. The opportunity: Our objective in this project is to substantially improve the performance, size, and robustness of binary ... executables by using static and dynamic binary program analysis techniques to perform whole-program optimization directly on compiled programs.

  7. Efficient iteration in data-parallel programs with irregular and dynamically distributed data structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Littlefield, R.J.

    1990-02-01

    To implement an efficient data-parallel program on a non-shared memory MIMD multicomputer, data and computations must be properly partitioned to achieve good load balance and locality of reference. Programs with irregular data reference patterns often require irregular partitions. Although good partitions may be easy to determine, they can be difficult or impossible to implement in programming languages that provide only regular data distributions, such as blocked or cyclic arrays. We are developing Onyx, a programming system that provides a shared memory model of distributed data structures and extends the concept of data distribution to include irregular and dynamic distributions. This provides a powerful means to specify irregular partitions. Perhaps surprisingly, programs using it can also execute efficiently. In this paper, we describe and evaluate the Onyx implementation of a model problem that repeatedly executes an irregular but fixed data reference pattern. On an NCUBE hypercube, the speed of the Onyx implementation is comparable to that of carefully handwritten message-passing code.

  8. Algorithms and software for nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.

    1989-01-01

    The objective of this research is to develop efficient methods for explicit time integration in nonlinear structural dynamics for computers which utilize both concurrency and vectorization. As a framework for these studies, the program WHAMS, which is described in Explicit Algorithms for the Nonlinear Dynamics of Shells (T. Belytschko, J. I. Lin, and C.-S. Tsay, Computer Methods in Applied Mechanics and Engineering, Vol. 42, 1984, pp 225 to 251), is used. There are two factors which make the development of efficient concurrent explicit time integration programs a challenge in a structural dynamics program: (1) the need for a variety of element types, which complicates the scheduling-allocation problem; and (2) the need for different time steps in different parts of the mesh, which is here called mixed delta t integration, so that a few stiff elements do not reduce the time steps throughout the mesh.

  9. Addressing Dynamic Issues of Program Model Checking

    NASA Technical Reports Server (NTRS)

    Lerda, Flavio; Visser, Willem

    2001-01-01

    Model checking real programs has recently become an active research area. Programs however exhibit two characteristics that make model checking difficult: the complexity of their state and the dynamic nature of many programs. Here we address both these issues within the context of the Java PathFinder (JPF) model checker. Firstly, we will show how the state of a Java program can be encoded efficiently and how this encoding can be exploited to improve model checking. Next we show how to use symmetry reductions to alleviate some of the problems introduced by the dynamic nature of Java programs. Lastly, we show how distributed model checking of a dynamic program can be achieved, and furthermore, how dynamic partitions of the state space can improve model checking. We support all our findings with results from applying these techniques within the JPF model checker.

  10. An Interactive Multiobjective Programming Approach to Combinatorial Data Analysis.

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Stahl, Stephanie

    2001-01-01

    Describes an interactive procedure for multiobjective asymmetric unidimensional seriation problems that uses a dynamic-programming algorithm to generate partially the efficient set of sequences for small to medium-sized problems and a multioperational heuristic to estimate the efficient set for larger problems. Applies the procedure to an…

  11. Some programming techniques for increasing program versatility and efficiency on CDC equipment

    NASA Technical Reports Server (NTRS)

    Tiffany, S. H.; Newsom, J. R.

    1978-01-01

    Five programming techniques used to decrease core storage requirements and increase program versatility and efficiency are explained. The techniques are: (1) dynamic storage allocation, (2) automatic core-sizing and core-resizing, (3) matrix partitioning, (4) free field alphanumeric reads, and (5) incorporation of a data complex. The advantages of these techniques and the basic methods for employing them are explained and illustrated. Several actual program applications which utilize these techniques are described as examples.

  12. Efficient dynamic optimization of logic programs

    NASA Technical Reports Server (NTRS)

    Laird, Phil

    1992-01-01

    A summary is given of the dynamic optimization approach to speed up learning for logic programs. The problem is to restructure a recursive program into an equivalent program whose expected performance is optimal for an unknown but fixed population of problem instances. We define the term 'optimal' relative to the source of input instances and sketch an algorithm that can come within a logarithmic factor of optimal with high probability. Finally, we show that finding high-utility unfolding operations (such as EBG) can be reduced to clause reordering.

  13. ISS Payload Operations: The Need for and Benefit of Responsive Planning

    NASA Technical Reports Server (NTRS)

    Nahay, Ed; Boster, Mandee

    2000-01-01

    International Space Station (ISS) payload operations are controlled through implementation of a payload operations plan. This plan, which represents the defined approach to payload operations in general, can vary in terms of level of definition. The detailed plan provides the specific sequence and timing of each component of a payload's operations. Such an approach to planning was implemented in the Spacelab program. The responsive plan provides a flexible approach to payload operations through generalization. A responsive approach to planning was implemented in the NASA/Mir Phase 1 program, and was identified as a need during the Skylab program. The current approach to ISS payload operations planning and control tends toward detailed planning, rather than responsive planning. The use of detailed plans provides for the efficient use of limited resources onboard the ISS. It restricts flexibility in payload operations, which is inconsistent with the dynamic nature of the ISS science program, and it restricts crew desires for flexibility and autonomy. Also, detailed planning is manpower intensive. The development and implementation of a responsive plan provides for a more dynamic, more accommodating, and less manpower intensive approach to planning. The science program becomes more dynamic and responsive as the plan provides flexibility to accommodate real-time science accomplishments. Communications limitations and the crew desire for flexibility and autonomy in plan implementation are readily accommodated with responsive planning. Manpower efficiencies are accomplished through a reduction in requirements collection and coordination, plan development, and maintenance. Through examples and assessments, this paper identifies the need to transition from detailed to responsive plans for ISS payload operations. Examples depict specific characteristics of the plans. Assessments identify the following: the means by which responsive plans accommodate the dynamic nature of science programs and the crew desire for flexibility; the means by which responsive plans readily accommodate ISS communications constraints; manpower efficiencies to be achieved through use of responsive plans; and the implications of responsive planning relative to resource utilization efficiency.

  14. Application of dynamic milling in stainless steel processing

    NASA Astrophysics Data System (ADS)

    Shan, Wenju

    2017-09-01

    This paper mainly introduces the method of parameter setting for NC programming of stainless steel parts using dynamic milling. Stainless steel has high plasticity and toughness, severe work hardening, large cutting forces, high temperatures in the cutting zone, and rapid tool wear; it is a difficult material to machine. Dynamic motion technology is the newest NC programming technology in Mastercam software and represents an advanced machining approach. The tool paths generated by dynamic motion technology are smoother, more efficient, and more stable during machining. Dynamic motion technology is very well suited to cutting hard-to-machine materials.

  15. Transportation Planning and ITS: Putting the Pieces Together

    DOT National Transportation Integrated Search

    2013-11-01

    Both the Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs have similar overarching goals to improve surface transportation system efficiency and individual traveler mobility. However, each program ha...

  16. Method for resource control in parallel environments using program organization and run-time support

    NASA Technical Reports Server (NTRS)

    Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)

    2001-01-01

    A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.

  17. Method for resource control in parallel environments using program organization and run-time support

    NASA Technical Reports Server (NTRS)

    Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)

    1999-01-01

    A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.

  18. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
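
    The following sketch is not the INDDGO implementation; it illustrates the underlying dynamic-programming idea in its simplest setting, maximum weighted independent set on a tree, where each "bag" of the decomposition holds a single vertex and every vertex carries two table entries (best set that includes it, best set that excludes it).

```python
# Hypothetical sketch (not the INDDGO code): dynamic programming for maximum
# weighted independent set on a tree, the special case where the tree
# decomposition is the tree itself.

def max_weight_independent_set(adj, weight, root=0):
    """adj: dict vertex -> list of neighbours (a tree); weight: dict vertex -> weight."""
    include, exclude = {}, {}          # DP tables: best set with / without vertex v
    order, parent = [], {root: None}
    stack = [root]
    while stack:                       # iterative DFS to get a bottom-up order
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)
    for v in reversed(order):          # process children before parents
        children = [u for u in adj[v] if u != parent[v]]
        include[v] = weight[v] + sum(exclude[u] for u in children)
        exclude[v] = sum(max(include[u], exclude[u]) for u in children)
    return max(include[root], exclude[root])

# Example: a path 0-1-2-3 with weights 4, 5, 3, 6 -> optimal set {1, 3}, weight 11
tree = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(max_weight_independent_set(tree, {0: 4, 1: 5, 2: 3, 3: 6}))
```

    On a general tree decomposition the same pattern applies, but each bag stores one table entry per independent subset of the bag's vertices, which is where the exponential dependence on width comes from.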

  19. An application of nonlinear programming to the design of regulators of a linear-quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1983-01-01

    A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a nonlinear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer. One concerns helicopter longitudinal dynamics and the other the flight dynamics of an aerodynamically unstable aircraft.

  20. An Approximate Dynamic Programming Mode for Optimal MEDEVAC Dispatching

    DTIC Science & Technology

    2015-03-26

    over the myopic policy. This indicates the ADP policy is efficiently managing resources by not immediately sending the nearest available MEDEVAC ... medical evacuation (MEDEVAC) dispatch policies. To solve the MDP, we apply an approximate dynamic programming (ADP) technique. The problem of deciding

  1. Selective, Embedded, Just-In-Time Specialization (SEJITS): Portable Parallel Performance from Sequential, Productive, Embedded Domain-Specific Languages

    DTIC Science & Technology

    2012-12-01

    ... application performance, but usually must rely on efficiency programmers who are experts in explicit parallel programming to achieve it. Since such efficiency

  2. A space-efficient algorithm for local similarities.

    PubMed

    Huang, X Q; Hardison, R C; Miller, W

    1990-10-01

    Existing dynamic-programming algorithms for identifying similar regions of two sequences require time and space proportional to the product of the sequence lengths. Often this space requirement is more limiting than the time requirement. We describe a dynamic-programming local-similarity algorithm that needs only space proportional to the sum of the sequence lengths. The method can also find repeats within a single long sequence. To illustrate the algorithm's potential, we discuss comparison of a 73,360 nucleotide sequence containing the human beta-like globin gene cluster and a corresponding 44,594 nucleotide sequence for rabbit, a problem well beyond the capabilities of other dynamic-programming software.
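
    As a hedged illustration of the space issue the abstract raises (not the authors' algorithm, which also recovers the similar regions and typically uses affine gaps), the score of the best local alignment can already be computed in linear space by keeping only two rows of the dynamic-programming matrix:

```python
# Hedged illustration: Smith-Waterman *score* in space proportional to the
# shorter sequence, keeping only the previous and current DP rows. Assumed
# simple scoring (linear gap penalty), chosen for brevity.

def local_similarity_score(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    prev = [0] * (len(b) + 1)          # row i-1 of the DP matrix
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)      # row i; local alignments never go negative
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(0,
                          prev[j - 1] + sub,   # align a[i-1] with b[j-1]
                          prev[j] + gap,       # gap in b
                          curr[j - 1] + gap)   # gap in a
            best = max(best, curr[j])
        prev = curr
    return best

print(local_similarity_score("GGTTGACTA", "TGTTACGG"))
```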

  3. A NASTRAN/TREETOPS solution to a flexible, multi-body dynamics and controls problem on a UNIX workstation

    NASA Technical Reports Server (NTRS)

    Benavente, Javier E.; Luce, Norris R.

    1989-01-01

    Demands for nonlinear time history simulations of large, flexible multibody dynamic systems have created a need for efficient interfaces between finite-element modeling programs and time-history simulations. One such interface, TREEFLX, an interface between NASTRAN and TREETOPS, a nonlinear dynamics and controls time history simulation for multibody structures, is presented and demonstrated via example using the proposed Space Station Mobile Remote Manipulator System (MRMS). The ability to run all three programs (NASTRAN, TREEFLX and TREETOPS), in addition to other programs used for controller design and model reduction (such as DMATLAB and TREESEL, both described), under a UNIX workstation environment demonstrates the flexibility engineers now have in designing, developing and testing control systems for dynamically complex systems.

  4. An Adaptive Dynamic Pointing Assistance Program to Help People with Multiple Disabilities Improve Their Computer Pointing Efficiency with Hand Swing through a Standard Mouse

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Shih, Ching-Tien; Wu, Hsiao-Ling

    2010-01-01

    The latest research adopted software technology to redesign the mouse driver, and turned a mouse into a useful pointing assistive device for people with multiple disabilities who cannot easily or possibly use a standard mouse, to improve their pointing performance through a new operation method, Extended Dynamic Pointing Assistive Program (EDPAP),…

  5. Dynamic programming in parallel boundary detection with application to ultrasound intima-media segmentation.

    PubMed

    Zhou, Yuan; Cheng, Xinyao; Xu, Xiangyang; Song, Enmin

    2013-12-01

    Segmentation of carotid artery intima-media in longitudinal ultrasound images for measuring its thickness to predict cardiovascular diseases can be simplified as detecting two nearly parallel boundaries within a certain distance range, when plaque with irregular shapes is not considered. In this paper, we improve the implementation of two dynamic programming (DP) based approaches to parallel boundary detection, dual dynamic programming (DDP) and piecewise linear dual dynamic programming (PL-DDP). Then, a novel DP based approach, dual line detection (DLD), which translates the original 2-D curve position to a 4-D parameter space representing two line segments in a local image segment, is proposed to solve the problem while maintaining efficiency and rotation invariance. To apply the DLD to ultrasound intima-media segmentation, it is imbedded in a framework that employs an edge map obtained from multiplication of the responses of two edge detectors with different scales and a coupled snake model that simultaneously deforms the two contours for maintaining parallelism. The experimental results on synthetic images and carotid arteries of clinical ultrasound images indicate improved performance of the proposed DLD compared to DDP and PL-DDP, with respect to accuracy and efficiency. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. A short note on dynamic programming in a band.

    PubMed

    Gibrat, Jean-François

    2018-06-15

    Third generation sequencing technologies generate long reads that exhibit high error rates, in particular for insertions and deletions, which are usually the most difficult errors to cope with. The only exact algorithm capable of aligning sequences with insertions and deletions is a dynamic programming algorithm. In this note, for the sake of efficiency, we consider dynamic programming in a band. We show how to choose the band width as a function of the long reads' error rates, thus obtaining an [Formula: see text] algorithm in space and time. We also propose a procedure to decide whether this algorithm, when applied to semi-global alignments, provides the optimal score. We suggest that dynamic programming in a band is well suited to the problem of aligning long reads between themselves and can be used as a core component of methods for obtaining a consensus sequence from the long reads alone. The function implementing the dynamic programming algorithm in a band is available, as a standalone program, at: https://forgemia.inra.fr/jean-francois.gibrat/BAND_DYN_PROG.git.
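
    A minimal sketch of dynamic programming in a band, under assumed linear gap costs rather than whatever scoring the note uses: only cells within a fixed half-width of the main diagonal are computed, so time and memory scale with (sequence length) x (band width) instead of the full matrix.

```python
# Hedged sketch (not the paper's implementation): global alignment score by
# dynamic programming restricted to a band of half-width `band` around the
# diagonal. Cells outside the band are treated as unreachable.

NEG = float("-inf")

def banded_align_score(a: str, b: str, band: int, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    if abs(n - m) > band:
        return None                    # the end cell itself lies outside the band
    prev = {0: 0}                      # sparse row 0: column -> score
    for j in range(1, min(m, band) + 1):
        prev[j] = j * gap
    for i in range(1, n + 1):
        curr = {}
        lo, hi = max(0, i - band), min(m, i + band)
        for j in range(lo, hi + 1):
            best = NEG
            if j > 0 and (j - 1) in prev:
                sub = match if a[i - 1] == b[j - 1] else mismatch
                best = max(best, prev[j - 1] + sub)   # substitution / match
            if j in prev:
                best = max(best, prev[j] + gap)       # gap in b
            if (j - 1) in curr:
                best = max(best, curr[j - 1] + gap)   # gap in a
            curr[j] = best
        prev = curr
    return prev[m]

print(banded_align_score("ACGTTACG", "ACGTACG", band=2))
```

    Choosing the band width from the expected insertion/deletion rate, as the note proposes, amounts to making `band` just large enough that the true alignment path stays inside the computed cells with high probability.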

  7. Air Quality Programs and Provisions of the Intermodal Surface Transportation Efficiency Act of 1991

    DOT National Transportation Integrated Search

    2012-11-01

    The US DOT sponsored Dynamic Mobility Applications (DMA) program seeks to identify, develop, and deploy applications that leverage the full potential of connected vehicles, travelers and infrastructure to enhance current operational practices and tra...

  8. Strategies for Energy Efficient Resource Management of Hybrid Programming Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dong; Supinski, Bronis de; Schulz, Martin

    2013-01-01

    Many scientific applications are programmed using hybrid programming models that use both message-passing and shared-memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared-memory or message-passing, in isolation. The potential solution space, thus the challenge, increases substantially when optimizing hybrid models since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74% on average and up to 13.8%) with some performance gain (up to 7.5%) or negligible performance loss.

  9. Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae

    NASA Technical Reports Server (NTRS)

    Rosu, Grigore; Havelund, Klaus

    2001-01-01

    The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
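
    The following is a hand-written, hedged example of the kind of dynamic program such a generator might emit, here for the fixed formula G(p -> F q) under finite-trace semantics. It runs in one backward pass with constant memory, matching the abstract's linear-time, constant-memory claim, but it is not the paper's generated code.

```python
# Hedged sketch: checking the LTL formula G(p -> F q) ("every p is eventually
# followed by a q") on a finite trace by a backward dynamic-programming pass.
# Only the values of the temporal subformulas at position i+1 are kept.

def check_g_p_implies_f_q(trace):
    """trace: list of dicts mapping proposition names to booleans."""
    fq_next = False     # F q just past the end of the trace (finite-trace: false)
    g_next = True       # G(...) holds vacuously past the end of the trace
    for state in reversed(trace):
        fq_here = state["q"] or fq_next               # F q  <=>  q or X(F q)
        body = (not state["p"]) or fq_here            # p -> F q
        g_here = body and g_next                      # G b  <=>  b and X(G b)
        fq_next, g_next = fq_here, g_here
    return g_next

trace_ok  = [{"p": True, "q": False}, {"p": False, "q": True}, {"p": False, "q": False}]
trace_bad = [{"p": False, "q": True}, {"p": True, "q": False}, {"p": False, "q": False}]
print(check_g_p_implies_f_q(trace_ok), check_g_p_implies_f_q(trace_bad))   # True False
```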

  10. Exploration of government policy structure which support and block energy transition process in indonesia using system dynamics model

    NASA Astrophysics Data System (ADS)

    Destyanto, A. R.; Silalahi, T. D.; Hidayatno, A.

    2017-11-01

    System dynamics modeling is widely used to predict and simulate energy systems in several countries. One of its applications is the evaluation of national energy policy alternatives and energy efficiency analysis. Using system dynamics modeling, this research evaluates the energy transition policy implemented in Indonesia in the past conversion program from kerosene to LPG for household cooking fuel, which is considered a successful energy transition program and has been in place since 2007. This research is important because Indonesia is considered not yet to have succeeded in executing another energy transition program, the conversion from oil fuel to gas fuel for transportation, which started in 1989. The aim of this research is to explore which policy interventions contribute significantly to supporting, or even blocking, the conversion program. Findings from the simulation show that the policy intervention of withdrawing the kerosene supply and a government push to increase the production capacity of the supporting equipment industries (gas stoves, regulators, and LPG cylinders) are the main influences on the success of the conversion program.

  11. Optimal approach to quantum communication using dynamic programming.

    PubMed

    Jiang, Liang; Taylor, Jacob M; Khaneja, Navin; Lukin, Mikhail D

    2007-10-30

    Reliable preparation of entanglement between distant systems is an outstanding problem in quantum information science and quantum communication. In practice, this has to be accomplished by noisy channels (such as optical fibers) that generally result in exponential attenuation of quantum signals at large distances. A special class of quantum error correction protocols, quantum repeater protocols, can be used to overcome such losses. In this work, we introduce a method for systematically optimizing existing protocols and developing more efficient protocols. Our approach makes use of a dynamic programming-based searching algorithm, the complexity of which scales only polynomially with the communication distance, letting us efficiently determine near-optimal solutions. We find significant improvements in both the speed and the final-state fidelity for preparing long-distance entangled states.

  12. Expansion and improvements of the FORMA system for response and load analysis. Volume 1: Programming manual

    NASA Technical Reports Server (NTRS)

    Wohlen, R. L.

    1976-01-01

    Techniques are presented for the solution of structural dynamic systems on an electronic digital computer using FORMA (FORTRAN Matrix Analysis). FORMA is a library of subroutines coded in FORTRAN 4 for the efficient solution of structural dynamics problems. These subroutines are in the form of building blocks that can be put together to solve a large variety of structural dynamics problems. The obvious advantage of the building block approach is that programming and checkout time are limited to that required for putting the blocks together in the proper order.

  13. The influence of dynamic inflow and torsional flexibility on rotor damping in forward flight from symbolically generated equations

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Warmbrodt, W.

    1985-01-01

    The combined effects of blade torsion and dynamic inflow on the aeroelastic stability of an elastic rotor blade in forward flight are studied. The governing sets of equations of motion (fully nonlinear, linearized, and multiblade equations) used in this study are derived symbolically using a program written in FORTRAN. Stability results are presented for different structural models with and without dynamic inflow. A combination of symbolic and numerical programs at the proper stage in the derivation process makes the obtainment of final stability results an efficient and straightforward procedure.

  14. DEGAS: Dynamic Exascale Global Address Space Programming Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demmel, James

    The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. The Berkeley part of the project concentrated on communication-optimal code generation to optimize speed and energy efficiency by reducing data movement. Our work developed communication lower bounds and/or communication-avoiding algorithms (that either meet the lower bound, or do much less communication than their conventional counterparts) for a variety of algorithms, including linear algebra, machine learning and genomics.

  15. Human motion planning based on recursive dynamics and optimal control techniques

    NASA Technical Reports Server (NTRS)

    Lo, Janzen; Huang, Gang; Metaxas, Dimitris

    2002-01-01

    This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.

  16. Gas dynamic design of the pipe line compressor with 90% efficiency. Model test approval

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Rekstin, A.; Soldatova, K.

    2015-08-01

    The gas dynamic design of a 32 MW pipeline compressor was carried out for PAO SMPO (Sumy, Ukraine). The technical specification requires a compressor efficiency of 90%. The customer proposed a favorable scheme: a single-stage design with a console impeller and axial inlet. The authors used the standard optimization methodology for 2D impellers. The original methodology of internal scroll profiling was used to minimize efficiency losses. The radically improved 5th version of the Universal Modeling Method computer programs was used for precise calculation of the expected performance. The customer performed model tests at a 1:2 scale. The tests confirmed the calculated parameters at the design point (maximum efficiency of 90%) and over the whole range of flow rates. As far as the authors know, no other compressor has achieved such efficiency. The principles and methods of the gas dynamic design are presented below. The data for the 32 MW compressor were presented by the customer in their report at the 16th International Compressor Conference (September 2014, Saint Petersburg) and later transferred to the authors.

  17. Computationally efficient multibody simulations

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant; Kumar, Manoj

    1994-01-01

    Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.

  18. Dynamic equilibrium strategy for drought emergency temporary water transfer and allocation management

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Ma, Ning; Lv, Chengwei

    2016-08-01

    Efficient water transfer and allocation are critical for disaster mitigation in drought emergencies. This is especially important when the different interests of the multiple decision makers and the fluctuating water resource supply and demand simultaneously cause space and time conflicts. To achieve more effective and efficient water transfers and allocations, this paper proposes a novel optimization method with an integrated bi-level structure and a dynamic strategy, in which the bi-level structure works to deal with space dimension conflicts in drought emergencies, and the dynamic strategy is used to deal with time dimension conflicts. Combining these two optimization methods, however, makes calculation complex, so an integrated interactive fuzzy program and a PSO-POA are combined to develop a hybrid-heuristic algorithm. The successful application of the proposed model in a real world case region demonstrates its practicality and efficiency. Dynamic cooperation between multiple reservoirs under the coordination of a global regulator reflects the model's efficiency and effectiveness in drought emergency water transfer and allocation, especially in a fluctuating environment. On this basis, some corresponding management recommendations are proposed to improve practical operations.

  19. High-efficiency helical traveling-wave tube with dynamic velocity taper and advanced multistage depressed collector

    NASA Technical Reports Server (NTRS)

    Curren, Arthur N.; Palmer, Raymond W.; Force, Dale A.; Dombro, Louis; Long, James A.

    1987-01-01

    A NASA-sponsored research and development contract has been established with the Watkins-Johnson Company to fabricate high-efficiency 20-watt helical traveling wave tubes (TWTs) operating at 8.4 to 8.43 GHz. The TWTs employ dynamic velocity tapers (DVTs) and advanced multistage depressed collectors (MDCs) having electrodes with low secondary electron emission characteristics. The TWT designs include two different DVTs; one for maximum efficiency and the other for minimum distortion and phase shift. The MDC designs include electrodes of untreated and ion-textured graphite as well as copper which has been treated for secondary electron emission suppression. Objectives of the program include achieving at least 55 percent overall efficiency. Tests with the first TWTs (with undepressed collectors) indicate good agreement between predicted and measured RF efficiencies with as high as 30 percent improvement in RF efficiency over conventional helix designs.

  20. Heuristic reusable dynamic programming: efficient updates of local sequence alignment.

    PubMed

    Hong, Changjin; Tewfik, Ahmed H

    2009-01-01

    Recomputation of previously evaluated similarity results between biological sequences becomes inevitable when researchers realize errors in their sequenced data or when they have to compare nearly similar sequences, e.g., in a family of proteins. We present an efficient scheme for updating local sequence alignments with an affine gap model. In principle, using the previous matching result between two amino acid sequences, we perform a forward-backward alignment to generate heuristic search bands which are bounded by a set of suboptimal paths. Given a correctly updated sequence, we initially predict a new score of the alignment path for each contour to select the best candidates among them. Then, we run the Smith-Waterman algorithm in this confined space. Furthermore, our heuristic alignment for an updated sequence shows that it can be further accelerated by using reusable dynamic programming (rDP), our prior work. In this study, we successfully validate the "relative node tolerance bound" (RNTB) in the pruned search space. Furthermore, we improve the computational performance by quantifying the successful RNTB tolerance probability and switching to rDP on perturbation-resilient columns only. In our search space, derived from a threshold value of 90 percent of the optimal alignment score, we find that 98.3 percent of contours contain correctly updated paths. We also find that our method consumes only 25.36 percent of the runtime cost of the sparse dynamic programming (sDP) method, and only 2.55 percent of that of normal dynamic programming with the Smith-Waterman algorithm.

  1. Three-dimensional sensing methodology combining stereo vision and phase-measuring profilometry based on dynamic programming

    NASA Astrophysics Data System (ADS)

    Lee, Hyunki; Kim, Min Young; Moon, Jeon Il

    2017-12-01

    Phase measuring profilometry and moiré methodology have been widely applied to the three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation known as the correspondence problem, or 2π-ambiguity problem. Although a sensing method that combines well-known stereo vision and the phase measuring profilometry (PMP) technique has been developed to overcome this problem, it still requires definite improvement in sensing speed and measurement accuracy. We propose a dynamic programming-based stereo PMP method to acquire more reliable depth information in a relatively short time. The proposed method efficiently fuses information from two stereo sensors in terms of phase and intensity simultaneously, based on a newly defined cost function for dynamic programming. In addition, the important parameters are analyzed from the viewpoint of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of important hardware and software parameters related to the measurement performance and to verify its efficiency, accuracy, and sensing speed, a series of experimental tests were performed with various objects and sensor configurations.

  2. Dynamic programming-based hot spot identification approach for pedestrian crashes.

    PubMed

    Medury, Aditya; Grembek, Offer

    2016-08-01

    Network screening techniques are widely used by state agencies to identify locations with high collision concentration, also referred to as hot spots. However, most of the research in this regard has focused on identifying highway segments that are of concern for automobile collisions. In comparison, pedestrian hot spot detection has typically focused on analyzing pedestrian crashes at specific locations, such as at/near intersections, mid-blocks, and/or other crossings, as opposed to long stretches of roadway. In this context, the efficiency of some of the widely used network screening methods has not been tested. Hence, in order to address this issue, a dynamic programming-based hot spot identification approach is proposed which provides efficient hot spot definitions for pedestrian crashes. The proposed approach is compared with the sliding window method and an intersection buffer-based approach. The results reveal that the dynamic programming method generates more hot spots with a higher number of crashes, while providing small hot spot segment lengths. In comparison, the sliding window method is shown to suffer from shortcomings due to a first-come-first-served approach to hot spot identification and a fixed hot spot window length assumption. Copyright © 2016 Elsevier Ltd. All rights reserved.
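
    The abstract does not spell out its recursion, so the following is a hypothetical sketch of one way hot spot identification along a corridor can be cast as dynamic programming: choose at most k non-overlapping segments, each no longer than max_len cells, so as to maximize the total crash count covered.

```python
# Hedged, hypothetical formulation (not necessarily the paper's): pick at most
# k non-overlapping hot spot segments of bounded length along a corridor so
# that the covered crash count is maximized.

def hot_spot_segments(crashes, max_len, k):
    n = len(crashes)
    prefix = [0] * (n + 1)                 # prefix sums for O(1) segment totals
    for i, c in enumerate(crashes):
        prefix[i + 1] = prefix[i] + c
    # best[i][j] = max crashes covered in the first i cells using at most j segments
    best = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            best[i][j] = best[i - 1][j]                   # cell i-1 not in any hot spot
            for length in range(1, min(max_len, i) + 1):  # or a segment ends at cell i-1
                covered = prefix[i] - prefix[i - length]
                best[i][j] = max(best[i][j], best[i - length][j - 1] + covered)
    return best[n][k]

# Example corridor of crash counts per cell; two hot spots of at most 3 cells -> 24
print(hot_spot_segments([0, 4, 5, 0, 0, 1, 6, 7, 2, 0], max_len=3, k=2))
```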

  3. Efficient Parallelization of a Dynamic Unstructured Application on the Tera MTA

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak

    1999-01-01

    The success of parallel computing in solving real-life computationally-intensive problems relies on their efficient mapping and execution on large-scale multiprocessor architectures. Many important applications are both unstructured and dynamic in nature, making their efficient parallel implementation a daunting task. This paper presents the parallelization of a dynamic unstructured mesh adaptation algorithm using three popular programming paradigms on three leading supercomputers. We examine an MPI message-passing implementation on the Cray T3E and the SGI Origin2000, a shared-memory implementation using cache coherent nonuniform memory access (CC-NUMA) of the Origin2000, and a multi-threaded version on the newly-released Tera Multi-threaded Architecture (MTA). We compare several critical factors of this parallel code development, including runtime, scalability, programmability, and memory overhead. Our overall results demonstrate that multi-threaded systems offer tremendous potential for quickly and efficiently solving some of the most challenging real-life problems on parallel computers.

  4. Effects of virtual reality programs on balance in functional ankle instability

    PubMed Central

    Kim, Ki-Jong; Heo, Myoung

    2015-01-01

    [Purpose] The aim of the present study was to identify the impact that recent virtual reality training programs used in a variety of fields have had on the ankle’s static and dynamic senses of balance among subjects with functional ankle instability. [Subjects and Methods] This study randomly divided research subjects into two groups, a strengthening exercise group (Group I) and a balance exercise group (Group II), with each group consisting of 10 people. A virtual reality program was performed three times a week for four weeks. Exercises from the Nintendo Wii Fit Plus program were applied to each group for twenty minutes along with ten minutes of warming up and wrap-up exercises. [Results] Group II showed a significant decrease of post-intervention static and dynamic balance overall in the anterior-posterior and mediolateral directions, compared with the pre-intervention test results. In comparison of post-intervention static and dynamic balance between Group I and Group II, a significant decrease was observed overall. [Conclusion] Virtual reality programs improved the static balance and dynamic balance of subjects with functional ankle instability. Virtual reality programs can be used more safely and efficiently if they are implemented under appropriate monitoring by a physiotherapist. PMID:26644652

  5. Effects of virtual reality programs on balance in functional ankle instability.

    PubMed

    Kim, Ki-Jong; Heo, Myoung

    2015-10-01

    [Purpose] The aim of the present study was to identify the impact that recent virtual reality training programs used in a variety of fields have had on the ankle's static and dynamic senses of balance among subjects with functional ankle instability. [Subjects and Methods] This study randomly divided research subjects into two groups, a strengthening exercise group (Group I) and a balance exercise group (Group II), with each group consisting of 10 people. A virtual reality program was performed three times a week for four weeks. Exercises from the Nintendo Wii Fit Plus program were applied to each group for twenty minutes along with ten minutes of warming up and wrap-up exercises. [Results] Group II showed a significant decrease of post-intervention static and dynamic balance overall in the anterior-posterior and mediolateral directions, compared with the pre-intervention test results. In comparison of post-intervention static and dynamic balance between Group I and Group II, a significant decrease was observed overall. [Conclusion] Virtual reality programs improved the static balance and dynamic balance of subjects with functional ankle instability. Virtual reality programs can be used more safely and efficiently if they are implemented under appropriate monitoring by a physiotherapist.

  6. One of the Countries That Turkey Models: Finland Secondary Education Social Studies Curriculum

    ERIC Educational Resources Information Center

    Kop, Yasar

    2017-01-01

    The teaching of social studies rests on the educational dynamism that governments maintain in order to raise qualified and efficient citizens. For this reason, examining the programs in question is important for the concept of the global citizen that has emerged with globalization. Therefore, how to raise efficient citizens who build both governments' and the world's…

  7. Demonstration Program for Low-Cost, High-Energy-Saving Dynamic Windows

    DTIC Science & Technology

    2014-09-01

    The scope of this project was to demonstrate the impact of dynamic windows via energy savings and HVAC peak-load reduction; to validate the ... temperature and glare. While the installed dynamic window system does not directly control the HVAC or lighting of the facility, those systems are designed ... optimize energy efficiency and HVAC load management. The conversion to inoperable windows caused an unforeseen reluctance to accept the design and

  8. Space station dynamics, attitude control and momentum management

    NASA Technical Reports Server (NTRS)

    Sunkel, John W.; Singh, Ramen P.; Vengopal, Ravi

    1989-01-01

    The Space Station Attitude Control System software test-bed provides a rigorous environment for the design, development and functional verification of GN and C algorithms and software. The approach taken for the simulation of the vehicle dynamics and environmental models using a computationally efficient algorithm is discussed. The simulation includes capabilities for docking/berthing dynamics, prescribed motion dynamics associated with the Mobile Remote Manipulator System (MRMS) and microgravity disturbances. The vehicle dynamics module interfaces with the test-bed through the central Communicator facility which is in turn driven by the Station Control Simulator (SCS) Executive. The Communicator addresses issues such as the interface between the discrete flight software and the continuous vehicle dynamics, and multi-programming aspects such as the complex flow of control in real-time programs. Combined with the flight software and redundancy management modules, the facility provides a flexible, user-oriented simulation platform.

  9. Dynamic modeling and verification of an energy-efficient greenhouse with an aquaponic system using TRNSYS

    NASA Astrophysics Data System (ADS)

    Amin, Majdi Talal

    Currently, there is no integrated dynamic simulation program for an energy efficient greenhouse coupled with an aquaponic system. This research is intended to promote the thermal management of greenhouses in order to provide sustainable food production with the lowest possible energy use and material waste. A brief introduction of greenhouses, passive houses, energy efficiency, renewable energy systems, and their applications are included for ready reference. An experimental working scaled-down energy-efficient greenhouse was built to verify and calibrate the results of a dynamic simulation model made using TRNSYS software. However, TRNSYS requires the aid of Google SketchUp to develop 3D building geometry. The simulation model was built following the passive house standard as closely as possible. The new simulation model was then utilized to design an actual greenhouse with Aquaponics. It was demonstrated that the passive house standard can be applied to improve upon conventional greenhouse performance, and that it is adaptable to different climates. The energy-efficient greenhouse provides the required thermal environment for fish and plant growth, while eliminating the need for conventional cooling and heating systems.

  10. Rupture Dynamics and Scaling Behavior of Hydraulically Stimulated Micro-Earthquakes in a Shale Reservoir

    NASA Astrophysics Data System (ADS)

    Viegas, G. F.; Urbancic, T.; Baig, A. M.

    2014-12-01

    In hydraulic fracturing completion programs fluids are injected under pressure into fractured rock formations to open escape pathways for trapped hydrocarbons along pre-existing and newly generated fractures. To characterize the failure process, we estimate static and dynamic source and rupture parameters, such as dynamic and static stress drop, radiated energy, seismic efficiency, failure modes, failure plane orientations and dimensions, and rupture velocity to investigate the rupture dynamics and scaling relations of micro-earthquakes induced during a hydraulic fracturing shale completion program in NE British Columbia, Canada. The relationships between the different parameters combined with the in-situ stress field and rock properties provide valuable information on the rupture process giving insights into the generation and development of the fracture network. Approximately 30,000 micro-earthquakes were recorded using three multi-sensor arrays of high frequency geophones temporarily placed close to the treatment area at reservoir depth (~2km). On average the events have low radiated energy, low dynamic stress and low seismic efficiency, consistent with the obtained slow rupture velocities. Events fail in overshoot mode (slip weakening failure model), with fluids lubricating faults and decreasing friction resistance. Events occurring in deeper formations tend to have faster rupture velocities and are more efficient in radiating energy. Variations in rupture velocity tend to correlate with variation in depth, fault azimuth and elapsed time, reflecting a dominance of the local stress field over other factors. Several regions with different characteristic failure modes are identifiable based on coherent stress drop, seismic efficiency, rupture velocities and fracture orientations. Variations of source parameters with rock rheology and hydro-fracture fluids are also observed. Our results suggest that the spatial and temporal distribution of events with similar characteristic rupture behaviors can be used to determine reservoir geophysical properties, constrain reservoir geo-mechanical models, classify dynamic rupture processes for fracture models and improve fracture treatment designs.

  11. Solving Equations of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Lim, Christopher

    2007-01-01

    Darts++ is a computer program for solving the equations of motion of a multibody system or of a multibody model of a dynamic system. It is intended especially for use in dynamical simulations performed in designing and analyzing, and developing software for the control of, complex mechanical systems. Darts++ is based on the Spatial-Operator- Algebra formulation for multibody dynamics. This software reads a description of a multibody system from a model data file, then constructs and implements an efficient algorithm that solves the dynamical equations of the system. The efficiency and, hence, the computational speed is sufficient to make Darts++ suitable for use in realtime closed-loop simulations. Darts++ features an object-oriented software architecture that enables reconfiguration of system topology at run time; in contrast, in related prior software, system topology is fixed during initialization. Darts++ provides an interface to scripting languages, including Tcl and Python, that enable the user to configure and interact with simulation objects at run time.

  12. BWM*: A Novel, Provable, Ensemble-based Dynamic Programming Algorithm for Sparse Approximations of Computational Protein Design.

    PubMed

    Jou, Jonathan D; Jain, Swati; Georgiev, Ivelin S; Donald, Bruce R

    2016-06-01

    Sparse energy functions that ignore long range interactions between residue pairs are frequently used by protein design algorithms to reduce computational cost. Current dynamic programming algorithms that fully exploit the optimal substructure produced by these energy functions only compute the GMEC. This disproportionately favors the sequence of a single, static conformation and overlooks better binding sequences with multiple low-energy conformations. Provable, ensemble-based algorithms such as A* avoid this problem, but A* cannot guarantee better performance than exhaustive enumeration. We propose a novel, provable, dynamic programming algorithm called Branch-Width Minimization* (BWM*) to enumerate a gap-free ensemble of conformations in order of increasing energy. Given a branch-decomposition of branch-width w for an n-residue protein design with at most q discrete side-chain conformations per residue, BWM* returns the sparse GMEC in O([Formula: see text]) time and enumerates each additional conformation in merely O([Formula: see text]) time. We define a new measure, Total Effective Search Space (TESS), which can be computed efficiently a priori before BWM* or A* is run. We ran BWM* on 67 protein design problems and found that TESS discriminated between BWM*-efficient and A*-efficient cases with 100% accuracy. As predicted by TESS and validated experimentally, BWM* outperforms A* in 73% of the cases and computes the full ensemble or a close approximation faster than A*, enumerating each additional conformation in milliseconds. Unlike A*, the performance of BWM* can be predicted in polynomial time before running the algorithm, which gives protein designers the power to choose the most efficient algorithm for their particular design problem.

  13. Structural-Vibration-Response Data Analysis

    NASA Technical Reports Server (NTRS)

    Smith, W. R.; Hechenlaible, R. N.; Perez, R. C.

    1983-01-01

    Computer program developed as structural-vibration-response data analysis tool for use in dynamic testing of Space Shuttle. Program provides fast and efficient time-domain least-squares curve-fitting procedure for reducing transient response data to obtain structural model frequencies and dampings from free-decay records. Procedure simultaneously identifies frequencies, damping values, and participation factors for noisy multiple-response records.

  14. User-Assisted Store Recycling for Dynamic Task Graph Schedulers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt, Mehmet Can; Krishnamoorthy, Sriram; Agrawal, Gagan

    The emergence of the multi-core era has led to increased interest in designing effective yet practical parallel programming models. Models based on task graphs that operate on single-assignment data are attractive in several ways: they can support dynamic applications and precisely represent the available concurrency. However, they also require nuanced algorithms for scheduling and memory management for efficient execution. In this paper, we consider memory-efficient dynamic scheduling of task graphs. Specifically, we present a novel approach for dynamically recycling the memory locations assigned to data items as they are produced by tasks. We develop algorithms to identify memory-efficient store recycling functions by systematically evaluating the validity of a set of (user-provided or automatically generated) alternatives. Because a recycling function can be input-data-dependent, we have also developed support for continued correct execution of a task graph in the presence of a potentially incorrect store recycling function. Experimental evaluation demonstrates that our approach to automatic store recycling incurs little to no overhead, achieves memory usage comparable to the best manually derived solutions, often produces recycling functions valid across problem sizes and input parameters, and efficiently recovers from an incorrect choice of store recycling functions.

  15. Empirical valence bond models for reactive potential energy surfaces: a parallel multilevel genetic program approach.

    PubMed

    Bellucci, Michael A; Coker, David F

    2011-07-28

    We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel, while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increases the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The accuracy and efficiency are tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in gas phase and protic solvent. © 2011 American Institute of Physics.

  16. Single-Cell RNA-Seq Reveals Dynamic Early Embryonic-like Programs during Chemical Reprogramming.

    PubMed

    Zhao, Ting; Fu, Yao; Zhu, Jialiang; Liu, Yifang; Zhang, Qian; Yi, Zexuan; Chen, Shi; Jiao, Zhonggang; Xu, Xiaochan; Xu, Junquan; Duo, Shuguang; Bai, Yun; Tang, Chao; Li, Cheng; Deng, Hongkui

    2018-06-12

    Chemical reprogramming provides a powerful platform for exploring the molecular dynamics that lead to pluripotency. Although previous studies have uncovered an intermediate extraembryonic endoderm (XEN)-like state during this process, the molecular underpinnings of pluripotency acquisition remain largely undefined. Here, we profile 36,199 single-cell transcriptomes at multiple time points throughout a highly efficient chemical reprogramming system using RNA-sequencing and reconstruct their progression trajectories. Through identifying sequential molecular events, we reveal that the dynamic early embryonic-like programs are key aspects of successful reprogramming from XEN-like state to pluripotency, including the concomitant transcriptomic signatures of two-cell (2C) embryonic-like and early pluripotency programs and the epigenetic signature of notable genome-wide DNA demethylation. Moreover, via enhancing the 2C-like program by fine-tuning chemical treatment, the reprogramming process is remarkably accelerated. Collectively, our findings offer a high-resolution dissection of cell fate dynamics during chemical reprogramming and shed light on mechanistic insights into the nature of induced pluripotency. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. A concept for a fuel efficient flight planning aid for general aviation

    NASA Technical Reports Server (NTRS)

    Collins, B. P.; Haines, A. L.; Wales, C. J.

    1982-01-01

    A core equation for estimation of fuel burn from path profile data was developed. This equation was used as a necessary ingredient in a dynamic program to define a fuel efficient flight path. The resultant algorithm is oriented toward use by general aviation. The pilot provides a description of the desired ground track, standard aircraft parameters, and weather at selected waypoints. The algorithm then derives the fuel efficient altitudes and velocities at the waypoints.
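
    The abstract does not give the core fuel-burn equation, so the sketch below only illustrates the kind of waypoint dynamic program it describes: a placeholder leg_fuel model and an altitude grid stand in for the actual aircraft parameters and weather inputs, and the recursion picks the cheapest altitude at each waypoint.

      # Illustrative waypoint dynamic program; the fuel model is a made-up placeholder.
      def leg_fuel(alt_from, alt_to, dist_nm):
          cruise = 0.02 * dist_nm * (1.0 - 1.5e-5 * min(alt_from, alt_to))  # higher is cheaper
          climb = 4e-4 * max(0.0, alt_to - alt_from)                        # climbing costs extra
          return cruise + climb

      def best_profile(waypoint_dists, altitudes):
          # cost[i][a] = minimum fuel to reach waypoint i at altitude index a
          n = len(waypoint_dists) + 1
          INF = float("inf")
          cost = [[INF] * len(altitudes) for _ in range(n)]
          back = [[0] * len(altitudes) for _ in range(n)]
          cost[0] = [0.0] * len(altitudes)
          for i, dist in enumerate(waypoint_dists):
              for a_to, alt_to in enumerate(altitudes):
                  for a_from, alt_from in enumerate(altitudes):
                      c = cost[i][a_from] + leg_fuel(alt_from, alt_to, dist)
                      if c < cost[i + 1][a_to]:
                          cost[i + 1][a_to] = c
                          back[i + 1][a_to] = a_from
          # Recover the altitude sequence for the cheapest final state.
          a = min(range(len(altitudes)), key=lambda k: cost[-1][k])
          best = cost[-1][a]
          profile = [a]
          for i in range(n - 1, 0, -1):
              a = back[i][a]
              profile.append(a)
          return [altitudes[k] for k in reversed(profile)], best

      profile, fuel = best_profile([80, 120, 95], [2000, 6000, 10000])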

  18. Approximate dynamic programming for optimal stationary control with control-dependent noise.

    PubMed

    Jiang, Yu; Jiang, Zhong-Ping

    2011-12-01

    This brief studies the stochastic optimal control problem via reinforcement learning and approximate/adaptive dynamic programming (ADP). A policy iteration algorithm is derived in the presence of both additive and multiplicative noise using Itô calculus. The expectation of the approximated cost matrix is guaranteed to converge to the solution of some algebraic Riccati equation that gives rise to the optimal cost value. Moreover, the covariance of the approximated cost matrix can be reduced by increasing the length of time interval between two consecutive iterations. Finally, a numerical example is given to illustrate the efficiency of the proposed ADP methodology.
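
    The brief itself works in continuous time with Itô calculus and control-dependent noise; as a rough illustration of the policy-evaluation/policy-improvement loop only, the sketch below runs classical policy iteration on a deterministic, discrete-time LQR problem with made-up system matrices.

      import numpy as np
      from scipy.linalg import solve_discrete_lyapunov

      # Deterministic discrete-time analogue of the policy iteration idea (not the ADP
      # scheme of the brief): evaluate the current gain via a Lyapunov equation, then improve it.
      A = np.array([[0.95, 0.1], [0.0, 0.9]])
      B = np.array([[0.0], [0.1]])
      Q = np.eye(2)
      R = np.array([[1.0]])

      K = np.zeros((1, 2))          # initial stabilizing policy u = -K x (A is stable)
      for _ in range(50):
          Acl = A - B @ K
          # Policy evaluation: cost matrix P of the current policy.
          P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
          # Policy improvement: greedy gain with respect to P.
          K_new = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
          if np.linalg.norm(K_new - K) < 1e-10:
              break
          K = K_new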

  19. Three pillars for achieving quantum mechanical molecular dynamics simulations of huge systems: Divide-and-conquer, density-functional tight-binding, and massively parallel computation.

    PubMed

    Nishizawa, Hiroaki; Nishimura, Yoshifumi; Kobayashi, Masato; Irle, Stephan; Nakai, Hiromi

    2016-08-05

    The linear-scaling divide-and-conquer (DC) quantum chemical methodology is applied to the density-functional tight-binding (DFTB) theory to develop a massively parallel program that achieves on-the-fly molecular reaction dynamics simulations of huge systems from scratch. The functions to perform large-scale geometry optimization and molecular dynamics on the DC-DFTB potential energy surface are implemented in the program, called DC-DFTB-K. A novel interpolation-based algorithm is developed for parallelizing the determination of the Fermi level in the DC method. The performance of the DC-DFTB-K program is assessed using a laboratory computer and the K computer. Numerical tests show the high efficiency of the DC-DFTB-K program: a single-point energy gradient calculation of a one-million-atom system is completed within 60 s using 7290 nodes of the K computer. © 2016 Wiley Periodicals, Inc.

  20. An algorithm for the solution of dynamic linear programs

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1989-01-01

    The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLP) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense matrix LP code. At the same time numerical stability is ensured. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the savings due to reduced factor update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance. It also sheds light on the efficacy of the proposed pseudo constraint relaxation scheme.

  1. Research of the Aerophysics Institute for the Strategic Technology Office (DARPA)

    DTIC Science & Technology

    1975-06-30

    19. (continued) 6. Unstable Optical Resonator Cavities 7. Laser Metal Screening Program 8. Ultraviolet & Blue-Green Lasers 9. Efficient Metal...Vapor Lasers 10. Atomic Transition Probabilities 11. Computer Modeling of Laser Dynamics 12. Stratified Ocean Wakes 20. (continued) In the... laser area, the major task was the screening of atomic vapors, particularly metal vapors, for new, efficient lasers in the visible and ultra

  2. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application in the engineering optimization of industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is first proposed to improve the optimization efficiency for industrial dynamic processes, where costate gradient formulae are employed and a fast approximate scheme is presented to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems of industrial dynamic processes are demonstrated as illustration. The research results show that the proposed fast approach achieves fine performance: at least 90% of the computation time can be saved in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
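
    For reference, the sketch below shows what plain control vector parameterization does before any acceleration is applied: the control is made piecewise constant on a fixed grid and the resulting NLP is solved by repeatedly integrating the state equations. The dynamics, objective, and solver settings are toy placeholders, not the fast-CVP scheme of the paper.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import minimize

      def simulate(u_params):
          # Toy double-integrator with drag; u_params holds 10 piecewise-constant control levels.
          def rhs(t, x):
              u = u_params[min(int(t * 10), 9)]
              return [x[1], u - 0.5 * x[1]]
          sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], t_eval=[1.0], max_step=0.01)
          return sol.y[:, -1]

      def objective(u_params):
          xf = simulate(u_params)
          # Drive the final state to (1, 0) with a small control-effort penalty.
          return (xf[0] - 1.0) ** 2 + xf[1] ** 2 + 1e-3 * np.sum(np.asarray(u_params) ** 2)

      res = minimize(objective, np.zeros(10), method="Nelder-Mead",
                     options={"maxiter": 2000, "xatol": 1e-4, "fatol": 1e-6})
      u_opt = res.x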

  3. SUSTAINABLE MSW MANAGEMENT STRATEGIES IN THE UNITED STATES

    EPA Science Inventory

    Under increasing pressure to minimize potential environmental burdens and costs for municipal solid waste (MSW) management, state and local governments often must modify programs and adopt more efficient integrated MSW management strategies that reflect dynamic shifts in MSW mana...

  4. A new 2D segmentation method based on dynamic programming applied to computer aided detection in mammography.

    PubMed

    Timp, Sheila; Karssemeijer, Nico

    2004-05-01

    Mass segmentation plays a crucial role in computer-aided diagnosis (CAD) systems for classification of suspicious regions as normal, benign, or malignant. In this article we present a robust and automated segmentation technique--based on dynamic programming--to segment mass lesions from surrounding tissue. In addition, we propose an efficient algorithm to guarantee resulting contours to be closed. The segmentation method based on dynamic programming was quantitatively compared with two other automated segmentation methods (region growing and the discrete contour model) on a dataset of 1210 masses. For each mass an overlap criterion was calculated to determine the similarity with manual segmentation. The mean overlap percentage for dynamic programming was 0.69, for the other two methods 0.60 and 0.59, respectively. The difference in overlap percentage was statistically significant. To study the influence of the segmentation method on the performance of a CAD system two additional experiments were carried out. The first experiment studied the detection performance of the CAD system for the different segmentation methods. Free-response receiver operating characteristics analysis showed that the detection performance was nearly identical for the three segmentation methods. In the second experiment the ability of the classifier to discriminate between malignant and benign lesions was studied. For region based evaluation the area Az under the receiver operating characteristics curve was 0.74 for dynamic programming, 0.72 for the discrete contour model, and 0.67 for region growing. The difference in Az values obtained by the dynamic programming method and region growing was statistically significant. The differences between other methods were not significant.
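
    A minimal sketch of the underlying idea, assuming the region around a candidate mass has already been resampled into an (angle x radius) cost image: dynamic programming picks one radius per angle so that the summed cost is minimal while the radius varies smoothly. The cost image, smoothness weight, and jump limit are placeholders, and the paper's additional step that guarantees a closed contour is not shown.

      import numpy as np

      def dp_contour(cost, smooth=1.0, max_jump=2):
          # cost: (n_angles x n_radii) array, e.g. negative gradient magnitude in polar coordinates.
          n_ang, n_rad = cost.shape
          acc = np.full((n_ang, n_rad), np.inf)
          back = np.zeros((n_ang, n_rad), dtype=int)
          acc[0] = cost[0]
          for i in range(1, n_ang):
              for r in range(n_rad):
                  lo, hi = max(0, r - max_jump), min(n_rad, r + max_jump + 1)
                  prev = acc[i - 1, lo:hi] + smooth * np.abs(np.arange(lo, hi) - r)
                  j = int(np.argmin(prev))
                  acc[i, r] = cost[i, r] + prev[j]
                  back[i, r] = lo + j
          r = int(np.argmin(acc[-1]))
          radii = [r]
          for i in range(n_ang - 1, 0, -1):
              r = back[i, r]
              radii.append(r)
          return radii[::-1]          # one radius index per angle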

  5. Water resources planning and management : A stochastic dual dynamic programming approach

    NASA Astrophysics Data System (ADS)

    Goor, Q.; Pinte, D.; Tilmant, A.

    2008-12-01

    Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be made taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale non-linear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints, and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent to the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to describe a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while considering the hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To be able to implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the non-linear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function. This model is illustrated on a cascade of 14 reservoirs in the Nile River basin.

  6. DYNA3D: A computer code for crashworthiness engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hallquist, J.O.; Benson, D.J.

    1986-09-01

    A finite element program with crashworthiness applications has been developed at LLNL. DYNA3D, an explicit, fully vectorized, finite deformation structural dynamics program, has four capabilities that are critical for the efficient and realistic modeling of crash phenomena: (1) fully optimized nonlinear solid, shell, and beam elements for representing a structure; (2) a broad range of constitutive models for simulating material behavior; (3) sophisticated contact algorithms for impact interactions; (4) a rigid body capability to represent the bodies away from the impact region at a greatly reduced cost without sacrificing accuracy in the momentum calculations. Basic methodologies of the program are briefly presented along with several crashworthiness calculations. Efficiencies of the Hughes-Liu and Belytschko-Tsay shell formulations are considered.

  7. Teaching Ionic Solvation Structure with a Monte Carlo Liquid Simulation Program

    ERIC Educational Resources Information Center

    Serrano, Agostinho; Santos, Flavia M. T.; Greca, Ileana M.

    2004-01-01

    The use of molecular dynamics and Monte Carlo methods has provided efficient means to simulate the behavior of molecular liquids and solutions. A Monte Carlo simulation program is used to compute the structure of liquid water and of water as a solvent to Na(super +), Cl(super -), and Ar on a personal computer to show that it is easily feasible to…

  8. Towards a Certified Lightweight Array Bound Checker for Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pichardie, David

    2009-01-01

    Dynamic array bound checks are crucial elements for the security of Java Virtual Machines. These dynamic checks are, however, expensive, and several static analysis techniques have been proposed to eliminate explicit bounds checks. Such analyses require advanced numerical and symbolic manipulations that (1) penalize bytecode loading or dynamic compilation and (2) complicate the trusted computing base. Following the Foundational Proof Carrying Code methodology, our goal is to provide a lightweight bytecode verifier for eliminating array bound checks that is both efficient and trustable. In this work, we define a generic relational program analysis for an imperative, stack-oriented bytecode language with procedures, arrays and global variables and instantiate it with a relational abstract domain of polyhedra. The analysis has automatic inference of loop invariants and method pre-/post-conditions, and efficient checking of analysis results by a simple checker. Invariants, which can be large, can be specialized for proving a safety policy using an automatic pruning technique which reduces their size. The result of the analysis can be checked efficiently by annotating the program with parts of the invariant together with certificates of polyhedral inclusions. The resulting checker is sufficiently simple to be entirely certified within the Coq proof assistant for a simple fragment of the Java bytecode language. During the talk, we will also report on our ongoing effort to scale this approach to the full sequential JVM.

  9. The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations

    PubMed Central

    Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka

    2011-01-01

    Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007

  10. Design and Evaluation of a Dynamic Dilemma Zone System for a High Speed Rural Intersection : Research Summary

    DOT National Transportation Integrated Search

    2012-08-01

    Improving traffic safety is a priority transportation issue. A tremendous amount of resources has been invested in improving safety and efficiency at signalized intersections. Although programs such as driver education, red-light camera deploym...

  11. Prototype road weather performance management (RW-PM) tool and Minnesota Department of Transportation (MnDOT) field evaluation.

    DOT National Transportation Integrated Search

    2017-01-01

    FHWA's Road Weather Management Program developed a Prototype Road Weather Management (RW-PM) Tool to help DOTs maximize the effectiveness of their maintenance resources and efficiently adjust deployments dynamically, as road conditions and traffic ...

  12. XPRESS: eXascale PRogramming Environment and System Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brightwell, Ron; Sterling, Thomas; Koniges, Alice

    The XPRESS Project is one of four major projects of the DOE Office of Science Advanced Scientific Computing Research X-stack Program initiated in September 2012. The purpose of XPRESS is to devise an innovative system software stack to enable practical and useful exascale computing around the end of the decade, with near-term contributions to efficient and scalable operation of trans-Petaflops performance systems in the next two to three years, both for DOE mission-critical applications. To this end, XPRESS directly addresses critical challenges in computing of efficiency, scalability, and programmability through introspective methods of dynamic adaptive resource management and task scheduling.

  13. What percentage of the Cuban HIV-AIDS epidemic is known?

    PubMed

    de Arazoza, Héctor; Lounes, Rachid; Pérez, Jorge; Hoang, Thu

    2003-01-01

    The data for the Cuban HIV-AIDS epidemic from 1986 to 2000 were presented. With the purpose of evaluating the efficiency of the HIV detection system, two methods were used to estimate the size of the HIV-infected population, backcalculation and a dynamical model. From these models it can be estimated that in the worst scenario 75% of the HIV-infected persons are known and in the best case 87% of the total number of persons that have been infected with HIV have been detected by the National Program. These estimates can be taken as a measure of the efficiency of the detection program for HIV-infected persons.

  14. A low-power, high-efficiency Ka-band TWTA

    NASA Technical Reports Server (NTRS)

    Curren, A. N.; Dayton, J. A., Jr.; Palmer, R. W.; Force, D. A.; Tamashiro, R. N.; Wilson, J. F.; Dombro, L.; Harvey, W. L.

    1991-01-01

    A NASA-sponsored program is described for developing a high-efficiency low-power TWTA operating at 32 GHz and meeting the requirements for the Cassini Mission to study Saturn. The required RF output power of the helix TWT is 10 watts, while the dc power from the spacecraft is limited to about 30 watts. The performance level permits the transmission to earth of all mission data. Several novel technologies are incorporated into the TWT to achieve this efficiency including an advanced dynamic velocity taper characterized by a nonlinear reduction in pitch in the output helix section and a multistage depressed collector employing copper electrodes treated for secondary electron-emission suppression. Preliminary program results are encouraging: RF output power of 10.6 watts is obtained at 14-mA beam current and 5.2-kV helix voltage with overall TWT efficiency exceeding 40 percent.

  15. A low-power, high-efficiency Ka-band TWTA

    NASA Astrophysics Data System (ADS)

    Curren, A. N.; Dayton, J. A., Jr.; Palmer, R. W.; Force, D. A.; Tamashiro, R. N.; Wilson, J. F.; Dombro, L.; Harvey, W. L.

    1991-11-01

    A NASA-sponsored program is described for developing a high-efficiency low-power TWTA operating at 32 GHz and meeting the requirements for the Cassini Mission to study Saturn. The required RF output power of the helix TWT is 10 watts, while the dc power from the spacecraft is limited to about 30 watts. The performance level permits the transmission to earth of all mission data. Several novel technologies are incorporated into the TWT to achieve this efficiency including an advanced dynamic velocity taper characterized by a nonlinear reduction in pitch in the output helix section and a multistage depressed collector employing copper electrodes treated for secondary electron-emission suppression. Preliminary program results are encouraging: RF output power of 10.6 watts is obtained at 14-mA beam current and 5.2-kV helix voltage with overall TWT efficiency exceeding 40 percent.

  16. Future Opportunities for Dynamic Power Systems for NASA Missions

    NASA Technical Reports Server (NTRS)

    Shaltens, Richard K.

    2007-01-01

    Dynamic power systems have the potential to be used in Radioisotope Power Systems (RPS) and Fission Surface Power Systems (FSPS) to provide high efficiency, reliable and long life power generation for future NASA applications and missions. Dynamic power systems have been developed by NASA over the decades, but none have ever operated in space. Advanced Stirling convertors are currently being developed at the NASA Glenn Research Center. These systems have demonstrated high efficiencies to enable high system specific power (>8 W(sub e)/kg) for 100 W(sub e) class Advanced Stirling Radioisotope Generators (ASRG). The ASRG could enable significant extended and expanded operation on the Mars surface and on long-life deep space missions. In addition, advanced high power Stirling convertors (>150 W(sub e)/kg), for use with surface fission power systems, could provide power ranging from 30 to 50 kWe, and would be enabling for both lunar and Mars exploration. This paper will discuss the status of various energy conversion options currently under development by NASA Glenn for the Radioisotope Power System Program for NASA's Science Mission Directorate (SMD) and the Prometheus Program for the Exploration Systems Mission Directorate (ESMD).

  17. The NASA/industry Design Analysis Methods for Vibrations (DAMVIBS) program: McDonnell-Douglas Helicopter Company achievements

    NASA Technical Reports Server (NTRS)

    Toossi, Mostafa; Weisenburger, Richard; Hashemi-Kia, Mostafa

    1993-01-01

    This paper presents a summary of some of the work performed by McDonnell Douglas Helicopter Company under the NASA Langley-sponsored rotorcraft structural dynamics program known as DAMVIBS (Design Analysis Methods for VIBrationS). A set of guidelines which is applicable to dynamic modeling, analysis, testing, and correlation of both helicopter airframes and a large variety of structural finite element models is presented. Utilization of these guidelines and the key features of their applications to vibration modeling of helicopter airframes are discussed. Correlation studies with the test data, together with the development and applications of a set of efficient finite element model checkout procedures, are demonstrated on a large helicopter airframe finite element model. Finally, the lessons learned and the benefits resulting from this program are summarized.

  18. Multidisciplinary analysis of actively controlled large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Cooper, Paul A.; Young, John W.; Sutter, Thomas R.

    1986-01-01

    The Control of Flexible Structures (COFS) program has supported the development of an analysis capability at the Langley Research Center called the Integrated Multidisciplinary Analysis Tool (IMAT) which provides an efficient data storage and transfer capability among commercial computer codes to aid in the dynamic analysis of actively controlled structures. IMAT is a system of computer programs which transfers Computer-Aided-Design (CAD) configurations, structural finite element models, material property and stress information, structural and rigid-body dynamic model information, and linear system matrices for control law formulation among various commercial applications programs through a common database. Although general in its formulation, IMAT was developed specifically to aid in the evaluation of such structures. A description of the IMAT system and results of an application of the system are given.

  19. Multibody dynamic simulation of knee contact mechanics

    PubMed Central

    Bei, Yanhong; Fregly, Benjamin J.

    2006-01-01

    Multibody dynamic musculoskeletal models capable of predicting muscle forces and joint contact pressures simultaneously would be valuable for studying clinical issues related to knee joint degeneration and restoration. Current three-dimensional multi-body knee models are either quasi-static with deformable contact or dynamic with rigid contact. This study proposes a computationally efficient methodology for combining multibody dynamic simulation methods with a deformable contact knee model. The methodology requires preparation of the articular surface geometry, development of efficient methods to calculate distances between contact surfaces, implementation of an efficient contact solver that accounts for the unique characteristics of human joints, and specification of an application programming interface for integration with any multibody dynamic simulation environment. The current implementation accommodates natural or artificial tibiofemoral joint models, small or large strain contact models, and linear or nonlinear material models. Applications are presented for static analysis (via dynamic simulation) of a natural knee model created from MRI and CT data and dynamic simulation of an artificial knee model produced from manufacturer’s CAD data. Small and large strain natural knee static analyses required 1 min of CPU time and predicted similar contact conditions except for peak pressure, which was higher for the large strain model. Linear and nonlinear artificial knee dynamic simulations required 10 min of CPU time and predicted similar contact force and torque but different contact pressures, which were lower for the nonlinear model due to increased contact area. This methodology provides an important step toward the realization of dynamic musculoskeletal models that can predict in vivo knee joint motion and loading simultaneously. PMID:15564115

  20. Simulation of cooperating robot manipulators on a mobile platform

    NASA Technical Reports Server (NTRS)

    Murphy, Steve H.; Wen, John T.; Saridis, George N.

    1990-01-01

    The dynamic equations of motion for two manipulators holding a common object on a freely moving mobile platform are developed. The full dynamic interactions from arms to platform and arm-tip to arm-tip are included in the formulation. The development of the closed chain dynamics allows for the use of any solution for the open topological tree of base and manipulator links. In particular, because the system has 18 degrees of freedom, recursive solutions for the dynamic simulation become more promising for efficient calculations of the motion. Simulation of the system is accomplished through a MATLAB program, and the response is visualized graphically using the SILMA Cimstation.

  1. Programmed coherent coupling in a synthetic DNA-based excitonic circuit

    NASA Astrophysics Data System (ADS)

    Boulais, Étienne; Sawaya, Nicolas P. D.; Veneziano, Rémi; Andreoni, Alessio; Banal, James L.; Kondo, Toru; Mandal, Sarthak; Lin, Su; Schlau-Cohen, Gabriela S.; Woodbury, Neal W.; Yan, Hao; Aspuru-Guzik, Alán; Bathe, Mark

    2018-02-01

    Natural light-harvesting systems spatially organize densely packed chromophore aggregates using rigid protein scaffolds to achieve highly efficient, directed energy transfer. Here, we report a synthetic strategy using rigid DNA scaffolds to similarly program the spatial organization of densely packed, discrete clusters of cyanine dye aggregates with tunable absorption spectra and strongly coupled exciton dynamics present in natural light-harvesting systems. We first characterize the range of dye-aggregate sizes that can be templated spatially by A-tracts of B-form DNA while retaining coherent energy transfer. We then use structure-based modelling and quantum dynamics to guide the rational design of higher-order synthetic circuits consisting of multiple discrete dye aggregates within a DX-tile. These programmed circuits exhibit excitonic transport properties with prominent circular dichroism, superradiance, and fast delocalized exciton transfer, consistent with our quantum dynamics predictions. This bottom-up strategy offers a versatile approach to the rational design of strongly coupled excitonic circuits using spatially organized dye aggregates for use in coherent nanoscale energy transport, artificial light-harvesting, and nanophotonics.

  2. A Brownian dynamics program for the simulation of linear and circular DNA and other wormlike chain polyelectrolytes.

    PubMed Central

    Klenin, K; Merlitz, H; Langowski, J

    1998-01-01

    For the interpretation of solution structural and dynamic data of linear and circular DNA molecules in the kb range, and for the prediction of the effect of local structural changes on the global conformation of such DNAs, we have developed an efficient and easy-to-set-up program based on a second-order explicit Brownian dynamics algorithm. The DNA is modeled by a chain of rigid segments interacting through harmonic spring potentials for bending, torsion, and stretching. The electrostatics are handled using precalculated energy tables for the interactions between DNA segments as a function of relative orientation and distance. Hydrodynamic interactions are treated using the Rotne-Prager tensor. While maintaining acceptable precision, the simulation can be accelerated by recalculating this tensor only once in a certain number of steps. PMID:9533691
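
    The program described uses a second-order algorithm with bending, torsion, and stretching potentials, tabulated electrostatics, and Rotne-Prager hydrodynamics; the sketch below strips all of that down to a first-order, free-draining update for a plain bead-spring chain, only to show the structure of one Brownian dynamics time step. All parameters are arbitrary placeholders.

      import numpy as np

      # Simplest possible Brownian dynamics step for a bead-spring chain
      # (first-order, free-draining, no electrostatics or hydrodynamic coupling).
      kT, zeta, dt, k_spring, b0 = 4.1e-21, 6e-11, 1e-9, 1e-3, 5e-9   # rough SI placeholders

      def spring_forces(x):
          f = np.zeros_like(x)
          bond = x[1:] - x[:-1]
          length = np.linalg.norm(bond, axis=1, keepdims=True)
          fb = -k_spring * (length - b0) * bond / length    # harmonic stretching force on bead i+1
          f[:-1] -= fb
          f[1:] += fb
          return f

      def bd_step(x, rng):
          drift = spring_forces(x) * dt / zeta
          noise = rng.normal(scale=np.sqrt(2 * kT * dt / zeta), size=x.shape)
          return x + drift + noise

      rng = np.random.default_rng(0)
      x = np.cumsum(np.full((50, 3), b0 / np.sqrt(3)), axis=0)   # straight initial chain
      for _ in range(1000):
          x = bd_step(x, rng)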

  3. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics

    PubMed Central

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.

    2012-01-01

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as “multistate”. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924

  4. Low-current traveling wave tube for use in the microwave power module

    NASA Technical Reports Server (NTRS)

    Palmer, Raymond W.; Ramins, Peter; Force, Dale A.; Dayton, James A.; Ebihara, Ben T.; Gruber, Robert P.

    1993-01-01

    The results of a traveling-wave-tube/multistage depressed-collector (TWT-MDC) design study in support of the Advanced Research Projects Agency/Department of Defense (ARPA/DOD) Microwave Power Module (MPM) Program are described. The study stressed the possible application of dynamic and other tapers to the RF output circuit of the MPM traveling wave tube as a means of increasing the RF and overall efficiencies and reducing the required beam current (perveance). The results indicate that a highly efficient, modified dynamic velocity taper (DVT) circuit can be designed for the broadband MPM application. The combination of reduced cathode current (lower perveance) and increased RF efficiency leads to (1) a substantially higher overall efficiency and reduction in the prime power to the MPM, and (2) substantially reduced levels of MDC and MPM heat dissipation, which simplify the cooling problems. However, the selected TWT circuit parameters need to be validated by cold test measurements on actual circuits.

  5. Numerical simulation of hydrogen fluorine overtone chemical lasers

    NASA Astrophysics Data System (ADS)

    Chen, Jinbao; Jiang, Zhongfu; Hua, Weihong; Liu, Zejin; Shu, Baihong

    1998-08-01

    A two-dimensional program was applied to simulate the chemical dynamic process, gas dynamic process, and lasing process of a combustion-driven CW HF overtone chemical laser. Some important parameters in the cavity were obtained. The calculated results included the HF concentrations in each vibrational energy level during lasing, the averaged pressure and temperature, the zero-power gain coefficient of each spectral line, the laser spectrum, the averaged laser intensity, the output power, the chemical efficiency, and the length of the lasing zone.

  6. A variational dynamic programming approach to robot-path planning with a distance-safety criterion

    NASA Technical Reports Server (NTRS)

    Suh, Suk-Hwan; Shin, Kang G.

    1988-01-01

    An approach to robot-path planning is developed by considering both the traveling distance and the safety of the robot. A computationally-efficient algorithm is developed to find a near-optimal path with a weighted distance-safety criterion by using a variational calculus and dynamic programming (VCDP) method. The algorithm is readily applicable to any factory environment by representing the free workspace as channels. A method for deriving these channels is also proposed. Although it is developed mainly for two-dimensional problems, this method can be easily extended to a class of three-dimensional problems. Numerical examples are presented to demonstrate the utility and power of this method.
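
    A minimal sketch of the weighted distance-safety criterion on a discretized channel, assuming a precomputed clearance map (distance to the nearest obstacle) rather than the paper's variational calculus step: stages run along the channel, states are lateral cells, and each transition pays its length plus a penalty that grows near obstacles.

      import numpy as np

      def plan(clearance, w_safety=2.0, max_step=1):
          # clearance: (n_stage x n_lat) array of distances to obstacles; must be strictly positive
          # (cells inside obstacles should be excluded beforehand).
          n_stage, n_lat = clearance.shape
          cost = np.full((n_stage, n_lat), np.inf)
          back = np.zeros((n_stage, n_lat), dtype=int)
          cost[0] = w_safety / clearance[0]
          for i in range(1, n_stage):
              for y in range(n_lat):
                  for yp in range(max(0, y - max_step), min(n_lat, y + max_step + 1)):
                      step = np.hypot(1.0, y - yp) + w_safety / clearance[i, y]
                      if cost[i - 1, yp] + step < cost[i, y]:
                          cost[i, y] = cost[i - 1, yp] + step
                          back[i, y] = yp
          y = int(np.argmin(cost[-1]))
          path = [y]
          for i in range(n_stage - 1, 0, -1):
              y = back[i, y]
              path.append(y)
          return path[::-1]          # one lateral cell per stage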

  7. Dynamic Positioning Capability Analysis for Marine Vessels Based on A DPCap Polar Plot Program

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Yang, Jian-min; Xu, Sheng-wen

    2018-03-01

    Dynamic positioning capability (DPCap) analysis is essential in the selection of thrusters, in their configuration, and during preliminary investigation of the positioning ability of a newly designed vessel dynamic positioning system. DPCap analysis can help determine the maximum environmental forces that the DP system can counteract in given headings. The accuracy of the DPCap analysis is determined by the precise estimation of the environmental forces as well as the effectiveness of the thrust allocation logic. This paper is dedicated to developing an effective and efficient software program for DPCap analysis for marine vessels. Estimation of the environmental forces can be obtained by model tests, hydrodynamic computation and empirical formulas. A quadratic programming method is adopted to allocate the total thrust on every thruster of the vessel. A detailed description of the thrust allocation logic of the software program is given. The effectiveness of the new program, DPCap Polar Plot (DPCPP), was validated by a DPCap analysis for a supply vessel. The present study indicates that the developed program can be used in DPCap analysis for marine vessels. Moreover, DPCap analysis considering the thruster failure mode might give guidance to the designers of vessels whose thrusters need to be safer.
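
    The paper's allocation logic is a quadratic programming method; as a much-reduced stand-in, the sketch below distributes a demanded surge/sway force and yaw moment over three fixed-direction thrusters by bounded linear least squares. The geometry and thrust limits are invented for illustration, and azimuthing thrusters and forbidden zones are not handled.

      import numpy as np
      from scipy.optimize import lsq_linear

      # Demanded generalized force the DP system must produce: [Fx, Fy, Mz].
      tau = np.array([200.0, -50.0, 300.0])

      # Made-up geometry: each column maps one thruster's force to [Fx, Fy, Mz].
      # Two tunnel thrusters (pure sway) at x = +30 m / -30 m and one main propeller (pure surge).
      B = np.array([
          [0.0,   0.0, 1.0],
          [1.0,   1.0, 0.0],
          [30.0, -30.0, 0.0],
      ])

      res = lsq_linear(B, tau, bounds=(-500.0, 500.0))   # illustrative thrust limits
      thrusts = res.x                   # individual thruster commands
      residual = B @ thrusts - tau      # unmet force/moment, nonzero if the demand is infeasible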

  8. Public policies, private choices: Consumer desire and the practice of energy efficiency

    NASA Astrophysics Data System (ADS)

    Deumling, Reuben Alexander

    Refrigerator energy consumption has been the subject of regulatory attention in the US for some thirty years. Federal product standards, energy labels, and a variety of programs to get consumers to discard their existing refrigerators sooner and buy new, more energy-efficient ones have transformed the refrigerator landscape and changed how many of us think about refrigerators. The results of these policies are celebrated as a successful model for how to combine regulatory objectives and consumer preferences in pursuit of environmental outcomes where everyone wins. Yet per capita refrigerator energy consumption today remains (much) higher in the US than anywhere else, in part because energy efficiency overlooks the ways behavior, habit, emulation, social norms, advertising, and energy efficiency policies themselves shape energy consumption patterns. To understand these dynamics I investigate how people replacing their refrigerators through a state-sponsored energy efficiency program make sense of the choices facing them, and how various types of information designed to aid in this process (Consumer Reports tests, Energy Guide labels, rebate programs) frame the issue of responsible refrigerator consumption. Using interviews and archival research I examine how this information is used to script the choice of a refrigerator, whose priorities shape the form and content of these cues, and what the social meanings generated by and through encounters with refrigerators and energy efficiency are. I also helped build a model for estimating historic refrigerator energy consumption in the US, to measure the repercussions of refrigerator energy inefficiency. My focus in this dissertation is on the ways the pursuit of energy efficiency improvements for domestic refrigerators intersects with and sometimes reinforces escalating demand for energy. My research suggests that the practice of pursuing energy efficiency improvements in refrigerators subordinates the issue of refrigerator energy consumption---what factors influence it, how and why it fluctuated historically, how to take it seriously---in pursuit of increased sales. The a priori assumption that consumers desire certain styles of refrigerator has become a compulsion to trade up. In evaluating the results of energy policies, celebrating technical achievements without paying attention to the social dynamics which these regulations encounter is insufficient.

  9. Cell-Free Optogenetic Gene Expression System.

    PubMed

    Jayaraman, Premkumar; Yeoh, Jing Wui; Jayaraman, Sudhaghar; Teh, Ai Ying; Zhang, Jingyun; Poh, Chueh Loo

    2018-04-20

    Optogenetic tools provide a new and efficient way to dynamically program gene expression with unmatched spatiotemporal precision. To date, their vast potential remains untapped in the field of cell-free synthetic biology, largely due to the lack of simple and efficient light-switchable systems. Here, to bridge the gap between cell-free systems and optogenetics, we studied our previously engineered one component-based blue light-inducible Escherichia coli promoter in a cell-free environment through experimental characterization and mathematical modeling. We achieved >10-fold dynamic expression and demonstrated rapid and reversible activation of the target gene to generate oscillatory response. The deterministic model developed was able to recapitulate the system behavior and helped to provide quantitative insights to optimize dynamic response. This in vitro optogenetic approach could be a powerful new high-throughput screening technology for rapid prototyping of complex biological networks in both space and time without the need for chemical induction.

  10. Reproducing Quantum Probability Distributions at the Speed of Classical Dynamics: A New Approach for Developing Force-Field Functors.

    PubMed

    Sundar, Vikram; Gelbwaser-Klimovsky, David; Aspuru-Guzik, Alán

    2018-04-05

    Modeling nuclear quantum effects is required for accurate molecular dynamics (MD) simulations of molecules. The community has paid special attention to water and other biomolecules that show hydrogen bonding. Standard methods of modeling nuclear quantum effects like Ring Polymer Molecular Dynamics (RPMD) are computationally costlier than running classical trajectories. A force-field functor (FFF) is an alternative method that computes an effective force field that replicates quantum properties of the original force field. In this work, we propose an efficient method of computing FFF using the Wigner-Kirkwood expansion. As a test case, we calculate a range of thermodynamic properties of Neon, obtaining the same level of accuracy as RPMD, but with the shorter runtime of classical simulations. By modifying existing MD programs, the proposed method could be used in the future to increase the efficiency and accuracy of MD simulations involving water and proteins.

  11. Hypersonic research at Stanford University

    NASA Technical Reports Server (NTRS)

    Candler, Graham; Maccormack, Robert

    1988-01-01

    The status of the hypersonic research program at Stanford University is discussed and recent results are highlighted. The main areas of interest in the program are the numerical simulation of radiating, reacting and thermally excited flows, the investigation and numerical solution of hypersonic shock wave physics, the extension of the continuum fluid dynamic equations to the transition regime between continuum and free-molecule flow, and the development of novel numerical algorithms for efficient particulate simulations of flowfields.

  12. Probabilistic methods for rotordynamics analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.

    1991-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
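
    To make "probability of instability" concrete, the sketch below applies the eigenvalue criterion to a made-up two-degree-of-freedom second-order system with uncertain cross-coupling terms and estimates the probability by plain Monte Carlo; the paper itself replaces this brute-force estimate with fast probability integration and adaptive importance sampling.

      import numpy as np

      # M q'' + C q' + K q = 0 is unstable when any eigenvalue of the first-order
      # state matrix has positive real part.
      rng = np.random.default_rng(1)

      def is_unstable(c_cross, k_cross):
          M = np.eye(2)
          C = np.array([[0.2, c_cross], [-c_cross, 0.2]])
          K = np.array([[1.0, k_cross], [-k_cross, 1.0]])
          A = np.block([[np.zeros((2, 2)), np.eye(2)],
                        [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
          return np.max(np.linalg.eigvals(A).real) > 0.0

      samples = 10000
      hits = sum(is_unstable(rng.normal(0.0, 0.1), rng.normal(0.3, 0.1)) for _ in range(samples))
      p_instability = hits / samples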

  13. Upper Limits for Power Yield in Thermal, Chemical, and Electrochemical Systems

    NASA Astrophysics Data System (ADS)

    Sieniutycz, Stanislaw

    2010-03-01

    We consider modeling and power optimization of energy converters, such as thermal, solar and chemical engines and fuel cells. Thermodynamic principles lead to expressions for converter's efficiency and generated power. Efficiency equations serve to solve the problems of upgrading or downgrading a resource. Power yield is a cumulative effect in a system consisting of a resource, engines, and an infinite bath. While optimization of steady state systems requires using the differential calculus and Lagrange multipliers, dynamic optimization involves variational calculus and dynamic programming. The primary result of static optimization is the upper limit of power, whereas that of dynamic optimization is a finite-rate counterpart of classical reversible work (exergy). The latter quantity depends on the end state coordinates and a dissipation index, h, which is the Hamiltonian of the problem of minimum entropy production. In reacting systems, an active part of chemical affinity constitutes a major component of the overall efficiency. The theory is also applied to fuel cells regarded as electrochemical flow engines. Enhanced bounds on power yield follow, which are stronger than those predicted by the reversible work potential.

  14. Status of the NASA YF-12 Propulsion Research Program

    NASA Technical Reports Server (NTRS)

    Albers, J. A.

    1976-01-01

    The YF-12 research program was initiated to establish a technology base for the design of an efficient propulsion system for supersonic cruise aircraft. The major technology areas under investigation in this program are inlet design analysis, propulsion system steady-state performance, propulsion system dynamic performance, inlet and engine control systems, and airframe/propulsion system interactions. The objectives, technical approach, and status of the YF-12 propulsion program are discussed. Also discussed are the results obtained to date by the NASA Ames, Lewis, and Dryden research centers. The expected technical results and proposed future programs are also given. Propulsion system configurations are shown.

  15. Analysis and Modeling of Ground Operations at Hub Airports

    NASA Technical Reports Server (NTRS)

    Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.

    2000-01-01

    Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.
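
    A toy version of the taxi-out queue described above, assuming a single departure runway served in first-come-first-served order and invented rates: each aircraft pushes back, taxis an unimpeded time to the runway queue, and waits for the runway to clear.

      import random

      random.seed(0)
      pushback_times = sorted(random.uniform(0, 3600) for _ in range(60))   # one hour of departures
      unimpeded_taxi = 600.0        # seconds from gate to runway queue
      runway_service = 90.0         # seconds between consecutive takeoffs

      runway_free_at = 0.0
      taxi_out_times = []
      for t_push in pushback_times:
          t_ready = t_push + unimpeded_taxi            # reaches the runway queue
          t_takeoff = max(t_ready, runway_free_at) + runway_service
          runway_free_at = t_takeoff
          taxi_out_times.append(t_takeoff - t_push)

      avg_taxi_out = sum(taxi_out_times) / len(taxi_out_times)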

  16. MIUS integration and subsystems test program

    NASA Technical Reports Server (NTRS)

    Beckham, W. S., Jr.; Shows, G. C.; Redding, T. E.; Wadle, R. C.; Keough, M. B.; Poradek, J. C.

    1976-01-01

    The MIUS Integration and Subsystems Test (MIST) facility at the Lyndon B. Johnson Space Center was completed and ready in May 1974 for conducting specific tests in direct support of the Modular Integrated Utility System (MIUS). A series of subsystems and integrated tests was conducted since that time, culminating in a series of 24-hour dynamic tests to further demonstrate the capabilities of the MIUS Program concepts to meet typical utility load profiles for a residential area. Results of the MIST Program are presented, which demonstrated plant thermal efficiencies ranging from 57 to 65 percent.

  17. Spatial operator factorization and inversion of the manipulator mass matrix

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo; Kreutz-Delgado, Kenneth

    1992-01-01

    This paper advances two linear operator factorizations of the manipulator mass matrix. Embedded in the factorizations are many of the techniques that are regarded as very efficient computational solutions to inverse and forward dynamics problems. The operator factorizations provide a high-level architectural understanding of the mass matrix and its inverse, which is not visible in the detailed algorithms. They also lead to a new approach to the development of computer programs that organize complexity in robot dynamics.

  18. Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids.

    PubMed

    Aradi, Bálint; Niklasson, Anders M N; Frauenheim, Thomas

    2015-07-14

    A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born-Oppenheimer molecular dynamics. For systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can be applied to a broad range of problems in materials science, chemistry, and biology.

  19. ReaDDy - A Software for Particle-Based Reaction-Diffusion Dynamics in Crowded Cellular Environments

    PubMed Central

    Schöneberg, Johannes; Noé, Frank

    2013-01-01

    We introduce the software package ReaDDy for simulation of detailed spatiotemporal mechanisms of dynamical processes in the cell, based on reaction-diffusion dynamics with particle resolution. In contrast to other particle-based reaction kinetics programs, ReaDDy supports particle interaction potentials. This permits effects such as space exclusion, molecular crowding and aggregation to be modeled. The biomolecules simulated can be represented as a sphere, or as a more complex geometry such as a domain structure or polymer chain. ReaDDy bridges the gap between small-scale but highly detailed molecular dynamics or Brownian dynamics simulations and large-scale but little-detailed reaction kinetics simulations. ReaDDy has a modular design that enables the exchange of the computing core by efficient platform-specific implementations or dynamical models that are different from Brownian dynamics. PMID:24040218

  20. Partial Data Traces: Efficient Generation and Representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, F; De Supinski, B R; McKee, S A

    2001-08-20

Binary manipulation techniques are increasing in popularity. They support program transformations tailored toward certain program inputs, and these transformations have been shown to yield performance gains beyond the scope of static code optimizations without profile-directed feedback. They even deliver moderate gains in the presence of profile-guided optimizations. In addition, transformations can be performed on the entire executable, including library routines. This work focuses on program instrumentation, yet another application of binary manipulation. This paper reports preliminary results on generating partial data traces through dynamic binary rewriting. The contributions are threefold. First, a portable method for extracting precise data traces for partial executions of arbitrary applications is developed. Second, a set of hierarchical structures for compactly representing these accesses is developed. Third, an efficient online algorithm to detect regular accesses is introduced. The authors utilize dynamic binary rewriting to selectively collect partial address traces of regions within a program. This allows partial tracing of hot paths for only a short time during program execution, in contrast to static rewriting techniques that lack hot path detection and also lack facilities to limit the duration of data collection. Preliminary results show reductions of three orders of magnitude for inline instrumentation over a dual-process approach involving context switching. They also report constant-size representations for regular access patterns in nested loops. These efforts are part of a larger project to counter the increasing gap between processor and main memory speeds by means of software optimization and hardware enhancements.

  1. Inflated speedups in parallel simulations via malloc()

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

Discrete-event simulation programs make heavy use of dynamic memory allocation in order to support simulation's very dynamic space requirements. When programming in C one is likely to use the malloc() routine. However, a parallel simulation which uses the standard Unix System V malloc() implementation may achieve an overly optimistic speedup, possibly superlinear. An alternate implementation provided on some (but not all) systems can avoid the speedup anomaly, but at the price of significantly reduced available free space. This is especially severe on most parallel architectures, which tend not to support virtual memory. It is shown how a simply implemented user-constructed interface to malloc() can both avoid artificially inflated speedups and make efficient use of the dynamic memory space. The interface simply caches blocks on the basis of their size. The problem is demonstrated empirically, and the effectiveness of the solution is shown both empirically and analytically.
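
    The record does not spell out the interface, but the core idea it describes (serving repeated allocations of a given size from a user-level cache of previously freed blocks) can be sketched in a few lines. The Python class below is purely illustrative: SizeBucketedPool and its methods are invented names, and bytearrays stand in for raw malloc() blocks.

    # Minimal sketch (not the paper's interface): a pool that recycles freed
    # blocks by size, so repeated same-size allocations bypass the underlying
    # allocator. In C this would wrap malloc()/free(); bytearrays stand in for
    # raw blocks here.
    from collections import defaultdict

    class SizeBucketedPool:
        def __init__(self):
            self._free = defaultdict(list)   # block size -> reusable blocks

        def alloc(self, size):
            bucket = self._free[size]
            if bucket:
                return bucket.pop()          # reuse a cached block of this exact size
            return bytearray(size)           # fall back to a fresh allocation

        def free(self, block):
            self._free[len(block)].append(block)  # cache the block for later reuse

    # Usage: event records of a few fixed sizes are allocated and released often,
    # so most requests are served from the per-size cache.
    pool = SizeBucketedPool()
    a = pool.alloc(64)
    pool.free(a)
    b = pool.alloc(64)   # returns the cached 64-byte block
    assert a is b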

  2. Probabilistic fusion of stereo with color and contrast for bilayer segmentation.

    PubMed

    Kolmogorov, Vladimir; Criminisi, Antonio; Blake, Andrew; Cross, Geoffrey; Rother, Carsten

    2006-09-01

This paper describes models and algorithms for the real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from color/contrast or from stereo alone is known to be error-prone. Here, color, contrast, and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, Layered Dynamic Programming (LDP), solves stereo in an extended six-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive color model that is learned on-the-fly and stereo disparities are obtained by dynamic programming. The second algorithm, Layered Graph Cut (LGC), does not directly solve stereo. Instead, the stereo match likelihood is marginalized over disparities to evaluate foreground and background hypotheses and then fused with a contrast-sensitive color model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground truth data and found to have similar performance, substantially better than either stereo or color/contrast alone. However, their characteristics with respect to computational efficiency are rather different. The algorithms are demonstrated in the application of background substitution and shown to give good quality composite video output.
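
    The six-state layered model and the fused color/contrast likelihood of LDP are beyond a short excerpt, but the underlying per-scanline dynamic programming step can be illustrated. The sketch below is a much-simplified scanline disparity search with a linear smoothness penalty over plain intensity differences; the function name, cost terms, and smoothness weight are illustrative assumptions, not the paper's formulation.

    import numpy as np

    def scanline_disparity(left, right, max_disp, smooth=0.1):
        """Minimum-cost disparity path along one scanline (left[x] vs right[x-d])."""
        n = len(left)
        d_range = np.arange(max_disp + 1)
        # Matching cost: absolute intensity difference for each (pixel, disparity).
        cost = np.full((n, max_disp + 1), np.inf)
        for d in d_range:
            cost[d:, d] = np.abs(left[d:] - right[:n - d])
        # Accumulate costs left-to-right with a linear disparity-smoothness penalty.
        acc = cost.copy()
        back = np.zeros((n, max_disp + 1), dtype=int)
        for x in range(1, n):
            trans = acc[x - 1][None, :] + smooth * np.abs(d_range[:, None] - d_range[None, :])
            back[x] = np.argmin(trans, axis=1)
            acc[x] += trans[d_range, back[x]]
        # Backtrace the optimal disparity assignment.
        disp = np.zeros(n, dtype=int)
        disp[-1] = int(np.argmin(acc[-1]))
        for x in range(n - 1, 0, -1):
            disp[x - 1] = back[x, disp[x]]
        return disp

    left = np.array([0., 0., 5., 5., 5., 0., 0., 0.])
    right = np.array([0., 5., 5., 5., 0., 0., 0., 0.])  # same scene shifted by one pixel
    print(scanline_disparity(left, right, max_disp=2))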

  3. Patents, Innovation, and the Welfare Effects of Medicare Part D*

    PubMed Central

    Gailey, Adam; Lakdawalla, Darius; Sood, Neeraj

    2013-01-01

    Purpose To evaluate the efficiency consequences of the Medicare Part D program. Methods We develop and empirically calibrate a simple theoretical model to examine the static and dynamic welfare effects of Medicare Part D. Findings We show that Medicare Part D can simultaneously reduce static deadweight loss from monopoly pricing of drugs and improve incentives for innovation. We estimate that even after excluding the insurance value of the program, the welfare gain of Medicare Part D roughly equals its social costs. The program generates $5.11 billion of annual static deadweight loss reduction, and at least $3.0 billion of annual value from extra innovation. Implications Medicare Part D and other public prescription drug programs can be welfare-improving, even for risk-neutral and purely self-interested consumers. Furthermore, negotiation for lower branded drug prices may further increase the social return to the program. Originality This study demonstrates that pure efficiency motives, which do not even surface in the policy debate over Medicare Part D, can nearly justify the program on their own merits. PMID:20575239

  4. Automatic programming via iterated local search for dynamic job shop scheduling.

    PubMed

    Nguyen, Su; Zhang, Mengjie; Johnston, Mark; Tan, Kay Chen

    2015-01-01

    Dispatching rules have been commonly used in practice for making sequencing and scheduling decisions. Due to specific characteristics of each manufacturing system, there is no universal dispatching rule that can dominate in all situations. Therefore, it is important to design specialized dispatching rules to enhance the scheduling performance for each manufacturing environment. Evolutionary computation approaches such as tree-based genetic programming (TGP) and gene expression programming (GEP) have been proposed to facilitate the design task through automatic design of dispatching rules. However, these methods are still limited by their high computational cost and low exploitation ability. To overcome this problem, we develop a new approach to automatic programming via iterated local search (APRILS) for dynamic job shop scheduling. The key idea of APRILS is to perform multiple local searches started with programs modified from the best obtained programs so far. The experiments show that APRILS outperforms TGP and GEP in most simulation scenarios in terms of effectiveness and efficiency. The analysis also shows that programs generated by APRILS are more compact than those obtained by genetic programming. An investigation of the behavior of APRILS suggests that the good performance of APRILS comes from the balance between exploration and exploitation in its search mechanism.
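
    APRILS evolves full program trees, which is not reproduced here, but the iterated-local-search loop it builds on (restart a local search from a perturbed copy of the best solution found so far) can be sketched compactly. In the hypothetical toy below, the "program" is reduced to a weight vector defining a dispatching rule for a single-machine tardiness problem; all names, job data, and step sizes are illustrative.

    import random

    JOBS = [(4, 10), (3, 6), (7, 22), (2, 5), (5, 14), (6, 9)]   # (processing time, due date)

    def total_tardiness(weights):
        """Dispatch jobs by the rule score w0*p + w1*d and measure total tardiness."""
        order = sorted(JOBS, key=lambda job: weights[0] * job[0] + weights[1] * job[1])
        t, tardy = 0, 0
        for p, d in order:
            t += p
            tardy += max(0, t - d)
        return tardy

    def local_search(weights):
        """Greedy coordinate tweaks until no single step improves the rule."""
        best, best_val = list(weights), total_tardiness(weights)
        improved = True
        while improved:
            improved = False
            for i in range(len(best)):
                for step in (-0.5, 0.5):
                    cand = list(best)
                    cand[i] += step
                    val = total_tardiness(cand)
                    if val < best_val:
                        best, best_val, improved = cand, val, True
        return best, best_val

    def iterated_local_search(iterations=30, seed=1):
        rng = random.Random(seed)
        best, best_val = local_search([1.0, 1.0])
        for _ in range(iterations):
            perturbed = [w + rng.uniform(-2, 2) for w in best]   # modify the best rule found so far
            cand, val = local_search(perturbed)
            if val < best_val:
                best, best_val = cand, val
        return best, best_val

    print(iterated_local_search())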

  5. Incentive pricing and cost recovery at the basin scale.

    PubMed

    Ward, Frank A; Pulido-Velazquez, Manuel

    2009-01-01

Incentive pricing programs have potential to promote economically efficient water use patterns and provide a revenue source to compensate for environmental damages. However, incentive pricing may impose disproportionate costs and aggravate poverty where high prices are levied for basic human needs. This paper presents an analysis of a two-tiered water pricing system that sets a low price for subsistence needs, while charging a price equal to marginal cost, including environmental cost, for discretionary uses. This pricing arrangement can promote efficient and sustainable water use patterns, goals set by the European Water Framework Directive, while meeting subsistence needs of poor households. Using data from the Rio Grande Basin of North America, a dynamic nonlinear program maximizes the basin's total net economic and environmental benefits subject to several hydrological and institutional constraints. Supply costs, environmental costs, and resource costs are integrated in a model of a river basin's hydrology, economics, and institutions. Three programs are compared: (1) Law of the River, in which water allocations and prices are determined by rules governing water transfers; (2) marginal cost pricing, in which households pay the full marginal cost of supplying treated water; (3) two-tiered pricing, in which households' subsistence water needs are priced cheaply, while discretionary uses are priced at efficient levels. Compared to the Law of the River and marginal cost pricing, two-tiered pricing performs well for efficiency and adequately for sustainability and equity. Findings provide a general framework for formulating water pricing programs that promote economically and environmentally efficient water use programs while also addressing other policy goals.

  6. Memory-efficient dynamic programming backtrace and pairwise local sequence alignment.

    PubMed

    Newberg, Lee A

    2008-08-15

    A backtrace through a dynamic programming algorithm's intermediate results in search of an optimal path, or to sample paths according to an implied probability distribution, or as the second stage of a forward-backward algorithm, is a task of fundamental importance in computational biology. When there is insufficient space to store all intermediate results in high-speed memory (e.g. cache) existing approaches store selected stages of the computation, and recompute missing values from these checkpoints on an as-needed basis. Here we present an optimal checkpointing strategy, and demonstrate its utility with pairwise local sequence alignment of sequences of length 10,000. Sample C++-code for optimal backtrace is available in the Supplementary Materials. Supplementary data is available at Bioinformatics online.
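
    The optimal checkpoint placement derived in the paper is not reproduced here, but the checkpoint-and-recompute pattern it optimizes can be sketched. The code below stores every k-th column of a Needleman-Wunsch-style dynamic programming matrix and, during backtrace, re-derives any missing column from the nearest stored checkpoint; the scoring scheme, the fixed-interval checkpoint rule, and the unoptimized per-step recomputation are all simplifying assumptions.

    import numpy as np

    MATCH, MISMATCH, GAP = 1, -1, -1

    def next_column(prev_col, a, b_char):
        """Compute one Needleman-Wunsch score column from the previous one."""
        m = len(a)
        col = np.empty(m + 1, dtype=int)
        col[0] = prev_col[0] + GAP
        for i in range(1, m + 1):
            sub = MATCH if a[i - 1] == b_char else MISMATCH
            col[i] = max(prev_col[i - 1] + sub, prev_col[i] + GAP, col[i - 1] + GAP)
        return col

    def get_column(j, a, b, checkpoints, k):
        """Recover column j by recomputation from the nearest stored checkpoint."""
        start = (j // k) * k
        col = checkpoints[start]
        for jj in range(start + 1, j + 1):
            col = next_column(col, a, b[jj - 1])
        return col

    def align(a, b, k=4):
        m, n = len(a), len(b)
        # Forward pass: keep only column 0 and every k-th column as checkpoints.
        checkpoints = {0: GAP * np.arange(m + 1)}
        col = checkpoints[0]
        for j in range(1, n + 1):
            col = next_column(col, a, b[j - 1])
            if j % k == 0:
                checkpoints[j] = col
        # Backtrace: re-derive any needed column on demand (unoptimized recomputation).
        i, j, ops = m, n, []
        while i > 0 or j > 0:
            cur = get_column(j, a, b, checkpoints, k)
            prev = get_column(j - 1, a, b, checkpoints, k) if j > 0 else None
            sub = MATCH if i > 0 and j > 0 and a[i - 1] == b[j - 1] else MISMATCH
            if i > 0 and j > 0 and cur[i] == prev[i - 1] + sub:
                ops.append(("sub", a[i - 1], b[j - 1])); i, j = i - 1, j - 1
            elif j > 0 and cur[i] == prev[i] + GAP:
                ops.append(("ins", b[j - 1])); j -= 1
            else:
                ops.append(("del", a[i - 1])); i -= 1
        return ops[::-1]

    print(align("GATTACA", "GCATGCU"))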

  7. Dynamic programming on a shared-memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Edmonds, Phil; Chu, Eleanor; George, Alan

    1993-01-01

Three new algorithms for solving dynamic programming problems on a shared-memory parallel computer are described. All three algorithms attempt to balance work load, while keeping synchronization cost low. In particular, for a multiprocessor having p processors, an analysis of the best algorithm shows that the arithmetic cost is O(n^3/(6p)) and that the synchronization cost is O(|log_C n|) if p << n, where C = (2p-1)/(2p+1) and n is the size of the problem. The low synchronization cost is important for machines where synchronization is expensive. Analysis and experiments show that the best algorithm is effective in balancing the work load and producing high efficiency.

  8. Making Online Learning Accessible for Students with Disabilities

    ERIC Educational Resources Information Center

    Hashey, Andrew I.; Stahl, Skip

    2014-01-01

    The growing presence of K-12 online education programs is a trend that promises to increase flexibility, improve efficiency, and foster engagement in learning. Students with disabilities can benefit from dynamic online educational environments, but only to the extent that they can access and participate in the learning process. As students with…

  9. Dynamic network data envelopment analysis for university hospitals evaluation

    PubMed Central

    Lobo, Maria Stella de Castro; Rodrigues, Henrique de Castro; André, Edgard Caires Gazzola; de Azeredo, Jônatas Almeida; Lins, Marcos Pereira Estellita

    2016-01-01

OBJECTIVE To develop an assessment tool to evaluate the efficiency of federal university general hospitals. METHODS Data envelopment analysis, a linear programming technique, creates a best practice frontier by comparing observed production given the amount of resources used. The model is output-oriented and considers variable returns to scale. Network data envelopment analysis considers link variables belonging to more than one dimension (in the model, medical residents, adjusted admissions, and research projects). Dynamic network data envelopment analysis uses carry-over variables (in the model, financing budget) to analyze frontier shift in subsequent years. Data were gathered from the information system of the Brazilian Ministry of Education (MEC), 2010-2013. RESULTS The mean scores for health care, teaching and research over the period were 58.0%, 86.0%, and 61.0%, respectively. In 2012, the best performance year, for all units to reach the frontier it would be necessary to have a mean increase of 65.0% in outpatient visits; 34.0% in admissions; 12.0% in undergraduate students; 13.0% in multi-professional residents; 48.0% in graduate students; 7.0% in research projects; besides a decrease of 9.0% in medical residents. In the same year, an increase of 0.9% in financing budget would be necessary to improve the care output frontier. In the dynamic evaluation, there was progress in teaching efficiency, oscillation in medical care and no variation in research. CONCLUSIONS The proposed model generates public health planning and programming parameters by estimating efficiency scores and making projections to reach the best practice frontier. PMID:27191158

  10. The new program OPAL for molecular dynamics simulations and energy refinements of biological macromolecules.

    PubMed

    Luginbühl, P; Güntert, P; Billeter, M; Wüthrich, K

    1996-09-01

    A new program for molecular dynamics (MD) simulation and energy refinement of biological macromolecules, OPAL, is introduced. Combined with the supporting program TRAJEC for the analysis of MD trajectories, OPAL affords high efficiency and flexibility for work with different force fields, and offers a user-friendly interface and extensive trajectory analysis capabilities. Salient features are computational speeds of up to 1.5 GFlops on vector supercomputers such as the NEC SX-3, ellipsoidal boundaries to reduce the system size for studies in explicit solvents, and natural treatment of the hydrostatic pressure. Practical applications of OPAL are illustrated with MD simulations of pure water, energy minimization of the NMR structure of the mixed disulfide of a mutant E. coli glutaredoxin with glutathione in different solvent models, and MD simulations of a small protein, pheromone Er-2, using either instantaneous or time-averaged NMR restraints, or no restraints.

  11. Concurrency-based approaches to parallel programming

    NASA Technical Reports Server (NTRS)

    Kale, L.V.; Chrisochoides, N.; Kohl, J.; Yelick, K.

    1995-01-01

The inevitable transition to parallel programming can be facilitated by appropriate tools, including languages and libraries. After describing the needs of applications developers, this paper presents three specific approaches aimed at development of efficient and reusable parallel software for irregular and dynamic-structured problems. A salient feature of all three approaches is their exploitation of concurrency within a processor. Benefits of individual approaches such as these can be leveraged by an interoperability environment which permits modules written using different approaches to co-exist in single applications.

  12. Formulation and implementation of a practical algorithm for parameter estimation with process and measurement noise

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A new formulation is proposed for the problem of parameter estimation of dynamic systems with both process and measurement noise. The formulation gives estimates that are maximum likelihood asymptotically in time. The means used to overcome the difficulties encountered by previous formulations are discussed. It is then shown how the proposed formulation can be efficiently implemented in a computer program. A computer program using the proposed formulation is available in a form suitable for routine application. Examples with simulated and real data are given to illustrate that the program works well.

  13. Solar dynamic power system development for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The development of a solar dynamic electric power generation system as part of the Space Station Freedom Program is documented. The solar dynamic power system includes a solar concentrator, which collects sunlight; a receiver, which accepts and stores the concentrated solar energy and transfers this energy to a gas; a Brayton turbine, alternator, and compressor unit, which generates electric power; and a radiator, which rejects waste heat. Solar dynamic systems have greater efficiency and lower maintenance costs than photovoltaic systems and are being considered for future growth of Space Station Freedom. Solar dynamic development managed by the NASA Lewis Research Center from 1986 to Feb. 1991 is covered. It summarizes technology and hardware development, describes 'lessons learned', and, through an extensive bibliography, serves as a source list of documents that provide details of the design and analytic results achieved. It was prepared by the staff of the Solar Dynamic Power System Branch at the NASA Lewis Research Center in Cleveland, Ohio. The report includes results from the prime contractor as well as from in-house efforts, university grants, and other contracts. Also included are the writers' opinions on the best way to proceed technically and programmatically with solar dynamic efforts in the future, on the basis of their experiences in this program.

  14. Allocative and implementation efficiency in HIV prevention and treatment for people who inject drugs.

    PubMed

    Benedikt, Clemens; Kelly, Sherrie L; Wilson, David; Wilson, David P

    2016-12-01

    Estimated global new HIV infections among people who inject drugs (PWID) remained stable over the 2010-2015 period and the target of a 50% reduction over this period was missed. To achieve the 2020 UNAIDS target of reducing adult HIV infections by 75% compared to 2010, accelerated action in scaling up HIV programs for PWID is required. In a context of diminishing external support to HIV programs in countries where most HIV-affected PWID live, it is essential that available resources are allocated and used as efficiently as possible. Allocative and implementation efficiency analysis methods were applied. Optima, a dynamic, population-based HIV model with an integrated program and economic analysis framework was applied in eight countries in Eastern Europe and Central Asia (EECA). Mathematical analyses established optimized allocations of resources. An implementation efficiency analysis focused on examining technical efficiency, unit costs, and heterogeneity of service delivery models and practices. Findings from the latest reported data revealed that countries allocated between 4% (Bulgaria) and 40% (Georgia) of total HIV resources to programs targeting PWID - with a median of 13% for the eight countries. When distributing the same amount of HIV funding optimally, between 9% and 25% of available HIV resources would be allocated to PWID programs with a median allocation of 16% and, in addition, antiretroviral therapy would be scaled up including for PWID. As a result of optimized allocations, new HIV infections are projected to decline by 3-28% and AIDS-related deaths by 7-53% in the eight countries. Implementation efficiencies identified involve potential reductions in drug procurement costs, service delivery models, and practices and scale of service delivery influencing cost and outcome. A high level of implementation efficiency was associated with high volumes of PWID clients accessing a drug harm reduction facility. A combination of optimized allocation of resources, improved implementation efficiency and increased investment of non-HIV resources is required to enhance coverage and improve outcomes of programs for PWID. Increasing efficiency of HIV programs for PWID is a key step towards avoiding implicit rationing and ensuring transparent allocation of resources where and how they would have the largest impact on the health of PWID, and thereby ensuring that funding spent on PWID becomes a global best buy in public health. Copyright © 2016. Published by Elsevier B.V.

  15. Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aradi, Bálint; Niklasson, Anders M. N.; Frauenheim, Thomas

A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born–Oppenheimer molecular dynamics. Furthermore, for systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can also be applied to a broad range of problems in materials science, chemistry, and biology.

  16. Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids

    DOE PAGES

    Aradi, Bálint; Niklasson, Anders M. N.; Frauenheim, Thomas

    2015-06-26

A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born–Oppenheimer molecular dynamics. Furthermore, for systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can also be applied to a broad range of problems in materials science, chemistry, and biology.

  17. Evidence of progress. Measurement of impacts of Australia's S and L program from 1990-2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowenthal-Savy; McNeil, Michael; Harrington, Lloyd

    2013-10-15

Australia first put categorical energy efficiency labels on residential appliances in the mid-1980s, and the first Minimum Energy Performance Standards (MEPS) for refrigerators were implemented in 1999. Updated in 2005, these MEPS were aligned with US 2001 levels. Considered together, these actions set Australia apart as having one of the most aggressive appliance efficiency programs in the world. For these reasons, together with good data on product sales over time, Australia represents a potentially fruitful case study for understanding the dynamics of energy efficiency standards and labeling (EES and L) program impacts on appliance markets. This analysis attempts to distinguish between the impacts of labeling alone as opposed to MEPS, and to probe the time-dependency of such impacts. Fortunately, in the Australian case, detailed market sales data and a comprehensive registration system provide a solid basis for the empirical evaluation of these questions. This paper analyzes Australian refrigerator efficiency data covering the years 1993-2009. Sales data was purchased from a commercial market research organization (in this case, the GfK Group) and includes sales and average price in each year for each appliance model – this can be used to understand broader trends by product class and star rating category, even where data is aggregated. Statistical regression analysis is used to model market introduction and adoption of high-efficiency refrigerators according to a logistic adoption model formalism, and parameterizes the way in which the Australian programs accelerated adoption of high-efficiency products and phased out others. Through this analysis, the paper presents a detailed, robust and quantitative picture of the impacts of EES and L in the Australian case, but also demonstrates a methodology for the evaluation of program impacts that could form the basis of an international evaluation framework for similar programs in other countries.

  18. Evidence of Progress - Measurement of Impacts of Australia's S&L Program from 1990-2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowenthal-Savy, Danielle; McNeil, Michael; Harrington, Lloyd

    2013-09-11

Australia first put categorical energy efficiency labels on residential appliances in the mid-1980s, and the first Minimum Energy Performance Standards (MEPS) for refrigerators were implemented in 1999. Updated in 2005, these MEPS were aligned with US 2001 levels. Considered together, these actions set Australia apart as having one of the most aggressive appliance efficiency programs in the world. For these reasons, together with good data on product sales over time, Australia represents a potentially fruitful case study for understanding the dynamics of energy efficiency standards and labeling (EES&L) program impacts on appliance markets. This analysis attempts to distinguish between the impacts of labeling alone as opposed to MEPS, and to probe the time-dependency of such impacts. Fortunately, in the Australian case, detailed market sales data and a comprehensive registration system provide a solid basis for the empirical evaluation of these questions. This paper analyzes Australian refrigerator efficiency data covering the years 1993-2009. Sales data was purchased from a commercial market research organization (in this case, the GfK Group) and includes sales and average price in each year for each appliance model; this can be used to understand broader trends by product class and star rating category, even where data is aggregated. Statistical regression analysis is used to model market introduction and adoption of high-efficiency refrigerators according to a logistic adoption model formalism, and parameterizes the way in which the Australian programs accelerated adoption of high-efficiency products and phased out others. Through this analysis, the paper presents a detailed, robust and quantitative picture of the impacts of EES&L in the Australian case, but also demonstrates a methodology for the evaluation of program impacts that could form the basis of an international evaluation framework for similar programs in other countries.

  19. Jmy regulates oligodendrocyte differentiation via modulation of actin cytoskeleton dynamics.

    PubMed

    Azevedo, Maria M; Domingues, Helena S; Cordelières, Fabrice P; Sampaio, Paula; Seixas, Ana I; Relvas, João B

    2018-05-06

During central nervous system development, oligodendrocytes form structurally and functionally distinct actin-rich protrusions that contact and wrap around axons to assemble myelin sheaths. Establishment of axonal contact is a limiting step in myelination that relies on the oligodendrocyte's ability to locally coordinate cytoskeletal rearrangements with myelin production, under the control of a transcriptional differentiation program. The molecules that provide fine-tuning of actin dynamics during oligodendrocyte differentiation and axon ensheathment remain largely unidentified. We performed transcriptomics analysis of soma and protrusion fractions from rat brain oligodendrocyte progenitors and found a subcellular enrichment of mRNAs in newly-formed protrusions. Approximately 30% of protrusion-enriched transcripts encode proteins related to cytoskeleton dynamics, including the junction mediating and regulatory protein Jmy, a multifunctional regulator of actin polymerization. Here, we show that expression of Jmy is upregulated during myelination and is required for the assembly of actin filaments and protrusion formation during oligodendrocyte differentiation. Quantitative morphodynamics analysis of live oligodendrocytes showed that differentiation is driven by a stereotypical actin network-dependent "cellular shaping" program. Disruption of actin dynamics via knockdown of Jmy leads to failure of this program, resulting in oligodendrocytes that do not acquire an arborized morphology and are less efficient in contacting neurites and forming myelin wraps in co-cultures with neurons. Our findings provide new mechanistic insight into the relationship between cell shape dynamics and differentiation in development. © 2018 Wiley Periodicals, Inc.

  20. Computer Program for the Design and Off-Design Performance of Turbojet and Turbofan Engine Cycles

    NASA Technical Reports Server (NTRS)

    Morris, S. J.

    1978-01-01

The rapid computer program is designed to be run in a stand-alone mode or operated within a larger program. The computation is based on a simplified one-dimensional gas turbine cycle. Each component in the engine is modeled thermodynamically. The component efficiencies used in the thermodynamic modeling are scaled for the off-design conditions from input design point values using empirical trends which are included in the computer code. The engine cycle program is capable of producing reasonable engine performance prediction with a minimum of computer execute time. The current computer execute time on the IBM 360/67 for one Mach number, one altitude, and one power setting is about 0.1 seconds. The principal assumption used in the calculation is that the compressor is operated along a line of maximum adiabatic efficiency on the compressor map. The fluid properties are computed for the combustion mixture, but dissociation is not included. The procedure included in the program is only for the combustion of JP-4, methane, or hydrogen.

  1. Jdpd: an open java simulation kernel for molecular fragment dissipative particle dynamics.

    PubMed

    van den Broek, Karina; Kuhn, Hubert; Zielesny, Achim

    2018-05-21

    Jdpd is an open Java simulation kernel for Molecular Fragment Dissipative Particle Dynamics with parallelizable force calculation, efficient caching options and fast property calculations. It is characterized by an interface and factory-pattern driven design for simple code changes and may help to avoid problems of polyglot programming. Detailed input/output communication, parallelization and process control as well as internal logging capabilities for debugging purposes are supported. The new kernel may be utilized in different simulation environments ranging from flexible scripting solutions up to fully integrated "all-in-one" simulation systems.

  2. Improvement of the Performance of an Electrocoagulation Process System Using Fuzzy Control of pH.

    PubMed

    Demirci, Yavuz; Pekel, Lutfiye Canan; Altinten, Ayla; Alpbaz, Mustafa

    2015-12-01

    The removal efficiencies of electrocoagulation (EC) systems are highly dependent on the initial value of pH. If an EC system has an acidic influent, the pH of the effluent increases during the treatment process; conversely, if such a system has an alkaline influent, the pH of the effluent decreases during the treatment process. Thus, changes in the pH of the wastewater affect the efficiency of the EC process. In this study, we investigated the dynamic effects of pH. To evaluate approaches for preventing increases in the pH of the system, the MATLAB/Simulink program was used to develop and evaluate an on-line computer-based system for pH control. The aim of this work was to study Proportional-Integral-Derivative (PID) control and fuzzy control of the pH of a real textile wastewater purification process using EC. The performances and dynamic behaviors of these two control systems were evaluated based on determinations of COD, colour, and turbidity removal efficiencies.

  3. Lecture notes in economics and mathematical system. Volume 150: Supercritical wing sections 3

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Garabedian, P.; Korn, D.

    1977-01-01

    Application of computational fluid dynamics to the design and analysis of supercritical wing sections is discussed. Computer programs used to study the flight of modern aircraft at high subsonic speeds are listed and described. The cascades of shockless transonic airfoils that are expected to increase the efficiency of compressors and turbines are included.

  4. A Branch-and-Bound Algorithm for Fitting Anti-Robinson Structures to Symmetric Dissimilarity Matrices.

    ERIC Educational Resources Information Center

    Brusco, Michael J.

    2002-01-01

    Developed a branch-and-bound algorithm that can be used to seriate a symmetric dissimilarity matrix by identifying a reordering of rows and columns of the matrix optimizing an anti-Robinson criterion. Computational results suggest that with respect to computational efficiency, the approach is generally competitive with dynamic programming. (SLD)

  5. Planar Inlet Design and Analysis Process (PINDAP)

    NASA Technical Reports Server (NTRS)

    Slater, John W.; Gruber, Christopher R.

    2005-01-01

    The Planar Inlet Design and Analysis Process (PINDAP) is a collection of software tools that allow the efficient aerodynamic design and analysis of planar (two-dimensional and axisymmetric) inlets. The aerodynamic analysis is performed using the Wind-US computational fluid dynamics (CFD) program. A major element in PINDAP is a Fortran 90 code named PINDAP that can establish the parametric design of the inlet and efficiently model the geometry and generate the grid for CFD analysis with design changes to those parameters. The use of PINDAP is demonstrated for subsonic, supersonic, and hypersonic inlets.

  6. Optimal bipedal interactions with dynamic terrain: synthesis and analysis via nonlinear programming

    NASA Astrophysics Data System (ADS)

    Hubicki, Christian; Goldman, Daniel; Ames, Aaron

    In terrestrial locomotion, gait dynamics and motor control behaviors are tuned to interact efficiently and stably with the dynamics of the terrain (i.e. terradynamics). This controlled interaction must be particularly thoughtful in bipeds, as their reduced contact points render them highly susceptible to falls. While bipedalism under rigid terrain assumptions is well-studied, insights for two-legged locomotion on soft terrain, such as sand and dirt, are comparatively sparse. We seek an understanding of how biological bipeds stably and economically negotiate granular media, with an eye toward imbuing those abilities in bipedal robots. We present a trajectory optimization method for controlled systems subject to granular intrusion. By formulating a large-scale nonlinear program (NLP) with reduced-order resistive force theory (RFT) models and jamming cone dynamics, the optimized motions are informed and shaped by the dynamics of the terrain. Using a variant of direct collocation methods, we can express all optimization objectives and constraints in closed-form, resulting in rapid solving by standard NLP solvers, such as IPOPT. We employ this tool to analyze emergent features of bipedal locomotion in granular media, with an eye toward robotic implementation.

  7. Trajectory NG: portable, compressed, general molecular dynamics trajectories.

    PubMed

    Spångberg, Daniel; Larsson, Daniel S D; van der Spoel, David

    2011-10-01

    We present general algorithms for the compression of molecular dynamics trajectories. The standard ways to store MD trajectories as text or as raw binary floating point numbers result in very large files when efficient simulation programs are used on supercomputers. Our algorithms are based on the observation that differences in atomic coordinates/velocities, in either time or space, are generally smaller than the absolute values of the coordinates/velocities. Also, it is often possible to store values at a lower precision. We apply several compression schemes to compress the resulting differences further. The most efficient algorithms developed here use a block sorting algorithm in combination with Huffman coding. Depending on the frequency of storage of frames in the trajectory, either space, time, or combinations of space and time differences are usually the most efficient. We compare the efficiency of our algorithms with each other and with other algorithms present in the literature for various systems: liquid argon, water, a virus capsid solvated in 15 mM aqueous NaCl, and solid magnesium oxide. We perform tests to determine how much precision is necessary to obtain accurate structural and dynamic properties, as well as benchmark a parallelized implementation of the algorithms. We obtain compression ratios (compared to single precision floating point) of 1:3.3-1:35 depending on the frequency of storage of frames and the system studied.
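
    As a rough illustration of the approach described (quantize coordinates to a chosen precision, store differences in time, then entropy-code the small residuals), the sketch below uses zlib as an off-the-shelf stand-in for the paper's block-sorting and Huffman back end; the precision value, array shapes, and function names are illustrative assumptions.

    import numpy as np, zlib

    def compress_trajectory(frames, precision=1e-3):
        """frames: (n_frames, n_atoms, 3) coordinates; returns (blob, shape)."""
        q = np.round(np.asarray(frames) / precision).astype(np.int32)   # fixed-precision quantization
        deltas = np.concatenate([q[:1], np.diff(q, axis=0)])            # first frame + time differences
        return zlib.compress(deltas.tobytes(), level=9), q.shape

    def decompress_trajectory(blob, shape, precision=1e-3):
        deltas = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
        return np.cumsum(deltas, axis=0) * precision                    # undo deltas, then dequantize

    rng = np.random.default_rng(0)
    traj = np.cumsum(rng.normal(scale=0.01, size=(100, 50, 3)), axis=0)  # slowly drifting coordinates
    blob, shape = compress_trajectory(traj)
    restored = decompress_trajectory(blob, shape)
    print(len(blob), "compressed bytes; max error:", np.abs(restored - traj).max())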

  8. A Dynamic Scheduling Method of Earth-Observing Satellites by Employing Rolling Horizon Strategy

    PubMed Central

    Dishan, Qiu; Chuan, He; Jin, Liu; Manhao, Ma

    2013-01-01

    Focused on the dynamic scheduling problem for earth-observing satellites (EOS), an integer programming model is constructed after analyzing the main constraints. The rolling horizon (RH) strategy is proposed according to the independent arriving time and deadline of the imaging tasks. This strategy is designed with a mixed triggering mode composed of periodical triggering and event triggering, and the scheduling horizon is decomposed into a series of static scheduling intervals. By optimizing the scheduling schemes in each interval, the dynamic scheduling of EOS is realized. We also propose three dynamic scheduling algorithms by the combination of the RH strategy and various heuristic algorithms. Finally, the scheduling results of different algorithms are compared and the presented methods in this paper are demonstrated to be efficient by extensive experiments. PMID:23690742
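
    A minimal skeleton of the rolling-horizon decomposition is sketched below: scheduling is re-run over a sliding window, triggered both periodically and when new tasks arrive, and each window is planned independently by a greedy stand-in for the paper's heuristics. The Task fields, the simplification that time advances in whole periods, and all parameter values are illustrative assumptions rather than the EOS model of the paper.

    from dataclasses import dataclass

    @dataclass
    class Task:                      # illustrative fields only
        name: str
        arrival: float
        deadline: float
        duration: float
        priority: float

    def solve_window(tasks, window_start, window_end):
        """Greedy stand-in for the per-interval scheduler: take high-priority tasks that fit."""
        schedule, t = [], window_start
        for task in sorted(tasks, key=lambda k: -k.priority):
            start = max(t, task.arrival)
            if start + task.duration <= min(task.deadline, window_end):
                schedule.append((task.name, start))
                t = start + task.duration
        return schedule

    def rolling_horizon(all_tasks, horizon=100.0, window=20.0, period=10.0):
        """Re-plan over a sliding window; triggers are periodic or new-arrival events."""
        pending, plan = [], []
        t, next_periodic = 0.0, 0.0
        while t < horizon:
            arrivals = [k for k in all_tasks if t - period < k.arrival <= t]
            pending += arrivals
            if t >= next_periodic or arrivals:          # mixed triggering mode
                window_plan = solve_window(pending, t, t + window)
                plan += window_plan
                scheduled = {name for name, _ in window_plan}
                pending = [k for k in pending if k.name not in scheduled]
                next_periodic = t + period
            t += period
        return plan

    tasks = [Task("img-A", 0, 30, 5, 3), Task("img-B", 12, 40, 8, 5), Task("img-C", 25, 90, 6, 1)]
    print(rolling_horizon(tasks))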

  9. A dynamic scheduling method of Earth-observing satellites by employing rolling horizon strategy.

    PubMed

    Dishan, Qiu; Chuan, He; Jin, Liu; Manhao, Ma

    2013-01-01

    Focused on the dynamic scheduling problem for earth-observing satellites (EOS), an integer programming model is constructed after analyzing the main constraints. The rolling horizon (RH) strategy is proposed according to the independent arriving time and deadline of the imaging tasks. This strategy is designed with a mixed triggering mode composed of periodical triggering and event triggering, and the scheduling horizon is decomposed into a series of static scheduling intervals. By optimizing the scheduling schemes in each interval, the dynamic scheduling of EOS is realized. We also propose three dynamic scheduling algorithms by the combination of the RH strategy and various heuristic algorithms. Finally, the scheduling results of different algorithms are compared and the presented methods in this paper are demonstrated to be efficient by extensive experiments.

  10. Programming the Navier-Stokes computer: An abstract machine model and a visual editor

    NASA Technical Reports Server (NTRS)

    Middleton, David; Crockett, Tom; Tomboulian, Sherry

    1988-01-01

    The Navier-Stokes computer is a parallel computer designed to solve Computational Fluid Dynamics problems. Each processor contains several floating point units which can be configured under program control to implement a vector pipeline with several inputs and outputs. Since the development of an effective compiler for this computer appears to be very difficult, machine level programming seems necessary and support tools for this process have been studied. These support tools are organized into a graphical program editor. A programming process is described by which appropriate computations may be efficiently implemented on the Navier-Stokes computer. The graphical editor would support this programming process, verifying various programmer choices for correctness and deducing values such as pipeline delays and network configurations. Step by step details are provided and demonstrated with two example programs.

  11. Software For Fault-Tree Diagnosis Of A System

    NASA Technical Reports Server (NTRS)

    Iverson, Dave; Patterson-Hine, Ann; Liao, Jack

    1993-01-01

    Fault Tree Diagnosis System (FTDS) computer program is automated-diagnostic-system program identifying likely causes of specified failure on basis of information represented in system-reliability mathematical models known as fault trees. Is modified implementation of failure-cause-identification phase of Narayanan's and Viswanadham's methodology for acquisition of knowledge and reasoning in analyzing failures of systems. Knowledge base of if/then rules replaced with object-oriented fault-tree representation. Enhancement yields more-efficient identification of causes of failures and enables dynamic updating of knowledge base. Written in C language, C++, and Common LISP.

  12. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
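
    The dualization step described above can be illustrated on a toy problem: a failure indicator weighted by a multiplier is folded into the stage cost, an ordinary dynamic program is solved for each multiplier value, and the multiplier is then searched so that the resulting policy's failure probability stays within the bound. The terrain-crossing model below (a "safe" versus a "fast" move) is invented for illustration and is unrelated to the CEMAT or EDL models of the record.

    N = 10
    ACTIONS = {"safe": (1, 0.00, 1.2), "fast": (2, 0.05, 1.0)}  # advance, failure prob, cost

    def solve(lam):
        """DP on the Lagrangian cost; returns (policy, expected cost, failure prob)."""
        J = [0.0] * (N + 1)
        policy = [None] * (N + 1)
        for s in range(N - 1, -1, -1):
            best = None
            for name, (adv, pf, c) in ACTIONS.items():
                nxt = min(s + adv, N)
                val = c + lam * pf + (1.0 - pf) * J[nxt]
                if best is None or val < best[0]:
                    best = (val, name)
            J[s], policy[s] = best
        # Evaluate the true failure probability and cost of the resulting policy.
        P = [0.0] * (N + 1)
        C = [0.0] * (N + 1)
        for s in range(N - 1, -1, -1):
            adv, pf, c = ACTIONS[policy[s]]
            nxt = min(s + adv, N)
            P[s] = pf + (1.0 - pf) * P[nxt]
            C[s] = c + (1.0 - pf) * C[nxt]
        return policy, C[0], P[0]

    def chance_constrained(delta, lams=(0.0, 1.0, 5.0, 20.0, 100.0)):
        """Pick the cheapest policy whose overall failure probability meets the bound."""
        feasible = []
        for lam in lams:
            pol, cost, pfail = solve(lam)
            if pfail <= delta:
                feasible.append((cost, lam, pol))
        return min(feasible)

    cost, lam, policy = chance_constrained(delta=0.10)
    print(f"lambda={lam}, expected cost={cost:.2f}, policy={policy[:N]}")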

  13. Emergency Food Assistance in Northern Syria: An Evaluation of Transfer Programs in Idleb Governorate.

    PubMed

    Doocy, Shannon; Tappis, Hannah; Lyles, Emily; Witiw, Joseph; Aken, Vicki

    2017-06-01

The war in Syria has left millions struggling to survive amidst violent conflict, pervasive unemployment, and food insecurity. Although international assistance funding is also at an all-time high, it is insufficient to meet the needs of conflict-affected populations, and there is increasing pressure on humanitarian stakeholders to find more efficient, effective ways to provide assistance. The objective was to evaluate 3 different assistance programs (in-kind food commodities, food vouchers, and unrestricted vouchers) in Idleb Governorate of Syria between December 2014 and March 2015. The evaluation used repeated survey data from beneficiary households to determine whether assistance was successful in maintaining food security at the household level. Shopkeeper surveys and program monitoring data were used to assess the impact on markets at the district/governorate levels and compare the cost-efficiency and cost-effectiveness of transfer modalities. Both in-kind food assistance and voucher programs showed positive effects on household food security and economic measures in Idleb; however, no intervention was successful in improving all outcomes measured. Food transfers were more likely to improve food access and food security than vouchers and unrestricted vouchers. Voucher programs were found to be more cost-efficient than in-kind food assistance, and more cost-effective for increasing household food consumption. Continuation of multiple types of transfer programs, including both in-kind assistance and vouchers, will allow humanitarian actors to remain responsive to evolving access and security considerations, local needs, and market dynamics.

  14. Robotic joint experiments under ultravacuum

    NASA Technical Reports Server (NTRS)

    Borrien, A.; Petitjean, L.

    1988-01-01

    First, various aspects of a robotic joint development program, including gearbox technology, electromechanical components, lubrication, and test results, are discussed. Secondly, a test prototype of the joint allowing simulation of robotic arm dynamic effects is presented. This prototype is tested under vacuum with different types of motors and sensors to characterize the functional parameters: angular position error, mechanical backlash, gearbox efficiency, and lifetime.

  15. Assisting People with Disabilities Improves Their Collaborative Pointing Efficiency through the Use of the Mouse Scroll Wheel

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang

    2013-01-01

    This study provided that people with multiple disabilities can have a collaborative working chance in computer operations through an Enhanced Multiple Cursor Dynamic Pointing Assistive Program (EMCDPAP, a new kind of software that replaces the standard mouse driver, changes a mouse wheel into a thumb/finger poke detector, and manages mouse…

  16. Computational strategies for three-dimensional flow simulations on distributed computer systems. Ph.D. Thesis Semiannual Status Report, 15 Aug. 1993 - 15 Feb. 1994

    NASA Technical Reports Server (NTRS)

    Weed, Richard Allen; Sankar, L. N.

    1994-01-01

An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has led to research to develop procedures for implementing a parallel computing system composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate code performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.

  17. Nonlinear histogram binning for quantitative analysis of lung tissue fibrosis in high-resolution CT data

    NASA Astrophysics Data System (ADS)

    Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.

    2007-03-01

Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512 x 400. It is too subjective and labor intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer aided techniques are necessary, particularly texture analysis techniques which classify various lung tissue types. Second and higher order statistics which relate the spatial variation of the intensity values are good discriminatory features for various textures. The intensity values in lung CT scans range between [-1024, 1024]. Calculation of second order statistics on this range is too computationally intensive, so the data is typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray-level range to improve classification. An optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second and higher order statistics for more accurate quantification of diffuse lung disease.
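
    The record does not give the exact objective used for the nonlinear binning, so the sketch below uses count-weighted within-bin variance, a common choice, and finds an optimal contiguous partition of the gray-level histogram by dynamic programming; the synthetic CT-like histogram at the end is illustrative only.

    import numpy as np

    def optimal_bins(values, counts, n_bins):
        """Split sorted gray levels into n_bins contiguous bins minimizing total
        count-weighted within-bin variance; returns the bin-edge indices."""
        n = len(values)
        # Prefix sums give O(1) weighted-variance cost for any interval [i, j).
        w = np.concatenate([[0.0], np.cumsum(counts)])
        wx = np.concatenate([[0.0], np.cumsum(counts * values)])
        wx2 = np.concatenate([[0.0], np.cumsum(counts * values ** 2)])

        def cost(i, j):
            cw, cs, cs2 = w[j] - w[i], wx[j] - wx[i], wx2[j] - wx2[i]
            return cs2 - cs * cs / cw if cw > 0 else 0.0

        dp = np.full((n_bins + 1, n + 1), np.inf)
        cut = np.zeros((n_bins + 1, n + 1), dtype=int)
        dp[0, 0] = 0.0
        for b in range(1, n_bins + 1):
            for j in range(b, n + 1):
                for i in range(b - 1, j):
                    c = dp[b - 1, i] + cost(i, j)
                    if c < dp[b, j]:
                        dp[b, j], cut[b, j] = c, i
        # Backtrack the stored cut points to recover the bin boundaries.
        edges, j = [n], n
        for b in range(n_bins, 0, -1):
            j = int(cut[b, j])
            edges.append(j)
        return edges[::-1]

    # Example: a synthetic HU-like histogram over [-1024, 1024] with three tissue peaks.
    levels = np.arange(-1024, 1025, 16).astype(float)
    hist = (np.exp(-((levels + 800) / 60.0) ** 2) + np.exp(-((levels + 50) / 40.0) ** 2)
            + 0.3 * np.exp(-((levels - 300) / 80.0) ** 2))
    print(optimal_bins(levels, hist, n_bins=16))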

  18. Spectrum-efficient multipath provisioning with content connectivity for the survivability of elastic optical datacenter networks

    NASA Astrophysics Data System (ADS)

    Gao, Tao; Li, Xin; Guo, Bingli; Yin, Shan; Li, Wenzhe; Huang, Shanguo

    2017-07-01

Multipath provisioning is a survivable and resource-efficient solution against increasing link failures caused by natural or man-made disasters in elastic optical datacenter networks (EODNs). Nevertheless, the conventional multipath provisioning scheme is designed only for connecting a specific node pair. Also, it is obvious that the number of node-disjoint paths between any two nodes is restricted to network connectivity, which has a fixed value for a given topology. Recently, the concept of content connectivity in EODNs has been proposed, which guarantees that a user can be served by any datacenter hosting the required content regardless of where it is located. From this new perspective, we propose a survivable multipath provisioning with content connectivity (MPCC) scheme, which is expected to improve the spectrum efficiency and the whole system survivability. We formulate the MPCC scheme as an Integer Linear Program (ILP) for the static traffic scenario, and a heuristic approach is proposed for the dynamic traffic scenario. Furthermore, to adapt MPCC to the variation of network state in the dynamic traffic scenario, we propose a dynamic content placement (DCP) strategy in the MPCC scheme for detecting the variation of the distribution of user requests and adjusting the content location dynamically. Simulation results indicate that the MPCC scheme can reduce over 20% spectrum consumption compared with the conventional multipath provisioning scheme in the static traffic scenario. In the dynamic traffic scenario, the MPCC scheme can reduce over 20% spectrum consumption and over 50% blocking probability compared with the conventional multipath provisioning scheme. Meanwhile, benefiting from the DCP strategy, the MPCC scheme adapts well to the variation of the distribution of user requests.

  19. Environmental research program. 1995 Annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, N.J.

    1996-06-01

The objective of the Environmental Research Program is to enhance the understanding of, and mitigate the effects of pollutants on health, ecological systems, global and regional climate, and air quality. The program is multidisciplinary and includes fundamental research and development in efficient and environmentally benign combustion, pollutant abatement and destruction, and novel methods of detection and analysis of criteria and noncriteria pollutants. This diverse group conducts investigations in combustion, atmospheric and marine processes, flue-gas chemistry, and ecological systems. Combustion chemistry research emphasizes modeling at microscopic and macroscopic scales. At the microscopic scale, functional sensitivity analysis is used to explore the nature of the potential-to-dynamics relationships for reacting systems. Rate coefficients are estimated using quantum dynamics and path integral approaches. At the macroscopic level, combustion processes are modelled using chemical mechanisms at the appropriate level of detail dictated by the requirements of predicting particular aspects of combustion behavior. Parallel computing has facilitated the efforts to use detailed chemistry in models of turbulent reacting flow to predict minor species concentrations.

  20. BEST3D user's manual: Boundary Element Solution Technology, 3-Dimensional Version 3.0

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The theoretical basis and programming strategy utilized in the construction of the computer program BEST3D (boundary element solution technology - three dimensional) and detailed input instructions are provided for the use of the program. An extensive set of test cases and sample problems is included in the manual and is also available for distribution with the program. The BEST3D program was developed under the 3-D Inelastic Analysis Methods for Hot Section Components contract (NAS3-23697). The overall objective of this program was the development of new computer programs allowing more accurate and efficient three-dimensional thermal and stress analysis of hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The BEST3D program allows both linear and nonlinear analysis of static and quasi-static elastic problems and transient dynamic analysis for elastic problems. Calculation of elastic natural frequencies and mode shapes is also provided.

  1. A novel approach to multiple sequence alignment using hadoop data grids.

    PubMed

    Sudha Sadasivam, G; Baktavatchalam, G

    2010-01-01

    Multiple alignment of protein sequences helps to determine evolutionary linkage and to predict molecular structures. The factors to be considered while aligning multiple sequences are speed and accuracy of alignment. Although dynamic programming algorithms produce accurate alignments, they are computation intensive. In this paper we propose a time efficient approach to sequence alignment that also produces quality alignment. The dynamic nature of the algorithm coupled with data and computational parallelism of hadoop data grids improves the accuracy and speed of sequence alignment. The principle of block splitting in hadoop coupled with its scalability facilitates alignment of very large sequences.

  2. Optimizing Mars Airplane Trajectory with the Application Navigation System

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Riley, Derek

    2004-01-01

Planning complex missions requires a number of programs to be executed in concert. The Application Navigation System (ANS), developed in the NAS Division, can execute many interdependent programs in a distributed environment. We show that the ANS simplifies user effort and reduces time in optimization of the trajectory of a Martian airplane. We use a software package, Cart3D, to evaluate trajectories and a shortest path algorithm to determine the optimal trajectory. ANS employs the GridScape to represent the dynamic state of the available computer resources. Then, ANS uses a scheduler to dynamically assign ready tasks to machine resources, and the GridScape for tracking available resources and forecasting the completion time of running tasks. We demonstrate the system's capability to schedule and run the trajectory optimization application with efficiency exceeding 60% on 64 processors.

  3. Volume Dynamics Propulsion System Modeling for Supersonics Vehicle Research

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Ma, Peter

    2010-01-01

Under the NASA Fundamental Aeronautics Program, the Supersonics Project is working to overcome the obstacles to supersonic commercial flight. The proposed vehicles are long slim body aircraft with pronounced aero-servo-elastic modes. These modes can potentially couple with propulsion system dynamics, leading to performance challenges such as aircraft ride quality and stability. Other disturbances upstream of the engine generated from atmospheric wind gusts, angle of attack, and yaw can have similar effects. In addition, for optimal propulsion system performance, normal inlet-engine operations are required to be closer to compressor stall and inlet unstart. To study these phenomena an integrated model is needed that includes both airframe structural dynamics as well as the propulsion system dynamics. This paper covers the propulsion system component volume dynamics modeling of a turbojet engine that will be used for an integrated vehicle Aero-Propulso-Servo-Elastic model and for propulsion efficiency studies.

  4. Volume Dynamics Propulsion System Modeling for Supersonics Vehicle Research

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Ma, Peter

    2008-01-01

    Under the NASA Fundamental Aeronautics Program, the Supersonics Project is working to overcome the obstacles to supersonic commercial flight. The proposed vehicles are long slim body aircraft with pronounced aero-servo-elastic modes. These modes can potentially couple with propulsion system dynamics, leading to performance challenges affecting aircraft ride quality and stability. Other disturbances upstream of the engine generated from atmospheric wind gusts, angle of attack, and yaw can have similar effects. In addition, for optimal propulsion system performance, normal inlet-engine operations are required to be closer to compressor stall and inlet unstart. To study these phenomena, an integrated model is needed that includes both airframe structural dynamics and the propulsion system dynamics. This paper covers the propulsion system component volume dynamics modeling of a turbojet engine that will be used for an integrated vehicle Aero-Propulso-Servo-Elastic model and for propulsion efficiency studies.

  5. Volume Dynamics Propulsion System Modeling for Supersonics Vehicle Research

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Ma, Peter

    2008-01-01

    Under the NASA Fundamental Aeronautics Program, the Supersonics Project is working to overcome the obstacles to supersonic commercial flight. The proposed vehicles are long slim body aircraft with pronounced aero-servo-elastic modes. These modes can potentially couple with propulsion system dynamics, leading to performance challenges affecting aircraft ride quality and stability. Other disturbances upstream of the engine generated from atmospheric wind gusts, angle of attack, and yaw can have similar effects. In addition, for optimal propulsion system performance, normal inlet-engine operations are required to be closer to compressor stall and inlet unstart. To study these phenomena, an integrated model is needed that includes both airframe structural dynamics and the propulsion system dynamics. This paper covers the propulsion system component volume dynamics modeling of a turbojet engine that will be used for an integrated vehicle Aero-Propulso-Servo-Elastic model and for propulsion efficiency studies.

  6. SIMWEST - A simulation model for wind energy storage systems

    NASA Technical Reports Server (NTRS)

    Edsinger, R. W.; Warren, A. W.; Gordon, L. H.; Chang, G. C.

    1978-01-01

    This paper describes a comprehensive and efficient computer program for the modeling of wind energy systems with storage. The level of detail of SIMWEST (SImulation Model for Wind Energy STorage) is consistent with evaluating the economic feasibility as well as the general performance of wind energy systems with energy storage options. The software package consists of two basic programs and a library of system, environmental, and control components. The first program is a precompiler which allows the library components to be put together in building block form. The second program performs the technoeconomic system analysis with the required input/output, and the integration of system dynamics. An example of the application of the SIMWEST program to a current 100 kW wind energy storage system is given.

  7. NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations

    NASA Astrophysics Data System (ADS)

    Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.

    2010-09-01

    The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem focusing primarily on the core theoretical modules provided by the code and their parallel performance. Program summaryProgram title: NWChem Catalogue identifier: AEGI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Open Source Educational Community License No. of lines in distributed program, including test data, etc.: 11 709 543 No. of bytes in distributed program, including test data, etc.: 680 696 106 Distribution format: tar.gz Programming language: Fortran 77, C Computer: all Linux based workstations and parallel supercomputers, Windows and Apple machines Operating system: Linux, OS X, Windows Has the code been vectorised or parallelized?: Code is parallelized Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13 Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of many-electron Hamiltonian, analysis of the potential energy surface, and dynamics. Solution method: Ground and excited solutions of many-electron Hamiltonian are obtained utilizing density-functional theory, many-body perturbation approach, and coupled cluster expansion. These solutions or a combination thereof with classical descriptions are then used to analyze potential energy surface and perform dynamical simulations. Additional comments: Full documentation is provided in the distribution file. This includes an INSTALL file giving details of how to build the package. A set of test runs is provided in the examples directory. The distribution file for this program is over 90 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: Running time depends on the size of the chemical system, complexity of the method, number of cpu's and the computational task. It ranges from several seconds for serial DFT energy calculations on a few atoms to several hours for parallel coupled cluster energy calculations on tens of atoms or ab-initio molecular dynamics simulation on hundreds of atoms.

  8. Acoustic environmental accuracy requirements for response determination

    NASA Technical Reports Server (NTRS)

    Pettitt, M. R.

    1983-01-01

    A general purpose computer program was developed for the prediction of vehicle interior noise. This program, named VIN, has both modal and statistical energy analysis capabilities for structural/acoustic interaction analysis. The analytic models and their computer implementation were verified through simple test cases with well-defined experimental results. The model was also applied in a space shuttle payload bay launch acoustics prediction study. The computer program processes large and small problems with equal efficiency because all arrays are dynamically sized by program input variables at run time. A data base is built and easily accessed for design studies. The data base significantly reduces the computational costs of such studies by allowing the reuse of the still-valid calculated parameters of previous iterations.

  9. Fundamental Studies and Development of III-N Visible LEDs for High-Power Solid-State Lighting Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupuis, Russell

    The goal of this program is to understand in a fundamental way the impact of strain, defects, polarization, and Stokes loss in relation to unique device structures upon the internal quantum efficiency (IQE) and efficiency droop (ED) of III-nitride (III-N) light-emitting diodes (LEDs), and to employ this understanding in the design and growth of high-efficiency LEDs capable of highly reliable, high-current, high-power operation. This knowledge will be the basis for advanced device epitaxial designs that lead to improved device performance. The primary approach is to exploit new scientific and engineering knowledge, generated through the application of a set of unique advanced growth and characterization tools, to develop new concepts in strain-, polarization-, and carrier-dynamics-engineered, low-defect materials and device designs having reduced dislocations and improved carrier collection followed by efficient photon generation. We studied the effects of crystalline defects, polarization, hole transport, electron spillover, the electron blocking layer, and the underlying layer below the multiple-quantum-well active region, and developed high-efficiency, efficiency-droop-mitigated blue LEDs with new epitaxial structures. We believe the new LEDs developed in this program will enable a breakthrough in high-efficiency, high-power visible III-N LEDs in the violet to green spectral region.

  10. Optimization of Thermal Object Nonlinear Control Systems by Energy Efficiency Criterion.

    NASA Astrophysics Data System (ADS)

    Velichkin, Vladimir A.; Zavyalov, Vladimir A.

    2018-03-01

    This article presents an analysis of the control of thermal objects (heat exchangers, dryers, heat treatment chambers, etc.). The results were used to determine a mathematical model of a generalized thermal control object. An appropriate optimality criterion was chosen to make the control more energy-efficient, and a mathematical programming task was formulated from this criterion, the mathematical model of the control object, and the technological constraints. The “maximum energy efficiency” criterion made it possible to avoid solving a system of nonlinear differential equations and to solve the formulated mathematical programming problem analytically. In the case under review, the search for the optimal control and optimal trajectory reduces to solving an algebraic system of equations. In addition, it is shown that the optimal trajectory does not depend on the dynamic characteristics of the control object.

  11. Computer Science Techniques Applied to Parallel Atomistic Simulation

    NASA Astrophysics Data System (ADS)

    Nakano, Aiichiro

    1998-03-01

    Recent developments in parallel processing technology and multiresolution numerical algorithms have established large-scale molecular dynamics (MD) simulations as a new research mode for studying materials phenomena such as fracture. However, this requires large system sizes and long simulated times. We have developed: i) Space-time multiresolution schemes; ii) fuzzy-clustering approach to hierarchical dynamics; iii) wavelet-based adaptive curvilinear-coordinate load balancing; iv) multilevel preconditioned conjugate gradient method; and v) spacefilling-curve-based data compression for parallel I/O. Using these techniques, million-atom parallel MD simulations are performed for the oxidation dynamics of nanocrystalline Al. The simulations take into account the effect of dynamic charge transfer between Al and O using the electronegativity equalization scheme. The resulting long-range Coulomb interaction is calculated efficiently with the fast multipole method. Results for temperature and charge distributions, residual stresses, bond lengths and bond angles, and diffusivities of Al and O will be presented. The oxidation of nanocrystalline Al is elucidated through immersive visualization in virtual environments. A unique dual-degree education program at Louisiana State University will also be discussed in which students can obtain a Ph.D. in Physics & Astronomy and a M.S. from the Department of Computer Science in five years. This program fosters interdisciplinary research activities for interfacing High Performance Computing and Communications with large-scale atomistic simulations of advanced materials. This work was supported by NSF (CAREER Program), ARO, PRF, and Louisiana LEQSF.

  12. How flexibility and dynamic ground effect could improve bio-inspired propulsion

    NASA Astrophysics Data System (ADS)

    Quinn, Daniel

    2016-11-01

    Swimming animals use complex fin motions to reach remarkable levels of efficiency, maneuverability, and stealth. Propulsion systems inspired by these motions could usher in a new generation of advanced underwater vehicles. Two aspects of bio-inspired propulsion are discussed here: flexibility and near-boundary swimming. Experimental work on flexible propulsors shows that swimming efficiency depends on wake vortex timing and boundary layer attachment, but also on fluid-structure resonance. As a result, flexible vehicles or animals could potentially improve their performance by tracking their resonance properties. Bio-inspired propulsors were also found to produce more thrust with no loss in efficiency when swimming near a solid boundary. The higher lift-to-drag ratio of fixed-wing gliders flying near the ground is commonly known as ground effect; the newly observed "dynamic ground effect" suggests that bio-inspired vehicles and animals could likewise save energy by harnessing the performance gains associated with near-boundary swimming. This work was supported by the Office of Naval Research (MURI N00014-08-1-0642, Program Director Dr. Bob Brizzolara) and the National Science Foundation (DBI-1062052, PI Lisa Fauci; EFRI-0938043, PI George Lauder).

  13. Cache Locality Optimization for Recursive Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lifflander, Jonathan; Krishnamoorthy, Sriram

    We present an approach to optimize the cache locality for recursive programs by dynamically splicing--recursively interleaving--the execution of distinct function invocations. By utilizing data effect annotations, we identify concurrency and data reuse opportunities across function invocations and interleave them to reduce reuse distance. We present algorithms that efficiently track effects in recursive programs, detect interference and dependencies, and interleave execution of function invocations using user-level (non-kernel) lightweight threads. To enable multi-core execution, a program is parallelized using a nested fork/join programming model. Our cache optimization strategy is designed to work in the context of a random work stealing scheduler. We present an implementation using the MIT Cilk framework that demonstrates significant improvements in sequential and parallel performance, competitive with a state-of-the-art compile-time optimizer for loop programs and a domain-specific optimizer for stencil programs.

  14. Promoting Affordability in Defense Acquisitions: A Multi-Period Portfolio Approach

    DTIC Science & Technology

    2014-04-30

    The approach draws on dynamic programming, which has evolved out of many areas of research ranging from economics to modern control theory (Powell, 2011). Its portfolio formulation balances expected profit (performance) against risk (variance) in investments (Markowitz, 1952), yields an efficiency frontier of optimal portfolios for a given investor risk averseness, and extends to the multi-period case.

  15. Assisting People with Multiple Disabilities and Minimal Motor Behavior to Improve Computer Pointing Efficiency through a Mouse Wheel

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Chang, Man-Ling; Shih, Ching-Tien

    2009-01-01

    This study evaluated whether two people with multiple disabilities and minimal motor behavior would be able to improve their pointing performance using finger poke ability with a mouse wheel through a Dynamic Pointing Assistive Program (DPAP) and a newly developed mouse driver (i.e., a new mouse driver replaces standard mouse driver, changes a…

  16. Assisting People with Multiple Disabilities Improve Their Computer-Pointing Efficiency with Hand Swing through a Standard Mouse

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Chiu, Sheng-Kai; Chu, Chiung-Ling; Shih, Ching-Tien; Liao, Yung-Kun; Lin, Chia-Chen

    2010-01-01

    This study evaluated whether two people with multiple disabilities would be able to improve their pointing performance using hand swing with a standard mouse through an Extended Dynamic Pointing Assistive Program (EDPAP) and a newly developed mouse driver (i.e., a new mouse driver replaces standard mouse driver, and changes a mouse into a precise…

  17. Statistical Inference in Graphical Models

    DTIC Science & Technology

    2008-06-17

    Graphical models fuse probability theory and graph theory in such a way as to permit efficient representation of, and computation with, probability distributions, with inference carried out via message passing. In approaching real-world problems we often need to deal with uncertainty, and probability and statistics provide a framework for doing so; some such problems can be treated with dynamic programming methods, but for many sensors of interest the signal-to-noise ratio does not allow such a treatment.

  18. Solving the dynamic ambulance relocation and dispatching problem using approximate dynamic programming

    PubMed Central

    Schmid, Verena

    2012-01-01

    Emergency service providers are supposed to locate ambulances such that, in case of emergency, patients can be reached in a time-efficient manner. Two fundamental decisions need to be made in real time. First, immediately after a request emerges, an appropriate vehicle needs to be dispatched and sent to the request's site; after having served the request, the vehicle needs to be relocated to its next waiting location. We propose a model and solve the underlying optimization problem using approximate dynamic programming (ADP), an emerging and powerful tool for solving stochastic and dynamic problems typically arising in the field of operations research. Empirical tests based on real data from the city of Vienna indicate that, by deviating from the classical dispatching rules, the average response time can be decreased from 4.60 to 4.01 minutes, which corresponds to an improvement of 12.89%. Furthermore, we show that it is essential to explicitly consider time-dependent information such as travel times and changes in request volume. Ignoring the current time and its consequences during modeling and optimization leads to suboptimal decisions. PMID:25540476
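
    Stripped to a bare sketch, the ADP machinery referred to here maintains an approximate value function and updates it from simulated transitions. The toy below uses a linear approximation, a made-up two-component state, and invented costs; it illustrates a generic value-function update only, not the paper's ambulance model or policy.

        import random

        # Toy approximate dynamic programming loop: learn a linear value function
        # V(s) ~ w . phi(s) from simulated one-step transitions. All quantities
        # (features, costs, dynamics, learning rate) are invented.
        def features(state):
            # state = (idle_vehicles, pending_requests); include a bias term
            return [1.0, float(state[0]), float(state[1])]

        def simulate_transition(state, action):
            # placeholder dynamics: dispatching an idle vehicle is cheaper
            idle, pending = state
            cost = random.uniform(2.0, 8.0) if action == "dispatch" and idle > 0 else 15.0
            return cost, (max(idle - 1, 0), max(pending - 1, 0))

        def adp(episodes=1000, gamma=0.95, alpha=0.01):
            w = [0.0, 0.0, 0.0]
            for _ in range(episodes):
                state = (random.randint(0, 5), random.randint(0, 5))
                for _ in range(20):
                    cost, nxt = simulate_transition(state, "dispatch")
                    phi, phi_next = features(state), features(nxt)
                    v = sum(wi * f for wi, f in zip(w, phi))
                    v_next = sum(wi * f for wi, f in zip(w, phi_next))
                    td_error = cost + gamma * v_next - v  # temporal-difference error
                    w = [wi + alpha * td_error * f for wi, f in zip(w, phi)]
                    state = nxt
            return w

        print(adp())  # learned weights of the approximate value function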

  19. Iterative Adaptive Dynamic Programming for Solving Unknown Nonlinear Zero-Sum Game Based on Online Data.

    PubMed

    Zhu, Yuanheng; Zhao, Dongbin; Li, Xiangjun

    2017-03-01

    H ∞ control is a powerful method to solve the disturbance attenuation problems that occur in some control systems. The design of such controllers relies on solving the zero-sum game (ZSG). But in practical applications, the exact dynamics is mostly unknown. Identification of dynamics also produces errors that are detrimental to the control performance. To overcome this problem, an iterative adaptive dynamic programming algorithm is proposed in this paper to solve the continuous-time, unknown nonlinear ZSG with only online data. A model-free approach to the Hamilton-Jacobi-Isaacs equation is developed based on the policy iteration method. Control and disturbance policies and value are approximated by neural networks (NNs) under the critic-actor-disturber structure. The NN weights are solved by the least-squares method. According to the theoretical analysis, our algorithm is equivalent to a Gauss-Newton method solving an optimization problem, and it converges uniformly to the optimal solution. The online data can also be used repeatedly, which is highly efficient. Simulation results demonstrate its feasibility to solve the unknown nonlinear ZSG. When compared with other algorithms, it saves a significant amount of online measurement time.

  20. Optimal control of hydroelectric facilities

    NASA Astrophysics Data System (ADS)

    Zhao, Guangzhi

    This thesis considers a simple yet realistic model of pump-assisted hydroelectric facilities operating in a market with time-varying but deterministic power prices. Both deterministic and stochastic water inflows are considered. The fluid mechanical and engineering details of the facility are described by a model containing several parameters. We present a dynamic programming algorithm for optimizing either the total energy produced or the total cash generated by these plants. The algorithm allows us to give the optimal control strategy as a function of time and to see how this strategy, and the associated plant value, varies with water inflow and electricity price. We investigate various cases. For a single pumped storage facility experiencing deterministic power prices and water inflows, we investigate the varying behaviour for an oversimplified constant turbine- and pump-efficiency model with simple reservoir geometries. We then generalize this simple model to include more realistic turbine efficiencies, situations with more complicated reservoir geometry, and the introduction of dissipative switching costs between various control states. We find many results which reinforce our physical intuition about this complicated system as well as results which initially challenge, though later deepen, this intuition. One major lesson of this work is that the optimal control strategy does not differ much between two differing objectives of maximizing energy production and maximizing its cash value. We then turn our attention to the case of stochastic water inflows. We present a stochastic dynamic programming algorithm which can find an on-average optimal control in the face of this randomness. As the operator of a facility must be more cautious when inflows are random, the randomness destroys facility value. Following this insight we quantify exactly how much a perfect hydrological inflow forecast would be worth to a dam operator. In our final chapter we discuss the challenging problem of optimizing a sequence of two hydro dams sharing the same river system. The complexity of this problem is magnified and we just scratch its surface here. The thesis concludes with suggestions for future work in this fertile area. Keywords: dynamic programming, hydroelectric facility, optimization, optimal control, switching cost, turbine efficiency.
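
    As a toy illustration of the kind of dynamic program involved (not the thesis's model; the prices, inflow, efficiency, and discretization below are all invented), a backward recursion over discrete reservoir levels and time periods might look like this:

        # Toy backward-recursion DP for a single reservoir: choose how much water to
        # release each period to maximize revenue. All parameters are illustrative.
        PRICES = [20.0, 35.0, 50.0, 30.0]   # $/MWh in each period
        INFLOW = 1                           # units of water arriving each period
        LEVELS = range(0, 11)                # discrete reservoir levels
        RELEASES = range(0, 4)               # possible releases per period
        EFFICIENCY = 0.9                     # MWh produced per unit of water released

        def solve():
            T = len(PRICES)
            value = {lev: 0.0 for lev in LEVELS}     # terminal value of stored water
            policy = [{} for _ in range(T)]
            for t in reversed(range(T)):
                new_value = {}
                for lev in LEVELS:
                    best, best_u = float("-inf"), 0
                    for u in RELEASES:
                        if u > lev:
                            continue                  # cannot release more than stored
                        nxt = min(lev - u + INFLOW, max(LEVELS))
                        reward = PRICES[t] * EFFICIENCY * u + value[nxt]
                        if reward > best:
                            best, best_u = reward, u
                    new_value[lev], policy[t][lev] = best, best_u
                value = new_value
            return value, policy

        value, policy = solve()
        print("value of starting half-full:", value[5])
        print("optimal first-period release at level 5:", policy[0][5])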

  1. Sensitivity analysis of dynamic biological systems with time-delays.

    PubMed

    Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang

    2010-10-15

    Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay; these systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, either analytically or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We previously proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). Here, the adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human error in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with less user intervention. Compared with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to perform dynamic sensitivity analysis on complex biological systems with time-delays.
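
    The automatic-differentiation idea mentioned above, evaluating exact partial derivatives without hand-coding them, can be illustrated with a minimal forward-mode dual-number class. This is a generic sketch, not the authors' implementation, and it supports only the few operations used in the example.

        import math

        # Minimal forward-mode automatic differentiation via dual numbers:
        # each Dual carries a value and the derivative of that value w.r.t. one input.
        class Dual:
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.der + o.der)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
            __rmul__ = __mul__

        def exp(x):
            return Dual(math.exp(x.val), math.exp(x.val) * x.der)

        # f(x, y) = x*y + exp(x); df/dx = y + exp(x)
        x, y = Dual(1.0, 1.0), Dual(2.0, 0.0)   # seed the derivative of x
        f = x * y + exp(x)
        print(f.val, f.der)   # both equal 2 + e = 4.718... for this particular input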

  2. Stereo Image Dense Matching by Integrating Sift and Sgm Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Song, Y.; Lu, J.

    2018-05-01

    Semi-global matching (SGM) performs its dynamic programming by treating the different path directions equally; it does not consider the impact of each path direction on cost aggregation, and as the disparity search range expands, the accuracy and efficiency of the algorithm drastically decrease. This paper presents a dense matching algorithm that integrates SIFT and SGM. It takes the successful matching pairs found by SIFT as control points to direct the dynamic-programming paths and truncate error propagation. In addition, matching accuracy can be improved by using the gradient direction of the detected feature points to modify the weights of the paths in different directions. Experimental results on the Middlebury stereo data sets and CE-3 lunar data sets demonstrate that the proposed algorithm can effectively cut off error propagation, reduce the disparity search range, and improve matching accuracy.
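
    For reference, the dynamic-programming recurrence that SGM applies along each aggregation path is the standard one sketched below. The tiny cost volume is invented, only a single left-to-right path is aggregated, and the SIFT-guided control points proposed in the paper are not included.

        # One-directional SGM cost aggregation along a scan-line (standard recurrence).
        # cost[x][d] is the pixel-wise matching cost for pixel x at disparity d.
        def aggregate_scanline(cost, p1=1.0, p2=4.0):
            n, ndisp = len(cost), len(cost[0])
            agg = [row[:] for row in cost]
            for x in range(1, n):
                prev = agg[x - 1]
                prev_min = min(prev)
                for d in range(ndisp):
                    candidates = [prev[d],
                                  (prev[d - 1] + p1) if d > 0 else float("inf"),
                                  (prev[d + 1] + p1) if d < ndisp - 1 else float("inf"),
                                  prev_min + p2]
                    agg[x][d] = cost[x][d] + min(candidates) - prev_min
            return agg

        # tiny invented cost volume: 4 pixels, 3 disparity hypotheses
        cost = [[3, 1, 4], [2, 1, 5], [6, 2, 1], [4, 1, 3]]
        agg = aggregate_scanline(cost)
        print([min(range(3), key=lambda d: agg[x][d]) for x in range(4)])  # winner-take-all disparities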

  3. A Novel Joint Problem of Routing, Scheduling, and Variable-Width Channel Allocation in WMNs

    PubMed Central

    Liu, Wan-Yu; Chou, Chun-Hung

    2014-01-01

    This paper investigates a novel joint problem of routing, scheduling, and channel allocation for single-radio multichannel wireless mesh networks in which multiple channel widths can be adjusted dynamically through a new software technology so that more concurrent transmissions and suppressed overlapping channel interference can be achieved. Although the previous works have studied this joint problem, their linear programming models for the problem were not incorporated with some delicate constraints. As a result, this paper first constructs a linear programming model with more practical concerns and then proposes a simulated annealing approach with a novel encoding mechanism, in which the configurations of multiple time slots are devised to characterize the dynamic transmission process. Experimental results show that our approach can find the same or similar solutions as the optimal solutions for smaller-scale problems and can efficiently find good-quality solutions for a variety of larger-scale problems. PMID:24982990

  4. Estimating Arrhenius parameters using temperature programmed molecular dynamics.

    PubMed

    Imandi, Venkataramana; Chatterjee, Abhijit

    2016-07-21

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
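
    Under the stated assumption of exponentially distributed waiting times, the maximum-likelihood estimate of the rate at each temperature is just the reciprocal of the mean waiting time, and the Arrhenius parameters follow from a straight-line fit of ln k against 1/(kB T). The sketch below uses synthetic waiting times with assumed A and Ea values; it illustrates only this fitting step, not the TPMD sampling itself.

        import math, random

        KB = 8.617e-5  # Boltzmann constant in eV/K

        # Synthetic waiting times for an assumed process with A = 1e13 1/s, Ea = 0.5 eV.
        random.seed(0)
        def sample_waiting_times(T, n=2000, A=1e13, Ea=0.5):
            rate = A * math.exp(-Ea / (KB * T))
            return [random.expovariate(rate) for _ in range(n)]

        temps = [600.0, 700.0, 800.0, 900.0]
        # MLE rate for exponential waiting times = 1 / mean waiting time
        rates = [1.0 / (sum(w) / len(w)) for w in (sample_waiting_times(T) for T in temps)]

        # Least-squares fit of ln k = ln A - Ea / (kB T)
        xs = [1.0 / (KB * T) for T in temps]
        ys = [math.log(k) for k in rates]
        xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
        slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
                sum((x - xbar) ** 2 for x in xs)
        intercept = ybar - slope * xbar
        print("Ea ~", -slope, "eV;  A ~", math.exp(intercept), "1/s")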

  5. Penalty dynamic programming algorithm for dim targets detection in sensor systems.

    PubMed

    Huang, Dayu; Xue, Anke; Guo, Yunfei

    2012-01-01

    In order to detect and track multiple maneuvering dim targets in sensor systems, an improved dynamic programming track-before-detect algorithm (DP-TBD) called penalty DP-TBD (PDP-TBD) is proposed. The performances of tracking techniques are used as a feedback to the detection part. The feedback is constructed by a penalty term in the merit function, and the penalty term is a function of the possible target state estimation, which can be obtained by the tracking methods. With this feedback, the algorithm combines traditional tracking techniques with DP-TBD and it can be applied to simultaneously detect and track maneuvering dim targets. Meanwhile, a reasonable constraint that a sensor measurement can originate from one target or clutter is proposed to minimize track separation. Thus, the algorithm can be used in the multi-target situation with unknown target numbers. The efficiency and advantages of PDP-TBD compared with two existing methods are demonstrated by several simulations.
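
    At its core, DP-TBD accumulates a merit function over physically feasible state transitions and declares a detection when the final merit exceeds a threshold; the penalty term introduced in the paper would enter inside this maximization. The sketch below is a generic one-dimensional illustration with an invented measurement grid and threshold, not the PDP-TBD algorithm itself.

        # Generic 1-D dynamic-programming track-before-detect over K frames.
        # z[k][x] is the measured amplitude in cell x at frame k (invented numbers).
        def dp_tbd(z, max_velocity=1, threshold=6.0):
            K, N = len(z), len(z[0])
            merit = [z[0][:]]                 # accumulated merit per cell
            for k in range(1, K):
                row = []
                for x in range(N):
                    # best predecessor within the allowed per-frame motion
                    lo, hi = max(0, x - max_velocity), min(N - 1, x + max_velocity)
                    row.append(z[k][x] + max(merit[k - 1][lo:hi + 1]))
                merit.append(row)
            best = max(range(N), key=lambda x: merit[-1][x])
            detected = merit[-1][best] > threshold
            return detected, best, merit[-1][best]

        z = [[0.2, 1.5, 0.1, 0.3],
             [0.1, 0.2, 1.8, 0.2],
             [0.3, 0.1, 0.4, 2.1],
             [0.2, 0.2, 0.1, 1.9]]
        print(dp_tbd(z))   # a slowly drifting dim target integrates to a high merit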

  6. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  7. A novel data hiding scheme for block truncation coding compressed images using dynamic programming strategy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.

    2015-03-01

    Data hiding is a technique that embeds information into digital cover data. Work on this technique has concentrated on the spatial, uncompressed domain; it is considered more challenging to perform in compressed domains such as vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicate that the proposed scheme obtains both higher hiding capacity and higher hiding efficiency than four existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves the same low bit rate as the original BTC algorithm.
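
    A plain, unoptimized version of the embedding step, three secret bits replacing the three least significant bits of each BTC mean value, can be sketched as below. The dynamic-programming search for the optimal bijective remapping of the 3-bit codes, which is the paper's contribution, is not shown, and the sample values are invented.

        # Plain 3-bit LSB substitution into BTC mean values (illustrative only; the
        # paper additionally optimizes a bijective remapping of the 3-bit codes).
        def embed(means, bits):
            assert len(bits) == 3 * len(means)
            stego = []
            for i, m in enumerate(means):
                code = int("".join(str(b) for b in bits[3 * i: 3 * i + 3]), 2)
                stego.append((m & ~0b111) | code)   # replace the three low bits
            return stego

        def extract(stego_means):
            bits = []
            for m in stego_means:
                bits.extend(int(c) for c in format(m & 0b111, "03b"))
            return bits

        means = [121, 87, 200, 64]
        secret = [1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0]
        stego = embed(means, secret)
        print(stego, extract(stego) == secret)   # recovered bits match the secret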

  8. In silico FRET from simulated dye dynamics

    NASA Astrophysics Data System (ADS)

    Hoefling, Martin; Grubmüller, Helmut

    2013-03-01

    Single molecule fluorescence resonance energy transfer (smFRET) experiments probe molecular distances on the nanometer scale. In such experiments, distances are recorded from FRET transfer efficiencies via the Förster formula, E=1/(1+(). The energy transfer however also depends on the mutual orientation of the two dyes used as distance reporter. Since this information is typically inaccessible in FRET experiments, one has to rely on approximations, which reduce the accuracy of these distance measurements. A common approximation is an isotropic and uncorrelated dye orientation distribution. To assess the impact of such approximations, we present the algorithms and implementation of a computational toolkit for the simulation of smFRET on the basis of molecular dynamics (MD) trajectory ensembles. In this study, the dye orientation dynamics, which are used to determine dynamic FRET efficiencies, are extracted from MD simulations. In a subsequent step, photons and bursts are generated using a Monte Carlo algorithm. The application of the developed toolkit on a poly-proline system demonstrated good agreement between smFRET simulations and experimental results and therefore confirms our computational method. Furthermore, it enabled the identification of the structural basis of measured heterogeneity. The presented computational toolkit is written in Python, available as open-source, applicable to arbitrary systems and can easily be extended and adapted to further problems. Catalogue identifier: AENV_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv3, the bundled SIMD friendly Mersenne twister implementation [1] is provided under the SFMT-License. No. of lines in distributed program, including test data, etc.: 317880 No. of bytes in distributed program, including test data, etc.: 54774217 Distribution format: tar.gz Programming language: Python, Cython, C (ANSI C99). Computer: Any (see memory requirements). Operating system: Any OS with CPython distribution (e.g. Linux, MacOSX, Windows). Has the code been vectorised or parallelized?: Yes, in Ref. [2], 4 CPU cores were used. RAM: About 700MB per process for the simulation setup in Ref. [2]. Classification: 16.1, 16.7, 23. External routines: Calculation of Rκ2-trajectories from GROMACS [3] MD trajectories requires the GromPy Python module described in Ref. [4] or a GROMACS 4.6 installation. The md2fret program uses a standard Python interpreter (CPython) v2.6+ and < v3.0 as well as the NumPy module. The analysis examples require the Matplotlib Python module. Nature of problem: Simulation and interpretation of single molecule FRET experiments. Solution method: Combination of force-field based molecular dynamics (MD) simulating the dye dynamics and Monte Carlo sampling to obtain photon statistics of FRET kinetics. Additional comments: !!!!! The distribution file for this program is over 50 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. !!!!! Running time: A single run in Ref. [2] takes about 10 min on a Quad Core Intel Xeon CPU W3520 2.67GHz with 6GB physical RAM References: [1] M. Saito, M. Matsumoto, SIMD-oriented fast Mersenne twister: a 128-bit pseudorandom number generator, in: A. Keller, S. Heinrich, H. 
Niederreiter (Eds.), Monte Carlo and Quasi-Monte Carlo Methods 2006, Springer; Berlin, Heidelberg, 2008, pp. 607-622. [2] M. Hoefling, N. Lima, D. Hänni, B. Schuler, C. A. M. Seidel, H. Grubmüller, Structural heterogeneity and quantitative FRET efficiency distributions of polyprolines through a hybrid atomistic simulation and Monte Carlo approach, PLoS ONE 6 (5) (2011) e19791. [3] D. V. D. Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark, H. J. C. Berendsen, GROMACS: fast, flexible, and free., J Comput Chem 26 (16) (2005) 1701-1718. [4] R. Pool, A. Feenstra, M. Hoefling, R. Schulz, J. C. Smith, J. Heringa, Enabling grand-canonical Monte Carlo: Extending the flexibility of gromacs through the GromPy Python interface module, Journal of Chemical Theory and Computation 33 (12) (2012) 1207-1214.
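
    For orientation, the quantity such simulations reproduce is the transfer efficiency given by the Förster relation, E = 1/(1 + (R/R0)^6), where R0 itself scales with the orientation factor κ². The snippet below evaluates this relation for an assumed isotropic Förster radius; it is a back-of-the-envelope illustration, not part of the toolkit described in the record.

        # FRET efficiency from the Foerster relation. r0_nm is the isotropic
        # (kappa^2 = 2/3) Foerster radius; the numerical values are assumed.
        def fret_efficiency(r_nm, r0_nm=5.4, kappa2=2.0 / 3.0):
            # Instantaneous efficiency with an explicit orientation factor kappa^2:
            # E = 1 / (1 + (R/R0)^6 * (2/3) / kappa^2)
            return 1.0 / (1.0 + (r_nm / r0_nm) ** 6 * (2.0 / 3.0) / kappa2)

        for r in (3.0, 5.4, 8.0):
            print(r, "nm ->", round(fret_efficiency(r), 3))
        # at R = R0 and kappa^2 = 2/3 the efficiency is exactly 0.5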

  9. New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times

    NASA Astrophysics Data System (ADS)

    Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid

    2017-09-01

    In the literature, multi-objective dynamic scheduling problems and simple priority rules have been widely studied. Simple rules are often not efficient enough, owing to their simplicity and lack of general insight, whereas composite dispatching rules, which are derived from experiments, can perform very well. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied. The objective is to minimize mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed to solve it by applying a genetic programming framework and choosing proper operators. Furthermore, a discrete-event simulation model is built to examine the performance of the scheduling rules, considering the four new heuristic rules and six heuristic rules adapted from the literature. The experimental results make clear that the composite dispatching rules formed by genetic programming outperform the others in minimizing mean flow time and mean tardiness.

  10. A graph-based evolutionary algorithm: Genetic Network Programming (GNP) and its extension using reinforcement learning.

    PubMed

    Mabu, Shingo; Hirasawa, Kotaro; Hu, Jinglu

    2007-01-01

    This paper proposes a graph-based evolutionary algorithm called Genetic Network Programming (GNP). Our goal is to develop GNP, which can deal with dynamic environments efficiently and effectively, based on the distinguished expression ability of the graph (network) structure. The characteristics of GNP are as follows. 1) GNP programs are composed of a number of nodes which execute simple judgment/processing, and these nodes are connected by directed links to each other. 2) The graph structure enables GNP to re-use nodes, thus the structure can be very compact. 3) The node transition of GNP is executed according to its node connections without any terminal nodes, thus the past history of the node transition affects the current node to be used and this characteristic works as an implicit memory function. These structural characteristics are useful for dealing with dynamic environments. Furthermore, we propose an extended algorithm, "GNP with Reinforcement Learning (GNPRL)" which combines evolution and reinforcement learning in order to create effective graph structures and obtain better results in dynamic environments. In this paper, we applied GNP to the problem of determining agents' behavior to evaluate its effectiveness. Tileworld was used as the simulation environment. The results show some advantages for GNP over conventional methods.

  11. On Using Surrogates with Genetic Programming.

    PubMed

    Hildebrandt, Torsten; Branke, Jürgen

    2015-01-01

    One way to accelerate evolutionary algorithms with expensive fitness evaluations is to combine them with surrogate models. Surrogate models are efficiently computable approximations of the fitness function, derived by means of statistical or machine learning techniques from samples of fully evaluated solutions. But these models usually require a numerical representation, and therefore cannot be used with the tree representation of genetic programming (GP). In this paper, we present a new way to use surrogate models with GP. Rather than using the genotype directly as input to the surrogate model, we propose using a phenotypic characterization. This phenotypic characterization can be computed efficiently and allows us to define approximate measures of equivalence and similarity. Using a stochastic, dynamic job shop scenario as an example of simulation-based GP with an expensive fitness evaluation, we show how these ideas can be used to construct surrogate models and improve the convergence speed and solution quality of GP.

  12. A dynamically adaptive multigrid algorithm for the incompressible Navier-Stokes equations: Validation and model problems

    NASA Technical Reports Server (NTRS)

    Thompson, C. P.; Leaf, G. K.; Vanrosendale, J.

    1991-01-01

    An algorithm is described for the solution of the laminar, incompressible Navier-Stokes equations. The basic algorithm is a multigrid based on a robust, box-based smoothing step. Its most important feature is the incorporation of automatic, dynamic mesh refinement. This algorithm supports generalized simple domains. The program is based on a standard staggered-grid formulation of the Navier-Stokes equations for robustness and efficiency. Special grid transfer operators were introduced at grid interfaces in the multigrid algorithm to ensure discrete mass conservation. Results are presented for three models: the driven-cavity, a backward-facing step, and a sudden expansion/contraction.

  13. QMMMW: A wrapper for QM/MM simulations with QUANTUM ESPRESSO and LAMMPS

    NASA Astrophysics Data System (ADS)

    Ma, Changru; Martin-Samos, Layla; Fabris, Stefano; Laio, Alessandro; Piccinin, Simone

    2015-10-01

    We present QMMMW, a new program aimed at performing Quantum Mechanics/Molecular Mechanics (QM/MM) molecular dynamics. The package operates as a wrapper that patches PWscf code included in the QUANTUM ESPRESSO distribution and LAMMPS Molecular Dynamics Simulator. It is designed with a paradigm based on three guidelines: (i) minimal amount of modifications on the parent codes, (ii) flexibility and computational efficiency of the communication layer and (iii) accuracy of the Hamiltonian describing the interaction between the QM and MM subsystems. These three features are seldom present simultaneously in other implementations of QMMM. The QMMMW project is hosted by qe-forge at

  14. The constraint method: A new finite element technique. [applied to static and dynamic loads on plates

    NASA Technical Reports Server (NTRS)

    Tsai, C.; Szabo, B. A.

    1973-01-01

    An approach to the finite element method which utilizes families of conforming finite elements based on complete polynomials is presented. Finite element approximations based on this method converge with respect to progressively reduced element sizes as well as with respect to progressively increasing orders of approximation. Numerical results of static and dynamic applications of plates are presented to demonstrate the efficiency of the method. Comparisons are made with plate elements in NASTRAN and the high-precision plate element developed by Cowper and his co-workers. Some considerations are given to implementation of the constraint method into general purpose computer programs such as NASTRAN.

  15. Hydrogen-oxygen auxiliary propulsion for the space shuttle. Volume 1: High pressure thrusters

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Technology for long life, high performing, gaseous hydrogen-gaseous oxygen rocket engines suitable for auxiliary propulsion was provided by a combined analytical and experimental program. Propellant injectors, fast response valves, igniters, and regeneratively and film-cooled thrust chambers were tested over a wide range of operating conditions. Data generated include performance, combustion efficiency, thermal characteristics, film cooling effectiveness, dynamic response in pulsing, and cycle life limitations.

  16. Assisting People with Multiple Disabilities and Minimal Motor Behavior to Improve Computer Drag-and-Drop Efficiency through a Mouse Wheel

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang

    2011-01-01

    This study evaluated whether two people with multiple disabilities and minimal motor behavior would be able to improve their Drag-and-Drop (DnD) performance using their finger/thumb poke ability with a mouse scroll wheel through a Dynamic Drag-and-Drop Assistive Program (DDnDAP). A multiple probe design across participants was used in this study…

  17. Multimodal Logistics Network Design over Planning Horizon through a Hybrid Meta-Heuristic Approach

    NASA Astrophysics Data System (ADS)

    Shimizu, Yoshiaki; Yamazaki, Yoshihiro; Wada, Takeshi

    Logistics is increasingly acknowledged as a key issue in supply chain management for improving business efficiency under global competition and diversified customer demands. This study aims at improving the quality of strategic decision making associated with the dynamic nature of logistics network optimization. In particular, recognizing the importance of handling multimodal logistics over multiple planning periods, we extend a previous approach termed hybrid tabu search (HybTS). The intent is to make strategic planning concrete enough that the strategic plan can be linked to operational decision making. The idea is a smart extension of HybTS to solve a dynamic mixed integer programming problem: a two-level iterative method composed of a sophisticated tabu search for the location problem at the upper level and a graph algorithm for route selection at the lower level. To retain efficiency while coping with the resulting extremely large-scale problem, we devised a systematic procedure to transform the original linear program at the lower level into a minimum cost flow problem solvable by the graph algorithm. Through numerical experiments, we verified that the proposed method outperforms commercial software. The results indicate that the proposed approach can make conventional strategic decisions much more practical and is promising for real-world applications.
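
    To illustrate the general idea of recasting route selection as a minimum cost flow problem (this is a toy network with invented demands, capacities, and unit costs, solved with networkx rather than the authors' own graph algorithm):

        import networkx as nx

        # Toy minimum-cost flow: ship 4 units from a plant to a customer through two
        # candidate depots. All demands, capacities and unit costs are invented.
        G = nx.DiGraph()
        G.add_node("plant", demand=-4)       # negative demand = supply
        G.add_node("customer", demand=4)
        G.add_edge("plant", "depot1", capacity=3, weight=2)
        G.add_edge("plant", "depot2", capacity=3, weight=3)
        G.add_edge("depot1", "customer", capacity=3, weight=1)
        G.add_edge("depot2", "customer", capacity=3, weight=1)

        flow = nx.min_cost_flow(G)
        print(flow)                              # per-edge flows
        print(nx.cost_of_flow(G, flow))          # 3 units at cost 3 + 1 unit at cost 4 = 13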

  18. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
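
    The approximate linear dependence (ALD) analysis used for sparsification admits a compact statement: a new sample joins the kernel dictionary only if its feature-space projection onto the current dictionary leaves a residual larger than a tolerance. The sketch below uses an RBF kernel and invented parameters; it shows the criterion only, not the KHDP/KDHP algorithms.

        import numpy as np

        def rbf(x, y, gamma=1.0):
            return np.exp(-gamma * np.sum((x - y) ** 2))

        def ald_sparsify(samples, nu=0.1):
            """Keep only samples that are not approximately linearly dependent
            (in feature space) on the dictionary collected so far."""
            dictionary = []
            for x in samples:
                if not dictionary:
                    dictionary.append(x)
                    continue
                K = np.array([[rbf(a, b) for b in dictionary] for a in dictionary])
                k_vec = np.array([rbf(a, x) for a in dictionary])
                # ALD residual: k(x, x) - k_vec^T K^{-1} k_vec
                coeffs = np.linalg.solve(K + 1e-9 * np.eye(len(dictionary)), k_vec)
                delta = rbf(x, x) - k_vec @ coeffs
                if delta > nu:
                    dictionary.append(x)
            return dictionary

        rng = np.random.default_rng(0)
        samples = rng.uniform(-1, 1, size=(200, 2))
        print("dictionary size:", len(ald_sparsify(samples)))  # far fewer than 200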

  19. Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach

    NASA Astrophysics Data System (ADS)

    Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun

    2015-02-01

    The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and the dynamic images vary rapidly. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on low rank matrix factorization of the unknown images with sparsity constraints enforced on both coefficients and bases. The proposed model is solved via an alternating iterative scheme in which each subproblem is convex and is handled with the efficient alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for this nonconvex problem relies upon the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). Finally, our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method compared to conventional methods.

  20. High effective inverse dynamics modelling for dual-arm robot

    NASA Astrophysics Data System (ADS)

    Shen, Haoyu; Liu, Yanli; Wu, Hongtao

    2018-05-01

    To deal with the problem of inverse dynamics modelling for a dual-arm robot, a recursive inverse dynamics modelling method based on the decoupled natural orthogonal complement is presented. In this model, decoupled natural orthogonal complement matrices are used to eliminate the constraint forces in the Newton-Euler kinematic equations, and screws are used to express the kinematic and dynamic variables. On this basis, a dedicated simulation program was developed with the symbolic software Mathematica and a simulation study was conducted on a dual-arm robot. Simulation results show that, compared with the recursive Newton-Euler equations, the proposed method saves an enormous amount of CPU time, and its results are correct and reasonable, verifying the reliability and efficiency of the method.

  1. The modern temperature-accelerated dynamics approach

    DOE PAGES

    Zamora, Richard J.; Uberuaga, Blas P.; Perez, Danny; ...

    2016-06-01

    Accelerated molecular dynamics (AMD) is a class of MD-based methods used to simulate atomistic systems in which the metastable state-to-state evolution is slow compared with thermal vibrations. Temperature-accelerated dynamics (TAD) is a particularly efficient AMD procedure in which the predicted evolution is hastened by elevating the temperature of the system and then recovering the correct state-to-state dynamics at the temperature of interest. TAD has been used to study various materials applications, often revealing surprising behavior beyond the reach of direct MD. This success has inspired several algorithmic performance enhancements, as well as the analysis of its mathematical framework. Recently, these enhancements have leveraged parallel programming techniques to enhance both the spatial and temporal scaling of the traditional approach. Here, we review the ongoing evolution of the modern TAD method and introduce the latest development: speculatively parallel TAD.

  2. ISCFD Nagoya 1989 - International Symposium on Computational Fluid Dynamics, 3rd, Nagoya, Japan, Aug. 28-31, 1989, Technical Papers

    NASA Astrophysics Data System (ADS)

    Recent advances in computational fluid dynamics are discussed in reviews and reports. Topics addressed include large-scale LESs for turbulent pipe and channel flows, numerical solutions of the Euler and Navier-Stokes equations on parallel computers, multigrid methods for steady high-Reynolds-number flow past sudden expansions, finite-volume methods on unstructured grids, supersonic wake flow on a blunt body, a grid-characteristic method for multidimensional gas dynamics, and CIC numerical simulation of a wave boundary layer. Consideration is given to vortex simulations of confined two-dimensional jets, supersonic viscous shear layers, spectral methods for compressible flows, shock-wave refraction at air/water interfaces, oscillatory flow in a two-dimensional collapsible channel, the growth of randomness in a spatially developing wake, and an efficient simplex algorithm for the finite-difference and dynamic linear-programming method in optimal potential control.

  3. Improving the efficiency of the Finite Temperature Density Matrix Renormalization Group method

    NASA Astrophysics Data System (ADS)

    Nocera, Alberto; Alvarez, Gonzalo

    I review the basics of the finite temperature DMRG method, and then show how its efficiency can be improved by working on reduced Hilbert spaces and by using canonical approaches. My talk explains the applicability of the ancilla DMRG method beyond spin systems to t-J and Hubbard models, and addresses the computation of static and dynamical observables at finite temperature. Finally, I discuss the features of and roadmap for our DMRG++ codebase. Work done at CNMS, sponsored by the SUF Division, BES, U.S. DOE under contract with UT-Battelle. Support by the early career research program, DSUF, BES, DOE.

  4. 3-D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.

    1997-12-31

    The aim of the work performed is to develop a 3D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two basically different approaches to the structure of massive parallel computations have been developed. The first approach uses a 3D data matrix decomposition reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shareable memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made in VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments have been carried out with different numbers of processors, up to 256, and the efficiency of parallelization has been evaluated as a function of processor number and parameters.

  5. Reentry trajectory optimization with waypoint and no-fly zone constraints using multiphase convex programming

    NASA Astrophysics Data System (ADS)

    Zhao, Dang-Jun; Song, Zheng-Yu

    2017-08-01

    This study proposes a multiphase convex programming approach for rapid reentry trajectory generation that satisfies path, waypoint and no-fly zone (NFZ) constraints on Common Aerial Vehicles (CAVs). Because the time when the vehicle reaches the waypoint is unknown, the trajectory of the vehicle is divided into several phases according to the prescribed waypoints, rendering a multiphase optimization problem with free final time. Due to the requirement of rapidity, the minimum flight time of each phase index is preferred over other indices in this research. The sequential linearization is used to approximate the nonlinear dynamics of the vehicle as well as the nonlinear concave path constraints on the heat rate, dynamic pressure, and normal load; meanwhile, the convexification techniques are proposed to relax the concave constraints on control variables. Next, the original multiphase optimization problem is reformulated as a standard second-order convex programming problem. Theoretical analysis is conducted to show that the original problem and the converted problem have the same solution. Numerical results are presented to demonstrate that the proposed approach is efficient and effective.

  6. The laboratory efficiencies initiative: partnership for building a sustainable national public health laboratory system.

    PubMed

    Ridderhof, John C; Moulton, Anthony D; Ned, Renée M; Nicholson, Janet K A; Chu, May C; Becker, Scott J; Blank, Eric C; Breckenridge, Karen J; Waddell, Victor; Brokopp, Charles

    2013-01-01

    Beginning in early 2011, the Centers for Disease Control and Prevention and the Association of Public Health Laboratories launched the Laboratory Efficiencies Initiative (LEI) to help public health laboratories (PHLs) and the nation's entire PHL system achieve and maintain sustainability to continue to conduct vital services in the face of unprecedented financial and other pressures. The LEI focuses on stimulating substantial gains in laboratories' operating efficiency and cost efficiency through the adoption of proven and promising management practices. In its first year, the LEI generated a strategic plan and a number of resources that PHL directors can use toward achieving LEI goals. Additionally, the first year saw the formation of a dynamic community of practitioners committed to implementing the LEI strategic plan in coordination with state and local public health executives, program officials, foundations, and other key partners.

  7. The Laboratory Efficiencies Initiative: Partnership for Building a Sustainable National Public Health Laboratory System

    PubMed Central

    Moulton, Anthony D.; Ned, Renée M.; Nicholson, Janet K.A.; Chu, May C.; Becker, Scott J.; Blank, Eric C.; Breckenridge, Karen J.; Waddell, Victor; Brokopp, Charles

    2013-01-01

    Beginning in early 2011, the Centers for Disease Control and Prevention and the Association of Public Health Laboratories launched the Laboratory Efficiencies Initiative (LEI) to help public health laboratories (PHLs) and the nation's entire PHL system achieve and maintain sustainability to continue to conduct vital services in the face of unprecedented financial and other pressures. The LEI focuses on stimulating substantial gains in laboratories' operating efficiency and cost efficiency through the adoption of proven and promising management practices. In its first year, the LEI generated a strategic plan and a number of resources that PHL directors can use toward achieving LEI goals. Additionally, the first year saw the formation of a dynamic community of practitioners committed to implementing the LEI strategic plan in coordination with state and local public health executives, program officials, foundations, and other key partners. PMID:23997300

  8. Application of Dynamic Analysis in Semi-Analytical Finite Element Method.

    PubMed

    Liu, Pengfei; Xing, Qinyan; Wang, Dawei; Oeser, Markus

    2017-08-30

    Analyses of dynamic responses are significantly important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional and only requires a two-dimensional FE discretization by incorporating Fourier series in the third dimension. In this paper, the algorithm used to apply dynamic analysis in SAFEM is introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and in the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is sufficiently high and its computational time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out and the prediction derived from SAFEM is consistent with the measurement. Therefore, SAFEM can reliably predict the dynamic response of asphalt pavement under moving loads, thus proving beneficial to road administrations in assessing the pavement's state.

  9. A Probabilistic Assessment of NASA Ultra-Efficient Engine Technologies for a Large Subsonic Transport

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Jones, Scott M.; Arcara, Philip C., Jr.; Haller, William J.

    2004-01-01

    NASA's Ultra Efficient Engine Technology (UEET) program features advanced aeropropulsion technologies that include highly loaded turbomachinery, an advanced low-NOx combustor, high-temperature materials, intelligent propulsion controls, aspirated seal technology, and an advanced computational fluid dynamics (CFD) design tool to help reduce airplane drag. A probabilistic system assessment is performed to evaluate the impact of these technologies on aircraft fuel burn and NOx reductions. A 300-passenger aircraft, with two 396-kN thrust (85,000-pound) engines is chosen for the study. The results show that a large subsonic aircraft equipped with the UEET technologies has a very high probability of meeting the UEET Program goals for fuel-burn (or equivalent CO2) reduction (15% from the baseline) and LTO (landing and takeoff) NOx reductions (70% relative to the 1996 International Civil Aviation Organization rule). These results are used to provide guidance for developing a robust UEET technology portfolio, and to prioritize the most promising technologies required to achieve UEET program goals for the fuel-burn and NOx reductions.

  10. A nonlinear Kalman filtering approach to embedded control of turbocharged diesel engines

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Siano, Pierluigi; Arsie, Ivan

    2014-10-01

    The development of efficient embedded control for turbocharged Diesel engines requires the programming of elaborate nonlinear control and filtering methods. To this end, in this paper nonlinear control for turbocharged Diesel engines is developed with the use of differential flatness theory and the Derivative-free nonlinear Kalman Filter. It is shown that the dynamic model of the turbocharged Diesel engine is differentially flat and admits dynamic feedback linearization. It is also shown that the dynamic model can be written in the linear Brunovsky canonical form, for which a state feedback controller can be easily designed. To compensate for modeling errors and external disturbances, the Derivative-free nonlinear Kalman Filter is used and redesigned as a disturbance observer. The filter consists of the Kalman Filter recursion on the linearized equivalent of the Diesel engine model and of an inverse transformation based on differential flatness theory, which makes it possible to obtain estimates of the state variables of the initial nonlinear model. Once the disturbance variables are identified, it is possible to compensate for them by including an additional control term in the feedback loop. The efficiency of the proposed control method is tested through simulation experiments.
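
    A minimal sketch of the disturbance-observer idea on a linearized model: the state is augmented with a slowly varying disturbance entering through the input matrix, and a standard Kalman filter recursion estimates both; all matrices and signals here are hypothetical placeholders, not the Diesel engine model of the paper.

```python
import numpy as np

def kf_disturbance_observer(A, B, C, Q, R, x0, P0, us, ys):
    """Kalman filter on the augmented state [x; d], treating d as a random-walk disturbance.
    Q, R, x0, P0 must be sized for the augmented state (n + m)."""
    n, m = A.shape[0], B.shape[1]
    # Augmented model: x+ = A x + B u + B d,  d+ = d (random walk)
    Aa = np.block([[A, B], [np.zeros((m, n)), np.eye(m)]])
    Ba = np.vstack([B, np.zeros((m, m))])
    Ca = np.hstack([C, np.zeros((C.shape[0], m))])
    x, P = x0, P0
    estimates = []
    for u, y in zip(us, ys):
        # Predict
        x = Aa @ x + Ba @ u
        P = Aa @ P @ Aa.T + Q
        # Update
        S = Ca @ P @ Ca.T + R
        K = P @ Ca.T @ np.linalg.inv(S)
        x = x + K @ (y - Ca @ x)
        P = (np.eye(n + m) - K @ Ca) @ P
        estimates.append(x.copy())
    return np.array(estimates)   # the last m columns hold the estimated disturbance d
```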

  11. VASP- VARIABLE DIMENSION AUTOMATIC SYNTHESIS PROGRAM

    NASA Technical Reports Server (NTRS)

    White, J. S.

    1994-01-01

    VASP is a variable dimension Fortran version of the Automatic Synthesis Program, ASP. The program is used to implement Kalman filtering and control theory. Basically, it consists of 31 subprograms for solving most modern control problems in linear, time-variant (or time-invariant) control systems. These subprograms include operations of matrix algebra, computation of the exponential of a matrix and its convolution integral, and the solution of the matrix Riccati equation. The user calls these subprograms by means of a FORTRAN main program, and so can easily obtain solutions to most general problems of extremization of a quadratic functional of the state of the linear dynamical system. Particularly, these problems include the synthesis of the Kalman filter gains and the optimal feedback gains for minimization of a quadratic performance index. VASP, as an outgrowth of the Automatic Synthesis Program, has the following improvements: more versatile programming language; more convenient input/output format; some new subprograms which consolidate certain groups of statements that are often repeated; and variable dimensioning. The pertinent difference between the two programs is that VASP has variable dimensioning and more efficient storage. The documentation for the VASP program contains a VASP dictionary and example problems. The dictionary contains a description of each subroutine and instructions on its use. The example problems include dynamic response, optimal control gain, solution of the sampled data matrix Riccati equation, matrix decomposition, and a pseudo-inverse of a matrix. This program is written in FORTRAN IV and has been implemented on the IBM 360. The VASP program was developed in 1971.
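
    As a modern illustration of the kind of problem the Riccati and optimal-gain subprograms address, the sketch below computes continuous-time LQR feedback gains from the algebraic Riccati equation using SciPy; the double-integrator plant and weights are arbitrary examples, not part of VASP.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Example double-integrator plant: x' = A x + B u, cost J = integral of (x'Qx + u'Ru) dt
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

P = solve_continuous_are(A, B, Q, R)      # solve A'P + PA - PB R^-1 B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)           # optimal feedback gain, u = -K x
print("LQR gain K =", K)
```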

  12. Library Service, A Statement of Policy and Proposed Action by the Regents of the University of the State of New York.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany.

    A plan of action by which library service and development in New York State can respond to rapid and dynamic changes in society is outlined. The program is based on the principles that any state resident has a right to convenient access to local libraries to meet his needs, that statewide library networks constitute the most efficient means to…

  13. Integrated multidisciplinary analysis tool IMAT users' guide

    NASA Technical Reports Server (NTRS)

    Meissner, Frances T. (Editor)

    1988-01-01

    The Integrated Multidisciplinary Analysis Tool (IMAT) is a computer software system developed at Langley Research Center. IMAT provides researchers and analysts with an efficient capability to analyze satellite control systems influenced by structural dynamics. Using a menu-driven executive system, IMAT leads the user through the program options. IMAT links a relational database manager to commercial and in-house structural and controls analysis codes. This paper describes the IMAT software system and how to use it.

  14. Navy Ford (CVN-78) Class Aircraft Carrier Program: Background and Issues for Congress

    DTIC Science & Technology

    2016-03-08

    limited. Yet, it is not too late to examine the carrier’s acquisition history to illustrate the dynamics of shipbuilding—and weapon system—acquisition...rates and the investments needed by the shipbuilder to achieve these efficiencies.31 Later in the hearing, Stackley testified that the history in...for all work packages in accordance with the integrated master schedule;  zero delinquent engineering and planning products;  resolution of

  15. Parallel cascade selection molecular dynamics for efficient conformational sampling and free energy calculation of proteins

    NASA Astrophysics Data System (ADS)

    Kitao, Akio; Harada, Ryuhei; Nishihara, Yasutaka; Tran, Duy Phuoc

    2016-12-01

    Parallel Cascade Selection Molecular Dynamics (PaCS-MD) was proposed as an efficient conformational sampling method to investigate conformational transition pathways of proteins. In PaCS-MD, cycles of (i) selection of initial structures for multiple independent MD simulations and (ii) conformational sampling by independent MD simulations are repeated until the sampling converges. The selection is conducted so that the protein conformation gradually approaches a target. The selection of snapshots is key to enhancing conformational changes by increasing the probability that rare events occur. Since the procedure of PaCS-MD is simple, no modification of MD programs is required; the selection of initial structures and the restart of the next cycle of MD simulations can be handled with relatively simple scripts, as sketched below. Trajectories generated by PaCS-MD were further analyzed by the Markov state model (MSM), which enables calculation of the free energy landscape. The combination of PaCS-MD and MSM is reported in this work.
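
    A schematic sketch of the PaCS-MD cycle described above; run_md, measure_progress, and the convergence threshold are hypothetical placeholders standing in for the MD engine and the reaction coordinate actually used.

```python
def pacs_md(initial_structure, run_md, measure_progress, n_replicas=10,
            n_cycles=50, snapshots_per_run=100):
    """Repeat: run independent short MDs, keep the snapshots closest to the target."""
    seeds = [initial_structure] * n_replicas
    all_trajectories = []
    for cycle in range(n_cycles):
        snapshots = []
        for seed in seeds:
            traj = run_md(seed, n_steps=snapshots_per_run)   # independent short MD run
            all_trajectories.append(traj)
            snapshots.extend(traj)
        # Rank snapshots by how close they are to the target conformation
        snapshots.sort(key=measure_progress)
        seeds = snapshots[:n_replicas]                       # restart the next cycle from the best ones
        if measure_progress(seeds[0]) < 1e-3:                # crude convergence check
            break
    return all_trajectories   # later analyzed, e.g., with a Markov state model
```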

  16. SEASAT economic assessment

    NASA Technical Reports Server (NTRS)

    Hicks, K.; Steele, W.

    1974-01-01

    The SEASAT program will provide scientific and economic benefits from global remote sensing of the ocean's dynamic and physical characteristics. The program as presently envisioned consists of: (1) SEASAT A; (2) SEASAT B; and (3) Operational SEASAT. This economic assessment was to identify, rationalize, quantify and validate the economic benefits evolving from SEASAT. These benefits will arise from improvements in the operating efficiency of systems that interface with the ocean. SEASAT data will be combined with data from other ocean and atmospheric sampling systems and then processed through analytical models of the interaction between oceans and atmosphere to yield accurate global measurements and global long range forecasts of ocean conditions and weather.

  17. Multivariate areal analysis of the impact and efficiency of the family planning programme in peninsular Malaysia.

    PubMed

    Tan Boon Ann

    1987-06-01

    The findings of the final phase of a 3-phase multivariate areal analysis study undertaken by the Economic and Social Commission for Asia and the Pacific (ESCAP) in 5 countries of the Asian and Pacific Region, including Malaysia, to examine the impact of family planning programs on fertility and reproduction are reported. The study used Malaysia's administrative district as the unit of analysis because the administration and implementation of socioeconomic development activities, as well as the family planning program, depend to a large extent on the decisions of local organizations at the district or state level. In phase 1, existing program and nonprogram data were analyzed using the multivariate technique to separate the impact of the family planning program net of other developmental efforts. The methodology in the 2nd phase consisted of in-depth investigation of selected areas in order to discern the dynamics and determinants of efficiency. The insights gained in phase 2 regarding dynamics of performance were used in phase 3 to refine the input variables of the phase 1 model. Thereafter, the phase 1 analysis was repeated. Insignificant variables and factors were trimmed in order to present a simplified model for studying the impact of environmental, socioeconomic development, family planning programs, and related factors on fertility. The inclusion of a set of family planning program and development variables in phase 3 increased the predictive power of the impact model. The explained variance for total fertility rate (TFR) of women under 30 years increased from 71% in phase 1 to 79%. It also raised the explained variance of the efficiency model from 34% to 70%. For women age 30 years and older, their TFR was affected directly by the ethnic composition variable (.76), secondary educational status (-.45), and modern nonagricultural occupation (.42), among others. When controlled for other socioeconomic development and environmental indicators, the nonagricultural activities had a positive direct effect on TFR. No direct effects were found to come from other socioeconomic development indicators, once these factors were controlled. The 3 factors that had direct effects on the fertility of women below age 30 were ethnic composition (.33), contraceptive prevalence (-.32), and secondary educational status (-.25). Other family planning program variables (contraceptive knowledge) and socioeconomic development indicators (exposure to modernization as measured by television ownership and health/living conditions as measured by infant mortality rate) affected fertility significantly but indirectly.

  18. Girsanov reweighting for path ensembles and Markov state models

    NASA Astrophysics Data System (ADS)

    Donati, L.; Hartmann, C.; Keller, B. G.

    2017-06-01

    The sensitivity of molecular dynamics on changes in the potential energy function plays an important role in understanding the dynamics and function of complex molecules. We present a method to obtain path ensemble averages of a perturbed dynamics from a set of paths generated by a reference dynamics. It is based on the concept of path probability measure and the Girsanov theorem, a result from stochastic analysis to estimate a change of measure of a path ensemble. Since Markov state models (MSMs) of the molecular dynamics can be formulated as a combined phase-space and path ensemble average, the method can be extended to reweight MSMs by combining it with a reweighting of the Boltzmann distribution. We demonstrate how to efficiently implement the Girsanov reweighting in a molecular dynamics simulation program by calculating parts of the reweighting factor "on the fly" during the simulation, and we benchmark the method on test systems ranging from a two-dimensional diffusion process and an artificial many-body system to alanine dipeptide and valine dipeptide in implicit and explicit water. The method can be used to study the sensitivity of molecular dynamics on external perturbations as well as to reweight trajectories generated by enhanced sampling schemes to the original dynamics.
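
    A minimal sketch of the on-the-fly reweighting idea for a one-dimensional overdamped Langevin process: a trajectory is generated with the reference drift, and the Girsanov likelihood ratio for a perturbed drift is accumulated from the same noise increments; the double-well potential, the constant tilt, and all parameters are illustrative, not the benchmark systems of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, sigma = 1e-3, 200_000, 1.0

force_ref  = lambda x: -4.0 * x * (x**2 - 1.0)   # reference drift (double-well potential)
force_pert = lambda x: force_ref(x) + 0.5        # perturbed drift: small constant tilt

x = -1.0
g = 0.0          # log of the Girsanov reweighting factor, accumulated "on the fly"
for _ in range(n_steps):
    xi = rng.standard_normal()
    dW = np.sqrt(dt) * xi
    da = force_pert(x) - force_ref(x)            # drift perturbation at the current point
    g += (da / sigma) * dW - 0.5 * (da / sigma) ** 2 * dt
    x += force_ref(x) * dt + sigma * dW          # propagate with the *reference* dynamics

w = np.exp(g)    # weight of this whole path under the perturbed dynamics
# Path-ensemble averages under the perturbed dynamics are then estimated as
#   <O>_pert ~ sum_paths w * O(path) / sum_paths w   over many such reference paths.
print("final position:", x, "path weight:", w)
```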

  19. Ultra-Low Power Dynamic Knob in Adaptive Compressed Sensing Towards Biosignal Dynamics.

    PubMed

    Wang, Aosen; Lin, Feng; Jin, Zhanpeng; Xu, Wenyao

    2016-06-01

    Compressed sensing (CS) is an emerging sampling paradigm in data acquisition. Its integrated analog-to-information structure can perform simultaneous data sensing and compression with low-complexity hardware. To date, most of the existing CS implementations have a fixed architectural setup, which lacks flexibility and adaptivity for efficient dynamic data sensing. In this paper, we propose a dynamic knob (DK) design to effectively reconfigure the CS architecture by recognizing the biosignals. Specifically, the dynamic knob design is a template-based structure that comprises a supervised learning module and a look-up table module. We model the DK performance in a closed analytic form and optimize the design via a dynamic programming formulation. We present the design on a 130 nm process, with a 0.058 mm² footprint and an energy consumption of 187.88 nJ per event. Furthermore, we benchmark the design performance using a publicly available dataset. Given the energy constraint in wireless sensing, the adaptive CS architecture can consistently improve the signal reconstruction quality by more than 70% compared with traditional CS. The experimental results indicate that the ultra-low power dynamic knob can provide effective adaptivity and improve signal quality in compressed sensing towards biosignal dynamics.

  20. Fluid Dynamics of Underwater Flight in Sea Butterflies: Insights from Computational Modeling

    NASA Astrophysics Data System (ADS)

    Zhou, Zhuoyu; Mittal, Rajat; Yen, Jeannette; Webster, Donald

    2014-11-01

    Sea butterflies such as Limacina helicina swim by flapping their wing-like parapodia in a stroke that exhibits clap-and-fling type kinematics as well as a strong interaction between the parapodia and the body of the animal at the end of the downstroke. We used numerical simulations based on videogrammetric data to examine the fluid dynamics and force generation associated with this swimming motion. The unsteady lift-generating mechanism of clap-and-fling results in a sawtooth trajectory with a characteristic "wobble" in pitch. We employ coupled flow-body-dynamics simulations to model the free-swimming motion of the organism and explore the efficiency of propulsion as well as factors, such as shell weight, that affect its sawtooth swimming trajectory. This work is funded by NSF Grant 1246317 from the Division of Polar Programs.

  1. Trajectory design strategies that incorporate invariant manifolds and swingby

    NASA Technical Reports Server (NTRS)

    Guzman, J. J.; Cooley, D. S.; Howell, K. C.; Folta, D. C.

    1998-01-01

    Libration point orbits serve as excellent platforms for scientific investigations involving the Sun as well as planetary environments. Trajectory design in support of such missions is increasingly challenging as more complex missions are envisioned in the next few decades. Software tools for trajectory design in this regime must be further developed to incorporate better understanding of the solution space and, thus, improve the efficiency and expand the capabilities of current approaches. Only recently applied to trajectory design, dynamical systems theory now offers new insights into the natural dynamics associated with the multi-body problem. The goal of this effort is the blending of analysis from dynamical systems theory with the well established NASA Goddard software program SWINGBY to enhance and expand the capabilities for mission design. Basic knowledge concerning the solution space is improved as well.

  2. A Worst-Case Approach for On-Line Flutter Prediction

    NASA Technical Reports Server (NTRS)

    Lind, Rick C.; Brenner, Martin J.

    1998-01-01

    Worst-case flutter margins may be computed for a linear model with respect to a set of uncertainty operators using the structured singular value. This paper considers an on-line implementation to compute these robust margins in a flight test program. Uncertainty descriptions are updated at test points to account for unmodeled time-varying dynamics of the airplane by ensuring the robust model is not invalidated by measured flight data. Robust margins computed with respect to this uncertainty remain conservative with respect to the changing dynamics throughout the flight. A simulation clearly demonstrates that this method can improve the efficiency of flight testing by accurately predicting the flutter margin, improving safety while reducing the necessary flight time.

  3. Optimal Energy Management for Microgrids

    NASA Astrophysics Data System (ADS)

    Zhao, Zheng

    The microgrid is a recent concept arising from the development of the smart grid. A microgrid is a low-voltage, small-scale network containing both distributed energy resources (DERs) and load demands. Clean energy is encouraged in a microgrid for economic and sustainability reasons. A microgrid can have two operational modes, the stand-alone mode and the grid-connected mode. In this research, day-ahead optimal energy management for a microgrid under both operational modes is studied. The objective of the optimization model is to minimize fuel cost, improve energy utilization efficiency and reduce gas emissions by scheduling the generation of the DERs for each hour of the next day. Considering the dynamic performance of the battery as the Energy Storage System (ESS), the model becomes a multi-objective, multi-parametric program with dynamic constraints, which is solved using the Advanced Dynamic Programming (ADP) method; a simplified sketch of such a day-ahead dynamic program is given below. Then, factors influencing battery life are studied and included in the model in order to obtain an optimal battery usage pattern and reduce the associated cost. Moreover, since wind and solar generation is a stochastic process driven by weather changes, the proposed optimization model is run hourly to track those changes. Simulation results are compared with the day-ahead energy management model. At last, conclusions are presented and future research in microgrid energy management is discussed.
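
    A minimal sketch of the kind of day-ahead dynamic program involved, with the battery state of charge as the only state variable; the prices, net load, efficiency, and single-objective cost are invented simplifications of the multi-objective model described above.

```python
import numpy as np

def day_ahead_battery_dp(prices, net_load, soc_levels, p_max, eta=0.95):
    """Minimize energy cost over 24 h by choosing the battery charge/discharge each hour."""
    T, S = len(prices), len(soc_levels)
    cost_to_go = np.zeros(S)                        # no terminal cost
    policy = np.zeros((T, S), dtype=int)
    for t in reversed(range(T)):
        new_cost = np.full(S, np.inf)
        for i, soc in enumerate(soc_levels):
            for j, soc_next in enumerate(soc_levels):
                p_batt = soc_next - soc             # >0 charging, <0 discharging (energy per hour)
                if abs(p_batt) > p_max:
                    continue
                grid = net_load[t] + (p_batt / eta if p_batt > 0 else p_batt * eta)
                stage = prices[t] * max(grid, 0.0)  # pay only for energy drawn from the grid
                if stage + cost_to_go[j] < new_cost[i]:
                    new_cost[i] = stage + cost_to_go[j]
                    policy[t, i] = j
        cost_to_go = new_cost
    return cost_to_go, policy

# Hypothetical example: 24 hourly prices, net load (load minus renewables), SOC grid in kWh
prices   = np.array([0.10] * 7 + [0.30] * 12 + [0.15] * 5)
net_load = np.array([2.0] * 24)
cost, policy = day_ahead_battery_dp(prices, net_load, soc_levels=np.linspace(0, 10, 21), p_max=3.0)
print("optimal cost starting from an empty battery:", cost[0])
```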

  4. Cascaded Kalman and particle filters for photogrammetry based gyroscope drift and robot attitude estimation.

    PubMed

    Sadaghzadeh N, Nargess; Poshtan, Javad; Wagner, Achim; Nordheimer, Eugen; Badreddin, Essameddin

    2014-03-01

    A gyroscope drift and robot attitude estimation method based on cascaded Kalman-particle filtering is proposed in this paper. Because MEMS gyroscope measurements are noisy and erroneous, the gyroscope is combined with a photogrammetry-based vision navigation scenario. Quaternion kinematics and robot angular velocity dynamics, with augmented drift dynamics of the gyroscope, are employed as the system state-space model. Nonlinear attitude kinematics, drift and robot angular movement dynamics, each in three dimensions, result in a nonlinear, high-dimensional system. To reduce the complexity, we propose a decomposition of the system into cascaded subsystems and then design separate cascaded observers. This design leads to easier tuning and more precise debugging from a programming perspective, and such a setting is well suited for a cooperative modular system with noticeably reduced computation time. Kalman Filtering (KF) is employed for the linear and Gaussian subsystem consisting of the angular velocity and drift dynamics together with the gyroscope measurement. The estimated angular velocity is utilized as the input of the second, Particle Filtering (PF) based observer in two scenarios of stochastic and deterministic inputs. Simulation results are provided to show the efficiency of the proposed method. Moreover, experimental results based on data from a 3D MEMS IMU and a 3D camera system are used to demonstrate the efficiency of the method. © 2013 ISA. Published by ISA. All rights reserved.

  5. Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Wang, Peng; Plimpton, Steven J

    The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines - 1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, 2) minimizing the amount of code that must be ported for efficient acceleration, 3) utilizing the available processing power from both many-core CPUs and accelerators, and 4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.

  6. Energy efficient motion control of the electric bus on route

    NASA Astrophysics Data System (ADS)

    Kotiev, G. O.; Butarovich, D. O.; Kositsyn, B. B.

    2018-02-01

    At present, reducing the energy costs of urban motor transport is an urgent problem. The article proposes a method of solving this problem by developing an energy-efficient law governing the motion of an electric bus along a city route. To solve the problem, an algorithm based on the dynamic programming method is developed. The proposed method makes it possible to take into account the constraints imposed on the phase coordinates and the control action, as well as on the travel time over the route. A model of rectilinear motion of the electric bus on a horizontal reference surface is considered, with assumptions that allow it to be adapted to the method. To form the control action in the equations of motion, an algorithm for changing the traction/braking torque on the wheels of the electric bus is considered, depending on the magnitude of the control parameter and the speed of motion. An optimal phase trajectory was obtained on a selected section of the road for a prototype electric bus. The article compares simulation results obtained with the optimal energy-efficient control law against results obtained by a test driver. The comparison confirmed the feasibility of the energy-efficient control law for urban electric transport.
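
    A minimal sketch of a dynamic-programming speed-profile optimization in the same spirit: the state is the speed at each road segment, and the stage cost is energy plus a time penalty standing in for the route-time constraint; the energy model, speed grid, and limits are hypothetical placeholders, not the electric bus model of the article.

```python
import numpy as np

def speed_profile_dp(n_segments, seg_len, speeds, energy_per_seg, lam=0.2):
    """Backward DP over (segment, speed): choose the next speed to minimize
    energy plus a time penalty lam * (segment travel time)."""
    S = len(speeds)
    cost = np.zeros(S)                           # cost-to-go beyond the last segment
    policy = np.zeros((n_segments, S), dtype=int)
    for k in reversed(range(n_segments)):
        new_cost = np.full(S, np.inf)
        for i, v in enumerate(speeds):
            for j, v_next in enumerate(speeds):
                if abs(v_next - v) > 2.0:        # crude limit on the speed change between segments
                    continue
                v_avg = max(0.5 * (v + v_next), 0.1)
                stage = energy_per_seg(v, v_next, seg_len) + lam * seg_len / v_avg
                if stage + cost[j] < new_cost[i]:
                    new_cost[i] = stage + cost[j]
                    policy[k, i] = j
        cost = new_cost
    return cost, policy

# Hypothetical energy model: drag proportional to v^2 plus any positive kinetic-energy change
energy = lambda v, v_next, L: 0.05 * (0.5 * (v + v_next)) ** 2 * L + max(0.5 * (v_next**2 - v**2), 0.0)
cost, policy = speed_profile_dp(n_segments=20, seg_len=100.0,
                                speeds=np.linspace(0.0, 15.0, 16), energy_per_seg=energy)
print("cost-to-go from standstill at the route start:", cost[0])
```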

  7. A method for solution of the Euler-Bernoulli beam equation in flexible-link robotic systems

    NASA Technical Reports Server (NTRS)

    Tzes, Anthony P.; Yurkovich, Stephen; Langer, F. Dieter

    1989-01-01

    An efficient numerical method for solving the partial differential equation (PDE) governing the flexible manipulator control dynamics is presented. A finite-dimensional model of the equation is obtained through discretization in both time and space coordinates by using finite-difference approximations to the PDE. An expert program written in the Macsyma symbolic language is utilized in order to embed the boundary conditions into the program, accounting for a mass carried at the tip of the manipulator. The advantages of the proposed algorithm are many, including the ability to (1) include any distributed actuation term in the partial differential equation, (2) provide distributed sensing of the beam displacement, (3) easily modify the boundary conditions through an expert program, and (4) modify the structure for running under a multiprocessor environment.
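
    For reference, a minimal sketch of an explicit finite-difference time-stepping of the Euler-Bernoulli beam equation rho*A*w_tt + E*I*w_xxxx = f(x, t) with pinned-pinned ends; the properties, load, and boundary conditions are illustrative placeholders rather than the flexible-link model with a tip mass and symbolic boundary-condition handling described above.

```python
import numpy as np

# Illustrative beam properties and discretization (not the flexible-link parameters above)
E, I, rho, A = 2.0e11, 1.0e-8, 7800.0, 1.0e-4
L, nx = 1.0, 101
dx = L / (nx - 1)
dt = 0.2 * dx**2 * np.sqrt(rho * A / (E * I))   # well below the explicit stability limit
nt = 2000

f = np.zeros(nx)
f[nx // 2] = 100.0                              # illustrative load concentrated at midspan
w_prev = np.zeros(nx)                           # displacement at t - dt
w = np.zeros(nx)                                # displacement at t

for _ in range(nt):
    # Ghost points enforce pinned ends (w = 0, w_xx = 0): mirror with a sign flip
    we = np.concatenate(([-w[1]], w, [-w[-2]]))
    w_xxxx = (we[:-4] - 4*we[1:-3] + 6*we[2:-2] - 4*we[3:-1] + we[4:]) / dx**4  # interior nodes
    w_new = w.copy()
    w_new[1:-1] = 2*w[1:-1] - w_prev[1:-1] + dt**2 * (f[1:-1] - E*I*w_xxxx) / (rho*A)
    w_new[0] = w_new[-1] = 0.0                  # pinned: zero displacement at both ends
    w_prev, w = w, w_new

print("midspan deflection:", w[nx // 2])
```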

  8. SARANA: language, compiler and run-time system support for spatially aware and resource-aware mobile computing.

    PubMed

    Hari, Pradip; Ko, Kevin; Koukoumidis, Emmanouil; Kremer, Ulrich; Martonosi, Margaret; Ottoni, Desiree; Peh, Li-Shiuan; Zhang, Pei

    2008-10-28

    Increasingly, spatial awareness plays a central role in many distributed and mobile computing applications. Spatially aware applications rely on information about the geographical position of compute devices and their supported services in order to support novel functionality. While many spatial application drivers already exist in mobile and distributed computing, very little systems research has explored how best to program these applications, to express their spatial and temporal constraints, and to allow efficient implementations on highly dynamic real-world platforms. This paper proposes the SARANA system architecture, which includes language and run-time system support for spatially aware and resource-aware applications. SARANA allows users to express spatial regions of interest, as well as trade-offs between quality of result (QoR), latency and cost. The goal is to produce applications that use resources efficiently and that can be run on diverse resource-constrained platforms ranging from laptops to personal digital assistants and to smart phones. SARANA's run-time system manages QoR and cost trade-offs dynamically by tracking resource availability and locations, brokering usage/pricing agreements and migrating programs to nodes accordingly. A resource cost model permeates the SARANA system layers, permitting users to express their resource needs and QoR expectations in units that make sense to them. Although we are still early in the system development, initial versions have been demonstrated on a nine-node system prototype.

  9. Energy efficient sensor scheduling with a mobile sink node for the target tracking application.

    PubMed

    Maheswararajah, Suhinthan; Halgamuge, Saman; Premaratne, Malin

    2009-01-01

    Measurement losses adversely affect the performance of target tracking. The sensor network's life span depends on how efficiently the sensor nodes consume energy. In this paper, we focus on minimizing the total energy consumed by the sensor nodes whilst avoiding measurement losses. Since transmitting data over a long distance consumes a significant amount of energy, a mobile sink node collects the measurements and transmits them to the base station. We assume that the default transmission range of the activated sensor node is limited and that it can be increased to the maximum range only if the mobile sink node is outside the default transmission range. Moreover, the active sensor node can be changed after a certain time period. The problem is to select an optimal sensor sequence which minimizes the total energy consumed by the sensor nodes. In this paper, we consider two different problems depending on the mobile sink node's path. First, we assume that the mobile sink node's position is known for the entire time horizon and use the dynamic programming technique to solve the problem. Second, the position of the sink node varies over time according to a known Markov chain, and the problem is solved by stochastic dynamic programming. We also present sub-optimal methods to solve our problem. A numerical example is presented in order to discuss the proposed methods' performance.

  10. Energy Efficient Sensor Scheduling with a Mobile Sink Node for the Target Tracking Application

    PubMed Central

    Maheswararajah, Suhinthan; Halgamuge, Saman; Premaratne, Malin

    2009-01-01

    Measurement losses adversely affect the performance of target tracking. The sensor network's life span depends on how efficiently the sensor nodes consume energy. In this paper, we focus on minimizing the total energy consumed by the sensor nodes whilst avoiding measurement losses. Since transmitting data over a long distance consumes a significant amount of energy, a mobile sink node collects the measurements and transmits them to the base station. We assume that the default transmission range of the activated sensor node is limited and that it can be increased to the maximum range only if the mobile sink node is outside the default transmission range. Moreover, the active sensor node can be changed after a certain time period. The problem is to select an optimal sensor sequence which minimizes the total energy consumed by the sensor nodes. In this paper, we consider two different problems depending on the mobile sink node's path. First, we assume that the mobile sink node's position is known for the entire time horizon and use the dynamic programming technique to solve the problem. Second, the position of the sink node varies over time according to a known Markov chain, and the problem is solved by stochastic dynamic programming. We also present sub-optimal methods to solve our problem. A numerical example is presented in order to discuss the proposed methods' performance. PMID:22399934

  11. Estimating Arrhenius parameters using temperature programmed molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imandi, Venkataramana; Chatterjee, Abhijit, E-mail: abhijit@che.iitb.ac.in

    2016-07-21

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever the Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
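
    A minimal sketch of the maximum-likelihood idea, assuming waiting times at each sampled temperature are exponentially distributed with an Arrhenius rate k(T) = nu*exp(-Ea/(kB*T)); the synthetic data and optimizer setup are illustrative only, not the temperature programmed molecular dynamics workflow itself.

```python
import numpy as np
from scipy.optimize import minimize

kB = 8.617e-5  # Boltzmann constant in eV/K

def neg_log_likelihood(params, temps, waits):
    """Exponential waiting times with Arrhenius rate k(T) = nu * exp(-Ea / (kB T))."""
    log_nu, Ea = params
    nll = 0.0
    for T, tau in zip(temps, waits):
        k = np.exp(log_nu - Ea / (kB * T))
        nll -= np.sum(np.log(k) - k * tau)     # log of the exponential density, summed over events
    return nll

# Synthetic waiting times at a few temperatures (hypothetical true nu = 1e12 /s, Ea = 0.5 eV)
rng = np.random.default_rng(1)
temps = [600.0, 700.0, 800.0]
waits = [rng.exponential(1.0 / (1e12 * np.exp(-0.5 / (kB * T))), size=500) for T in temps]

res = minimize(neg_log_likelihood, x0=[np.log(1e10), 0.3], args=(temps, waits), method="Nelder-Mead")
print("estimated nu =", np.exp(res.x[0]), " estimated Ea (eV) =", res.x[1])
```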

  12. Approximate dynamic programming approaches for appointment scheduling with patient preferences.

    PubMed

    Li, Xin; Wang, Jin; Fung, Richard Y K

    2018-04-01

    During the appointment booking process in out-patient departments, the level of patient satisfaction can be affected by whether or not their preferences can be met, including the choice of physicians and preferred time slot. In addition, because the appointments are sequential, considering future possible requests is also necessary for a successful appointment system. This paper proposes a Markov decision process model for optimizing the scheduling of sequential appointments with patient preferences. In contrast to existing models, the evaluation of a booking decision in this model focuses on the extent to which preferences are satisfied. Characteristics of the model are analysed to develop a system for formulating booking policies. Based on these characteristics, two types of approximate dynamic programming algorithms are developed to avoid the curse of dimensionality. Experimental results suggest directions for further fine-tuning of the model, as well as improving the efficiency of the two proposed algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Penalty Dynamic Programming Algorithm for Dim Targets Detection in Sensor Systems

    PubMed Central

    Huang, Dayu; Xue, Anke; Guo, Yunfei

    2012-01-01

    In order to detect and track multiple maneuvering dim targets in sensor systems, an improved dynamic programming track-before-detect algorithm (DP-TBD) called penalty DP-TBD (PDP-TBD) is proposed. The performance of tracking techniques is used as feedback to the detection part. The feedback is constructed by a penalty term in the merit function, and the penalty term is a function of the possible target state estimate, which can be obtained by the tracking methods. With this feedback, the algorithm combines traditional tracking techniques with DP-TBD and it can be applied to simultaneously detect and track maneuvering dim targets. Meanwhile, a reasonable constraint that a sensor measurement can originate from one target or clutter is proposed to minimize track separation. Thus, the algorithm can be used in multi-target situations with unknown target numbers. The efficiency and advantages of PDP-TBD compared with two existing methods are demonstrated by several simulations. PMID:22666074
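
    A minimal sketch of the dynamic-programming track-before-detect recursion that PDP-TBD builds on: the merit function accumulates measurement amplitude along the best admissible state sequence, and the penalty term is where tracking feedback would enter; the one-dimensional grid, transition window, and data are hypothetical simplifications of the algorithm in the paper.

```python
import numpy as np

def dp_tbd(frames, max_move=1, penalty=0.0):
    """frames: (K, N) array of per-cell measurement amplitudes over K scans on a 1-D state grid.
    Returns the best accumulated merit and the recovered cell sequence."""
    K, N = frames.shape
    merit = frames[0].copy()
    backptr = np.zeros((K, N), dtype=int)
    for k in range(1, K):
        new_merit = np.full(N, -np.inf)
        for i in range(N):
            lo, hi = max(0, i - max_move), min(N, i + max_move + 1)
            j = lo + int(np.argmax(merit[lo:hi]))            # best admissible predecessor cell
            new_merit[i] = frames[k, i] + merit[j] - penalty # penalty term hooks in tracking feedback
            backptr[k, i] = j
        merit = new_merit
    # Backtrack the best track
    track = [int(np.argmax(merit))]
    for k in range(K - 1, 0, -1):
        track.append(backptr[k, track[-1]])
    return merit.max(), track[::-1]

# Hypothetical example: a dim target drifting one cell per scan in Rayleigh noise
rng = np.random.default_rng(0)
frames = rng.rayleigh(1.0, size=(10, 50))
for k in range(10):
    frames[k, 20 + k] += 2.0
print(dp_tbd(frames, max_move=1))
```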

  14. Performance of a parallel code for the Euler equations on hypercube computers

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Chan, Tony F.; Jesperson, Dennis C.; Tuminaro, Raymond S.

    1990-01-01

    The performance of hypercubes was evaluated on a computational fluid dynamics problem, and the parallel-environment issues that must be addressed were considered, such as algorithm changes, implementation choices, programming effort, and programming environment. The evaluation focuses on a widely used fluid dynamics code, FLO52, which solves the two-dimensional steady Euler equations describing flow around an airfoil. The code development experience is described, including interacting with the operating system, utilizing the message-passing communication system, and code modifications necessary to increase parallel efficiency. Results from two hypercube parallel computers (a 16-node iPSC/2 and a 512-node NCUBE/ten) are discussed and compared. In addition, a mathematical model of the execution time was developed as a function of several machine and algorithm parameters. This model accurately predicts the actual run times obtained and is used to explore the performance of the code in interesting yet physically realizable regions of the parameter space. Based on this model, predictions about future hypercubes are made.

  15. Microgravity fluid management requirements of advanced solar dynamic power systems

    NASA Technical Reports Server (NTRS)

    Migra, Robert P.

    1987-01-01

    The advanced solar dynamic system (ASDS) program is aimed at developing the technology for highly efficient, lightweight space power systems. The approach is to evaluate Stirling, Brayton and liquid metal Rankine power conversion systems (PCS) over the temperature range of 1025 to 1400K, identify the critical technologies and develop these technologies. Microgravity fluid management technology is required in several areas of this program, namely, thermal energy storage (TES), heat pipe applications and liquid metal, two phase flow Rankine systems. Utilization of the heat of fusion of phase change materials offers potential for smaller, lighter TES systems. The candidate TES materials exhibit large volume change with the phase change. The heat pipe is an energy dense heat transfer device. A high temperature application may transfer heat from the solar receiver to the PCS working fluid and/or TES. A low temperature application may transfer waste heat from the PCS to the radiator. The liquid metal Rankine PCS requires management of the boiling/condensing process typical of two phase flow systems.

  16. Integration of Treatment Innovation Planning and Implementation: Strategic Process Models and Organizational Challenges

    PubMed Central

    Lehman, Wayne E. K.; Simpson, D. Dwayne; Knight, Danica K.; Flynn, Patrick M.

    2015-01-01

    Sustained and effective use of evidence-based practices in substance abuse treatment services faces both clinical and contextual challenges. Implementation approaches are reviewed that rely on variations of plan-do-study-act (PDSA) cycles, but most emphasize conceptual identification of core components for system change strategies. A 2-phase procedural approach is therefore presented based on the integration of TCU models and related resources for improving treatment process and program change. Phase 1 focuses on the dynamics of clinical services, including stages of client recovery (cross-linked with targeted assessments and interventions), as the foundations for identifying and planning appropriate innovations to improve efficiency and effectiveness. Phase 2 shifts to the operational and organizational dynamics involved in implementing and sustaining innovations (including the stages of training, adoption, implementation, and practice). A comprehensive system of TCU assessments and interventions for client and program-level needs and functioning are summarized as well, with descriptions and guidelines for applications in practical settings. PMID:21443294

  17. Dynamic Interaction of Long Suspension Bridges with Running Trains

    NASA Astrophysics Data System (ADS)

    XIA, H.; XU, Y. L.; CHAN, T. H. T.

    2000-10-01

    This paper presents an investigation of the dynamic interaction of long suspension bridges with running trains. A three-dimensional finite element model is used to represent a long suspension bridge. Each 4-axle vehicle in a train is modelled by a 27-degrees-of-freedom dynamic system. The dynamic interaction between the bridge and train is realized through the contact forces between the wheels and track. By applying a mode superposition technique to the bridge only and taking the measured track irregularities as known quantities, the number of degrees of freedom (d.o.f.) of the bridge-train system is significantly reduced and the coupled equations of motion are efficiently solved. The proposed formulation and the associated computer program are then applied to a real long suspension bridge carrying a railway within the bridge deck. The dynamic response of the bridge-train system and the derail and offload factors related to the running safety of the train are computed. The results show that the formulation presented in this paper can predict the dynamic behaviors of both bridge and train well, with reasonable computational effort. Dynamic interaction between the long suspension bridge and train is not significant.
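
    A minimal sketch of the mode-superposition reduction applied to the bridge substructure: the generalized eigenproblem of the mass and stiffness matrices yields a truncated, mass-normalized modal basis onto which the coupled equations can be projected; the matrices below are simple stand-ins, not a suspension bridge finite element model.

```python
import numpy as np
from scipy.linalg import eigh

def modal_reduction(M, K, n_modes):
    """Solve K phi = w^2 M phi and keep the lowest n_modes mass-normalized mode shapes."""
    w2, Phi = eigh(K, M)                 # generalized symmetric eigenproblem, ascending eigenvalues
    # With mass-normalized modes, Phi.T @ M @ Phi = I and Phi.T @ K @ Phi = diag(w2)
    return w2[:n_modes], Phi[:, :n_modes]

# Hypothetical lumped-mass chain standing in for the bridge FE model
rng = np.random.default_rng(0)
n = 200
M = np.diag(rng.uniform(1.0, 2.0, n))
K = 1e4 * (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
w2, Phi = modal_reduction(M, K, n_modes=10)
print("lowest natural frequencies (rad/s):", np.sqrt(w2))
```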

  18. Dynamic programming algorithms for biological sequence comparison.

    PubMed

    Pearson, W R; Miller, W

    1992-01-01

    Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N²)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N²) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
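
    A minimal sketch of the score-only global alignment recurrence with a gap penalty of the form g = rk, illustrating the O(N²) time, O(N) memory behaviour discussed above; the scoring parameters are arbitrary and the sketch returns only the optimal score, not the alignment.

```python
def global_alignment_score(a, b, match=5, mismatch=-4, r=-3):
    """Needleman-Wunsch style DP: O(len(a)*len(b)) time, O(len(b)) memory,
    with a linear gap penalty of r per gapped residue."""
    prev = [j * r for j in range(len(b) + 1)]      # first row: b aligned entirely against gaps
    for i in range(1, len(a) + 1):
        curr = [i * r] + [0] * len(b)
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(prev[j - 1] + sub,       # align a[i-1] with b[j-1]
                          prev[j] + r,             # gap in b
                          curr[j - 1] + r)         # gap in a
        prev = curr
    return prev[-1]

print(global_alignment_score("GATTACA", "GCATGCT"))
```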

  19. Enhanced surveillance strategies for detecting and monitoring chronic wasting disease in free-ranging cervids

    USGS Publications Warehouse

    Walsh, Daniel P.

    2012-01-01

    The purpose of this document is to provide wildlife management agencies with the foundation upon which they can build scientifically rigorous and cost-effective surveillance and monitoring programs for chronic wasting disease (CWD) or refine their existing programs. The first chapter provides an overview of potential demographic and spatial risk factors of susceptible wildlife populations that may be exploited for CWD surveillance and monitoring. The information contained in this chapter explores historic as well as recent developments in our understanding of CWD disease dynamics. It also contains many literature references for readers who may desire a more thorough review of the topics or CWD in general. The second chapter examines methods for enhancing efforts to detect CWD on the landscape where it is not presently known to exist and focuses on the efficiency and cost-effectiveness of the surveillance program. Specifically, it describes the means of exploiting current knowledge of demographic and spatial risk factors, as described in the first chapter, through a two-stage surveillance scheme that utilizes traditional design-based sampling approaches and novel statistical methods to incorporate information about the attributes of the landscape, environment, populations and individual animals into CWD surveillance activities. By accounting for these attributes, efficiencies can be gained and cost-savings can be realized. The final chapter is unique in relation to the first two chapters. Its focus is on designing programs to monitor CWD once it is discovered within a jurisdiction. Unlike the prior chapters that are more detailed or prescriptive, this chapter by design is considerably more general because providing comprehensive direction for creating monitoring programs for jurisdictions without consideration of their monitoring goals, sociopolitical constraints, or their biological systems, is not possible. Therefore, the authors draw upon their collective experiences implementing disease-monitoring programs to present the important questions to consider, potential tools, and various strategies for those wildlife management agencies endeavoring to create or maintain a CWD monitoring program. Its intent is to aid readers in creating efficient and cost-effective monitoring programs, while avoiding potential pitfalls. It is hoped that these three chapters will be useful tools for wildlife managers struggling to implement efficient and effective CWD disease management programs.

  20. Wheat forecast economics effect study. [value of improved information on crop inventories, production, imports and exports

    NASA Technical Reports Server (NTRS)

    Mehra, R. K.; Rouhani, R.; Jones, S.; Schick, I.

    1980-01-01

    A model to assess the value of improved information regarding the inventories, production, exports, and imports of crops on a worldwide basis is discussed. A previously proposed model is interpreted in a stochastic control setting and the underlying assumptions of the model are revealed. In solving the stochastic optimization problem, the Markov programming approach is much more powerful and exact than the dynamic programming-simulation approach of the original model. The convergence of a dual-variable Markov programming algorithm is shown to be fast and efficient. A computer program for the general multicountry, multiperiod model is developed. As an example, the case of one country and two periods is treated and the results are presented in detail. A comparison with the original model results reveals certain interesting aspects of the algorithms and the dependence of the value of information on the incremental cost function.

  1. A Hybrid Genetic Programming Algorithm for Automated Design of Dispatching Rules.

    PubMed

    Nguyen, Su; Mei, Yi; Xue, Bing; Zhang, Mengjie

    2018-06-04

    Designing effective dispatching rules for production systems is a difficult and time-consuming task if it is done manually. In the last decade, the growth of computing power, advanced machine learning, and optimisation techniques has made the automated design of dispatching rules possible, and automatically discovered rules are competitive with or outperform existing rules developed by researchers. Genetic programming is one of the most popular approaches to discovering dispatching rules in the literature, especially for complex production systems. However, the large heuristic search space may restrict genetic programming from finding near-optimal dispatching rules. This paper develops a new hybrid genetic programming algorithm for dynamic job shop scheduling based on a new representation, a new local search heuristic, and efficient fitness evaluators. Experiments show that the new method is effective regarding the quality of evolved rules. Moreover, evolved rules are also significantly smaller and contain more relevant attributes.

  2. Parallel Computing Strategies for Irregular Algorithms

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  3. A supplier selection and order allocation problem with stochastic demands

    NASA Astrophysics Data System (ADS)

    Zhou, Yun; Zhao, Lei; Zhao, Xiaobo; Jiang, Jianhua

    2011-08-01

    We consider a system comprising a retailer and a set of candidate suppliers that operates within a finite planning horizon of multiple periods. The retailer replenishes its inventory from the suppliers and satisfies stochastic customer demands. At the beginning of each period, the retailer makes decisions on the replenishment quantity, supplier selection and order allocation among the selected suppliers. An optimisation problem is formulated to minimise the total expected system cost, which includes an outer level stochastic dynamic program for the optimal replenishment quantity and an inner level integer program for supplier selection and order allocation with a given replenishment quantity. For the inner level subproblem, we develop a polynomial algorithm to obtain optimal decisions. For the outer level subproblem, we propose an efficient heuristic for the system with integer-valued inventory, based on the structural properties of the system with real-valued inventory. We investigate the efficiency of the proposed solution approach, as well as the impact of parameters on the optimal replenishment decision with numerical experiments.

  4. Time delay and integration array (TDI) using charge transfer device technology. Phase 2, volume 1: Technical

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The 20x9 TDI array was developed to meet the LANDSAT Thematic Mapper requirements. This array is based upon a self-aligned, transparent gate, buried channel process. The process features: (1) buried channel, four phase, overlapping gate CCD's for high transfer efficiency without fat zero; (2) self-aligned transistors to minimize clock feedthrough and parasitic capacitance; and (3) a transparent tin oxide electrode for high quantum efficiency with front surface irradiation. The requirements placed on the array and the performance achieved are summarized. These data are the result of flat field measurements only; no imaging or dynamic target measurements were made during this program. Measurements were performed with two different test stands. The bench test equipment fabricated for this program operated at the 8 microsecond line time and employed simple sampling of the gated MOSFET output video signal. The second stand employed Correlated Double Sampling (CDS) and operated at a 79.2 microsecond line time.

  5. Evolution of Flexible Multibody Dynamics for Simulation Applications Supporting Human Spaceflight

    NASA Technical Reports Server (NTRS)

    Huynh, An; Brain, Thomas A.; MacLean, John R.; Quiocho, Leslie J.

    2016-01-01

    During the course of transition from the Space Shuttle and International Space Station programs to the Orion and Journey to Mars exploration programs, a generic flexible multibody dynamics formulation and associated software implementation has evolved to meet an ever changing set of requirements at the NASA Johnson Space Center (JSC). Challenging problems related to large transitional topologies and robotic free-flyer vehicle capture/ release, contact dynamics, and exploration missions concept evaluation through simulation (e.g., asteroid surface operations) have driven this continued development. Coupled with this need is the requirement to oftentimes support human spaceflight operations in real-time. Moreover, it has been desirable to allow even more rapid prototyping of on-orbit manipulator and spacecraft systems, to support less complex infrastructure software for massively integrated simulations, to yield further computational efficiencies, and to take advantage of recent advances and availability of multi-core computing platforms. Since engineering analysis, procedures development, and crew familiarity/training for human spaceflight is fundamental to JSC's charter, there is also a strong desire to share and reuse models in both the non-realtime and real-time domains, with the goal of retaining as much multibody dynamics fidelity as possible. Three specific enhancements are reviewed here: (1) linked list organization to address large transitional topologies, (2) body level model order reduction, and (3) parallel formulation/implementation. This paper provides a detailed overview of these primary updates to JSC's flexible multibody dynamics algorithms as well as a comparison of numerical results to previous formulations and associated software.

  6. Emergent mechanics, quantum and un-quantum

    NASA Astrophysics Data System (ADS)

    Ralston, John P.

    2013-10-01

    There is great interest in quantum mechanics as an "emergent" phenomenon. The program holds that nonobvious patterns and laws can emerge from complicated physical systems operating by more fundamental rules. We find a new approach where quantum mechanics itself should be viewed as an information management tool not derived from physics nor depending on physics. The main accomplishment of quantum-style theory comes in expanding the notion of probability. We construct a map from macroscopic information as "data" to quantum probability. The map allows a hidden variable description for quantum states, and efficient use of the helpful tools of quantum mechanics in unlimited circumstances. Quantum dynamics via the time-dependent Schroedinger equation or operator methods actually represents a restricted class of classical Hamiltonian or Lagrangian dynamics, albeit with different numbers of degrees of freedom. We show that under wide circumstances such dynamics emerges from structureless dynamical systems. The uses of the quantum information management tools are illustrated by numerical experiments and practical applications.

  7. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks.

    PubMed

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-07-09

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Because WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent work on dynamic packet size control in WSNs enhances the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop, where the packet size can vary according to the link quality of that hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication in environments where the packet size changes at each hop, with smaller energy consumption.
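
    The record does not spell out the proposed schemes, so the sketch below is only a generic illustration (in Python, with hypothetical names and parameters) of the underlying idea: if authentication tags are computed over fixed-size blocks of the code image rather than over whole packets, any hop can re-packetize the blocks to match its link quality without invalidating the tags.

    # Illustrative sketch only: block-level MACs that survive hop-by-hop re-packetization.
    # Names and parameters are hypothetical; this is not the paper's actual scheme.
    import hmac, hashlib

    BLOCK = 64  # authenticate fixed-size blocks, independent of packet size

    def make_blocks(image: bytes, key: bytes):
        """Split a code image into fixed blocks, each carrying its own MAC."""
        blocks = []
        for i in range(0, len(image), BLOCK):
            chunk = image[i:i + BLOCK]
            tag = hmac.new(key, i.to_bytes(4, "big") + chunk, hashlib.sha256).digest()[:8]
            blocks.append((i, chunk, tag))
        return blocks

    def packetize(blocks, payload_size):
        """Group whole blocks into packets sized to the current link quality."""
        per_packet = max(1, payload_size // (BLOCK + 12))
        return [blocks[i:i + per_packet] for i in range(0, len(blocks), per_packet)]

    def verify(packets, key):
        """Receiver checks each block MAC regardless of how packets were sized upstream."""
        image = bytearray()
        for pkt in packets:
            for offset, chunk, tag in pkt:
                expect = hmac.new(key, offset.to_bytes(4, "big") + chunk, hashlib.sha256).digest()[:8]
                if not hmac.compare_digest(tag, expect):
                    return None
                image[offset:offset + len(chunk)] = chunk
        return bytes(image)

    key = b"shared-network-key"
    blocks = make_blocks(b"new firmware image" * 20, key)
    # The same blocks survive re-packetization at two different packet sizes.
    assert verify(packetize(blocks, 96), key) == verify(packetize(blocks, 256), key)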

  8. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks †

    PubMed Central

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-01-01

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Because WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent work on dynamic packet size control in WSNs enhances the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop, where the packet size can vary according to the link quality of that hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication in environments where the packet size changes at each hop, with smaller energy consumption. PMID:27409616

  9. Four dimensional imaging of E. coli nucleoid organization and dynamics in living cells

    PubMed Central

    Fisher, J. K.; Bourniquel, A.; Witz, G.; Weiner, B.; Prentiss, M.; Kleckner, N.

    2013-01-01

    Visualization of living E. coli nucleoids, defined by HupA-mCherry, reveals a discrete, dynamic helical ellipsoid. Three basic features emerge. (i) Nucleoid density efficiently coalesces into longitudinal bundles, giving a stiff, low DNA density ellipsoid. (ii) This ellipsoid is radially confined within the cell cylinder. Radial confinement gives helical shape and drives and directs global nucleoid dynamics, including sister segregation. (iii) Longitudinal density waves flux back and forth along the nucleoid, with 5–10% of density shifting within 5 s, enhancing internal nucleoid mobility. Furthermore, sisters separate end-to-end in sequential discontinuous pulses, each elongating the nucleoid by 5–15%. Pulses occur at 20 min intervals, at defined cell cycle times. This progression is mediated by sequential installation and release of programmed tethers, implying cyclic accumulation and relief of intra-nucleoid mechanical stress. These effects could comprise a chromosome-based cell cycle engine. Overall, the presented results suggest a general conceptual framework for bacterial nucleoid morphogenesis and dynamics. PMID:23623305

  10. Optimization algorithms for large-scale multireservoir hydropower systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiew, K.L.

    Five optimization algorithms were rigorously evaluated based on applications to a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT) and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria which include accuracy of results, rate of convergence, smoothness of resulting storage and release trajectories, computer time and memory requirements, robustness and other pertinent secondary considerations. Results have shown that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% difference from one another. The highest objective value is obtained by IDP, followed closely by OCT. Computer time required by these algorithms, however, differs by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to the case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.
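
    As a rough illustration of the dynamic-programming idea behind IDP (not the cited study's formulation), the sketch below solves a single-reservoir release schedule by backward recursion over discretized storage states; the inflows, release limit, and benefit function are invented for illustration.

    # Minimal single-reservoir DP sketch (illustrative only; true IDP iterates a narrow
    # "corridor" of storage states around a trial trajectory, which is not shown here).
    S_MAX, STEP = 100, 10                      # storage capacity and discretization
    STATES = range(0, S_MAX + STEP, STEP)
    INFLOW = [30, 50, 40, 20]                  # hypothetical inflows per period
    RELEASE_MAX = 60

    def benefit(release):                      # hypothetical power benefit of a release
        return release ** 0.9

    def solve(initial_storage=50):
        T = len(INFLOW)
        value = {s: 0.0 for s in STATES}       # terminal value
        policy = [dict() for _ in range(T)]
        for t in reversed(range(T)):
            new_value = {}
            for s in STATES:
                best, best_r = float("-inf"), 0
                for r in range(0, RELEASE_MAX + STEP, STEP):
                    if r > s + INFLOW[t]:      # cannot release more water than available
                        continue
                    s_next = min(S_MAX, s + INFLOW[t] - r)
                    v = benefit(r) + value[s_next]
                    if v > best:
                        best, best_r = v, r
                new_value[s], policy[t][s] = best, best_r
            value = new_value
        return value[initial_storage], policy

    total, policy = solve()
    print(f"optimal benefit: {total:.1f}, first-period release: {policy[0][50]}")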

  11. Dynamic Analyses of Result Quality in Energy-Aware Approximate Programs

    NASA Astrophysics Data System (ADS)

    Ringenburg, Michael F.

    Energy efficiency is a key concern in the design of modern computer systems. One promising approach to energy-efficient computation, approximate computing, trades off output precision for energy efficiency. However, this tradeoff can have unexpected effects on computation quality. This thesis presents dynamic analysis tools to study, debug, and monitor the quality and energy efficiency of approximate computations. We propose three styles of tools: prototyping tools that allow developers to experiment with approximation in their applications, online tools that instrument code to determine the key sources of error, and online tools that monitor the quality of deployed applications in real time. Our prototyping tool is based on an extension to the functional language OCaml. We add approximation constructs to the language, an approximation simulator to the runtime, and profiling and auto-tuning tools for studying and experimenting with energy-quality tradeoffs. We also present two online debugging tools and three online monitoring tools. The first online tool identifies correlations between output quality and the total number of executions of, and errors in, individual approximate operations. The second tracks the number of approximate operations that flow into a particular value. Our online tools comprise three low-cost approaches to dynamic quality monitoring. They are designed to monitor quality in deployed applications without spending more energy than is saved by approximation. Online monitors can be used to perform real time adjustments to energy usage in order to meet specific quality goals. We present prototype implementations of all of these tools and describe their usage with several applications. Our prototyping, profiling, and autotuning tools allow us to experiment with approximation strategies and identify new strategies, our online tools succeed in providing new insights into the effects of approximation on output quality, and our monitors succeed in controlling output quality while still maintaining significant energy efficiency gains.
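
    The thesis tools target OCaml programs and instrumented runtimes; the following language-neutral sketch (Python, invented names and thresholds) only illustrates the low-overhead online-monitoring idea of precisely recomputing a small random sample of approximate operations to estimate output error in a deployed application.

    # Illustrative online quality monitor: precisely recompute a small random sample of
    # approximate operations and estimate output error cheaply. Names and thresholds invented.
    import random

    def precise_op(x):
        return x * x

    def approx_op(x):
        return x * x * (1.0 + random.uniform(-0.05, 0.05))   # stand-in for an approximate unit

    def monitored_run(inputs, sample_rate=0.05, error_budget=0.02):
        total_rel_err, sampled, outputs = 0.0, 0, []
        for x in inputs:
            y = approx_op(x)
            outputs.append(y)
            if random.random() < sample_rate:                 # cheap, sparse verification
                exact = precise_op(x)
                total_rel_err += abs(y - exact) / max(abs(exact), 1e-12)
                sampled += 1
        mean_err = total_rel_err / sampled if sampled else 0.0
        if mean_err > error_budget:
            print(f"quality alarm: estimated error {mean_err:.3f} exceeds budget {error_budget}")
        return outputs, mean_err

    outputs, err = monitored_run([float(i) for i in range(1, 1001)])
    print(f"estimated mean relative error: {err:.3f}")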

  12. Application of Dynamic Analysis in Semi-Analytical Finite Element Method

    PubMed Central

    Oeser, Markus

    2017-01-01

    Analyses of dynamic responses are critically important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional and only requires a two-dimensional FE discretization by incorporating Fourier series in the third dimension. In this paper, the algorithm to apply the dynamic analysis to SAFEM is introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and in the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is sufficiently high and its computational time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out and the prediction derived from SAFEM is consistent with the measurement. Therefore, SAFEM can reliably predict the dynamic response of asphalt pavement under moving loads, thus proving beneficial to road administrations in assessing the pavement’s state. PMID:28867813

  13. Optimizing legacy molecular dynamics software with directive-based offload

    NASA Astrophysics Data System (ADS)

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.

    2015-10-01

    Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel® Xeon Phi™ coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS.

  14. High performance computing in biology: multimillion atom simulations of nanoscale systems

    PubMed Central

    Sanbonmatsu, K. Y.; Tung, C.-S.

    2007-01-01

    Computational methods have been used in biology for sequence analysis (bioinformatics), all-atom simulation (molecular dynamics and quantum calculations), and more recently for modeling biological networks (systems biology). Of these three techniques, all-atom simulation is currently the most computationally demanding, in terms of compute load, communication speed, and memory load. Breakthroughs in electrostatic force calculation and dynamic load balancing have enabled molecular dynamics simulations of large biomolecular complexes. Here, we report simulation results for the ribosome, using approximately 2.64 million atoms, the largest all-atom biomolecular simulation published to date. Several other nanoscale systems with different numbers of atoms were studied to measure the performance of the NAMD molecular dynamics simulation program on the Los Alamos National Laboratory Q Machine. We demonstrate that multimillion atom systems represent a 'sweet spot' for the NAMD code on large supercomputers. NAMD displays an unprecedented 85% parallel scaling efficiency for the ribosome system on 1024 CPUs. We also review recent targeted molecular dynamics simulations of the ribosome that prove useful for studying conformational changes of this large biomolecular complex in atomic detail. PMID:17187988

  15. Efficient Conformational Sampling in Explicit Solvent Using a Hybrid Replica Exchange Molecular Dynamics Method

    DTIC Science & Technology

    2011-12-01

    The hybrid method achieves the sampling efficiency of replica exchange molecular dynamics (REMD) while reproducing the energy landscape of explicit solvent simulations. (The remainder of the DTIC abstract is truncated in the source record; the surviving fragments are citation snippets on accelerated molecular dynamics and on the mixing efficiency of replica-exchange simulations.)

  16. Turbine blade forced response prediction using FREPS

    NASA Technical Reports Server (NTRS)

    Murthy, Durbha V.; Morel, Michael R.

    1993-01-01

    This paper describes a software system called FREPS (Forced REsponse Prediction System) that integrates structural dynamic, steady and unsteady aerodynamic analyses to efficiently predict the forced response dynamic stresses in axial flow turbomachinery blades due to aerodynamic and mechanical excitations. A flutter analysis capability is also incorporated into the system. The FREPS system performs aeroelastic analysis by modeling the motion of the blade in terms of its normal modes. The structural dynamic analysis is performed by a finite element code such as MSC/NASTRAN. The steady aerodynamic analysis is based on nonlinear potential theory, and the unsteady aerodynamic analysis is based on a linearization about the nonuniform potential mean flow. The program description and a presentation of its capabilities are reported herein. The effectiveness of the FREPS package is demonstrated on the High Pressure Oxygen Turbopump turbine of the Space Shuttle Main Engine. Both flutter and forced response analyses are performed and typical results are illustrated.

  17. Dynamic Airspace Configuration

    NASA Technical Reports Server (NTRS)

    Bloem, Michael J.

    2014-01-01

    In air traffic management systems, airspace is partitioned into regions in part to distribute the tasks associated with managing air traffic among different systems and people. These regions, as well as the systems and people allocated to each, are changed dynamically so that air traffic can be safely and efficiently managed. It is expected that new air traffic control systems will enable greater flexibility in how airspace is partitioned and how resources are allocated to airspace regions. In this talk, I will begin by providing an overview of some previous work and open questions in Dynamic Airspace Configuration research, which is concerned with how to partition airspace and assign resources to regions of airspace. For example, I will introduce airspace partitioning algorithms based on clustering, integer programming optimization, and computational geometry. I will conclude by discussing the development of a tablet-based tool that is intended to help air traffic controller supervisors configure airspace and controllers in current operations.

  18. Final Report on DOE Project entitled Dynamic Optimized Advanced Scheduling of Bandwidth Demands for Large-Scale Science Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramamurthy, Byravamurthy

    2014-05-05

    In this project, we developed scheduling frameworks and algorithms for the dynamic bandwidth demands of large-scale science applications. Apart from theoretical approaches such as Integer Linear Programming, Tabu Search and Genetic Algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We have disseminated our work through conference paper presentations, journal papers and a book chapter. In this project we addressed the problem of scheduling lightpaths over optical wavelength division multiplexed (WDM) networks and published several conference and journal papers on this topic. We also addressed the problem of joint allocation of computing, storage and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.

  19. Vestibular rehabilitation outcomes in chronic vertiginous patients through computerized dynamic visual acuity and Gaze stabilization test.

    PubMed

    Badaracco, Carlo; Labini, Francesca Sylos; Meli, Annalisa; De Angelis, Ezio; Tufarelli, Davide

    2007-09-01

    To evaluate the efficiency of the rehabilitative protocols in patients with labyrinthine hypofunction, focusing on the computerized dynamic visual acuity test (DVAt) and Gaze stabilization test (GST), specifically evaluating the vestibulo-oculomotor reflex (VOR) changes due to vestibular rehabilitation. Consecutive sample study. Day hospital in Ears, Nose, and Throat Rehabilitation Unit. Thirty-two patients with chronic dizziness with a mean age of 60.74 years. Patients performed one cycle of 12 daily rehabilitation sessions (2 h each) consisting of exercises aimed at improving VOR gain. The rehabilitation program included substitutional and/or habitudinal exercises, exercises on a stability platform, and exercises on a moving footpath with rehabilitative software. Dizziness Handicap Inventory and Activities-specific Balance Confidence Scale. Computerized dynamic posturography, computerized DVAt, and GST. The patients significantly improved in all the tests. Vestibular rehabilitation improved the quality of life by reducing the handicap index and improving the ability in everyday tasks. The recovery of vestibular-ocular reflex and vestibular-spinal reflex efficiency was objectively proven by instrumental testing. The DVAt and the GST allow the fixation ability at higher frequencies and speeds (the main VOR function) to be objectively quantified. Moreover, these new parameters permit a complete evaluation of vestibular rehabilitation outcomes, adding new information to the generally used tests that only assess the vestibulospinal reflex.

  20. HYTESS 2: A Hypothetical Turbofan Engine Simplified Simulation with multivariable control and sensor analytical redundancy

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.

    1986-01-01

    A hypothetical turbofan engine simplified simulation with a multivariable control and sensor failure detection, isolation, and accommodation logic (HYTESS II) is presented. The digital program, written in FORTRAN, is self-contained, efficient, realistic and easily used. Simulated engine dynamics were developed from linearized operating point models. However, essential nonlinear effects are retained. The simulation is representative of the hypothetical, low bypass ratio turbofan engine with an advanced control and failure detection logic. Included is a description of the engine dynamics, the control algorithm, and the sensor failure detection logic. Details of the simulation including block diagrams, variable descriptions, common block definitions, subroutine descriptions, and input requirements are given. Example simulation results are also presented.

  1. Dynamic stability and handling qualities tests on a highly augmented, statically unstable airplane

    NASA Technical Reports Server (NTRS)

    Gera, Joseph; Bosworth, John T.

    1987-01-01

    Initial envelope clearance and subsequent flight testing of a new, fully augmented airplane with an extremely high degree of static instability can place unusual demands on the flight test approach. Previous flight test experience with these kinds of airplanes is very limited or nonexistent. The safe and efficient flight testing may be further complicated by a multiplicity of control effectors that may be present on this class of airplanes. This paper describes some novel flight test and analysis techniques in the flight dynamics and handling qualities area. These techniques were utilized during the initial flight envelope clearance of the X-29A aircraft and were largely responsible for the completion of the flight controls clearance program without any incidents or significant delays.

  2. Space Life Support Engineering Program

    NASA Technical Reports Server (NTRS)

    Seagrave, Richard C.

    1993-01-01

    This report covers the second year of research relating to the development of closed-loop long-term life support systems. Emphasis was directed toward concentrating on the development of dynamic simulation techniques and software and on performing a thermodynamic systems analysis in an effort to begin optimizing the system needed for water purification. Four appendices are attached. The first covers the ASPEN modeling of the closed loop Environmental Control Life Support System (ECLSS) and its thermodynamic analysis. The second is a report on the dynamic model development for water regulation in humans. The third regards the development of an interactive computer-based model for determining exercise limitations. The fourth attachment is an estimate of the second law thermodynamic efficiency of the various units comprising an ECLSS.

  3. Comparative modeling of coevolution in communities of unicellular organisms: adaptability and biodiversity.

    PubMed

    Lashin, Sergey A; Suslov, Valentin V; Matushkin, Yuri G

    2010-06-01

    We propose an original program, "Evolutionary constructor", that is capable of computationally efficient modeling of both population-genetic and ecological problems, combining these directions in one model at the required level of detail. We also present results of comparative modeling of stability, adaptability and biodiversity dynamics in populations of unicellular haploid organisms which form symbiotic ecosystems. The advantages and disadvantages of two evolutionary strategies of biota formation, one based on a few generalist taxa and the other based on biodiversity, are discussed.

  4. Multi-Core Programming Design Patterns: Stream Processing Algorithms for Dynamic Scene Perceptions

    DTIC Science & Technology

    2014-05-01

    The Cell processor, developed by IBM and other companies, incorporates the POWER5 processor as the Power Processor Element (PPE) and delivers a power-efficient single-precision peak performance of more than 256 GFlops. Substantially more raw power became available later with NVIDIA GPUs. Stream processing algorithms have been implemented on several platforms, including IBM's Cell/B.E., GPUs from NVIDIA and AMD, and many-core CPUs from Intel. The vast growth of digital video content has been a ...

  5. IMAT (Integrated Multidisciplinary Analysis Tool) user's guide for the VAX/VMS computer

    NASA Technical Reports Server (NTRS)

    Meissner, Frances T. (Editor)

    1988-01-01

    The Integrated Multidisciplinary Analysis Tool (IMAT) is a computer software system for the VAX/VMS computer developed at the Langley Research Center. IMAT provides researchers and analysts with an efficient capability to analyze satellite control systems influenced by structural dynamics. Using a menu-driven executive system, IMAT leads the user through the program options. IMAT links a relational database manager to commercial and in-house structural and controls analysis codes. This paper describes the IMAT software system and how to use it.

  6. Multiconfigurational short-range density-functional theory for open-shell systems

    NASA Astrophysics Data System (ADS)

    Hedegård, Erik Donovan; Toulouse, Julien; Jensen, Hans Jørgen Aagaard

    2018-06-01

    Many chemical systems cannot be described by quantum chemistry methods based on a single-reference wave function. Accurate predictions of energetic and spectroscopic properties require a delicate balance between describing the most important configurations (static correlation) and obtaining dynamical correlation efficiently. The former is most naturally done through a multiconfigurational (MC) wave function, whereas the latter can be done by, e.g., perturbation theory. We have employed a different strategy, namely, a hybrid between multiconfigurational wave functions and density-functional theory (DFT) based on range separation. The method is denoted by MC short-range DFT (MC-srDFT) and is more efficient than perturbative approaches as it capitalizes on the efficient treatment of the (short-range) dynamical correlation by DFT approximations. In turn, the method also improves DFT with standard approximations through the ability of multiconfigurational wave functions to recover large parts of the static correlation. Until now, our implementation was restricted to closed-shell systems, and to lift this restriction, we present here the generalization of MC-srDFT to open-shell cases. The additional terms required to treat open-shell systems are derived and implemented in the DALTON program. This new method for open-shell systems is illustrated on dioxygen and [Fe(H2O)6]3+.

  7. An energy-efficient architecture for internet of things systems

    NASA Astrophysics Data System (ADS)

    De Rango, Floriano; Barletta, Domenico; Imbrogno, Alessandro

    2016-05-01

    In this paper, some of the motivations for energy-efficient communications in wireless systems are described by highlighting emerging trends and identifying some challenges that need to be addressed to enable novel, scalable and energy-efficient communications. An architecture for Internet of Things systems is then presented, the purpose of which is to minimize energy consumption by communication devices, protocols, networks, end-user systems and data centers. Some electrical devices have been designed with multiple communication interfaces, such as RF or WiFi, using open source technology; they have been analyzed under different working conditions. Some devices are programmed to communicate directly with a web server, others to communicate only with a special device that acts as a bridge between some devices and the web server. Communication parameters and device status are changed dynamically according to the scenario in order to obtain the greatest benefits in terms of energy cost and battery lifetime. The way devices communicate with the web server or with each other, and the way they obtain the information they need to stay up to date, thus change dynamically in order to guarantee the lowest energy consumption, a long-lasting battery lifetime, the fastest responses and feedback, and the best quality of service and communication for end users and the inner devices of the system.

  8. P-Hint-Hunt: a deep parallelized whole genome DNA methylation detection tool.

    PubMed

    Peng, Shaoliang; Yang, Shunyun; Gao, Ming; Liao, Xiangke; Liu, Jie; Yang, Canqun; Wu, Chengkun; Yu, Wenqiang

    2017-03-14

    A growing number of studies have used whole genome DNA methylation detection, one of the most important parts of epigenetics research, to find significant relationships between DNA methylation and several typical diseases, such as cancers and diabetes. In many of those studies, mapping the bisulfite-treated sequence to the whole genome has been the main method for studying DNA cytosine methylation. However, existing tools largely suffer from inaccuracy and long running times. In our study, we designed a new DNA methylation prediction tool ("Hint-Hunt") to solve the problem. Through an optimized complex alignment computation and Smith-Waterman matrix dynamic programming, Hint-Hunt analyzes and predicts DNA methylation status. However, when Hint-Hunt predicts DNA methylation status on large-scale datasets, it still suffers from low speed and poor temporal-spatial efficiency. To address the cost of Smith-Waterman dynamic programming and the low temporal-spatial efficiency, we further designed a deeply parallelized whole genome DNA methylation detection tool ("P-Hint-Hunt") on the Tianhe-2 (TH-2) supercomputer. To the best of our knowledge, P-Hint-Hunt is the first parallel DNA methylation detection tool with a high speed-up for processing large-scale datasets, and it can run both on CPUs and on Intel Xeon Phi coprocessors. Moreover, we deploy and evaluate Hint-Hunt and P-Hint-Hunt on the TH-2 supercomputer at different scales. The experimental results show that our tools eliminate the deviation caused by bisulfite treatment in the mapping procedure and that the multi-level parallel program yields a 48-times speed-up with 64 threads. P-Hint-Hunt gains a deep acceleration on the CPU and Intel Xeon Phi heterogeneous platform, giving full play to the advantages of multi-core CPUs and many-core Phi coprocessors.
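
    The abstract names Smith-Waterman matrix dynamic programming as the core alignment step. The sketch below is the textbook Smith-Waterman local-alignment recurrence with a linear gap penalty and toy scoring, not Hint-Hunt's implementation or its bisulfite-aware scoring.

    # Textbook Smith-Waterman local alignment (linear gap penalty); illustrative only,
    # not Hint-Hunt's implementation or its bisulfite-aware scoring.
    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]
        best, best_pos = 0, (0, 0)
        for i in range(1, rows):
            for j in range(1, cols):
                diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                if H[i][j] > best:
                    best, best_pos = H[i][j], (i, j)
        return best, best_pos

    # Bisulfite treatment converts unmethylated C to T, so a bisulfite-aware aligner would
    # typically also accept C-to-T as a match in the read-vs-reference direction (not shown).
    score, end = smith_waterman("ACACACTA", "AGCACACA")
    print(score, end)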

  9. Gelator-doped liquid-crystal phase grating with multistable and dynamic modes

    NASA Astrophysics Data System (ADS)

    Lin, Hui-Chi; Yang, Meng-Ru; Tsai, Sheng-Feng; Yan, Shih-Chiang

    2014-01-01

    We demonstrate a gelator-doped nematic liquid-crystal (LC) phase grating, which can be operated in both the multistable mode and the dynamic mode. Thermoreversible association and dissociation of the gelator molecules can vary and fix the multistable diffraction efficiencies of the gratings. A voltage (V) can also be applied to modulate dynamically the diffraction efficiencies of the grating, which behaves as a conventional LC grating. Experimental results show that the variations of the diffraction efficiencies in the multistable and dynamic modes are similar. The maximum diffraction efficiency is approximately 30% at V = 2 V.

  10. Gelator-doped liquid-crystal phase grating with multistable and dynamic modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Hui-Chi, E-mail: huichilin@nfu.edu.tw; Yang, Meng-Ru; Tsai, Sheng-Feng

    2014-01-06

    We demonstrate a gelator-doped nematic liquid-crystal (LC) phase grating, which can be operated in both the multistable mode and the dynamic mode. Thermoreversible association and dissociation of the gelator molecules can vary and fix the multistable diffraction efficiencies of the gratings. A voltage (V) can also be applied to modulate dynamically the diffraction efficiencies of the grating, which behaves as a conventional LC grating. Experimental results show that the variations of the diffraction efficiencies in the multistable and dynamic modes are similar. The maximum diffraction efficiency is approximately 30% at V = 2 V.

  11. Research of an emergency medical system for mass casualty incidents in Shanghai, China: a system dynamics model.

    PubMed

    Yu, Wenya; Lv, Yipeng; Hu, Chaoqun; Liu, Xu; Chen, Haiping; Xue, Chen; Zhang, Lulu

    2018-01-01

    The emergency medical system for mass casualty incidents (EMS-MCIs) is a global issue. However, such studies are extremely scarce in China, which cannot yet meet the requirement for a rapid decision-support system. This study aims to model EMS-MCIs in Shanghai, to improve mass casualty incident (MCI) rescue efficiency in China, and to provide a possible method for making rapid rescue decisions during MCIs. The study established a system dynamics (SD) model of EMS-MCIs using the Vensim DSS program. Intervention scenarios were designed by adjusting the scale of MCIs, the allocation of ambulances, the allocation of emergency medical staff, and the efficiency of organization and command. Mortality increased with the scale of MCIs; the medical rescue capability of hospitals was relatively good, but the efficiency of organization and command was poor and the prehospital time was too long. Mortality declined significantly when ambulances were added and the efficiency of organization and command was improved; triage and on-site first-aid times were shortened when the availability of emergency medical staff was increased. The effect was most evident when 2,000 people were involved in MCIs; however, the influence was very small at the scale of 5,000 people. The keys to decreasing MCI mortality were shortening the prehospital time and improving the efficiency of organization and command. For small-scale MCIs, improving the utilization rate of health resources was important in decreasing mortality. For large-scale MCIs, increasing the number of ambulances and emergency medical professionals was the core means of decreasing prehospital time and mortality. For super-large-scale MCIs, increasing health resources was the prerequisite.

  12. Protein electron transfer: Dynamics and statistics

    NASA Astrophysics Data System (ADS)

    Matyushov, Dmitry V.

    2013-07-01

    Electron transfer between redox proteins participating in energy chains of biology is required to proceed with high energetic efficiency, minimizing losses of redox energy to heat. Within the standard models of electron transfer, this requirement, combined with the need for unidirectional (preferably activationless) transitions, is translated into the need to minimize the reorganization energy of electron transfer. This design program is, however, unrealistic for proteins whose active sites are typically positioned close to the polar and flexible protein-water interface to allow inter-protein electron tunneling. The high flexibility of the interfacial region makes both the hydration water and the surface protein layer act as highly polar solvents. The reorganization energy, as measured by fluctuations, is not minimized, but rather maximized in this region. Natural systems in fact utilize the broad breadth of interfacial electrostatic fluctuations, but in the ways not anticipated by the standard models based on equilibrium thermodynamics. The combination of the broad spectrum of static fluctuations with their dispersive dynamics offers the mechanism of dynamical freezing (ergodicity breaking) of subsets of nuclear modes on the time of reaction/residence of the electron at a redox cofactor. The separation of time-scales of nuclear modes coupled to electron transfer allows dynamical freezing. In particular, the separation between the relaxation time of electro-elastic fluctuations of the interface and the time of conformational transitions of the protein caused by changing redox state results in dynamical freezing of the latter for sufficiently fast electron transfer. The observable consequence of this dynamical freezing is significantly different reorganization energies describing the curvature at the bottom of electron-transfer free energy surfaces (large) and the distance between their minima (Stokes shift, small). The ratio of the two reorganization energies establishes the parameter by which the energetic efficiency of protein electron transfer is increased relative to the standard expectations, thus minimizing losses of energy to heat. Energetically efficient electron transfer occurs in a chain of conformationally quenched cofactors and is characterized by flattened free energy surfaces, reminiscent of the flat and rugged landscape at the stability basin of a folded protein.

  13. Protein electron transfer: Dynamics and statistics.

    PubMed

    Matyushov, Dmitry V

    2013-07-14

    Electron transfer between redox proteins participating in energy chains of biology is required to proceed with high energetic efficiency, minimizing losses of redox energy to heat. Within the standard models of electron transfer, this requirement, combined with the need for unidirectional (preferably activationless) transitions, is translated into the need to minimize the reorganization energy of electron transfer. This design program is, however, unrealistic for proteins whose active sites are typically positioned close to the polar and flexible protein-water interface to allow inter-protein electron tunneling. The high flexibility of the interfacial region makes both the hydration water and the surface protein layer act as highly polar solvents. The reorganization energy, as measured by fluctuations, is not minimized, but rather maximized in this region. Natural systems in fact utilize the broad breadth of interfacial electrostatic fluctuations, but in the ways not anticipated by the standard models based on equilibrium thermodynamics. The combination of the broad spectrum of static fluctuations with their dispersive dynamics offers the mechanism of dynamical freezing (ergodicity breaking) of subsets of nuclear modes on the time of reaction/residence of the electron at a redox cofactor. The separation of time-scales of nuclear modes coupled to electron transfer allows dynamical freezing. In particular, the separation between the relaxation time of electro-elastic fluctuations of the interface and the time of conformational transitions of the protein caused by changing redox state results in dynamical freezing of the latter for sufficiently fast electron transfer. The observable consequence of this dynamical freezing is significantly different reorganization energies describing the curvature at the bottom of electron-transfer free energy surfaces (large) and the distance between their minima (Stokes shift, small). The ratio of the two reorganization energies establishes the parameter by which the energetic efficiency of protein electron transfer is increased relative to the standard expectations, thus minimizing losses of energy to heat. Energetically efficient electron transfer occurs in a chain of conformationally quenched cofactors and is characterized by flattened free energy surfaces, reminiscent of the flat and rugged landscape at the stability basin of a folded protein.

  14. Problems and programming for analysis of IUE high resolution data for variability

    NASA Technical Reports Server (NTRS)

    Grady, C. A.

    1981-01-01

    Observations of variability in stellar winds provide an important probe of their dynamics. It is crucial, however, to know that any variability seen in a data set can be clearly attributed to the star and not to instrumental or data processing effects. In the course of analysis of IUE high resolution data of alpha Cam and other O, B and Wolf-Rayet stars, several effects were found which cause spurious variability or spurious spectral features in our data. Programming was developed to partially compensate for these effects using the Interactive Data Language (IDL) on the LASP PDP 11/34. Use of an interactive language such as IDL is particularly suited to analysis of variability data as it permits use of efficient programs coupled with the judgement of the scientist at each stage of processing.

  15. NASA/Army Rotorcraft Transmission Research, a Review of Recent Significant Accomplishments

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    1994-01-01

    A joint helicopter transmission research program between NASA Lewis Research Center and the U.S. Army Research Lab has existed since 1970. Research goals are to reduce weight and noise while increasing life, reliability, and safety. These research goals are achieved by the NASA/Army Mechanical Systems Technology Branch through both in-house research and cooperative research projects with university and industry partners. Some recent significant technical accomplishments produced by this cooperative research are reviewed. The following research projects are reviewed: oil-off survivability of tapered roller bearings, design and evaluation of high contact ratio gearing, finite element analysis of spiral bevel gears, computer numerical control grinding of spiral bevel gears, gear dynamics code validation, computer program for life and reliability of helicopter transmissions, planetary gear train efficiency study, and the Advanced Rotorcraft Transmission (ART) program.

  16. A Flight Control System for Small Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Tunik, A. A.; Nadsadnaya, O. I.

    2018-03-01

    The program adaptation of the controller for the flight control system (FCS) of an unmanned aerial vehicle (UAV) is considered. Linearized flight dynamic models depend mainly on the true airspeed of the UAV, which is measured by the onboard air data system. This enables its use for program adaptation of the FCS over the full range of altitudes and velocities, which define the flight operating range. An FCS with program adaptation, based on static feedback (SF), is selected. The SF parameters for every sub-range of the true airspeed are determined using the linear matrix inequality approach for discrete systems to synthesize a suboptimal robust H∞ controller. The use of Lagrange interpolation between true airspeed sub-ranges provides continuous adaptation. The efficiency of the proposed approach is demonstrated on an example of the heading stabilization system.

  17. Complex disruption effect of natural polyphenols on Bcl-2-Bax: molecular dynamics simulation and essential dynamics study.

    PubMed

    Verma, Sharad; Singh, Amit; Mishra, Abha

    2015-01-01

    Apoptosis (programmed cell death) is a process by which cells die after completing their physiological function or after severe genetic damage. Apoptosis is mainly regulated by the Bcl-2 family of proteins. The anti-apoptotic protein Bcl-2 prevents Bax activation/oligomerization into the heterodimer that is responsible for release of cytochrome c from mitochondria to the cytosol in response to a death signal. Quercetin and taxifolin (natural polyphenols) bound efficiently to the hydrophobic groove of Bcl-2 and altered its structure by inducing conformational changes. Taxifolin was found to be more efficient than quercetin in terms of interaction energy and collapse of the hydrophobic groove. Taxifolin and quercetin were found to dissociate the Bcl-2-Bax complex during a 12 ns MD simulation. The effect of taxifolin and quercetin was further validated by MD simulation of the ligand-unbound Bcl-2-Bax complex, which remained stable during the simulation. Obatoclax (an inhibitor of Bcl-2) had no significant dissociation effect on Bcl-2-Bax during the simulation, which is consistent with previous experimental results and with the disruption effect of taxifolin and quercetin.

  18. Predicting Flows of Rarefied Gases

    NASA Technical Reports Server (NTRS)

    LeBeau, Gerald J.; Wilmoth, Richard G.

    2005-01-01

    DSMC Analysis Code (DAC) is a flexible, highly automated, easy-to-use computer program for predicting flows of rarefied gases -- especially flows of upper-atmospheric, propulsion, and vented gases impinging on spacecraft surfaces. DAC implements the direct simulation Monte Carlo (DSMC) method, which is widely recognized as standard for simulating flows at densities so low that the continuum-based equations of computational fluid dynamics are invalid. DAC enables users to model complex surface shapes and boundary conditions quickly and easily. The discretization of a flow field into computational grids is automated, thereby relieving the user of a traditionally time-consuming task while ensuring (1) appropriate refinement of grids throughout the computational domain, (2) determination of optimal settings for temporal discretization and other simulation parameters, and (3) satisfaction of the fundamental constraints of the method. In so doing, DAC ensures an accurate and efficient simulation. In addition, DAC can utilize parallel processing to reduce computation time. The domain decomposition needed for parallel processing is completely automated, and the software employs a dynamic load-balancing mechanism to ensure optimal parallel efficiency throughout the simulation.

  19. Trajectory optimization for lunar soft landing with complex constraints

    NASA Astrophysics Data System (ADS)

    Chu, Huiping; Ma, Lin; Wang, Kexin; Shao, Zhijiang; Song, Zhengyu

    2017-11-01

    A unified trajectory optimization framework with initialization strategies is proposed in this paper for lunar soft landing for various missions with specific requirements. Two main missions of interest are Apollo-like Landing from low lunar orbit and Vertical Takeoff Vertical Landing (a promising mobility method) on the lunar surface. The trajectory optimization is characterized by difficulties arising from discontinuous thrust, multi-phase connections, jump of attitude angle, and obstacles avoidance. Here R-function is applied to deal with the discontinuities of thrust, checkpoint constraints are introduced to connect multiple landing phases, attitude angular rate is designed to get rid of radical changes, and safeguards are imposed to avoid collision with obstacles. The resulting dynamic problems are generally with complex constraints. The unified framework based on Gauss Pseudospectral Method (GPM) and Nonlinear Programming (NLP) solver are designed to solve the problems efficiently. Advanced initialization strategies are developed to enhance both the convergence and computation efficiency. Numerical results demonstrate the adaptability of the framework for various landing missions, and the performance of successful solution of difficult dynamic problems.

  20. Knapsack - TOPSIS Technique for Vertical Handover in Heterogeneous Wireless Network

    PubMed Central

    2015-01-01

    In a heterogeneous wireless network, handover techniques are designed to facilitate anywhere/anytime service continuity for mobile users. Consistent best-possible access to a network with widely varying network characteristics requires seamless mobility management techniques. Hence, the vertical handover process imposes important technical challenges. Handover decisions are triggered for continuous connectivity of mobile terminals. However, bad network selection and overload conditions in the chosen network can cause fallout in the form of handover failure. In order to maintain the required Quality of Service during the handover process, decision algorithms should incorporate intelligent techniques. In this paper, a new and efficient vertical handover mechanism is implemented using a dynamic programming method from the operations research discipline. This dynamic programming approach, which is integrated with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method, provides the mobile user with the best handover decisions. Moreover, in this proposed handover algorithm a deterministic approach which divides the network into zones is incorporated into the network server in order to derive an optimal solution. The study revealed that this method is found to achieve better performance and QoS support to users and greatly reduce the handover failures when compared to the traditional TOPSIS method. The decision arrived at the zone gateway using this operational research analytical method (known as the dynamic programming knapsack approach together with the Technique for Order Preference by Similarity to Ideal Solution) yields remarkably better results in terms of the network performance measures such as throughput and delay. PMID:26237221

  1. Knapsack--TOPSIS Technique for Vertical Handover in Heterogeneous Wireless Network.

    PubMed

    Malathy, E M; Muthuswamy, Vijayalakshmi

    2015-01-01

    In a heterogeneous wireless network, handover techniques are designed to facilitate anywhere/anytime service continuity for mobile users. Consistent best-possible access to a network with widely varying network characteristics requires seamless mobility management techniques. Hence, the vertical handover process imposes important technical challenges. Handover decisions are triggered for continuous connectivity of mobile terminals. However, bad network selection and overload conditions in the chosen network can cause fallout in the form of handover failure. In order to maintain the required Quality of Service during the handover process, decision algorithms should incorporate intelligent techniques. In this paper, a new and efficient vertical handover mechanism is implemented using a dynamic programming method from the operations research discipline. This dynamic programming approach, which is integrated with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method, provides the mobile user with the best handover decisions. Moreover, in this proposed handover algorithm a deterministic approach which divides the network into zones is incorporated into the network server in order to derive an optimal solution. The study revealed that this method is found to achieve better performance and QoS support to users and greatly reduce the handover failures when compared to the traditional TOPSIS method. The decision arrived at the zone gateway using this operational research analytical method (known as the dynamic programming knapsack approach together with the Technique for Order Preference by Similarity to Ideal Solution) yields remarkably better results in terms of the network performance measures such as throughput and delay.
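
    A rough sketch of the combination the two records describe, under invented attributes, weights, and capacity figures: TOPSIS ranks candidate networks by closeness to an ideal alternative, and a 0/1 knapsack over a zone's capacity budget selects which handover requests to admit.

    # Illustrative TOPSIS ranking of candidate networks plus a 0/1 knapsack admission step.
    # Attribute values, weights, and the capacity budget are invented for illustration.
    import math

    def topsis(matrix, weights, benefit):
        """matrix[i][j]: attribute j of network i; benefit[j]: True if larger is better."""
        cols = list(zip(*matrix))
        norms = [math.sqrt(sum(v * v for v in c)) or 1.0 for c in cols]
        V = [[w * v / n for v, w, n in zip(row, weights, norms)] for row in matrix]
        ideal = [max(c) if b else min(c) for c, b in zip(zip(*V), benefit)]
        worst = [min(c) if b else max(c) for c, b in zip(zip(*V), benefit)]
        scores = []
        for row in V:
            d_pos = math.sqrt(sum((v - i) ** 2 for v, i in zip(row, ideal)))
            d_neg = math.sqrt(sum((v - w) ** 2 for v, w in zip(row, worst)))
            scores.append(d_neg / (d_pos + d_neg) if d_pos + d_neg else 0.0)
        return scores

    def knapsack_admit(loads, values, capacity):
        """Classic 0/1 knapsack: pick requests maximizing total TOPSIS value within capacity."""
        dp = [0.0] * (capacity + 1)
        for load, val in zip(loads, values):
            for c in range(capacity, load - 1, -1):
                dp[c] = max(dp[c], dp[c - load] + val)
        return dp[capacity]

    # Networks described by (throughput, delay, cost); delay and cost are "smaller is better".
    nets = [[54, 20, 3], [11, 35, 1], [100, 10, 5]]
    scores = topsis(nets, weights=[0.5, 0.3, 0.2], benefit=[True, False, False])
    print("TOPSIS scores:", [round(s, 3) for s in scores])
    print("admitted value:", knapsack_admit(loads=[4, 2, 6], values=scores, capacity=8))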

  2. QoS Differential Scheduling in Cognitive-Radio-Based Smart Grid Networks: An Adaptive Dynamic Programming Approach.

    PubMed

    Yu, Rong; Zhong, Weifeng; Xie, Shengli; Zhang, Yan; Zhang, Yun

    2016-02-01

    As the next-generation power grid, smart grid will be integrated with a variety of novel communication technologies to support the explosive data traffic and the diverse requirements of quality of service (QoS). Cognitive radio (CR), which has the favorable ability to improve the spectrum utilization, provides an efficient and reliable solution for smart grid communications networks. In this paper, we study the QoS differential scheduling problem in the CR-based smart grid communications networks. The scheduler is responsible for managing the spectrum resources and arranging the data transmissions of smart grid users (SGUs). To guarantee the differential QoS, the SGUs are assigned to have different priorities according to their roles and their current situations in the smart grid. Based on the QoS-aware priority policy, the scheduler adjusts the channels allocation to minimize the transmission delay of SGUs. The entire transmission scheduling problem is formulated as a semi-Markov decision process and solved by the methodology of adaptive dynamic programming. A heuristic dynamic programming (HDP) architecture is established for the scheduling problem. By the online network training, the HDP can learn from the activities of primary users and SGUs, and adjust the scheduling decision to achieve the purpose of transmission delay minimization. Simulation results illustrate that the proposed priority policy ensures the low transmission delay of high priority SGUs. In addition, the emergency data transmission delay is also reduced to a significantly low level, guaranteeing the differential QoS in smart grid.
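
    The paper's heuristic dynamic programming architecture is a neural actor-critic trained online; the toy sketch below (invented queue dynamics, costs, and learning rates) only illustrates the general adaptive-dynamic-programming pattern of a critic learning a value estimate while a greedy scheduler chooses which priority class to serve.

    # Toy sketch of the adaptive-dynamic-programming idea: a TD(0) critic learns a value for
    # queue states while a greedy scheduler picks which priority queue gets the channel.
    # Queue dynamics, costs, and learning rates are invented; this is not the paper's HDP design.
    import random

    PRIORITY_WEIGHT = [3.0, 1.0]          # high- vs low-priority delay cost per queued packet
    ARRIVAL_P = [0.3, 0.5]
    ALPHA, GAMMA, STEPS = 0.05, 0.95, 2000

    def cost(q):                           # instantaneous weighted-delay cost
        return sum(w * n for w, n in zip(PRIORITY_WEIGHT, q))

    def features(q):                       # linear critic over queue lengths
        return [1.0, q[0], q[1]]

    theta = [0.0, 0.0, 0.0]
    q = [0, 0]
    for _ in range(STEPS):
        # greedy action: serve the queue whose service gives the lower predicted future cost
        candidates = []
        for a in (0, 1):
            nq = list(q)
            nq[a] = max(0, nq[a] - 1)
            candidates.append((sum(t * f for t, f in zip(theta, features(nq))), a, nq))
        _, action, q_next = min(candidates)
        # random arrivals after service
        q_next = [n + (1 if random.random() < p else 0) for n, p in zip(q_next, ARRIVAL_P)]
        # TD(0) critic update toward cost plus discounted next value
        v, v_next = (sum(t * f for t, f in zip(theta, features(s))) for s in (q, q_next))
        delta = cost(q) + GAMMA * v_next - v
        theta = [t + ALPHA * delta * f for t, f in zip(theta, features(q))]
        q = q_next

    print("learned critic weights:", [round(t, 2) for t in theta])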

  3. Manycast routing, modulation level and spectrum assignment over elastic optical networks

    NASA Astrophysics Data System (ADS)

    Luo, Xiao; Zhao, Yang; Chen, Xue; Wang, Lei; Zhang, Min; Zhang, Jie; Ji, Yuefeng; Wang, Huitao; Wang, Taili

    2017-07-01

    Manycast is a point-to-multipoint transmission framework that requires a subset of destination nodes to be successfully reached. It is particularly applicable for dealing with large amounts of data simultaneously in bandwidth-hungry, dynamic and cloud-based applications. As traffic in these applications increases rapidly, elastic optical networks (EONs) can be relied on to achieve high-throughput manycast. With their finer spectrum granularity, EONs provide flexible access to the network spectrum and can efficiently provision the exact spectrum resources that demands require. In this paper, we focus on the manycast routing, modulation level and spectrum assignment (MA-RMLSA) problem in EONs. Both EON planning with static manycast traffic and EON provisioning with dynamic manycast traffic are investigated. An integer linear programming (ILP) model is formulated to solve the MA-RMLSA problem in the static manycast scenario. A corresponding heuristic, the manycast routing, modulation level and spectrum assignment genetic algorithm (MA-RMLSA-GA), is then proposed for both static and dynamic manycast scenarios. MA-RMLSA-GA jointly optimizes destination node selection, routing light-tree construction, modulation level allocation and spectrum resource assignment, achieving an effective improvement in network performance. Simulation results reveal that the MA-RMLSA strategies offered by MA-RMLSA-GA differ only slightly from the optimal solutions provided by the ILP model in the static scenario. Moreover, the results demonstrate that MA-RMLSA-GA realizes a highly efficient MA-RMLSA strategy with the lowest blocking probability in the dynamic scenario compared with benchmark algorithms.

  4. Interactions between Energy Efficiency Programs funded under the Recovery Act and Utility Customer-Funded Energy Efficiency Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldman, Charles A.; Stuart, Elizabeth; Hoffman, Ian

    2011-02-25

    Since the spring of 2009, billions of federal dollars have been allocated to state and local governments as grants for energy efficiency and renewable energy projects and programs. The scale of this American Recovery and Reinvestment Act (ARRA) funding, focused on 'shovel-ready' projects to create and retain jobs, is unprecedented. Thousands of newly funded players - cities, counties, states, and tribes - and thousands of programs and projects are entering the existing landscape of energy efficiency programs for the first time or expanding their reach. The nation's experience base with energy efficiency is growing enormously, fed by federal dollars and driven by broader objectives than saving energy alone. State and local officials made countless choices in developing portfolios of ARRA-funded energy efficiency programs and deciding how their programs would relate to existing efficiency programs funded by utility customers. Those choices are worth examining as bellwethers of a future world where there may be multiple program administrators and funding sources in many states. What are the opportunities and challenges of this new environment? What short- and long-term impacts will this large infusion of funds have on utility customer-funded programs; for example, on infrastructure for delivering energy efficiency services or on customer willingness to invest in energy efficiency? To what extent has the attribution of energy savings been a critical issue, especially where administrators of utility customer-funded energy efficiency programs have performance or shareholder incentives? Do the new ARRA-funded energy efficiency programs provide insights on roles or activities that are particularly well-suited to state and local program administrators vs. administrators or implementers of utility customer-funded programs? The answers could have important implications for the future of U.S. energy efficiency. This report focuses on a selected set of ARRA-funded energy efficiency programs administered by state energy offices: the State Energy Program (SEP) formula grants, the portion of Energy Efficiency and Conservation Block Grant (EECBG) formula funds administered directly by states, and the State Energy Efficient Appliance Rebate Program (SEEARP). Since these ARRA programs devote significant monies to energy efficiency and serve similar markets as utility customer-funded programs, there are frequent interactions between programs. We exclude the DOE low-income weatherization program and EECBG funding awarded directly to the over 2,200 cities, counties and tribes from our study to keep its scope manageable. We summarize the energy efficiency program design and funding choices made by the 50 state energy offices, 5 territories and the District of Columbia. We then focus on the specific choices made in 12 case study states. These states were selected based on the level of utility customer program funding, diversity of program administrator models, and geographic diversity. Based on interviews with more than 80 energy efficiency actors in those 12 states, we draw observations about states' strategies for use of Recovery Act funds. We examine interactions between ARRA programs and utility customer-funded energy efficiency programs in terms of program planning, program design and implementation, policy issues, and potential long-term impacts.
We consider how the existing regulatory policy framework and energy efficiency programs in these 12 states may have impacted development of these selected ARRA programs. Finally, we summarize key trends and highlight issues that evaluators of these ARRA programs may want to examine in more depth in their process and impact evaluations.

  5. Object-oriented design and implementation of CFDLab: a computer-assisted learning tool for fluid dynamics using dual reciprocity boundary element methodology

    NASA Astrophysics Data System (ADS)

    Friedrich, J.

    1999-08-01

    As lecturers, our main concern and goal is to develop more attractive and efficient ways of communicating up-to-date scientific knowledge to our students and to facilitate an in-depth understanding of physical phenomena. Computer-based instruction is very promising for helping both teachers and learners in their difficult task, which involves complex cognitive psychological processes. This complexity is reflected in high demands on the design and implementation methods used to create computer-assisted learning (CAL) programs. Due to their concepts, flexibility, maintainability and extended library resources, object-oriented modeling techniques are very suitable for producing this type of pedagogical tool. Computational fluid dynamics (CFD) not only enjoys a growing importance in today's research, but is also very powerful for teaching and learning fluid dynamics. For this purpose, an educational PC program for university level called 'CFDLab 1.1' for Windows™ was developed with an interactive graphical user interface (GUI) for multitasking and point-and-click operations. It uses the dual reciprocity boundary element method as a versatile numerical scheme whose simple pre- and postprocessing allows a variety of relevant two-dimensional governing equations, including the Laplace, Poisson, diffusion and transient convection-diffusion equations, to be handled on personal computers.

  6. Applying graph partitioning methods in measurement-based dynamic load balancing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatele, Abhinav; Fourestier, Sebastien; Menon, Harshitha

    Load imbalance leads to an increasing waste of resources as an application is scaled to more and more processors. Achieving the best parallel efficiency for a program requires optimal load balancing, which is an NP-hard problem. However, finding near-optimal solutions to this problem for complex computational science and engineering applications is becoming increasingly important. Charm++, a migratable-objects-based programming model, provides a measurement-based dynamic load balancing framework. This framework instruments and then migrates over-decomposed objects to balance computational load and communication at runtime. This paper explores the use of graph partitioning algorithms, traditionally used for partitioning physical domains/meshes, for measurement-based dynamic load balancing of parallel applications. In particular, we present repartitioning methods developed in a graph partitioning toolbox called SCOTCH that consider the previous mapping to minimize migration costs. We also discuss a new imbalance reduction algorithm for graphs with irregular load distributions. We compare several load balancing algorithms using microbenchmarks on Intrepid and Ranger and evaluate the effect of communication, number of cores and number of objects on the benefit achieved from load balancing. New algorithms developed in SCOTCH lead to better performance compared to the METIS partitioners for several cases, both in terms of application execution time and the number of objects migrated.
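
    As a minimal, illustrative contrast to the repartitioning algorithms described above, the Python sketch below rebalances measured object loads greedily while preferring to leave objects on their current processor; the function names, the 10% migration threshold, and the toy data are assumptions of this sketch, not part of Charm++ or SCOTCH.

```python
# Minimal sketch, not the SCOTCH/Charm++ repartitioners from the paper:
# greedy measurement-based rebalancing of over-decomposed objects across
# processors, preferring to keep an object on its current processor so that
# migration cost stays low. Names, threshold, and data are assumptions.

def rebalance(object_loads, current_mapping, num_procs):
    """object_loads: {obj: measured load}; current_mapping: {obj: processor}."""
    proc_load = [0.0] * num_procs
    new_mapping = {}
    # Place heavy objects first (longest-processing-time heuristic).
    for obj in sorted(object_loads, key=object_loads.get, reverse=True):
        load = object_loads[obj]
        lightest = min(range(num_procs), key=lambda p: proc_load[p])
        home = current_mapping[obj]
        # Stay home unless the lightest processor is clearly lighter; this
        # trades a little residual imbalance for fewer migrations.
        target = home if proc_load[home] <= proc_load[lightest] + 0.1 * load else lightest
        new_mapping[obj] = target
        proc_load[target] += load
    imbalance = max(proc_load) / (sum(proc_load) / num_procs)
    return new_mapping, imbalance

loads = {f"obj{i}": 1.0 + (i % 4) for i in range(16)}
mapping = {f"obj{i}": i % 4 for i in range(16)}
print(rebalance(loads, mapping, 4))
```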

  7. Closed Cycle Engine Program Used in Solar Dynamic Power Testing Effort

    NASA Technical Reports Server (NTRS)

    Ensworth, Clint B., III; McKissock, David B.

    1998-01-01

    NASA Lewis Research Center is testing the world's first integrated solar dynamic power system in a simulated space environment. This system converts solar thermal energy into electrical energy by using a closed-cycle gas turbine and alternator. A NASA-developed analysis code called the Closed Cycle Engine Program (CCEP) has been used for both pretest predictions and post-test analysis of system performance. The solar dynamic power system has a reflective concentrator that focuses solar thermal energy into a cavity receiver. The receiver is a heat exchanger that transfers the thermal power to a working fluid, an inert gas mixture of helium and xenon. The receiver also uses a phase-change material to store the thermal energy so that the system can continue producing power when there is no solar input power, such as when an Earth-orbiting satellite is in eclipse. The system uses a recuperated closed Brayton cycle to convert thermal power to mechanical power. Heated gas from the receiver expands through a turbine that turns an alternator and a compressor. The system also includes a gas cooler and a radiator, which reject waste cycle heat, and a recuperator, a gas-to-gas heat exchanger that improves cycle efficiency by recovering thermal energy.
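
    The cycle arithmetic behind such a recuperated Brayton converter can be sketched with ideal-gas relations; the temperatures, pressure ratio, component efficiencies, and recuperator effectiveness below are illustrative assumptions, not CCEP inputs or measured test data.

```python
# Back-of-the-envelope recuperated closed Brayton cycle efficiency.
# All numbers are illustrative assumptions, not CCEP inputs or test data.
def brayton_efficiency(T1, T3, pr, gamma=1.667, eta_c=0.85, eta_t=0.90, eps=0.90):
    """T1: compressor inlet [K], T3: turbine inlet [K], pr: pressure ratio,
    gamma ~ 5/3 for a He-Xe mixture, eps: recuperator effectiveness."""
    k = (gamma - 1.0) / gamma
    T2 = T1 * (1.0 + (pr**k - 1.0) / eta_c)        # compressor exit temperature
    T4 = T3 * (1.0 - eta_t * (1.0 - pr**(-k)))     # turbine exit temperature
    T2r = T2 + eps * (T4 - T2)                     # after recuperator (cold side)
    w_net = (T3 - T4) - (T2 - T1)                  # net work per unit cp*mdot
    q_in = T3 - T2r                                # heat added in the receiver
    return w_net / q_in

print(f"cycle efficiency ~ {brayton_efficiency(300.0, 1000.0, 2.0):.2%}")
```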

  8. Advanced secondary batteries: Their applications, technological status, market and opportunity

    NASA Astrophysics Data System (ADS)

    Yao, M.

    1989-03-01

    Program planning for advanced battery energy storage technology is supported within the NEMO Program. Specifically, this study focused on a review of advanced battery applications; the development and demonstration status of leading battery technologies; and the potential market opportunity. Advanced secondary (or rechargeable) batteries have been under development for the past two decades in the U.S., Japan, and parts of Europe for potential applications in electric utilities and for electric vehicles. In electric utility applications, the primary aim of a battery energy storage plant is to facilitate peak power load leveling and/or dynamic operations to minimize the overall power generation cost. In the application for peak power load leveling, the battery stores off-peak base load energy and is discharged during the period of peak power demand. This allows a more efficient use of the base load generation capacity and reduces the need for conventional oil-fired or gas-fired peak power generation equipment. Batteries can facilitate dynamic operations because of their basic characteristics as an electrochemical device capable of instantaneous response to the changing load. Dynamic operating benefits result in cost savings for overall power plant operation. Battery-powered electric vehicles facilitate conservation of petroleum fuel in the transportation sector, but more importantly, they reduce air pollution in congested inner cities.

  9. 42 CFR 420.410 - Establishment of a program to collect suggestions for improving Medicare program efficiency and...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... for improving Medicare program efficiency and to reward suggesters for monetary savings. 420.410... Program Efficiency and to Reward Suggesters for Monetary Savings § 420.410 Establishment of a program to collect suggestions for improving Medicare program efficiency and to reward suggesters for monetary...

  10. 42 CFR 420.410 - Establishment of a program to collect suggestions for improving Medicare program efficiency and...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... for improving Medicare program efficiency and to reward suggesters for monetary savings. 420.410... Program Efficiency and to Reward Suggesters for Monetary Savings § 420.410 Establishment of a program to collect suggestions for improving Medicare program efficiency and to reward suggesters for monetary...

  11. A Multi-step Transcriptional and Chromatin State Cascade Underlies Motor Neuron Programming from Embryonic Stem Cells.

    PubMed

    Velasco, Silvia; Ibrahim, Mahmoud M; Kakumanu, Akshay; Garipler, Görkem; Aydin, Begüm; Al-Sayegh, Mohamed Ahmed; Hirsekorn, Antje; Abdul-Rahman, Farah; Satija, Rahul; Ohler, Uwe; Mahony, Shaun; Mazzoni, Esteban O

    2017-02-02

    Direct cell programming via overexpression of transcription factors (TFs) aims to control cell fate with the degree of precision needed for clinical applications. However, the regulatory steps involved in successful terminal cell fate programming remain obscure. We have investigated the underlying mechanisms by looking at gene expression, chromatin states, and TF binding during the uniquely efficient Ngn2, Isl1, and Lhx3 motor neuron programming pathway. Our analysis reveals a highly dynamic process in which Ngn2 and the Isl1/Lhx3 pair initially engage distinct regulatory regions. Subsequently, Isl1/Lhx3 binding shifts from one set of targets to another, controlling regulatory region activity and gene expression as cell differentiation progresses. Binding of Isl1/Lhx3 to later motor neuron enhancers depends on the Ebf and Onecut TFs, which are induced by Ngn2 during the programming process. Thus, motor neuron programming is the product of two initially independent transcriptional modules that converge with a feedforward transcriptional logic. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Modeling Weather Impact on Ground Delay Programs

    NASA Technical Reports Server (NTRS)

    Wang, Yao; Kulkarni, Deepak

    2011-01-01

    Scheduled arriving aircraft demand may exceed airport arrival capacity when there is abnormal weather at an airport. In such situations, the Federal Aviation Administration (FAA) institutes ground delay programs (GDPs) to delay flights before they depart from their originating airports. Efficient GDP planning depends on the accuracy of prediction of airport capacity and demand in the presence of uncertainty in weather forecasts. This paper presents a study of the impact of dynamic airport surface weather on GDPs. Using the National Traffic Management Log, the effect of weather conditions on the characteristics of GDP events at selected busy airports is investigated. Two machine learning methods are used to generate models that map the airport operational conditions and weather information to issued GDP parameters, and the results of validation tests are described.
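
    The two machine-learning methods used in the paper are not spelled out here; as a stand-in, the sketch below fits an ordinary least-squares model from hypothetical weather and demand features to a GDP parameter such as the issued arrival rate, using synthetic data.

```python
# Illustrative stand-in for the paper's models: map airport weather/demand
# features to a GDP parameter (e.g., the issued arrival rate) by least squares.
# Feature names and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
# features: [visibility_mi, wind_kt, ceiling_100ft, scheduled_arrivals_per_hr]
X = rng.uniform([0.5, 0, 5, 30], [10, 35, 250, 90], size=(200, 4))
true_w = np.array([2.0, -0.4, 0.05, 0.3])
y = X @ true_w + 10 + rng.normal(0, 3, 200)      # synthetic GDP arrival rate

Xb = np.hstack([X, np.ones((200, 1))])           # add an intercept column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = Xb @ w
print("fitted weights:", np.round(w, 2))
print("RMSE:", round(float(np.sqrt(np.mean((pred - y) ** 2))), 2))
```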

  13. Versatile and declarative dynamic programming using pair algebras.

    PubMed

    Steffen, Peter; Giegerich, Robert

    2005-09-12

    Dynamic programming is a widely used programming technique in bioinformatics. In sharp contrast to the simplicity of textbook examples, implementing a dynamic programming algorithm for a novel and non-trivial application is a tedious and error prone task. The algebraic dynamic programming approach seeks to alleviate this situation by clearly separating the dynamic programming recurrences and scoring schemes. Based on this programming style, we introduce a generic product operation of scoring schemes. This leads to a remarkable variety of applications, allowing us to achieve optimizations under multiple objective functions, alternative solutions and backtracing, holistic search space analysis, ambiguity checking, and more, without additional programming effort. We demonstrate the method on several applications for RNA secondary structure prediction. The product operation as introduced here adds a significant amount of flexibility to dynamic programming. It provides a versatile testbed for the development of new algorithmic ideas, which can immediately be put to practice.
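
    The idea of evaluating one set of recurrences under interchangeable scoring algebras, and of pairing algebras with a product, can be illustrated on a toy problem; the sketch below uses global sequence alignment rather than RNA folding, and its algebra interface is an assumption of the sketch, not the published system's API. Pairing a minimizing algebra with a counting algebra yields the optimal score together with the number of co-optimal solutions.

```python
# Toy "algebraic dynamic programming" sketch: the same edit-distance
# recurrences evaluated under interchangeable scoring algebras, plus a
# product operation pairing an optimizing algebra with a counting algebra.
# The algebra interface here is an assumption, not the published ADP API.

class MinCost:                       # objective: minimum edit distance
    def empty(self): return 0
    def step(self, v, cost): return v + cost
    def choice(self, cands): return min(cands)

class Count:                         # objective: number of alignments
    def empty(self): return 1
    def step(self, v, cost): return v
    def choice(self, cands): return sum(cands)

class Product:                       # optimize under A, aggregate B over A-optimal candidates
    def __init__(self, A, B): self.A, self.B = A, B
    def empty(self): return (self.A.empty(), self.B.empty())
    def step(self, v, cost): return (self.A.step(v[0], cost), self.B.step(v[1], cost))
    def choice(self, cands):
        best = self.A.choice([a for a, _ in cands])
        return (best, self.B.choice([b for a, b in cands if a == best]))

def align(x, y, alg):
    D = [[None] * (len(y) + 1) for _ in range(len(x) + 1)]
    D[0][0] = alg.empty()
    for i in range(len(x) + 1):
        for j in range(len(y) + 1):
            if i == j == 0: continue
            cands = []
            if i and j: cands.append(alg.step(D[i-1][j-1], 0 if x[i-1] == y[j-1] else 1))
            if i:       cands.append(alg.step(D[i-1][j], 1))
            if j:       cands.append(alg.step(D[i][j-1], 1))
            D[i][j] = alg.choice(cands)
    return D[len(x)][len(y)]

# Minimum distance together with the number of co-optimal alignments:
print(align("dynamic", "dnamic", Product(MinCost(), Count())))
```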

  14. Increasing the computational efficiency of digital cross correlation by a vectorization method

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Yuan; Ma, Chien-Ching

    2017-08-01

    This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in a speedup of 6.387 and 36.044 times compared with performance values obtained from looped expression. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method as well as experiments to measure the quantitative transient displacement response subjected to dynamic impact loading. The experiment involved the use of a high speed camera as well as a fiber optic system to measure the transient displacement in a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domain, with discrepancies of only 0.68%. Numerical and experiment results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
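
    An analogous loop-versus-vectorization comparison can be written in Python/NumPy (the paper's code is MATLAB); the timings printed by this sketch are machine-dependent and will not reproduce the 6.387x and 36.044x figures quoted above.

```python
# Loop vs. vectorized cross-correlation of two 1-D signals.
# Python/NumPy stand-in for the paper's MATLAB code; timings will differ.
import numpy as np, time

def xcorr_loop(f, g):
    n = len(f) - len(g) + 1
    out = np.empty(n)
    for k in range(n):                 # explicit loop over lags
        out[k] = np.dot(f[k:k + len(g)], g)
    return out

def xcorr_vec(f, g):
    # Build all windows at once and correlate with a single matrix product.
    windows = np.lib.stride_tricks.sliding_window_view(f, len(g))
    return windows @ g

rng = np.random.default_rng(1)
f, g = rng.standard_normal(100_000), rng.standard_normal(512)
t0 = time.perf_counter(); a = xcorr_loop(f, g); t1 = time.perf_counter()
b = xcorr_vec(f, g);                              t2 = time.perf_counter()
print(np.allclose(a, b), f"loop {t1 - t0:.3f}s  vectorized {t2 - t1:.3f}s")
```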

  15. Dynamic Discharge Arc Driver. [computerized simulation

    NASA Technical Reports Server (NTRS)

    Dannenberg, R. E.; Slapnicar, P. I.

    1975-01-01

    A computer program using nonlinear RLC circuit analysis was developed to accurately model the electrical discharge performance of the Ames 1-MJ energy storage and arc-driver system. Solutions of circuit parameters are compared with experimental circuit data and related to shock speed measurements. Computer analysis led to the concept of a Dynamic Discharge Arc Driver (DDAD) capable of increasing the range of operation of shock-driven facilities. Utilization of mass addition of the driver gas offers a unique means of improving driver performance. Mass addition acts to increase the arc resistance, which results in better electrical circuit damping with more efficient Joule heating, producing stronger shock waves. Preliminary tests resulted in an increase in shock Mach number from 34 to 39 in air at an initial pressure of 2.5 torr.
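
    A toy numerical version of such a circuit model is sketched below: a capacitor bank discharging through an inductor into a resistance that rises over time, standing in for the arc under mass addition. The component values are arbitrary and do not correspond to the Ames 1-MJ system.

```python
# Toy nonlinear RLC discharge: capacitor bank driving an arc whose resistance
# grows over time (a crude stand-in for mass addition). Component values are
# arbitrary, not the Ames 1-MJ system parameters.
def simulate(C=0.01, L=2e-6, R0=5e-3, dR=20e-3, V0=10e3, dt=1e-7, t_end=2e-3):
    q, i = C * V0, 0.0               # capacitor charge, circuit current
    t, energy = 0.0, 0.0
    while t < t_end:
        R = R0 + dR * min(t / 1e-3, 1.0)     # arc resistance rising over ~1 ms
        di = (q / C - R * i) / L             # L di/dt = Vc - i R
        q -= i * dt
        i += di * dt
        energy += i * i * R * dt             # Joule heating deposited in the arc
        t += dt
    return energy, 0.5 * C * V0**2

deposited, stored = simulate()
print(f"energy into arc: {deposited/1e3:.1f} kJ of {stored/1e3:.1f} kJ stored")
```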

  16. The Roles and Regulation of Polycomb Complexes in Neural Development

    PubMed Central

    Corley, Matthew; Kroll, Kristen L.

    2014-01-01

    In the developing mammalian nervous system, common progenitors integrate both cell extrinsic and intrinsic regulatory programs to produce distinct neuronal and glial cell types as development proceeds. This spatiotemporal restriction of neural progenitor differentiation is enforced, in part, by the dynamic reorganization of chromatin into repressive domains by Polycomb Repressive Complexes, effectively limiting the expression of fate-determining genes. Here, we review distinct roles that the Polycomb Repressive Complexes play during neurogenesis and gliogenesis, while also highlighting recent work describing the molecular mechanisms that govern their dynamic activity in neural development. Further investigation of how Polycomb complexes are regulated in neural development will enable more precise manipulation of neural progenitor differentiation, facilitating the efficient generation of specific neuronal and glial cell types for many biological applications. PMID:25367430

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuypers, Marshall A.; Lambert, Gregory Joseph; Moore, Thomas W.

    Chronic infection with Hepatitis C virus (HCV) results in cirrhosis, liver cancer and death. As the nation's largest provider of care for HCV, the US Veterans Health Administration (VHA) invests extensive resources in the diagnosis and treatment of the disease. This report documents modeling and analysis of HCV treatment dynamics performed for the VHA aimed at improving service delivery efficiency. System dynamics modeling of disease treatment demonstrated the benefits of early detection and the role of comorbidities in disease progression and patient mortality. Preliminary modeling showed that adherence to rigorous treatment protocols is a primary determinant of treatment success. In-depth meta-analysis revealed correlations between adherence and various psycho-social factors. This initial meta-analysis indicates areas where substantial improvement in patient outcomes can potentially result from VA programs which incorporate these factors into their design.

  18. Las Palmeras Molecular Dynamics: A flexible and modular molecular dynamics code

    NASA Astrophysics Data System (ADS)

    Davis, Sergio; Loyola, Claudia; González, Felipe; Peralta, Joaquín

    2010-12-01

    Las Palmeras Molecular Dynamics (LPMD) is a highly modular and extensible molecular dynamics (MD) code using interatomic potential functions. LPMD is able to perform equilibrium MD simulations of bulk crystalline solids, amorphous solids and liquids, as well as non-equilibrium MD (NEMD) simulations such as shock wave propagation, projectile impacts, cluster collisions, shearing, deformation under load, heat conduction, heterogeneous melting, among others, which involve unusual MD features like non-moving atoms and walls, unstoppable constant-velocity atoms, and external forces like electric fields. LPMD is written in C++ as a compromise between efficiency and clarity of design, and its architecture is based on separate components or plug-ins, implemented as modules which are loaded on demand at runtime. The advantage of this architecture is the ability to link together the desired components involved in the simulation in different ways at runtime, using a user-friendly control file language which describes the simulation work-flow. As an added bonus, the plug-in API (Application Programming Interface) makes it possible to use the LPMD components to analyze data coming from other simulation packages, convert between input file formats, apply different transformations to saved MD atomic trajectories, and visualize dynamical processes either in real time or as a post-processing step. Individual components, such as a new potential function, a new integrator, a new file format, new properties to calculate, new real-time visualizers, and even a new algorithm for handling neighbor lists, can be easily coded, compiled and tested within LPMD by virtue of its object-oriented API, without the need to modify the rest of the code. LPMD already includes several pair potential functions such as Lennard-Jones, Morse, Buckingham, MCY and the harmonic potential, as well as embedded-atom model (EAM) functions such as the Sutton-Chen and Gupta potentials. Integrators to choose from include Euler (if only for demonstration purposes), Verlet and Velocity Verlet, Leapfrog and Beeman, among others. Electrostatic forces are treated as another potential function, by default using the plug-in implementing the Ewald summation method.
    Program summary
    Program title: LPMD
    Catalogue identifier: AEHG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License version 3
    No. of lines in distributed program, including test data, etc.: 509 490
    No. of bytes in distributed program, including test data, etc.: 6 814 754
    Distribution format: tar.gz
    Programming language: C++
    Computer: 32-bit and 64-bit workstation
    Operating system: UNIX
    RAM: Minimum 1024 bytes
    Classification: 7.7
    External routines: zlib, OpenGL
    Nature of problem: Study of Statistical Mechanics and Thermodynamics of condensed matter systems, as well as kinetics of non-equilibrium processes in the same systems.
    Solution method: Equilibrium and non-equilibrium molecular dynamics methods, Monte Carlo methods.
    Restrictions: Rigid molecules are not supported; neither are polarizable atoms or chemical bonds (proteins).
    Unusual features: The program is able to change the temperature of the simulation cell, the pressure, cut regions of the cell, and color the atoms by properties, even during the simulation. It is also possible to fix the positions and/or velocities of groups of atoms. Visualization of atoms and some physical properties is available during the simulation.
    Additional comments: The program does not only perform molecular dynamics and Monte Carlo simulations; it is also able to filter and manipulate atomic configurations, read and write different file formats, convert between them, and evaluate different structural and dynamical properties.
    Running time: 50 seconds for a 1000-step simulation of 4000 argon atoms, running on a single 2.67 GHz Intel processor.
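
    For readers unfamiliar with the basic integration loop that such codes implement, here is a minimal Lennard-Jones velocity-Verlet step in Python; it is an illustration only, with none of LPMD's plug-in architecture, neighbor lists, or additional potentials.

```python
# Minimal sketch of the core loop a general MD code implements: velocity-Verlet
# integration with a Lennard-Jones pair potential in reduced units, O(N^2)
# forces, no neighbor lists, no periodic boundaries. Purely illustrative.
import numpy as np

def lj_forces(pos, eps=1.0, sig=1.0):
    f = np.zeros_like(pos)
    pot = 0.0
    for i in range(len(pos) - 1):
        d = pos[i] - pos[i + 1:]                      # vectors to all later atoms
        r2 = np.einsum("ij,ij->i", d, d)
        inv6 = (sig * sig / r2) ** 3
        pot += np.sum(4.0 * eps * (inv6 * inv6 - inv6))
        fmag = 24.0 * eps * (2.0 * inv6 * inv6 - inv6) / r2
        fi = fmag[:, None] * d
        f[i] += fi.sum(axis=0)                        # Newton's third law
        f[i + 1:] -= fi
    return f, pot

def run(pos, vel, dt=0.005, steps=200):
    f, pot = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f
        pos += dt * vel
        f, pot = lj_forces(pos)
        vel += 0.5 * dt * f
    return pot

g = np.arange(3) * 1.5                                # 27 atoms on a cubic lattice
pos = np.array([[x, y, z] for x in g for y in g for z in g], dtype=float)
vel = np.random.default_rng(0).normal(0.0, 0.1, pos.shape)
print("final potential energy:", round(float(run(pos, vel)), 3))
```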

  19. Integration of treatment innovation planning and implementation: strategic process models and organizational challenges.

    PubMed

    Lehman, Wayne E K; Simpson, D Dwayne; Knight, Danica K; Flynn, Patrick M

    2011-06-01

    Sustained and effective use of evidence-based practices in substance abuse treatment services faces both clinical and contextual challenges. Implementation approaches are reviewed that rely on variations of plan-do-study-act (PDSA) cycles, but most emphasize conceptual identification of core components for system change strategies. A two-phase procedural approach is therefore presented based on the integration of Texas Christian University (TCU) models and related resources for improving treatment process and program change. Phase 1 focuses on the dynamics of clinical services, including stages of client recovery (cross-linked with targeted assessments and interventions), as the foundations for identifying and planning appropriate innovations to improve efficiency and effectiveness. Phase 2 shifts to the operational and organizational dynamics involved in implementing and sustaining innovations (including the stages of training, adoption, implementation, and practice). A comprehensive system of TCU assessments and interventions for client and program-level needs and functioning are summarized as well, with descriptions and guidelines for applications in practical settings. (PsycINFO Database Record (c) 2011 APA, all rights reserved).

  20. Supporting Dynamic Spectrum Access in Heterogeneous LTE+ Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luiz A. DaSilva; Ryan E. Irwin; Mike Benonis

    As early as 2014, mobile network operators’ spectral capacity is expected to be overwhelmed by the demand brought on by new devices and applications. With Long Term Evolution Advanced (LTE+) networks likely as the future one world 4G standard, network operators may need to deploy a Dynamic Spectrum Access (DSA) overlay in Heterogeneous Networks (HetNets) to extend coverage, increase spectrum efficiency, and increase the capacity of these networks. In this paper, we propose three new management frameworks for DSA in an LTE+ HetNet: Spectrum Accountability Client, Cell Spectrum Management, and Domain Spectrum Management. For these spectrum management frameworks, we define protocol interfaces and operational signaling scenarios to support cooperative sensing, spectrum lease management, and alarm scenarios for rule adjustment. We also quantify, through integer programs, the benefits of using DSA in an LTE+ HetNet that can opportunistically reuse vacant TV and GSM spectrum. Using integer programs, we consider a topology using Geographic Information System data from the Blacksburg, VA metro area to assess the realistic benefits of DSA in an LTE+ HetNet.

  1. Enumerating Substituted Benzene Isomers of Tree-Like Chemical Graphs.

    PubMed

    Li, Jinghui; Nagamochi, Hiroshi; Akutsu, Tatsuya

    2018-01-01

    Enumeration of chemical structures is useful for drug design, which is one of the main targets of computational biology and bioinformatics. A chemical graph with no cycles other than benzene rings is called tree-like; it becomes a tree, possibly with multiple edges, if each benzene ring is contracted into a single virtual atom of valence 6. All tree-like chemical graphs with a given tree representation are called its substituted benzene isomers. When each virtual atom is replaced with a benzene ring to obtain a substituted benzene isomer, distinct isomers arise from differences in the arrangement of atom groups around a benzene ring. In this paper, we propose an efficient algorithm that enumerates all substituted benzene isomers of a given tree representation. Our algorithm first counts the number of isomers of the tree representation by a dynamic programming method. To enumerate all the isomers, the algorithm then generates each isomer in turn by backtracking the counting phase of the dynamic programming. We also implemented our algorithm for computational experiments.
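
    The count-first, backtrack-to-generate pattern described above can be shown on a much simpler enumeration task; the sketch below counts binary strings with no two adjacent ones by dynamic programming and then reconstructs the k-th such string by walking the counting table. None of the chemistry-specific details of the paper are reproduced.

```python
# Count-then-backtrack enumeration, the same pattern as in the paper but on a
# toy problem: binary strings of length n with no two adjacent 1s, generated
# in lexicographic order by "unranking" against a DP counting table.
from functools import lru_cache

def make_counter(n):
    @lru_cache(maxsize=None)
    def count(i, prev):
        # Number of valid completions for positions i..n-1 given the previous bit.
        if i == n:
            return 1
        total = count(i + 1, 0)              # place a 0
        if prev == 0:
            total += count(i + 1, 1)         # a 1 is only allowed after a 0
        return total
    return count

def kth_string(n, k):
    count = make_counter(n)
    bits, prev = [], 0
    for i in range(n):
        zeros = count(i + 1, 0)              # completions if we put a 0 here
        if prev == 1 or k < zeros:
            bits.append("0"); prev = 0
        else:
            k -= zeros
            bits.append("1"); prev = 1
    return "".join(bits)

n = 5
total = make_counter(n)(0, 0)
print(total, [kth_string(n, k) for k in range(total)])
```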

  2. 77 FR 14509 - State Energy Program and Energy Efficiency and Conservation Block Grant (EECBG) Program; Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-12

    ... DEPARTMENT OF ENERGY [Docket No. EESEP0216] State Energy Program and Energy Efficiency and Conservation Block Grant (EECBG) Program; Request for Information AGENCY: Office of Energy Efficiency and... (SEP) and Energy Efficiency and Conservation Block Grant (EECBG) program, in support of energy...

  3. Sustained Energy Savings Achieved through Successful Industrial Customer Interaction with Ratepayer Programs: Case Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, Amelie; Hedman, Bruce; Taylor, Robert P.

    Many states have implemented ratepayer-funded programs to acquire energy efficiency as a predictable and reliable resource for meeting existing and future energy demand. These programs have become a fixture in many U.S. electricity and natural gas markets as they help postpone or eliminate the need for expensive generation and transmission investments. Industrial energy efficiency (IEE) is an energy efficiency resource that is not only a low-cost option for many of these efficiency programs, but also offers productivity and competitive benefits to manufacturers as it reduces their energy costs. However, some industrial customers are less enthusiastic about participating in these programs. IEE ratepayer programs suffer low participation by industries across many states today despite a continual increase in energy efficiency program spending across all types of customers, and significant energy efficiency funds can often go unused for industrial customers. This paper provides four detailed case studies of companies that benefited from participation in their utility’s energy efficiency program offerings and highlights the business value brought to them by participation in these programs. The paper is designed both for ratepayer efficiency program administrators interested in improving the attractiveness and effectiveness of industrial efficiency programs for their industrial customers and for industrial customers interested in maximizing the value of participating in efficiency programs.

  4. Adaption of a corrector module to the IMP dynamics program

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The corrector module of the RAEIOS program and the IMP dynamics computer program were combined to achieve a data-fitting capability with the more general spacecraft dynamics models of the IMP program. The IMP dynamics program presents models of spacecraft dynamics for satellites with long, flexible booms. The properties of the corrector are discussed and a description is presented of the performance criteria and search logic for parameter estimation. A description is also given of the modifications made to add the corrector to the IMP program. This includes subroutine descriptions, common definitions, definition of input, and a description of output.

  5. Large-scale hydropower system optimization using dynamic programming and object-oriented programming: the case of the Northeast China Power Grid.

    PubMed

    Li, Ji-Qing; Zhang, Yu-Shan; Ji, Chang-Ming; Wang, Ai-Jing; Lund, Jay R

    2013-01-01

    This paper examines long-term optimal operation using dynamic programming for a large hydropower system of 10 reservoirs in Northeast China. Besides considering flow and hydraulic head, the optimization explicitly includes time-varying electricity market prices to maximize benefit. Two techniques are used to reduce the 'curse of dimensionality' of dynamic programming with many reservoirs. Discrete differential dynamic programming (DDDP) reduces the search space and computer memory needed. Object-oriented programming (OOP) and the ability to dynamically allocate and release memory with the C++ language greatly reduces the cumulative effect of computer memory for solving multi-dimensional dynamic programming models. The case study shows that the model can reduce the 'curse of dimensionality' and achieve satisfactory results.
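
    A single-reservoir toy version of the state-space discretization these methods rely on is sketched below; the inflows, prices, and limits are invented, and nothing here reproduces the ten-reservoir DDDP/OOP implementation of the paper.

```python
# Toy single-reservoir dynamic program: choose releases over T periods to
# maximize revenue (release x price), with discretized storage as the state.
# Inflows, prices, and limits are invented for illustration only.
inflow = [4, 6, 8, 5, 3, 2]                    # water arriving each period
price  = [1.0, 1.2, 0.8, 1.5, 2.0, 1.7]
S_MAX, R_MAX, S0 = 10, 6, 5                    # storage cap, release cap, initial storage

T = len(inflow)
# value[t][s] = best future revenue from period t onward with storage s
value = [[0.0] * (S_MAX + 1) for _ in range(T + 1)]
best_r = [[0] * (S_MAX + 1) for _ in range(T)]
for t in range(T - 1, -1, -1):
    for s in range(S_MAX + 1):
        avail = min(s + inflow[t], S_MAX)            # water above the cap spills
        best = -1.0
        for r in range(0, min(R_MAX, avail) + 1):    # candidate releases
            v = r * price[t] + value[t + 1][avail - r]
            if v > best:
                best, best_r[t][s] = v, r
        value[t][s] = best

# Recover the optimal release schedule by walking the policy forward.
s, plan = S0, []
for t in range(T):
    r = best_r[t][s]
    plan.append(r)
    s = min(s + inflow[t], S_MAX) - r
print("releases:", plan, " revenue:", value[0][S0])
```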

  6. Essays on environmental, energy, and natural resource economics

    NASA Astrophysics Data System (ADS)

    Zhang, Fan

    My dissertation focuses on examining the interrelationship among the environment, energy and economic development. In the first essay, I explore the effects of increased uncertainty over future output prices, input costs and productivity levels on intertemporal emission permits trading. In a dynamic programming setting, a permit price is a convex function of each of these three sources of uncertainty. Increased uncertainty about future market conditions increases the expected permit price and causes risk-neutral firms to reduce ex ante emissions to smooth marginal abatement costs over time. Empirical analysis shows that increased price volatility induced by electricity market restructuring could explain 8-11% of the allowances banked during Phase I of the U.S. sulfur dioxide trading program. Numerical simulation suggests that high uncertainty may generate substantial initial compliance costs, thereby deterring new entrants and reducing efficiency; sharp emission spikes are also more likely to occur under industry-wide uncertainty shocks. In the second essay, I examine whether electricity restructuring improves the efficiency of U.S. nuclear power generation. Based on the full sample of 73 investor-owned nuclear plants in the United States from 1992 to 1998, I estimate cross-sectional and longitudinal efficiency changes associated with restructuring, at the plant level. Various modeling strategies are presented to deal with the policy endogeneity bias whereby high-cost plants are more likely to be restructured. Overall, I find a strikingly positive relationship between the multiple steps of restructuring and plant operating efficiency. In the third essay, I estimate the economic impact of China's national land conversion program on local farm-dependent economies. The impact of the program on 14 industrial sectors in Gansu province is investigated using an input-output model. Due to regulatory restrictions, the agricultural sector cannot automatically expand or shrink its land requirements in direct proportion to output changes. Therefore, I modify a standard input-output model to incorporate supply constraints on cropping activities. A spatially explicit analysis is also implemented in a geographical information system to capture the heterogeneous land productivity. The net cost of the conservation program is estimated to be a land rent of 487.21 per acre per year (1999).

  7. Investigation of High-alpha Lateral-directional Control Power Requirements for High-performance Aircraft

    NASA Technical Reports Server (NTRS)

    Foster, John V.; Ross, Holly M.; Ashley, Patrick A.

    1993-01-01

    Designers of the next-generation fighter and attack airplanes are faced with the requirements of good high-angle-of-attack maneuverability as well as efficient high speed cruise capability with low radar cross section (RCS) characteristics. As a result, they are challenged with the task of making critical design trades to achieve the desired levels of maneuverability and performance. This task has highlighted the need for comprehensive, flight-validated lateral-directional control power design guidelines for high angles of attack. A joint NASA/U.S. Navy study has been initiated to address this need and to investigate the complex flight dynamics characteristics and controls requirements for high-angle-of-attack lateral-directional maneuvering. A multi-year research program is underway which includes ground-based piloted simulation and flight validation. This paper will give a status update of this program that will include a program overview, description of test methodology and preliminary results.

  9. Neural dynamics in reconfigurable silicon.

    PubMed

    Basu, A; Ramakrishnan, S; Petre, C; Koziol, S; Brink, S; Hasler, P E

    2010-10-01

    A neuromorphic analog chip is presented that is capable of implementing massively parallel neural computations while retaining the programmability of digital systems. We show measurements from neurons with Hopf bifurcations and integrate and fire neurons, excitatory and inhibitory synapses, passive dendrite cables, coupled spiking neurons, and central pattern generators implemented on the chip. This chip provides a platform for not only simulating detailed neuron dynamics but also uses the same to interface with actual cells in applications such as a dynamic clamp. There are 28 computational analog blocks (CAB), each consisting of ion channels with tunable parameters, synapses, winner-take-all elements, current sources, transconductance amplifiers, and capacitors. There are four other CABs which have programmable bias generators. The programmability is achieved using floating gate transistors with on-chip programming control. The switch matrix for interconnecting the components in CABs also consists of floating-gate transistors. Emphasis is placed on replicating the detailed dynamics of computational neural models. Massive computational area efficiency is obtained by using the reconfigurable interconnect as synaptic weights, resulting in more than 50 000 possible 9-b accurate synapses in 9 mm(2).

  10. Competing dynamic phases of active polymer networks

    NASA Astrophysics Data System (ADS)

    Freedman, Simon; Banerjee, Shiladitya; Dinner, Aaron R.

    Recent experiments on in vitro reconstituted assemblies of F-actin, myosin-II motors, and cross-linking proteins show that tuning local network properties can change the fundamental biomechanical behavior of the system. For example, by varying cross-linker density and actin bundle rigidity, one can switch between contractile networks useful for reshaping cells, polarity-sorted networks ideal for directed molecular transport, and frustrated networks with robust structural properties. To efficiently investigate the dynamic phases of actomyosin networks, we developed a coarse-grained non-equilibrium molecular dynamics simulation of model semiflexible filaments, molecular motors, and cross-linkers with phenomenologically defined interactions. The simulation's accuracy was verified by benchmarking the mechanical properties of its individual components and collective behavior against experimental results at the molecular and network scales. By adjusting the model's parameters, we can reproduce the qualitative phases observed in experiment and predict the protein characteristics where phase crossovers could occur in collective network dynamics. Our model provides a framework for understanding cells' multiple uses of actomyosin networks and their applicability in materials research. Supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.

  11. Runtime Detection of C-Style Errors in UPC Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pirkelbauer, P; Liao, C; Panas, T

    2011-09-29

    Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.

  12. Random Testing and Model Checking: Building a Common Framework for Nondeterministic Exploration

    NASA Technical Reports Server (NTRS)

    Groce, Alex; Joshi, Rajeev

    2008-01-01

    Two popular forms of dynamic analysis, random testing and explicit-state software model checking, are perhaps best viewed as search strategies for exploring the state spaces introduced by nondeterminism in program inputs. We present an approach that enables this nondeterminism to be expressed in the SPIN model checker's PROMELA language, and then lets users generate either model checkers or random testers from a single harness for a tested C program. Our approach makes it easy to compare model checking and random testing for models with precisely the same input ranges and probabilities and allows us to mix random testing with model checking's exhaustive exploration of non-determinism. The PROMELA language, as intended in its design, serves as a convenient notation for expressing nondeterminism and mixing random choices with nondeterministic choices. We present and discuss a comparison of random testing and model checking. The results derive from using our framework to test a C program with an effectively infinite state space, a module in JPL's next Mars rover mission. More generally, we show how the ability of the SPIN model checker to call C code can be used to extend SPIN's features, and hope to inspire others to use the same methods to implement dynamic analyses that can make use of efficient state storage, matching, and backtracking.

  13. FINITE-STATE APPROXIMATIONS TO DENUMERABLE-STATE DYNAMIC PROGRAMS,

    DTIC Science & Technology

    AIR FORCE OPERATIONS, LOGISTICS), (*INVENTORY CONTROL, DYNAMIC PROGRAMMING), (*DYNAMIC PROGRAMMING, APPROXIMATION(MATHEMATICS)), INVENTORY CONTROL, DECISION MAKING, STOCHASTIC PROCESSES, GAME THEORY, ALGORITHMS, CONVERGENCE

  14. Numerical simulation of hypersonic inlet flows with equilibrium or finite rate chemistry

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao; Hsieh, Kwang-Chung; Shuen, Jian-Shun; Mcbride, Bonnie J.

    1988-01-01

    An efficient numerical program incorporated with comprehensive high temperature gas property models has been developed to simulate hypersonic inlet flows. The computer program employs an implicit lower-upper time marching scheme to solve the two-dimensional Navier-Stokes equations with variable thermodynamic and transport properties. Both finite-rate and local-equilibrium approaches are adopted in the chemical reaction model for dissociation and ionization of the inlet air. In the finite rate approach, eleven species equations coupled with fluid dynamic equations are solved simultaneously. In the local-equilibrium approach, instead of solving species equations, an efficient chemical equilibrium package has been developed and incorporated into the flow code to obtain chemical compositions directly. Gas properties for the reaction products species are calculated by methods of statistical mechanics and fit to a polynomial form for C(p). In the present study, since the chemical reaction time is comparable to the flow residence time, the local-equilibrium model underpredicts the temperature in the shock layer. Significant differences of predicted chemical compositions in shock layer between finite rate and local-equilibrium approaches have been observed.

  15. Optimizing legacy molecular dynamics software with directive-based offload

    DOE PAGES

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; ...

    2015-05-14

    Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In our paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We also demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel (R) Xeon Phi (TM) coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS. (C) 2015 Elsevier B.V. All rights reserved.

  16. Modification of the surface adsorption properties of alumina-supported Pd catalysts for the electrocatalytic hydrogenation of phenol.

    PubMed

    Cirtiu, Ciprian Mihai; Hassani, Hicham Oudghiri; Bouchard, Nicolas-Alexandre; Rowntree, Paul A; Ménard, Hugues

    2006-07-04

    The electrocatalytic hydrogenation (ECH) of phenol has been studied using palladium supported on gamma-alumina (10% Pd-Al2O3) catalysts. The catalyst powders were suspended in aqueous supporting electrolyte solutions containing methanol and short-chain aliphatic acids (acetic acid, propionic acid, or butyric acid) and were dynamically circulated through a reticulated vitreous carbon cathode. The efficiency of the hydrogenation process was measured as a function of the total electrolytic charge and was compared for different types of supporting electrolyte and for various solvent compositions. Our results show that these experimental parameters strongly affect the overall ECH efficiency of phenol. The ECH efficiency and yields vary inversely with the quantity of methanol present in the electrolytic solutions, whereas the presence of aliphatic carboxylic acids increased the ECH efficiency in proportion to the chain length of the specific acids employed. In all cases, ECH efficiency was directly correlated with the adsorption properties of phenol onto the Pd-alumina catalyst in the studied electrolyte solution, as measured independently using dynamic adsorption isotherms. It is shown that the alumina surface binds the aliphatic acids via the carboxylate terminations and transforms the catalyst into an organically functionalized material. Temperature-programmed mass spectrometry analysis and diffuse-reflectance infrared spectroscopy measurements confirm that the organic acids are stably bound to the alumina surface below 200 degrees C, with coverages that are independent of the acid chain length. These reproducibly functionalized alumina surfaces control the adsorption/desorption equilibrium of the target phenol molecules and allow us to prepare new electrocatalytic materials to enhance the efficiency of the ECH process. The in situ grafting of specific aliphatic acids on general purpose Pd-alumina catalysts offers a new and flexible mechanism to control the ECH process to enhance the selectivity, efficiency, and yields according to the properties of the specific target molecule.

  17. A Change Impact Analysis to Characterize Evolving Program Behaviors

    NASA Technical Reports Server (NTRS)

    Rungta, Neha Shyam; Person, Suzette; Branchaud, Joshua

    2012-01-01

    Change impact analysis techniques estimate the potential effects of changes made to software. Directed Incremental Symbolic Execution (DiSE) is an intraprocedural technique for characterizing the impact of software changes on program behaviors. DiSE first estimates the impact of the changes on the source code using program slicing techniques, and then uses the impact sets to guide symbolic execution to generate path conditions that characterize impacted program behaviors. DiSE, however, cannot reason about the flow of impact between methods and will fail to generate path conditions for certain impacted program behaviors. In this work, we present iDiSE, an extension to DiSE that performs an interprocedural analysis. iDiSE combines static and dynamic calling context information to efficiently generate impacted program behaviors across calling contexts. Information about impacted program behaviors is useful for testing, verification, and debugging of evolving programs. We present a case study of our implementation of the iDiSE algorithm to demonstrate its efficiency at computing impacted program behaviors. Traditional notions of coverage are insufficient for characterizing the testing efforts used to validate evolving program behaviors because they do not take into account the impact of changes to the code. In this work we present novel definitions of impacted coverage metrics that are useful for evaluating the testing effort required to test evolving programs. We then describe how the notions of impacted coverage can be used to configure techniques such as DiSE and iDiSE in order to support regression-testing-related tasks. We also discuss how DiSE and iDiSE can be configured for debugging, that is, finding the root cause of errors introduced by changes made to the code. In our empirical evaluation we demonstrate that the configurations of DiSE and iDiSE can be used to support various software maintenance tasks.

  18. Comparison of collectors of airborne spray drift. Experiments in a wind tunnel and field measurements.

    PubMed

    Arvidsson, Tommy; Bergström, Lars; Kreuger, Jenny

    2011-06-01

    In this study, the collecting efficiency of different samplers of airborne drift was compared both in wind tunnel and in field experiments. The aim was to select an appropriate sampler for collecting airborne spray drift under field conditions. The wind tunnel study examined three static samplers and one dynamic sampler. The dynamic sampler had the highest overall collecting efficiency. Among the static samplers, the pipe cleaner collector had the highest efficiency. These two samplers were selected for evaluation in the subsequent field study. Results from 29 individual field experiments showed that the pipe cleaner collector on average had a 10% lower collecting efficiency than the dynamic sampler. However, the deposits on the pipe cleaners generally were highest at the 0.5 m level, and for the dynamic sampler at the 1 m level. It was concluded from the wind tunnel part of the study that the amount of drift collected on the static collectors had a more strongly positive correlation with increasing wind speed compared with the dynamic sampler. In the field study, the difference in efficiency between the two types of collector was fairly small. As the difference in collecting efficiency between the different types of sampler was small, the dynamic sampler was selected for further measurements of airborne drift under field conditions owing to its more well-defined collecting area. This study of collecting efficiency of airborne spray drift of static and dynamic samplers under field conditions contributes to increasing knowledge in this field of research. Copyright © 2011 Society of Chemical Industry.

  19. Parallel aeroelastic computations for wing and wing-body configurations

    NASA Technical Reports Server (NTRS)

    Byun, Chansup

    1994-01-01

    The objective of this research is to develop computationally efficient methods for solving fluid-structural interaction problems by directly coupling finite difference Euler/Navier-Stokes equations for fluids and finite element dynamics equations for structures on parallel computers. This capability will significantly impact many aerospace projects of national importance such as Advanced Subsonic Civil Transport (ASCT), where the structural stability margin becomes very critical at the transonic region. This research effort will have direct impact on the High Performance Computing and Communication (HPCC) Program of NASA in the area of parallel computing.

  20. Operation of the Institute for Computer Applications in Science and Engineering

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The ICASE research program is described in detail; it consists of four major categories: (1) efficient use of vector and parallel computers, with particular emphasis on the CDC STAR-100; (2) numerical analysis, with particular emphasis on the development and analysis of basic numerical algorithms; (3) analysis and planning of large-scale software systems; and (4) computational research in engineering and the natural sciences, with particular emphasis on fluid dynamics. The work in each of these areas is described in detail; other activities are discussed, and a prognosis of future activities is included.

  1. QuantumOptics.jl: A Julia framework for simulating open quantum systems

    NASA Astrophysics Data System (ADS)

    Krämer, Sebastian; Plankensteiner, David; Ostermann, Laurin; Ritsch, Helmut

    2018-06-01

    We present an open source computational framework geared towards the efficient numerical investigation of open quantum systems written in the Julia programming language. Built exclusively in Julia and based on standard quantum optics notation, the toolbox offers speed comparable to low-level statically typed languages, without compromising on the accessibility and code readability found in dynamic languages. After introducing the framework, we highlight its features and showcase implementations of generic quantum models. Finally, we compare its usability and performance to two well-established and widely used numerical quantum libraries.

  2. Approximation algorithms for scheduling unrelated parallel machines with release dates

    NASA Astrophysics Data System (ADS)

    Avdeenko, T. V.; Mesentsev, Y. A.; Estraykh, I. V.

    2017-01-01

    In this paper we propose approaches to optimal scheduling of unrelated parallel machines with release dates. One approach is based on a dynamic programming scheme modified with adaptive narrowing of the search domain to ensure computational effectiveness. We discuss the complexity of exact schedule synthesis and compare it with approximate, close-to-optimal solutions. We also explain how the algorithm works on an example with two unrelated parallel machines and five jobs with release dates. Performance results showing the efficiency of the proposed approach are given.
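
    For the stated problem size, an exact dynamic program over job subsets and machine-ready times is already tractable; the sketch below assumes a makespan objective (the paper's objective function and its adaptive narrowing of the search domain are not reproduced), and the release dates and processing times are invented.

```python
# Exact DP over subsets for scheduling jobs on two unrelated machines with
# release dates, minimizing makespan (an assumed objective). This is the
# plain exponential DP; the paper's adaptive narrowing is not reproduced.
from functools import lru_cache

release = (0, 2, 2, 5, 7)                        # r_j for five jobs
proc = ((3, 5), (4, 2), (6, 3), (2, 4), (3, 3))  # proc[j][m]: time of job j on machine m

@lru_cache(maxsize=None)
def best(scheduled, t0, t1):
    """Minimum makespan given the set of already-scheduled jobs (bitmask) and
    the times at which machines 0 and 1 next become free."""
    n = len(release)
    if scheduled == (1 << n) - 1:
        return max(t0, t1)
    result = float("inf")
    for j in range(n):
        if scheduled & (1 << j):
            continue
        # Try job j next on each machine, waiting for its release date.
        f0 = max(t0, release[j]) + proc[j][0]
        f1 = max(t1, release[j]) + proc[j][1]
        result = min(result,
                     best(scheduled | (1 << j), f0, t1),
                     best(scheduled | (1 << j), t0, f1))
    return result

print("optimal makespan:", best(0, 0, 0))
```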

  3. Demonstration of Efficient Core Heating of Magnetized Fast Ignition in FIREX project

    NASA Astrophysics Data System (ADS)

    Johzaki, Tomoyuki

    2017-10-01

    Extensive theoretical and experimental research in the FIREX-I project over the past decade revealed that the large angular divergence of the laser-generated electron beam is one of the most critical problems inhibiting efficient core heating in electron-driven fast ignition. To solve this problem, beam guiding using an externally applied kilo-tesla-class magnetic field was proposed, and its feasibility has recently been numerically demonstrated. In 2016, integrated experiments at ILE, Osaka University demonstrated core heating efficiencies exceeding 5% and heated core temperatures of 1.7 keV. In these experiments, a kilo-tesla-class magnetic field was applied to a cone-attached Cu(II) oleate spherical solid target by using a laser-driven capacitor-coil. The target was then imploded by the G-XII laser and heated by the PW-class LFEX laser. The heating efficiency was evaluated by measuring the number of Cu Kα photons emitted. The heated core temperature was estimated from the X-ray intensity ratio of Cu Li-like and He-like emission lines. To understand the detailed dynamics of the core heating process, we carried out integrated simulations using the FI3 code system. Effects of magnetic fields on the implosion and electron beam transport, detailed core heating dynamics, and the resultant heating efficiency and core temperature will be presented. I will also discuss the prospect for an ignition-scale design of magnetized fast ignition using a solid ball target. This work is partially supported by JSPS KAKENHI Grant Number JP16H02245, JP26400532, JP15K21767, JP26400532, JP16K05638 and is performed with the support and the auspices of the NIFS Collaboration Research program (NIFS12KUGK057, NIFS15KUGK087).

  4. Research of an emergency medical system for mass casualty incidents in Shanghai, China: a system dynamics model

    PubMed Central

    Liu, Xu; Chen, Haiping; Xue, Chen

    2018-01-01

    Objectives: Emergency medical systems for mass casualty incidents (EMS-MCIs) are a global issue. However, such studies are extremely scarce in China, which leaves the need for a rapid decision-support system unmet. This study aims to model EMS-MCIs in Shanghai, to improve mass casualty incident (MCI) rescue efficiency in China, and to provide a possible method for making rapid rescue decisions during MCIs. Methods: This study established a system dynamics (SD) model of EMS-MCIs using the Vensim DSS program. Intervention scenarios were designed by adjusting the scale of MCIs, the allocation of ambulances, the allocation of emergency medical staff, and the efficiency of organization and command. Results: Mortality increased with the increasing scale of MCIs; the medical rescue capability of hospitals was relatively good, but the efficiency of organization and command was poor, and the prehospital time was too long. Mortality declined significantly when increasing ambulances and improving the efficiency of organization and command; triage and on-site first-aid time were shortened by increasing the availability of emergency medical staff. The effect was most evident when 2,000 people were involved in MCIs; however, the influence was very small at the scale of 5,000 people. Conclusion: The keys to decreasing the mortality of MCIs were shortening the prehospital time and improving the efficiency of organization and command. For small-scale MCIs, improving the utilization rate of health resources was important in decreasing mortality. For large-scale MCIs, increasing the number of ambulances and emergency medical professionals was the core means of decreasing prehospital time and mortality. For super-large-scale MCIs, increasing health resources was the premise. PMID:29440876
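
    The Vensim model itself is not available here; as a minimal stock-and-flow sketch of the same kind of structure (casualties waiting on scene, transported patients, deaths), the Python fragment below varies only the number of ambulances, with all rate parameters invented for illustration.

```python
# Minimal stock-and-flow sketch in the spirit of a system dynamics model:
# casualties wait on scene, ambulances transport them to hospitals, and
# untreated casualties die at some rate. All parameters are invented; this
# is not the paper's Vensim model.
def simulate(casualties=2000, ambulances=40, trip_min=30.0,
             death_rate_per_min=0.002, dt=1.0, horizon=720):
    on_scene, in_hospital, deaths = float(casualties), 0.0, 0.0
    transport_capacity = ambulances / trip_min     # patients moved per minute
    t = 0.0
    while t < horizon and on_scene > 0.1:
        transported = min(transport_capacity, on_scene / dt) * dt
        died = death_rate_per_min * on_scene * dt
        on_scene -= transported + died
        in_hospital += transported
        deaths += died
        t += dt
    return deaths, in_hospital, t

for amb in (20, 40, 80):
    d, h, t = simulate(ambulances=amb)
    print(f"{amb:3d} ambulances -> deaths {d:6.0f}, treated {h:6.0f}, cleared by {t:4.0f} min")
```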

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The report is an overview of electric energy efficiency programs. It takes a concise look at what states are doing to encourage energy efficiency and how it impacts electric utilities. Energy efficiency programs began to be offered by utilities as a response to the energy crises of the 1970s. These regulatory-driven programs peaked in the early 1990s and then tapered off as deregulation took hold. Today, rising electricity prices, environmental concerns, and national security issues have renewed interest in increasing energy efficiency as an alternative to additional supply. In response, new methods for administering, managing, and delivering energy efficiency programs are being implemented. Topics covered in the report include: analysis of the benefits of energy efficiency and key methods for achieving energy efficiency; evaluation of the business drivers spurring increased energy efficiency; discussion of the major barriers to expanding energy efficiency programs; evaluation of the economic impacts of energy efficiency; discussion of the history of electric utility energy efficiency efforts; analysis of the impact of energy efficiency on utility profits and methods for protecting profitability; discussion of non-utility management of energy efficiency programs; evaluation of major methods to spur energy efficiency - systems benefit charges, resource planning, and resource standards; and analysis of the alternatives for encouraging customer participation in energy efficiency programs.

  6. Efficient GO2/GH2 Injector Design: A NASA, Industry and University Cooperative Effort

    NASA Technical Reports Server (NTRS)

    Tucker, P. K.; Klem, M. D.; Fisher, S. C.; Santoro, R. J.

    1997-01-01

    Developing new propulsion components in the face of shrinking budgets presents a significant challenge. The technical, schedule and funding issues common to any design/development program are complicated by the ramifications of the continuing decrease in funding for the aerospace industry. As a result, new working arrangements are evolving in the rocket industry. This paper documents a successful NASA, industry, and university cooperative effort to design efficient high performance GO2/GH2 rocket injector elements in the current budget environment. The NASA Reusable Launch Vehicle (RLV) Program initially consisted of three vehicle/engine concepts targeted at achieving single stage to orbit. One of the Rocketdyne propulsion concepts, the RS 2100 engine, used a full-flow staged-combustion cycle. Therefore, the RS 2100 main injector would combust GO2/GH 2 propellants. Early in the design phase, but after budget levels and contractual arrangements had been set the limitations of the current gas/gas injector database were identified. Most of the relevant information was at least twenty years old. Designing high performance injectors to meet the RS 2100 requirements would require the database to be updated and significantly enhanced. However, there was no funding available to address the need for more data. NASA proposed a teaming arrangement to acquire the updated information without additional funds from the RLV Program. A determination of the types and amounts of data needed was made along with test facilities with capabilities to meet the data requirements, budget constraints, and schedule. After several iterations a program was finalized and a team established to satisfy the program goals. The Gas/Gas Injector Technology (GGIT) Program had the overall goal of increasing the ability of the rocket engine community to design efficient high-performance, durable gas/gas injectors relevant to RLV requirements. First, the program would provide Rocketdyne with data on preliminary gas/gas injector designs which would enable discrimination among candidate injector designs. Secondly, the program would enhance the national gas/gas database by obtaining high-quality data that increases the understanding of gas/gas injector physics and is suitable for computational fluid dynamics (CFD) code validation. The third program objective was to validate CFD codes for future gas/gas injector design in the RLV program.

  7. Efficient implementation of the many-body Reactive Bond Order (REBO) potential on GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trędak, Przemysław, E-mail: przemyslaw.tredak@fuw.edu.pl; Rudnicki, Witold R.; Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw, ul. Pawińskiego 5a, 02-106 Warsaw

    The second-generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming model. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve difficult synchronization issues that arise in computations of a multi-body potential. Techniques developed for this problem may also be used to achieve efficient solutions to other problems. The performance of the proposed algorithm is assessed using a range of model systems and compared to the highly optimized CPU implementation (both single-core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in force computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.

  8. 800 Hours of Operational Experience from a 2 kW(sub e) Solar Dynamic System

    NASA Technical Reports Server (NTRS)

    Shaltens, Richard K.; Mason, Lee S.

    1999-01-01

    From December 1994 to September 1998, testing with a 2 kW(sub e) Solar Dynamic power system resulted in 33 individual tests, 886 hours of solar heating, and 783 hours of power generation. Power generation ranged from 400 watts to over 2 kW(sub e), and SD system efficiencies of up to 17 percent were measured during simulated low-Earth-orbit operation. Further, the turbo-alternator-compressors successfully completed 100 start/stops on foil bearings. Operation was conducted in a large thermal/vacuum facility with a simulated Sun at the NASA Lewis Research Center. The Solar Dynamic system featured a closed Brayton conversion unit integrated with a solar heat receiver, which included thermal energy storage for continuous power output through a typical low-Earth orbit. Two power conversion units and three alternator configurations were used during testing. This paper will review the test program, provide operational and performance data, and review a number of technology issues.

  9. Distributed Aerodynamic Sensing and Processing Toolbox

    NASA Technical Reports Server (NTRS)

    Brenner, Martin; Jutte, Christine; Mangalam, Arun

    2011-01-01

    A Distributed Aerodynamic Sensing and Processing (DASP) toolbox was designed and fabricated for flight test applications with an Aerostructures Test Wing (ATW) mounted under the fuselage of an F-15B on the Flight Test Fixture (FTF). DASP monitors and processes the aerodynamics together with the structural dynamics using nonintrusive, surface-mounted, hot-film sensing. This aerodynamic measurement tool benefits programs devoted to static/dynamic load alleviation, body freedom flutter suppression, buffet control, improvement of aerodynamic efficiency through cruise control, supersonic wave drag reduction through shock control, etc. The DASP toolbox measures local and global unsteady aerodynamic load distribution with distributed sensing. It determines the correlation between aerodynamic observables (aero forces) and structural dynamics, and allows increased control authority through aeroelastic shaping and active flow control. It offers improvements in flutter suppression and, in particular, body freedom flutter suppression, as well as in the aerodynamic performance of wings for increased range/endurance of manned/unmanned flight vehicles. Other improvements include inlet performance with closed-loop active flow control, and development and validation of advanced analytical and computational tools for unsteady aerodynamics.

  10. Three-dimensional structural dynamics of DNA origami Bennett linkages using individual-particle electron tomography

    DOE PAGES

    Lei, Dongsheng; Marras, Alexander E.; Liu, Jianfang; ...

    2018-02-09

    Scaffolded DNA origami has proven to be a powerful and efficient technique to fabricate functional nanomachines by programming the folding of a single-stranded DNA template strand into three-dimensional (3D) nanostructures, designed to be precisely motion-controlled. Although two-dimensional (2D) imaging of DNA nanomachines using transmission electron microscopy and atomic force microscopy suggested these nanomachines are dynamic in 3D, geometric analysis based on 2D imaging was insufficient to uncover the exact motion in 3D. In this paper, we use the individual-particle electron tomography method and reconstruct 129 density maps from 129 individual DNA origami Bennett linkage mechanisms at ~6-14 nm resolution. The statistical analyses of these conformations lead to understanding the 3D structural dynamics of Bennett linkage mechanisms. Moreover, our effort provides experimental verification of a theoretical kinematics model of DNA origami, which can be used as feedback to improve the design and control of motion via optimized DNA sequences and routing.

  11. Three-dimensional structural dynamics of DNA origami Bennett linkages using individual-particle electron tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Dongsheng; Marras, Alexander E.; Liu, Jianfang

    Scaffolded DNA origami has proven to be a powerful and efficient technique to fabricate functional nanomachines by programming the folding of a single-stranded DNA template strand into three-dimensional (3D) nanostructures, designed to be precisely motion-controlled. Although two-dimensional (2D) imaging of DNA nanomachines using transmission electron microscopy and atomic force microscopy suggested these nanomachines are dynamic in 3D, geometric analysis based on 2D imaging was insufficient to uncover the exact motion in 3D. In this paper, we use the individual-particle electron tomography method and reconstruct 129 density maps from 129 individual DNA origami Bennett linkage mechanisms at ~6-14 nm resolution. The statistical analyses of these conformations lead to understanding the 3D structural dynamics of Bennett linkage mechanisms. Moreover, our effort provides experimental verification of a theoretical kinematics model of DNA origami, which can be used as feedback to improve the design and control of motion via optimized DNA sequences and routing.

  12. Classical Molecular Dynamics with Mobile Protons.

    PubMed

    Lazaridis, Themis; Hummer, Gerhard

    2017-11-27

    An important limitation of standard classical molecular dynamics simulations is the inability to make or break chemical bonds. This restricts severely our ability to study processes that involve even the simplest of chemical reactions, the transfer of a proton. Existing approaches for allowing proton transfer in the context of classical mechanics are rather cumbersome and have not achieved widespread use and routine status. Here we reconsider the combination of molecular dynamics with periodic stochastic proton hops. To ensure computational efficiency, we propose a non-Boltzmann acceptance criterion that is heuristically adjusted to maintain the correct or desirable thermodynamic equilibria between different protonation states and proton transfer rates. Parameters are proposed for hydronium, Asp, Glu, and His. The algorithm is implemented in the program CHARMM and tested on proton diffusion in bulk water and carbon nanotubes and on proton conductance in the gramicidin A channel. Using hopping parameters determined from proton diffusion in bulk water, the model reproduces the enhanced proton diffusivity in carbon nanotubes and gives a reasonable estimate of the proton conductance in gramicidin A.
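
    The periodic-hop scheme lends itself to a compact illustration. The sketch below is a hypothetical rendering of the idea, a Metropolis-style acceptance test with a heuristic bias term standing in for the non-Boltzmann criterion described above; the function names, the bias parameter, and the kcal/mol energy scale are assumptions for this example, not the CHARMM implementation.

```python
import math
import random

KT_300K = 0.596  # kT at ~300 K in kcal/mol (assumed energy scale for this sketch)

def accept_hop(e_before, e_after, bias):
    """Metropolis-style test for a trial proton hop between donor and acceptor.

    e_before / e_after are classical energies of the current and trial
    protonation states; 'bias' is a heuristic, state-pair-specific offset
    standing in for the non-Boltzmann correction described in the abstract
    (its value would be tuned to reproduce known equilibria and transfer rates).
    """
    delta = e_after - e_before + bias
    return delta <= 0.0 or random.random() < math.exp(-delta / KT_300K)

def md_with_hops(n_cycles, run_md_block, propose_hop, bias):
    """Alternate ordinary classical MD with periodic stochastic proton-hop attempts."""
    for _ in range(n_cycles):
        state = run_md_block()                  # plain classical MD for a fixed interval
        e0, e1, apply_hop = propose_hop(state)  # energies of current / trial topologies
        if accept_hop(e0, e1, bias):
            apply_hop()                         # rebond: move H+ and swap force-field types
```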

  13. OFF, Open source Finite volume Fluid dynamics code: A free, high-order solver based on parallel, modular, object-oriented Fortran API

    NASA Astrophysics Data System (ADS)

    Zaghi, S.

    2014-07-01

    OFF, an open source (free software) code for performing fluid dynamics simulations, is presented. The aim of OFF is to solve, numerically, the unsteady (and steady) compressible Navier-Stokes equations of fluid dynamics by means of finite volume techniques: the research background is mainly focused on high-order (WENO) schemes for multi-fluid, multi-phase flows over complex geometries. To this purpose a highly modular, object-oriented application program interface (API) has been developed. In particular, the concepts of data encapsulation and inheritance available within the Fortran language (from standard 2003) have been stressed in order to represent each fluid dynamics "entity" (e.g. the conservative variables of a finite volume, its geometry, etc.) by a single object so that a large variety of computational libraries can be easily (and efficiently) developed upon these objects. The main features of OFF can be summarized as follows. Programming language: OFF is written in standard (compliant) Fortran 2003; its design is highly modular in order to enhance simplicity of use and maintenance without compromising efficiency. Parallel frameworks supported: the development of OFF has also been targeted at maximizing computational efficiency; the code is designed to run on shared-memory multi-core workstations and distributed-memory clusters of shared-memory nodes (supercomputers); the code's parallelization is based on the Open Multiprocessing (OpenMP) and Message Passing Interface (MPI) paradigms. Usability, maintenance and enhancement: in order to improve the usability, maintenance and enhancement of the code, the documentation has also been carefully taken into account; the documentation is built upon comprehensive comments placed directly into the source files (no external documentation files needed); these comments are parsed by means of the doxygen free software, producing high-quality html and latex documentation pages; the distributed versioning system referred to as git has been adopted in order to facilitate the collaborative maintenance and improvement of the code. Copyrights: OFF is free software that anyone can use, copy, distribute, study, change and improve under the GNU Public License version 3. The present paper is a manifesto of the OFF code and presents the currently implemented features and ongoing developments. This work is focused on the computational techniques adopted, and a detailed description of the main API characteristics is reported. OFF capabilities are demonstrated by means of one- and two-dimensional examples and a three-dimensional real application.

  14. Efficient implementation of the many-body Reactive Bond Order (REBO) potential on GPU

    NASA Astrophysics Data System (ADS)

    Trędak, Przemysław; Rudnicki, Witold R.; Majewski, Jacek A.

    2016-09-01

    The second-generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming model. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve difficult synchronization issues that arise in computations of a multi-body potential. Techniques developed for this problem may also be used to achieve efficient solutions to other problems. The performance of the proposed algorithm is assessed using a range of model systems and compared to the highly optimized CPU implementation (both single-core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in force computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.

  15. QuTiP 2: A Python framework for the dynamics of open quantum systems

    NASA Astrophysics Data System (ADS)

    Johansson, J. R.; Nation, P. D.; Nori, Franco

    2013-04-01

    We present version 2 of QuTiP, the Quantum Toolbox in Python. Compared to the preceding version [J.R. Johansson, P.D. Nation, F. Nori, Comput. Phys. Commun. 183 (2012) 1760.], we have introduced numerous new features, enhanced performance, and made changes in the Application Programming Interface (API) for improved functionality and consistency within the package, as well as increased compatibility with existing conventions used in other scientific software packages for Python. The most significant new features include efficient solvers for arbitrary time-dependent Hamiltonians and collapse operators, support for the Floquet formalism, and new solvers for Bloch-Redfield and Floquet-Markov master equations. Here we introduce these new features, demonstrate their use, and give a summary of the important backward-incompatible API changes introduced in this version. Catalog identifier: AEMB_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMB_v2_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 33625 No. of bytes in distributed program, including test data, etc.: 410064 Distribution format: tar.gz Programming language: Python. Computer: i386, x86-64. Operating system: Linux, Mac OSX. RAM: 2+ Gigabytes Classification: 7. External routines: NumPy, SciPy, Matplotlib, Cython Catalog identifier of previous version: AEMB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 1760 Does the new version supersede the previous version?: Yes Nature of problem: Dynamics of open quantum systems Solution method: Numerical solutions to Lindblad, Floquet-Markov, and Bloch-Redfield master equations, as well as the Monte Carlo wave function method. Reasons for new version: Compared to the preceding version we have introduced numerous new features, enhanced performance, and made changes in the Application Programming Interface (API) for improved functionality and consistency within the package, as well as increased compatibility with existing conventions used in other scientific software packages for Python. The most significant new features include efficient solvers for arbitrary time-dependent Hamiltonians and collapse operators, support for the Floquet formalism, and new solvers for Bloch-Redfield and Floquet-Markov master equations. Restrictions: Problems must meet the criteria for using the master equation in Lindblad, Floquet-Markov, or Bloch-Redfield form. Running time: A few seconds up to several tens of hours, depending on size of the underlying Hilbert space.
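
    As a minimal illustration of the time-dependent solvers mentioned above, the short script below drives a decaying qubit with a string-format time-dependent Hamiltonian and integrates the Lindblad master equation with mesolve; the parameter values are arbitrary and chosen only for demonstration.

```python
import numpy as np
from qutip import basis, destroy, sigmax, sigmaz, mesolve

a = destroy(2)                        # qubit lowering operator
H0 = 0.5 * 2 * np.pi * sigmaz()       # static part of the Hamiltonian
H1 = 0.1 * 2 * np.pi * sigmax()       # drive coupling
H = [H0, [H1, 'cos(w*t)']]            # time-dependent Hamiltonian in string format

psi0 = basis(2, 1)                    # start in the excited state (this convention)
tlist = np.linspace(0.0, 10.0, 200)
c_ops = [np.sqrt(0.05) * a]           # collapse operator: spontaneous decay

result = mesolve(H, psi0, tlist, c_ops, [a.dag() * a], args={'w': 2 * np.pi})
print(result.expect[0][:5])           # excited-state population vs. time
```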

  16. Line-by-line spectroscopic simulations on graphics processing units

    NASA Astrophysics Data System (ADS)

    Collange, Sylvain; Daumas, Marc; Defour, David

    2008-01-01

    We report here on software that performs line-by-line spectroscopic simulations on gases. Elaborate models (such as narrow band and correlated-K) are accurate and efficient for bands where various components are not simultaneously and significantly active. Line-by-line is probably the most accurate model in the infrared for blends of gases that contain high proportions of H2O and CO2, as was the case for our prototype simulation. Our implementation on graphics processing units sustains a speedup close to 330 on computation-intensive tasks and 12 on memory-intensive tasks compared to implementations on one core of high-end processors. This speedup is due to data parallelism, efficient memory access for specific patterns and some dedicated hardware operators only available in graphics processing units. It is obtained leaving most of the processor resources available and it would scale linearly with the number of graphics processing units in parallel machines. Line-by-line simulation coupled with simulation of fluid dynamics was long believed to be economically intractable but our work shows that it could be done with some affordable additional resources compared to what is necessary to perform simulations on fluid dynamics alone. Program summary Program title: GPU4RE Catalogue identifier: ADZY_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 62 776 No. of bytes in distributed program, including test data, etc.: 1 513 247 Distribution format: tar.gz Programming language: C++ Computer: x86 PC Operating system: Linux, Microsoft Windows. Compilation requires either gcc/g++ under Linux or Visual C++ 2003/2005 and Cygwin under Windows. It has been tested using gcc 4.1.2 under Ubuntu Linux 7.04 and using Visual C++ 2005 with Cygwin 1.5.24 under Windows XP. RAM: 1 gigabyte Classification: 21.2 External routines: OpenGL ( http://www.opengl.org) Nature of problem: Simulating radiative transfer on high-temperature high-pressure gases. Solution method: Line-by-line Monte-Carlo ray-tracing. Unusual features: Parallel computations are moved to the GPU. Additional comments: nVidia GeForce 7000 or ATI Radeon X1000 series graphics processing unit is required. Running time: A few minutes.
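
    To make the term "line-by-line" concrete, here is a toy sketch (Lorentzian profiles only, invented line positions and strengths, no temperature or pressure dependence) of the per-line summation that the GPU parallelizes; the real GPU4RE code evaluates far more elaborate profiles over much larger line lists.

```python
import numpy as np

def absorption_coefficient(nu, line_centers, line_strengths, gamma):
    """Toy line-by-line sum: one Lorentzian profile per spectral line.

    nu: wavenumber grid (cm^-1); line_centers / line_strengths: per-line data
    (arbitrary here); gamma: half-width (cm^-1). A real code would use
    temperature- and pressure-dependent strengths, widths, and Voigt profiles.
    """
    k = np.zeros_like(nu)
    for nu0, S in zip(line_centers, line_strengths):
        k += S * (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)
    return k

nu = np.linspace(2200.0, 2400.0, 20000)            # grid near the CO2 4.3 um band
centers = np.linspace(2250.0, 2380.0, 50)          # fabricated line positions
strengths = np.random.default_rng(0).uniform(0.1, 1.0, 50)
k_nu = absorption_coefficient(nu, centers, strengths, gamma=0.07)
transmissivity = np.exp(-k_nu * 10.0)              # Beer-Lambert law, 10 cm path
```

    Each (line, grid-point) contribution is independent of the others, which is the data parallelism the speedup figures above rely on.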

  17. Trading Rules on Stock Markets Using Genetic Network Programming with Reinforcement Learning and Importance Index

    NASA Astrophysics Data System (ADS)

    Mabu, Shingo; Hirasawa, Kotaro; Furuzuki, Takayuki

    Genetic Network Programming (GNP) is an evolutionary computation method that represents its solutions using graph structures. Since GNP can create quite compact programs and has an implicit memory function, it has been shown that GNP works well especially in dynamic environments. In addition, a study on creating trading rules on stock markets using GNP with an Importance Index (GNP-IMX) has been carried out. IMX is a new element which serves as a criterion for decision making. In this paper, we combine GNP-IMX with Actor-Critic (GNP-IMX&AC) to create trading rules on stock markets. Evolution-based methods can only revise their programs after a sufficient period of time, because fitness values must first be calculated; reinforcement learning, in contrast, can change programs within that period, so trading rules can be created more efficiently. In the simulation, the proposed method is trained using the stock prices of 10 brands in 2002 and 2003. Then the generalization ability is tested using the stock prices in 2004. The simulation results show that the proposed method can obtain larger profits than GNP-IMX without AC and Buy&Hold.

  18. Evaluation of high-perimeter electrode designs for deep brain stimulation

    NASA Astrophysics Data System (ADS)

    Howell, Bryan; Grill, Warren M.

    2014-08-01

    Objective. Deep brain stimulation (DBS) is an effective treatment for movement disorders and a promising therapy for treating epilepsy and psychiatric disorders. Despite its clinical success, complications including infections and mis-programming following surgical replacement of the battery-powered implantable pulse generator adversely impact the safety profile of this therapy. We sought to decrease power consumption and extend battery life by modifying the electrode geometry to increase stimulation efficiency. The specific goal of this study was to determine whether electrode contact perimeter or area had a greater effect on increasing stimulation efficiency. Approach. Finite-element method (FEM) models of eight prototype electrode designs were used to calculate the electrode access resistance, and the FEM models were coupled with cable models of passing axons to quantify stimulation efficiency. We also measured in vitro the electrical properties of the prototype electrode designs and measured in vivo the stimulation efficiency following acute implantation in anesthetized cats. Main results. Area had a greater effect than perimeter on altering the electrode access resistance; electrode (access or dynamic) resistance alone did not predict stimulation efficiency because efficiency was dependent on the shape of the potential distribution in the tissue; and, quantitative assessment of stimulation efficiency required consideration of the effects of the electrode-tissue interface impedance. Significance. These results advance understanding of the features of electrode geometry that are important for designing the next generation of efficient DBS electrodes.

  19. A dynamic model of functioning of a bank

    NASA Astrophysics Data System (ADS)

    Malafeyev, Oleg; Awasthi, Achal; Zaitseva, Irina; Rezenkov, Denis; Bogdanova, Svetlana

    2018-04-01

    In this paper, we analyze dynamic programming as a novel approach to solving the problem of maximizing the profits of a bank. The mathematical model of the problem and a description of the bank's operation are given. The problem is then approached using the method of dynamic programming, which ensures that the solutions obtained are globally optimal and numerically stable. The optimization process is set up as a discrete multi-stage decision process and solved with the help of dynamic programming.
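
    Since the abstract does not give the model's equations, the following is only a generic sketch of the backward-induction recursion behind such a discrete multi-stage decision process; the state set, action set, reward function and transition function are placeholders, not the paper's bank model.

```python
def backward_induction(states, actions, horizon, reward, transition):
    """Dynamic programming over a discrete multi-stage decision process.

    reward(t, s, a) returns the stage payoff and transition(t, s, a) returns
    the next state (which must be a member of `states`); both are placeholders
    standing in for the bank model. The recursion is the standard Bellman
    backward pass, so the resulting policy is globally optimal for the model.
    """
    V = {s: 0.0 for s in states}                 # terminal value
    policy = [dict() for _ in range(horizon)]
    for t in reversed(range(horizon)):
        V_new = {}
        for s in states:
            best_a, best_v = None, float('-inf')
            for a in actions:
                v = reward(t, s, a) + V[transition(t, s, a)]
                if v > best_v:
                    best_a, best_v = a, v
            V_new[s], policy[t][s] = best_v, best_a
        V = V_new
    return V, policy
```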

  20. Vehicle trajectory linearisation to enable efficient optimisation of the constant speed racing line

    NASA Astrophysics Data System (ADS)

    Timings, Julian P.; Cole, David J.

    2012-06-01

    A driver model is presented capable of optimising the trajectory of a simple dynamic nonlinear vehicle, at constant forward speed, so that progression along a predefined track is maximised as a function of time. In doing so, the model is able to continually operate a vehicle at its lateral-handling limit, maximising vehicle performance. The technique used forms a part of the solution to the motor racing objective of minimising lap time. A new approach of formulating the minimum lap time problem is motivated by the need for a more computationally efficient and robust tool-set for understanding on-the-limit driving behaviour. This has been achieved through set point-dependent linearisation of the vehicle model and coupling the vehicle-track system using an intrinsic coordinate description. Through this, the geometric vehicle trajectory had been linearised relative to the track reference, leading to new path optimisation algorithm which can be formed as a computationally efficient convex quadratic programming problem.

  1. Effect of Streamflow Forecast Uncertainty on Real-Time Reservoir Operation

    NASA Astrophysics Data System (ADS)

    Zhao, T.; Cai, X.; Yang, D.

    2010-12-01

    Various hydrological forecast products have been applied to real-time reservoir operation, including deterministic streamflow forecast (DSF), DSF-based probabilistic streamflow forecast (DPSF), and ensemble streamflow forecast (ESF), which represent forecast uncertainty in the form of deterministic forecast error, deterministic forecast error-based uncertainty distribution, and ensemble forecast errors, respectively. Compared to previous studies that treat these forecast products as ad hoc inputs for reservoir operation models, this paper attempts to model the uncertainties involved in the various forecast products and explores their effect on real-time reservoir operation decisions. In hydrology, there are various indices reflecting the magnitude of streamflow forecast uncertainty; meanwhile, few models illustrate the forecast uncertainty evolution process. This research introduces the Martingale Model of Forecast Evolution (MMFE) from supply chain management and justifies its assumptions for quantifying the evolution of uncertainty in streamflow forecasts as time progresses. Based on MMFE, this research simulates the evolution of forecast uncertainty in DSF, DPSF, and ESF, and applies the reservoir operation models (dynamic programming, DP; stochastic dynamic programming, SDP; and standard operation policy, SOP) to assess the effect of different forms of forecast uncertainty on real-time reservoir operation. Through a hypothetical single-objective real-time reservoir operation model, the results illustrate that forecast uncertainty exerts significant effects. Reservoir operation efficiency, as measured by a utility function, decreases as the forecast uncertainty increases. Meanwhile, these effects also depend on the type of forecast product being used. In general, the utility of reservoir operation with ESF is nearly as high as the utility obtained with a perfect forecast; the utilities of DSF and DPSF are similar to each other but not as efficient as ESF. Moreover, streamflow variability and reservoir capacity can change the magnitude of the effects of forecast uncertainty, but not the relative merit of DSF, DPSF, and ESF. (Figure: schematic diagram of the increase in forecast uncertainty with forecast lead time and the dynamic updating property of real-time streamflow forecasts.)
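
    A minimal sketch of the MMFE idea follows, under the simplifying assumptions (of this illustration, not of the paper) of additive Gaussian forecast updates and a fixed vector of lead-time-dependent update standard deviations: the forecast of each future inflow evolves by independent zero-mean increments, so it is unbiased at every stage and its uncertainty shrinks as the lead time shortens.

```python
import numpy as np

def simulate_mmfe(true_flows, sigma, seed=1):
    """Additive-Gaussian sketch of the Martingale Model of Forecast Evolution.

    true_flows: realized inflows q_0..q_{T-1}; sigma[k] is the (assumed)
    standard deviation of the update revealed k+1 periods before the event.
    forecasts[s, t] is the forecast of q_t issued at time s; successive
    forecasts of a fixed q_t differ by independent zero-mean updates, so the
    sequence is a martingale and forecasts[t, t] equals the realization.
    """
    rng = np.random.default_rng(seed)
    T = len(true_flows)
    forecasts = np.full((T, T), np.nan)
    for t in range(T):
        # one update per remaining period; eps[s] is revealed between s and s+1
        eps = np.array([rng.normal(0.0, sigma[min(t - s - 1, len(sigma) - 1)])
                        for s in range(t)])
        for s in range(t + 1):
            forecasts[s, t] = true_flows[t] + eps[s:].sum()
    return forecasts

# Example: 10 periods, uncertainty growing with lead time.
f = simulate_mmfe(np.full(10, 100.0), sigma=[5.0, 8.0, 10.0, 12.0])
```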

  2. Efficient computation of optimal actions.

    PubMed

    Todorov, Emanuel

    2009-07-14

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress--as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.
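
    The claim above that "the problem becomes linear" refers to the linearly solvable formulation introduced in the paper. A minimal sketch of the discrete, infinite-horizon average-cost version is shown below, where the desirability function z = exp(-v) is the principal eigenvector of exp(-q) * P; the notation, discretization and solver choice here are assumptions for illustration.

```python
import numpy as np

def desirability(P, q, n_iter=1000, tol=1e-10):
    """Power iteration for a linearly solvable MDP (average-cost case).

    P: passive-dynamics transition matrix (rows sum to 1); q: per-state cost.
    z = exp(-v) solves lambda * z = diag(exp(-q)) @ P @ z, so the optimal
    value function is recovered as v = -log(z) (up to a constant) without
    any explicit search over actions.
    """
    G = np.diag(np.exp(-q)) @ P
    z = np.ones(len(q))
    for _ in range(n_iter):
        z_new = G @ z
        z_new /= np.linalg.norm(z_new)
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z

# The optimal controlled transition probabilities out of state s are then the
# passive ones reweighted by desirability:  u*(s' | s) = P[s, s'] * z[s'] / (P[s] @ z)
```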

  3. De novo protein structure prediction by dynamic fragment assembly and conformational space annealing.

    PubMed

    Lee, Juyong; Lee, Jinhyuk; Sasaki, Takeshi N; Sasai, Masaki; Seok, Chaok; Lee, Jooyoung

    2011-08-01

    Ab initio protein structure prediction is a challenging problem that requires both an accurate energetic representation of a protein structure and an efficient conformational sampling method for successful protein modeling. In this article, we present an ab initio structure prediction method which combines a recently suggested novel way of fragment assembly, dynamic fragment assembly (DFA) and conformational space annealing (CSA) algorithm. In DFA, model structures are scored by continuous functions constructed based on short- and long-range structural restraint information from a fragment library. Here, DFA is represented by the full-atom model by CHARMM with the addition of the empirical potential of DFIRE. The relative contributions between various energy terms are optimized using linear programming. The conformational sampling was carried out with CSA algorithm, which can find low energy conformations more efficiently than simulated annealing used in the existing DFA study. The newly introduced DFA energy function and CSA sampling algorithm are implemented into CHARMM. Test results on 30 small single-domain proteins and 13 template-free modeling targets of the 8th Critical Assessment of protein Structure Prediction show that the current method provides comparable and complementary prediction results to existing top methods. Copyright © 2011 Wiley-Liss, Inc.

  4. Spatial operator algebra for flexible multibody dynamics

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1993-01-01

    This paper presents an approach to modeling the dynamics of flexible multibody systems such as flexible spacecraft and limber space robotic systems. A large number of degrees of freedom and complex dynamic interactions are typical in these systems. This paper uses spatial operators to develop efficient recursive algorithms for the dynamics of these systems. This approach very efficiently manages complexity by means of a hierarchy of mathematical operations.

  5. RichMol: A general variational approach for rovibrational molecular dynamics in external electric fields

    NASA Astrophysics Data System (ADS)

    Owens, Alec; Yachmenev, Andrey

    2018-03-01

    In this paper, a general variational approach for computing the rovibrational dynamics of polyatomic molecules in the presence of external electric fields is presented. Highly accurate, full-dimensional variational calculations provide a basis of field-free rovibrational states for evaluating the rovibrational matrix elements of high-rank Cartesian tensor operators and for solving the time-dependent Schrödinger equation. The effect of the external electric field is treated as a multipole moment expansion truncated at the second hyperpolarizability interaction term. Our fully numerical and computationally efficient method has been implemented in a new program, RichMol, which can simulate the effects of multiple external fields of arbitrary strength, polarization, pulse shape, and duration. Illustrative calculations of two-color orientation and rotational excitation with an optical centrifuge of NH3 are discussed.

  6. A split-step method to include electron–electron collisions via Monte Carlo in multiple rate equation simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel

    2016-10-01

    A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single photon absorption by free-carriers. The second step is stochastic and models electron–electron collisions using Monte-Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
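
    A deliberately crude sketch of the split-step idea follows: one deterministic update of a discretized electron distribution, then a Monte-Carlo pass that redistributes energy between sampled pairs of pseudo-particles. The grid, pair count, particle weight and redistribution rule are all illustrative assumptions standing in for the actual multi-rate-equation terms and collision kernel.

```python
import numpy as np

def split_step(f, dt, deterministic_rhs, rng, n_pairs=1000, w=1e-4):
    """One split step: deterministic part, then stochastic e-e collisions.

    f: electron occupation on a discrete energy grid; deterministic_rhs(f)
    is a placeholder for the extended multi-rate-equation terms (ionization,
    electron-phonon collisions, free-carrier absorption). The Monte-Carlo
    part samples pairs of pseudo-particles from f and redistributes their
    total energy at random; n_pairs and the weight w are illustrative only.
    """
    # Step 1: deterministic update (forward Euler, for illustration only).
    f = np.clip(f + dt * deterministic_rhs(f), 0.0, None)

    # Step 2: stochastic electron-electron collisions via Monte Carlo.
    p = f / f.sum()
    pairs = rng.choice(len(f), size=(n_pairs, 2), p=p)
    for i, j in pairs:
        total = i + j
        if total >= len(f):            # would leave the grid: skip this pair
            continue
        k = int(rng.integers(0, total + 1))
        f[i] -= w; f[j] -= w           # remove the colliding pseudo-particles
        f[k] += w; f[total - k] += w   # re-insert them with the same total energy
    return np.clip(f, 0.0, None)
```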

  7. Dynamic modeling and optimal joint torque coordination of advanced robotic systems

    NASA Astrophysics Data System (ADS)

    Kang, Hee-Jun

    The development of an efficient dynamic modeling algorithm and the subsequent optimal joint input load coordination of advanced robotic systems for industrial application is documented. A closed-form dynamic modeling algorithm for general closed-chain robotic linkage systems is presented. The algorithm is based on the transfer of system dependence from a set of open-chain Lagrangian coordinates to any desired system generalized coordinate set of the closed chain. Three different techniques for evaluation of the kinematic closed-chain constraints allow the representation of the dynamic modeling parameters in terms of system generalized coordinates and have no restriction with regard to kinematic redundancy. The total computational requirement of the closed-chain system model is largely dependent on the computation required for the dynamic model of an open kinematic chain. In order to improve computational efficiency, modification of an existing open-chain KIC based dynamic formulation is made by the introduction of the generalized augmented body concept. This algorithm allows a 44 percent computational saving over the current optimized one (O(N^4), 5995 when N = 6). As a means of resolving redundancies in advanced robotic systems, local joint torque optimization is applied for effectively using actuator power while avoiding joint torque limits. The stability problem in local joint torque optimization schemes is eliminated by using fictitious dissipating forces which act in the necessary null space. The performance index representing the global torque norm is shown to be satisfactory. In addition, the resulting joint motion trajectory becomes conservative, after a transient stage, for repetitive cyclic end-effector trajectories. The effectiveness of the null space damping method is shown. The modular robot, which is built of well-defined structural modules from a finite-size inventory and is controlled by one general computer system, is another class of evolving, highly versatile, advanced robotic systems. Therefore, finally, a module-based dynamic modeling algorithm is presented for the dynamic coordination of such reconfigurable modular robotic systems. A user-interactive module-based manipulator analysis program (MBMAP) has been coded in the C language running on a Silicon Graphics 4D/70.

  8. Dynamic Optimization

    NASA Technical Reports Server (NTRS)

    Laird, Philip

    1992-01-01

    We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.

  9. Accessorizing Building Science – A Web Platform to Support Multiple Market Transformation Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madison, Michael C.; Antonopoulos, Chrissi A.; Dowson, Scott T.

    As demand for improved energy efficiency in homes increases, builders need information on the latest findings in building science, rapidly ramping-up energy codes, and technical requirements for labeling programs. The Building America Solution Center is a Department of Energy (DOE) website containing hundreds of expert guides designed to help residential builders install efficiency measures in new and existing homes. Builders can package measures with other media for customized content. Website content provides technical support to market transformation programs such as ENERGY STAR and has been cloned and adapted to provide content for the Better Buildings Residential Program. The Solution Center uses the Drupal open source content management platform to combine a variety of media in an interactive manner to make information easily accessible. Developers designed a unique taxonomy to organize and manage content. That taxonomy was translated into web-based modules that allow users to rapidly traverse structured content with related topics and media. We will present information on the current design of the Solution Center and the underlying technology used to manage the content. The paper will explore development of features, such as “Field Kits” that allow users to bundle and save content for quick access, along with the ability to export PDF versions of content. Finally, we will discuss development of an Android-based mobile application, and a visualization tool for interacting with Building Science Publications that allows the user to dynamically search the entire Building America Library.

  10. Blade Vibration Measurement System for Unducted Fans

    NASA Technical Reports Server (NTRS)

    Marscher, William

    2014-01-01

    With propulsion research programs focused on new levels of efficiency and noise reduction, two avenues for advanced gas turbine technology are emerging: the geared turbofan and ultrahigh bypass ratio fan engines. Both of these candidates are being pursued as collaborative research projects between NASA and the engine manufacturers. The high bypass concept from GE Aviation is an unducted fan that features a bypass ratio of over 30 along with the accompanying benefits in fuel efficiency. This project improved the test and measurement capabilities of the unducted fan blade dynamic response. In the course of this project, Mechanical Solutions, Inc. (MSI) collaborated with GE Aviation to (1) define the requirements for fan blade measurements; (2) leverage MSI's radar-based system for compressor and turbine blade monitoring; and (3) develop, validate, and deliver a noncontacting blade vibration measurement system for unducted fans.

  11. Turbulent Radiation Effects in HSCT Combustor Rich Zone

    NASA Technical Reports Server (NTRS)

    Hall, Robert J.; Vranos, Alexander; Yu, Weiduo

    1998-01-01

    A joint UTRC-University of Connecticut theoretical program was based on describing coupled soot formation and radiation in turbulent flows using stretched flamelet theory. This effort involved using the model jet fuel kinetics mechanism to predict soot growth in flamelets at elevated pressure, to incorporate an efficient model for turbulent thermal radiation into a discrete transfer radiation code, and to couple the soot growth, flowfield, and radiation algorithms. The soot calculations used a recently developed opposed-jet code which couples the dynamical equations of size-class-dependent particle growth with complex chemistry. Several of the tasks represent technical firsts; among these are the prediction of soot from a detailed jet fuel kinetics mechanism, the inclusion of pressure effects in the soot particle growth equations, and the inclusion of the efficient turbulent radiation algorithm in a combustor code.

  12. Steady-state and dynamic evaluation of the electric propulsion system test bed vehicle on a road load simulator

    NASA Technical Reports Server (NTRS)

    Dustin, M. O.

    1983-01-01

    The propulsion system of the Lewis Research Center's electric propulsion system test bed vehicle was tested on the road load simulator under the DOE Electric and Hybrid Vehicle Program. This propulsion system, consisting of a series-wound dc motor controlled by an infinitely variable SCR chopper and an 84-V battery pack, is typical of those used in electric vehicles made in 1976. Steady-state tests were conducted over a wide range of differential output torques and vehicle speeds. Efficiencies of all of the components were determined. Effects of temperature and voltage variations on the motor and the effect of voltage changes on the controller were examined. Energy consumption and energy efficiency for the system were determined over the B and C driving schedules of the SAE J227a test procedure.

  13. Efficient dynamic simulation for multiple chain robotic mechanisms

    NASA Technical Reports Server (NTRS)

    Lilly, Kathryn W.; Orin, David E.

    1989-01-01

    An efficient O(mN) algorithm for dynamic simulation of simple closed-chain robotic mechanisms is presented, where m is the number of chains, and N is the number of degrees of freedom for each chain. It is based on computation of the operational space inertia matrix (6 x 6) for each chain as seen by the body, load, or object. Also, computation of the chain dynamics, when opened at one end, is required, and the most efficient algorithm is used for this purpose. Parallel implementation of the dynamics for each chain results in an O(N) + O(log_2 m + 1) algorithm.

  14. Dynamical efficiency of collisionless magnetized shocks in relativistic jets

    NASA Astrophysics Data System (ADS)

    Aloy, Miguel A.; Mimica, Petar

    2011-09-01

    The so-called internal shock model aims to explain the light curves and spectra produced by non-thermal processes originating in the flow of blazars and gamma-ray bursts. A long-standing question is whether the tenuous collisionless shocks, driven inside a relativistic flow, are efficient enough to explain the amount of energy observed as compared with the expected kinetic power of the outflow. In this work we study the dynamic efficiency of conversion of kinetic-to-thermal/magnetic energy of internal shocks in relativistic magnetized outflows. We find that the collision between shells with a non-zero relative velocity can yield either two oppositely moving shocks (in the frame where the contact surface is at rest), or a reverse shock and a forward rarefaction. For moderately magnetized shocks (magnetization σ ≈ 0.1), the dynamic efficiency in a single two-shell interaction can be as large as 40%. Hence, the dynamic efficiency of moderately magnetized shocks is larger than in the corresponding unmagnetized two-shell interaction. We find that the efficiency is only weakly dependent on the Lorentz factor of the shells, and thus internal shocks in the magnetized flow of blazars and gamma-ray bursts are approximately equally efficient.

  15. Effects of evacuation assistant’s leading behavior on the evacuation efficiency: Information transmission approach

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Lu; Guo, Wei; Zheng, Xiao-Ping

    2015-07-01

    Evacuation assistants are expected to spread escape route information and lead evacuees toward the exit as quickly as possible. Their leading behavior influences the evacuees' movement directly, which is confirmed to be a decisive factor in evacuation efficiency. The transmission process of escape information and its effect on the evacuees' movement are accurately presented by the proposed extended dynamic communication field model. The sensitivity parameters to the static floor field (SFF) of the evacuation assistants and of the evacuees are fully discussed. The simulation results indicate that the appropriate sensitivity value is associated with the maximum of evacuees. Optimal combinations of the two sensitivity parameters were found to reach the highest evacuation efficiency. There also exists an optimal value for the evacuation assistants' information transmission radius. Project supported by the National Basic Research Program of China (Grant No. 2011CB706900), the National Natural Science Foundation of China (Grant Nos. 71225007 and 71203006), the National Key Technology Research and Development Program of the Ministry of Science and Technology of China (Grant No. 2012BAK13B06), the Humanities and Social Sciences Project of the Ministry of Education of China (Grant Nos. 10YJA630221 and 12YJCZH023), and the Beijing Philosophy and Social Sciences Planning Project of the Twelfth Five-Year Plan, China (Grant Nos. 12JGC090 and 12JGC098).

  16. Analysis of structural dynamic data from Skylab. Volume 1: Technical discussion

    NASA Technical Reports Server (NTRS)

    Demchak, L.; Harcrow, H.

    1976-01-01

    A compendium of Skylab structural dynamics analytical and test programs is presented. These programs are assessed to identify lessons learned from the structural dynamic prediction effort and to provide guidelines for future analysts and program managers of complex spacecraft systems. It is a synopsis of the structural dynamic effort performed under the Skylab Integration contract and specifically covers the development, utilization, and correlation of Skylab Dynamic Orbital Models.

  17. Bellman's GAP--a language and compiler for dynamic programming in sequence analysis.

    PubMed

    Sauthoff, Georg; Möhl, Mathias; Janssen, Stefan; Giegerich, Robert

    2013-03-01

    Dynamic programming is ubiquitous in bioinformatics. Developing and implementing non-trivial dynamic programming algorithms is often error prone and tedious. Bellman's GAP is a new programming system, designed to ease the development of bioinformatics tools based on the dynamic programming technique. In Bellman's GAP, dynamic programming algorithms are described in a declarative style by tree grammars, evaluation algebras and products formed thereof. This bypasses the design of explicit dynamic programming recurrences and yields programs that are free of subscript errors, modular and easy to modify. The declarative modules are compiled into C++ code that is competitive to carefully hand-crafted implementations. This article introduces the Bellman's GAP system and its language, GAP-L. It then demonstrates the ease of development and the degree of re-use by creating variants of two common bioinformatics algorithms. Finally, it evaluates Bellman's GAP as an implementation platform of 'real-world' bioinformatics tools. Bellman's GAP is available under GPL license from http://bibiserv.cebitec.uni-bielefeld.de/bellmansgap. This Web site includes a repository of re-usable modules for RNA folding based on thermodynamics.
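
    The grammar/algebra separation described above can be mimicked in a few lines of ordinary Python, which may help readers unfamiliar with algebraic dynamic programming. This is only a toy analogue (an edit-distance "grammar" with a hardwired additive combinator and invented algebra names), not GAP-L, and it does not reproduce the compilation to C++ that Bellman's GAP performs.

```python
from functools import lru_cache

# Two evaluation algebras for the same recursion ("grammar"): one scores
# alignments (edit distance via min-choice), one counts all candidates in
# the search space (sum-choice). The recursion itself never changes.
score_algebra = dict(match=lambda a, b: 0 if a == b else 1,
                     gap=lambda: 1, choice=min, empty=0)
count_algebra = dict(match=lambda a, b: 0,
                     gap=lambda: 0, choice=sum, empty=1)

def edit_dp(x, y, alg):
    @lru_cache(maxsize=None)
    def go(i, j):
        if i == len(x) and j == len(y):
            return alg['empty']
        cands = []
        if i < len(x) and j < len(y):
            cands.append(alg['match'](x[i], y[j]) + go(i + 1, j + 1))
        if i < len(x):
            cands.append(alg['gap']() + go(i + 1, j))
        if j < len(y):
            cands.append(alg['gap']() + go(i, j + 1))
        return alg['choice'](cands)
    return go(0, 0)

print(edit_dp("ACGT", "AGT", score_algebra))  # edit distance
print(edit_dp("ACGT", "AGT", count_algebra))  # number of candidate alignments
```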

  18. High-efficient and high-content cytotoxic recording via dynamic and continuous cell-based impedance biosensor technology.

    PubMed

    Hu, Ning; Fang, Jiaru; Zou, Ling; Wan, Hao; Pan, Yuxiang; Su, Kaiqi; Zhang, Xi; Wang, Ping

    2016-10-01

    Cell-based bioassays are an effective method of assessing compound toxicity via cell viability, but traditional label-based methods miss much information about cell growth because they rely on endpoint detection, and higher throughput is demanded to obtain dynamic information. Cell-based biosensor methods can dynamically and continuously monitor cell viability; however, the dynamic information is often ignored or seldom utilized in toxin and drug assessment. Here, we report a high-efficiency, high-content cytotoxicity recording method based on dynamic and continuous cell-based impedance biosensor technology. The dynamic cell viability, inhibition ratio and growth rate were derived from the dynamic response curves of the cell-based impedance biosensor. The results show that the biosensor has a dose-dependent response to the diarrhetic shellfish toxin okadaic acid, based on the analysis of the dynamic cell viability and cell growth status. Moreover, the throughput of dynamic cytotoxicity assessment was compared between cell-based biosensor methods and label-based endpoint methods. This cell-based impedance biosensor can provide a flexible, cost- and label-efficient platform for cell viability assessment in shellfish toxin screening.
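
    As a minimal, hypothetical illustration of the derived quantities named above (dynamic cell viability, inhibition ratio, growth rate), assuming normalized cell-index curves sampled at common time points; the names and normalization are assumptions for this sketch, not the authors' definitions.

```python
import numpy as np

def dynamic_metrics(ci_treated, ci_control, t_hours):
    """Derive dynamic viability, inhibition ratio and growth rate from
    normalized cell-index (impedance) curves sampled at times t_hours."""
    ci_treated = np.asarray(ci_treated, float)
    ci_control = np.asarray(ci_control, float)
    viability = ci_treated / ci_control                     # dynamic cell viability
    inhibition = 1.0 - viability                            # dynamic inhibition ratio
    growth_rate = np.gradient(np.log(ci_treated), t_hours)  # per-hour growth rate
    return viability, inhibition, growth_rate
```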

  19. Molecular dynamics simulations using temperature-enhanced essential dynamics replica exchange.

    PubMed

    Kubitzki, Marcus B; de Groot, Bert L

    2007-06-15

    Today's standard molecular dynamics simulations of moderately sized biomolecular systems at full atomic resolution are typically limited to the nanosecond timescale and therefore suffer from limited conformational sampling. Efficient ensemble-preserving algorithms like replica exchange (REX) may alleviate this problem somewhat but are still computationally prohibitive due to the large number of degrees of freedom involved. Aiming at increased sampling efficiency, we present a novel simulation method combining the ideas of essential dynamics and REX. Unlike standard REX, in each replica only a selection of essential collective modes of a subsystem of interest (essential subspace) is coupled to a higher temperature, with the remainder of the system staying at a reference temperature, T(0). This selective excitation along with the replica framework permits efficient approximate ensemble-preserving conformational sampling and allows much larger temperature differences between replicas, thereby considerably enhancing sampling efficiency. Ensemble properties and sampling performance of the method are discussed using dialanine and guanylin test systems, with multi-microsecond molecular dynamics simulations of these test systems serving as references.
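
    For reference, the standard replica-exchange acceptance test that underlies both REX and this variant can be written in a few lines; in TEE-REX only the essential-subspace degrees of freedom are coupled to the higher temperature, a detail this sketch does not capture.

```python
import math
import random

def rex_swap_accept(U_i, U_j, T_i, T_j, kB=0.0083145):
    """Metropolis criterion for exchanging configurations between replicas
    i and j held at temperatures T_i and T_j (kB in kJ/mol/K, energies in
    kJ/mol). Accept with probability min(1, exp[(beta_i - beta_j)(U_i - U_j)]).
    """
    beta_i, beta_j = 1.0 / (kB * T_i), 1.0 / (kB * T_j)
    delta = (beta_i - beta_j) * (U_i - U_j)
    return delta >= 0.0 or random.random() < math.exp(delta)

# Example: configurations at 300 K and 340 K with potential energies in kJ/mol.
print(rex_swap_accept(U_i=-1250.0, U_j=-1238.0, T_i=300.0, T_j=340.0))
```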

  20. Static and dynamic efficiency of irreversible health care investments under alternative payment rules.

    PubMed

    Levaggi, R; Moretto, M; Pertile, P

    2012-01-01

    The paper studies the incentive for providers to invest in new health care technologies under alternative payment systems, when the patients' benefits are uncertain. If the reimbursement by the purchaser includes both a variable (per patient) and a lump-sum component, efficiency can be ensured both in the timing of adoption (dynamic) and the intensity of use of the technology (static). If the second instrument is unavailable, a trade-off may emerge between static and dynamic efficiency. In this context, we also discuss how the regulator could use control of the level of uncertainty faced by the provider as an instrument to mitigate the trade-off between static and dynamic efficiency. Finally, we calibrate the model to study a specific technology and estimate the cost of a regulatory failure. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. How much can we gain from improved efficiency? An examination of performance of national HIV/AIDS programs and its determinants in low- and middle-income countries

    PubMed Central

    2012-01-01

    Background The economic downturn exacerbates the inadequacy of resources for combating the worldwide HIV/AIDS pandemic and amplifies the need to improve the efficiency of HIV/AIDS programs. Methods We used data envelopment analysis (DEA) to evaluate efficiency of national HIV/AIDS programs in transforming funding into services and implemented a Tobit model to identify determinants of the efficiency in 68 low- and middle-income countries. We considered the change from the lowest quartile to the average value of a variable a "notable" increase. Results Overall, the average efficiency in implementing HIV/AIDS programs was moderate (49.8%). Program efficiency varied enormously among countries with means by quartile of efficiency of 13.0%, 36.4%, 54.4% and 96.5%. A country's governance, financing mechanisms, and economic and demographic characteristics influence the program efficiency. For example, if countries achieved a notable increase in "voice and accountability" (e.g., greater participation of civil society in policy making), the efficiency of their HIV/AIDS programs would increase by 40.8%. For countries in the lowest quartile of per capita gross national income (GNI), a notable increase in per capita GNI would increase the efficiency of AIDS programs by 45.0%. Conclusions There may be substantial opportunity for improving the efficiency of AIDS services, by providing more services with existing resources. Actions beyond the health sector could be important factors affecting HIV/AIDS service delivery. PMID:22443135
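
    For readers unfamiliar with DEA, the sketch below solves the input-oriented CCR efficiency score for one unit with an off-the-shelf LP solver; the input/output matrices are placeholders (for example, HIV/AIDS funding as the input and services delivered as the outputs), not the study's data.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of decision-making unit o.

    X: (n_inputs, n_units), Y: (n_outputs, n_units), all entries positive.
    Returns theta in (0, 1]: the proportional input contraction that the
    best-practice frontier shows is possible while sustaining unit o's outputs.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # minimize theta
    A_in = np.hstack([-X[:, [o]], X])             # X @ lam - theta * X[:, o] <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])     # -Y @ lam <= -Y[:, o]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Placeholder data: 1 input (funding) and 2 outputs for 5 hypothetical countries.
X = np.array([[10.0, 20.0, 15.0, 30.0, 12.0]])
Y = np.array([[50.0, 60.0, 70.0, 80.0, 40.0],
              [5.0,  9.0,  6.0, 12.0,  4.0]])
print([round(dea_efficiency(X, Y, o), 3) for o in range(X.shape[1])])
```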

  2. Waste management with recourse: an inexact dynamic programming model containing fuzzy boundary intervals in objectives and constraints.

    PubMed

    Tan, Q; Huang, G H; Cai, Y P

    2010-09-01

    The existing inexact optimization methods based on interval-parameter linear programming can hardly address problems where coefficients in objective functions are subject to dual uncertainties. In this study, a superiority-inferiority-based inexact fuzzy two-stage mixed-integer linear programming (SI-IFTMILP) model was developed for supporting municipal solid waste management under uncertainty. The developed SI-IFTMILP approach is capable of tackling dual uncertainties presented as fuzzy boundary intervals (FuBIs) in not only constraints, but also objective functions. Uncertainties expressed as a combination of intervals and random variables could also be explicitly reflected. An algorithm with high computational efficiency was provided to solve SI-IFTMILP. SI-IFTMILP was then applied to a long-term waste management case to demonstrate its applicability. Useful interval solutions were obtained. SI-IFTMILP could help generate dynamic facility-expansion and waste-allocation plans, as well as provide corrective actions when anticipated waste management plans are violated. It could also greatly reduce system-violation risk and enhance system robustness through examining two sets of penalties resulting from variations in fuzziness and randomness. Moreover, four possible alternative models were formulated to solve the same problem; solutions from them were then compared with those from SI-IFTMILP. The results indicate that SI-IFTMILP could provide more reliable solutions than the alternatives. 2010 Elsevier Ltd. All rights reserved.

  3. The Effect of Sulfur Substitution on the Excited-State Dynamics of DNA and RNA Base Derivatives

    NASA Astrophysics Data System (ADS)

    Pollum, Marvin; Crespo-Hernández, Carlos E.

    2014-06-01

    Substitution of oxygen by a sulfur atom in the natural DNA and RNA bases gives rise to a family of derivatives commonly known as the thiobases. Upon excitation with UV radiation, the natural bases are able to quickly and efficiently dissipate the imparted energy as heat to their surroundings. Thiobases, on the other hand, relax into a long-lived triplet excited state with quantum yields that approach unity. This finding has both fundamental and biological relevance because the triplet state plays a foremost role in the photochemistry of the thiobases; this is especially important in the current medicinal applications of thiobase derivatives. Using femtosecond transient absorption spectroscopy, we are able to uncover the ultrafast dynamics leading to the population of this reactive triplet state. In particular, I will present our results on how the site of sulfur substitution and the degree of substitution impact these dynamics, and I will compare these experimental results to some recent computational work. Pinning down the excited-state dynamics of the thiobases is important to furthering the understanding of dynamics in natural DNA/RNA bases, as well as to the discovery of thiobase derivatives with desirable therapeutic properties. The authors acknowledge the CAREER program of the National Science Foundation (Grant No. CHE-1255084) for financial support.

  4. Indium phosphide solar cell research in the United States: Comparison with non-photovoltaic sources

    NASA Technical Reports Server (NTRS)

    Weinberg, I.; Swartz, C. K.; Hart, R. E., Jr.

    1989-01-01

    Highlights of the InP solar cell research program are presented. Homojunction cells with efficiencies approaching 19 percent are demonstrated, while 17 percent is achieved for ITO/InP cells. The superior radiation resistance of the two latter cell configurations over both Si and GaAs cells has been shown. InP cells aboard the LIPS3 satellite show no degradation after more than a year in orbit. Computed array specific powers are used to compare the performance of an InP solar cell array to solar dynamic and nuclear systems.

  5. Investing to Survive in a Duopoly Model

    NASA Astrophysics Data System (ADS)

    Pinto, Alberto A.; Oliveira, Bruno M. P. M.; Ferreira, Fernanda A.; Ferreira, Miguel

    We present deterministic dynamics on the production costs of Cournot competitions, based on perfect Nash equilibria of nonlinear R&D investment strategies that reduce the production costs of the firms at every period of the game. We analyse the effects that the R&D investment strategies can have on the profits of the firms over time. We show that small changes in the initial production costs, or small changes in the parameters that determine the efficiency of the R&D programs or of the firms, can produce strong economic effects on the long-run profits of the firms.
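
    To see why cost-reducing R&D matters for profits, recall the textbook linear-demand Cournot stage game; this specification, with inverse demand P = a - b(q_1 + q_2) and constant marginal costs c_1, c_2, is an illustrative assumption and need not match the paper's exact model.

```latex
% Interior Cournot-Nash equilibrium (assuming a - 2c_i + c_j > 0 for i != j):
q_i^{*} = \frac{a - 2c_i + c_j}{3b}, \qquad
\pi_i^{*} = b\,(q_i^{*})^{2} = \frac{(a - 2c_i + c_j)^{2}}{9b}, \qquad i \neq j \in \{1,2\}.
```

    A reduction in one firm's cost c_i therefore raises its own equilibrium profit and lowers the rival's, and the equilibrium R&D investment trades this gain off against the cost of the R&D program itself.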

  6. Electronic properties of semiconductor-water interfaces: Predictions from ab-initio molecular dynamics and many-body perturbation theory

    NASA Astrophysics Data System (ADS)

    Pham, Tuan Anh

    2015-03-01

    Photoelectrochemical cells offer a promising avenue for hydrogen production from water and sunlight. The efficiency of these devices depends on the electronic structure of the interface between the photoelectrode and liquid water, including the alignment between the semiconductor band edges and the water redox potential. In this talk, we will present the results of first principles calculations of semiconductor-water interfaces that are obtained with a combination of density functional theory (DFT)-based molecular dynamics simulations and many-body perturbation theory (MBPT). First, we will discuss the development of an MBPT approach that is aimed at improving the efficiency and accuracy of existing methodologies while still being applicable to complex heterogeneous interfaces consisting of hundreds of atoms. We will then present studies of the electronic structure of liquid water and aqueous solutions using MBPT, which represent an essential step in establishing a quantitative framework for computing the energy alignment at semiconductor-water interfaces. Finally, using a combination of DFT-based molecular dynamics simulations and MBPT, we will describe the relationship between interfacial structure, electronic properties of semiconductors and their reactivity in aqueous solutions through a number of examples, including functionalized Si surfaces and GaP/InP surfaces in contact with liquid water. T.A.P was supported by the U.S. Department of Energy at the Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and by the Lawrence Fellowship Program.

  7. A NASTRAN-based computer program for structural dynamic analysis of Horizontal Axis Wind Turbines

    NASA Technical Reports Server (NTRS)

    Lobitz, Don W.

    1995-01-01

    This paper describes a computer program developed for structural dynamic analysis of horizontal axis wind turbines (HAWT's). It is based on the finite element method through its reliance on NASTRAN for the development of mass, stiffness, and damping matrices of the tower and rotor, which are treated in NASTRAN as separate structures. The tower is modeled in a stationary frame and the rotor in one rotating at a constant angular velocity. The two structures are subsequently joined together (external to NASTRAN) using a time-dependent transformation consistent with the hub configuration. Aerodynamic loads are computed with an established flow model based on strip theory. Aeroelastic effects are included by incorporating the local velocity and twisting deformation of the blade in the load computation. The turbulent nature of the wind, both in space and time, is modeled by adding in stochastic wind increments. The resulting equations of motion are solved in the time domain using the implicit Newmark-Beta integrator. Preliminary comparisons with data from the Boeing/NASA MOD2 HAWT indicate that the code is capable of accurately and efficiently predicting the response of HAWT's driven by turbulent winds.

  8. Adaptive dynamic programming approach to experience-based systems identification and control.

    PubMed

    Lendaris, George G

    2009-01-01

    Humans have the ability to make use of experience while selecting their control actions for distinct and changing situations, and this process speeds up and becomes more effective as more experience is gained. In contrast, current technological implementations slow down as more knowledge is stored. A novel way of employing Approximate (or Adaptive) Dynamic Programming (ADP) is described that shifts the underlying Adaptive Critic type of Reinforcement Learning method "up a level", away from designing individual (optimal) controllers to developing on-line algorithms that efficiently and effectively select designs from a repository of existing controller solutions (perhaps previously developed via application of ADP methods). The resulting approach is called the Higher-Level Learning Algorithm. The approach and its rationale are described and some examples of its application are given. The notions of context and context discernment are important to understanding the human abilities noted above. These are first defined in a manner appropriate to controls and system identification; then, as a foundation for the application arena, a historical view of the phases in the development of the controls field is given, organized by how the notion of 'context' was, or was not, involved in each phase.

  9. Dynamic Decision Making under Uncertainty and Partial Information

    DTIC Science & Technology

    2017-01-30

    order to address these problems, we investigated efficient computational methodologies for dynamic decision making under uncertainty and partial...information. In the course of this research, we developed and studied efficient simulation-based methodologies for dynamic decision making under...uncertainty and partial information; (ii) studied the application of these decision making models and methodologies to practical problems, such as those

  10. Review of Evaluation, Measurement and Verification Approaches Used to Estimate the Load Impacts and Effectiveness of Energy Efficiency Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messenger, Mike; Bharvirkar, Ranjit; Golemboski, Bill

    Public and private funding for end-use energy efficiency actions is expected to increase significantly in the United States over the next decade. For example, Barbose et al. (2009) estimate that spending on ratepayer-funded energy efficiency programs in the U.S. could increase from $3.1 billion in 2008 to $7.5 billion and $12.4 billion by 2020 under their medium and high scenarios. This increase in spending could yield annual electric energy savings ranging from 0.58% to 0.93% of total U.S. retail sales in 2020, up from 0.34% of retail sales in 2008. Interest in and support for energy efficiency has broadened among national and state policymakers. Prominent examples include approximately $18 billion in new funding for energy efficiency programs (e.g., State Energy Program, Weatherization, and Energy Efficiency and Conservation Block Grants) in the 2009 American Recovery and Reinvestment Act (ARRA). Increased funding for energy efficiency should result in more benefits as well as more scrutiny of these results. As energy efficiency becomes a more prominent component of the U.S. national energy strategy and policies, assessing the effectiveness and energy saving impacts of energy efficiency programs is likely to become increasingly important for policymakers and private and public funders of efficiency actions. Thus, it is critical that evaluation, measurement, and verification (EM&V) is carried out effectively and efficiently, which implies that: (1) Effective program evaluation, measurement, and verification (EM&V) methodologies and tools are available to key stakeholders (e.g., regulatory agencies, program administrators, consumers, and evaluation consultants); and (2) Capacity (people and infrastructure resources) is available to conduct EM&V activities and report results in ways that support program improvement and provide data that reliably compares achieved results against goals and similar programs in other jurisdictions (benchmarking). The National Action Plan for Energy Efficiency (2007) presented commonly used definitions for EM&V in the context of energy efficiency programs: (1) Evaluation (E) - The performance of studies and activities aimed at determining the effects and effectiveness of EE programs; (2) Measurement and Verification (M&V) - Data collection, monitoring, and analysis associated with the calculation of gross energy and demand savings from individual measures, sites or projects. M&V can be a subset of program evaluation; and (3) Evaluation, Measurement, and Verification (EM&V) - This term is frequently seen in evaluation literature. EM&V is a catchall acronym for determining both the effectiveness of program designs and estimates of load impacts at the portfolio, program and project level. This report is a scoping study that assesses current practices and methods in the evaluation, measurement and verification (EM&V) of ratepayer-funded energy efficiency programs, with a focus on methods and practices currently used for determining whether projected (ex-ante) energy and demand savings have been achieved (ex-post). M&V practices for privately-funded energy efficiency projects (e.g., ESCO projects) or programs where the primary focus is greenhouse gas reductions were not part of the scope of this study. We identify and discuss key purposes and uses of current evaluations of end-use energy efficiency programs, methods used to evaluate these programs, processes used to determine those methods, and key issues that need to be addressed now and in the future, based on discussions with regulatory agencies, policymakers, program administrators, and evaluation practitioners in 14 states and national experts in the evaluation field. We also explore how EM&V may evolve in a future in which efficiency funding increases significantly, innovative mechanisms for rewarding program performance are adopted, the role of efficiency in greenhouse gas mitigation is more closely linked, and programs are increasingly funded from multiple sources often with multiple program administrators and intended to meet multiple purposes.

  11. DyNAvectors: dynamic constitutional vectors for adaptive DNA transfection.

    PubMed

    Clima, Lilia; Peptanariu, Dragos; Pinteala, Mariana; Salic, Adrian; Barboiu, Mihail

    2015-12-25

    Dynamic constitutional frameworks, based on squalene, PEG and PEI components, reversibly connected to core centers, allow the efficient identification of adaptive vectors for good DNA transfection efficiency and are well tolerated by mammalian cells.

  12. Mixing, Noise and Thrust Benefits Using Corrugated Designs

    NASA Technical Reports Server (NTRS)

    Morgan, Morris H., III; Gilinsky, Mikhail M.

    2000-01-01

    These projects are directed toward the analysis of several concepts for nozzle and inlet performance improvement and noise reduction from jet exhausts. Currently, the FM&AL also initiates new joint research between the HU/FM&AL, the Hyper-X Program Team at the LaRC, and the Central Institute of Aviation Motors (CIAM), Moscow, Russia, in the field of optimization of fuel injection and mixing in air-breathing propulsion systems. The main results of theoretical, numerical-simulation and experimental tests obtained in the previous research are in the papers and patents. The goals of the HU/FM&AL programs are twofold: 1) to improve the working efficiency of the HU/FM&AL team in generating new innovative ideas and in conducting research in the field of fluid dynamics and acoustics, basically for improvement of supersonic and subsonic aircraft engines, and 2) to attract promising minority students to this research and training and, in cooperation with other HU departments, to teach them basic knowledge in Aerodynamics, Gas Dynamics, and Theoretical and Experimental Methods in Aeroacoustics and Computational Fluid Dynamics (CFD). The research at the HU/FM&AL supports reduction schemes associated with the emission of engine pollutants for commercial aircraft and concepts for reduction of IR observables for military aircraft. These research endeavors relate to the goals of the NASA Strategic Enterprise in Aeronautics concerning the development of environmentally acceptable aircraft. It is in this precise area, where the US aircraft industry, academia, and Government are in great need of trained professionals and which is a high-priority goal of the Minority University Research and Education Program (MUREP), that the HU/FM&AL can make its most important contribution.

  13. Dynamics and mechanism of UV-damaged DNA repair in indole-thymine dimer adduct: molecular origin of low repair quantum efficiency.

    PubMed

    Guo, Xunmin; Liu, Zheyun; Song, Qinhua; Wang, Lijuan; Zhong, Dongping

    2015-02-26

    Many biomimetic chemical systems for repair of UV-damaged DNA showed very low repair efficiency, and the molecular origin is still unknown. Here, we report our systematic characterization of the repair dynamics of a model compound of indole-thymine dimer adduct in three solvents with different polarity. By resolving all elementary steps including three electron-transfer processes and two bond-breaking and bond-formation dynamics with femtosecond resolution, we observed the slow electron injection in 580 ps in water, 4 ns in acetonitrile, and 1.38 ns in dioxane, the fast back electron transfer without repair in 120, 150, and 180 ps, and the slow bond splitting in 550 ps, 1.9 ns, and 4.5 ns, respectively. The dimer bond cleavage is clearly accelerated by the solvent polarity. By comparing with the biological repair machine photolyase with a slow back electron transfer (2.4 ns) and a fast bond cleavage (90 ps), the low repair efficiency in the biomimetic system is mainly determined by the fast back electron transfer and slow bond breakage. We also found that the model system exists in a dynamic heterogeneous C-clamped conformation, leading to a stretched dynamic behavior. In water, we even identified another stacked form with ultrafast cyclic electron transfer, significantly reducing the repair efficiency. Thus, the comparison of the repair efficiency in different solvents is complicated and should be cautious, and only the dynamics by resolving all elementary steps can finally determine the total repair efficiency. Finally, we use the Marcus electron-transfer theory to analyze all electron-transfer reactions and rationalize all observed electron-transfer dynamics.
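    For reference, the nonadiabatic Marcus rate expression invoked in the analysis above has the standard textbook form (stated generically here, not reproduced from the paper):

      k_{\mathrm{ET}} = \frac{2\pi}{\hbar}\,\lvert H_{\mathrm{DA}}\rvert^{2}\,\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,\exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_{B}T}\right],

    where H_DA is the donor-acceptor electronic coupling, lambda the reorganization energy, Delta G° the reaction free energy (driving force), and k_B T the thermal energy.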

  14. Regulation of suspended particulate matter (SPM) in Indian coal-based thermal power plants

    NASA Astrophysics Data System (ADS)

    Sengupta, Ishita

    Airborne particulate matter in major Indian cities is at least three times the standard prescribed by the WHO. Coal-based thermal power plants are the major emitters of particulate matter in India. The lack of severe penalty for non-compliance with the standards has worsened the situation and thus calls for an immediate need for investment in technologies to regulate particulate emissions. My dissertation studies the optimal investment decisions, in a dynamic framework, for a random sample of forty Indian coal-based power plants to abate particulate emissions. I used Linear Programming to solve the double cost minimization problem for the optimal choices of coal, boiler and pollution-control equipment. A policy analysis is done to choose among various tax policies that would induce the firms to adopt energy-efficient as well as cost-efficient technology. The aim here is to reach the WHO standards. Using the optimal switching point model I show that in a dynamic setup, switching the boiler immediately is always the cost-effective option for all the power plants even if there is no policy restriction. The switch to a baghouse depends upon the policy in place. Theoretically, even though an emission tax is considered the most efficient tax, an ash tax or a coal tax can also be considered a good substitute, especially in countries like India where monitoring costs are very high. As SPM is a local pollutant, the analysis here is mainly firm specific.

  15. Solving multistage stochastic programming models of portfolio selection with outstanding liabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edirisinghe, C.

    1994-12-31

    Models for portfolio selection in the presence of an outstanding liability have received significant attention, for example, models for pricing options. The problem may be described briefly as follows: given a set of risky securities (and a riskless security such as a bond), and given a set of cash flows, i.e., an outstanding liability, to be met at some future date, determine an initial portfolio and a dynamic trading strategy for the underlying securities such that the initial cost of the portfolio is within a prescribed wealth level and the expected cash surpluses arising from trading are maximized. While the trading strategy should be self-financing, there may also be other restrictions such as leverage and short-sale constraints. Usually the treatment is limited to binomial evolution of uncertainty (of stock price), with possible extensions for developing computational bounds for multinomial generalizations. Posed as stochastic programming models of decision making, we investigate alternative efficient solution procedures under continuous evolution of uncertainty, for discrete-time economies. We point out an important moment problem arising in the portfolio selection problem, the solution (or bounds) of which provides the basis for developing efficient computational algorithms. While the underlying stochastic program may be computationally tedious even for a modest number of trading opportunities (i.e., time periods), the derived algorithms may be used to solve problems whose sizes are beyond those considered within stochastic optimization.

  16. Transportation Energy Efficiency Program (TEEP) Report Abstracts

    DOT National Transportation Integrated Search

    1977-04-15

    This bibliography summarizes the published research accomplished for the Department of Transportation's Transportation Energy Efficiency Program and its predecessor, the Automotive Energy Efficiency Program. The reports are indexed by corporate autho...

  17. Learning reduced kinetic Monte Carlo models of complex chemistry from molecular dynamics.

    PubMed

    Yang, Qian; Sing-Long, Carlos A; Reed, Evan J

    2017-08-01

    We propose a novel statistical learning framework for automatically and efficiently building reduced kinetic Monte Carlo (KMC) models of large-scale elementary reaction networks from data generated by a single or few molecular dynamics simulations (MD). Existing approaches for identifying species and reactions from molecular dynamics typically use bond length and duration criteria, where bond duration is a fixed parameter motivated by an understanding of bond vibrational frequencies. In contrast, we show that for highly reactive systems, bond duration should be a model parameter that is chosen to maximize the predictive power of the resulting statistical model. We demonstrate our method on a high temperature, high pressure system of reacting liquid methane, and show that the learned KMC model is able to extrapolate more than an order of magnitude in time for key molecules. Additionally, our KMC model of elementary reactions enables us to isolate the most important set of reactions governing the behavior of key molecules found in the MD simulation. We develop a new data-driven algorithm to reduce the chemical reaction network which can be solved either as an integer program or efficiently using L1 regularization, and compare our results with simple count-based reduction. For our liquid methane system, we discover that rare reactions do not play a significant role in the system, and find that less than 7% of the approximately 2000 reactions observed from molecular dynamics are necessary to reproduce the molecular concentration over time of methane. The framework described in this work paves the way towards a genomic approach to studying complex chemical systems, where expensive MD simulation data can be reused to contribute to an increasingly large and accurate genome of elementary reactions and rates.
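    The reaction-network reduction step described above (selecting, via L1 regularization, a small subset of reactions that reproduces the concentration trajectory of a key molecule) can be sketched with scikit-learn's Lasso as follows; the reaction counts and stoichiometric effects are synthetic stand-ins, not the paper's methane network.

      # Hedged sketch: pick a sparse subset of reactions whose firing counts
      # reproduce the time series of a target molecule's concentration change,
      # via L1 (Lasso) regression. Synthetic placeholder data throughout.
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      n_windows, n_reactions = 200, 50

      # counts[t, r]: how often reaction r fired in time window t (stand-in data)
      counts = rng.poisson(lam=2.0, size=(n_windows, n_reactions)).astype(float)

      # true net stoichiometric effect of each reaction on the target molecule;
      # only a handful of reactions actually matter in this toy setup
      true_nu = np.zeros(n_reactions)
      true_nu[[3, 7, 21]] = [-1.0, 2.0, -1.0]
      d_target = counts @ true_nu + rng.normal(scale=0.5, size=n_windows)

      # L1-regularized fit: nonzero coefficients identify the reactions to keep
      model = Lasso(alpha=0.1, fit_intercept=False).fit(counts, d_target)
      kept = np.flatnonzero(np.abs(model.coef_) > 1e-3)
      print("reactions retained by the reduced model:", kept)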

  18. Learning reduced kinetic Monte Carlo models of complex chemistry from molecular dynamics

    PubMed Central

    Sing-Long, Carlos A.

    2017-01-01

    We propose a novel statistical learning framework for automatically and efficiently building reduced kinetic Monte Carlo (KMC) models of large-scale elementary reaction networks from data generated by a single or few molecular dynamics simulations (MD). Existing approaches for identifying species and reactions from molecular dynamics typically use bond length and duration criteria, where bond duration is a fixed parameter motivated by an understanding of bond vibrational frequencies. In contrast, we show that for highly reactive systems, bond duration should be a model parameter that is chosen to maximize the predictive power of the resulting statistical model. We demonstrate our method on a high temperature, high pressure system of reacting liquid methane, and show that the learned KMC model is able to extrapolate more than an order of magnitude in time for key molecules. Additionally, our KMC model of elementary reactions enables us to isolate the most important set of reactions governing the behavior of key molecules found in the MD simulation. We develop a new data-driven algorithm to reduce the chemical reaction network which can be solved either as an integer program or efficiently using L1 regularization, and compare our results with simple count-based reduction. For our liquid methane system, we discover that rare reactions do not play a significant role in the system, and find that less than 7% of the approximately 2000 reactions observed from molecular dynamics are necessary to reproduce the molecular concentration over time of methane. The framework described in this work paves the way towards a genomic approach to studying complex chemical systems, where expensive MD simulation data can be reused to contribute to an increasingly large and accurate genome of elementary reactions and rates. PMID:28989618

  19. Learning reduced kinetic Monte Carlo models of complex chemistry from molecular dynamics

    DOE PAGES

    Yang, Qian; Sing-Long, Carlos A.; Reed, Evan J.

    2017-06-19

    Here, we propose a novel statistical learning framework for automatically and efficiently building reduced kinetic Monte Carlo (KMC) models of large-scale elementary reaction networks from data generated by a single or few molecular dynamics simulations (MD). Existing approaches for identifying species and reactions from molecular dynamics typically use bond length and duration criteria, where bond duration is a fixed parameter motivated by an understanding of bond vibrational frequencies. Conversely, we show that for highly reactive systems, bond duration should be a model parameter that is chosen to maximize the predictive power of the resulting statistical model. We demonstrate our method on a high temperature, high pressure system of reacting liquid methane, and show that the learned KMC model is able to extrapolate more than an order of magnitude in time for key molecules. Additionally, our KMC model of elementary reactions enables us to isolate the most important set of reactions governing the behavior of key molecules found in the MD simulation. We develop a new data-driven algorithm to reduce the chemical reaction network which can be solved either as an integer program or efficiently using L1 regularization, and compare our results with simple count-based reduction. For our liquid methane system, we discover that rare reactions do not play a significant role in the system, and find that less than 7% of the approximately 2000 reactions observed from molecular dynamics are necessary to reproduce the molecular concentration over time of methane. Furthermore, we describe a framework in this work that paves the way towards a genomic approach to studying complex chemical systems, where expensive MD simulation data can be reused to contribute to an increasingly large and accurate genome of elementary reactions and rates.

  20. Algorithm for predicting the evolution of series of dynamics of complex systems in solving information problems

    NASA Astrophysics Data System (ADS)

    Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.

    2018-03-01

    In the development of information systems and software for predicting dynamic series, neural network methods have recently been applied. They are more flexible than existing analogues and can account for the nonlinearities of the series. In this paper, we propose a modified algorithm for predicting dynamic series that includes a method for training neural networks and an approach to describing and presenting input data, based on prediction with the multilayer perceptron method. To construct the neural network, the values of the series at its extremum points, together with the corresponding time values, are formed into input vectors using the sliding window method. The proposed algorithm can act as an independent approach to predicting dynamic series or serve as one part of a forecasting system. The efficiency of predicting the evolution of the series for short-term one-step and long-term multi-step forecasts is compared between the classical multilayer perceptron method and the modified algorithm, using both synthetic and real data. The modification minimizes the iterative error that arises when previously predicted values are fed back as inputs to the neural network, and it increases the accuracy of the network's iterative predictions.
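    A minimal sketch of the sliding-window construction and a multilayer-perceptron one-step forecaster, using scikit-learn; the window length, network size, and synthetic series are illustrative choices, not the authors' settings.

      # Hedged sketch: sliding-window training pairs and an MLP forecaster.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      t = np.linspace(0, 20 * np.pi, 2000)
      series = np.sin(t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

      window = 20
      X = np.array([series[i:i + window] for i in range(len(series) - window)])
      y = series[window:]                      # next value after each window

      mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
      mlp.fit(X[:-200], y[:-200])              # hold out the last 200 points

      # Iterative multi-step forecast: feed each prediction back into the window
      history = list(series[-200 - window:-200])
      forecast = []
      for _ in range(200):
          nxt = mlp.predict(np.array(history[-window:]).reshape(1, -1))[0]
          forecast.append(nxt)
          history.append(nxt)
      print("first five forecast values:", np.round(forecast[:5], 3))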

  1. An efficient formulation of robot arm dynamics for control and computer simulation

    NASA Astrophysics Data System (ADS)

    Lee, C. S. G.; Nigam, R.

    This paper describes an efficient formulation of the dynamic equations of motion of industrial robots based on the Lagrange formulation of d'Alembert's principle. This formulation, as applied to a PUMA robot arm, results in a set of closed form second order differential equations with cross product terms. They are not as efficient in computation as those formulated by the Newton-Euler method, but provide a better analytical model for control analysis and computer simulation. Computational complexities of this dynamic model together with other models are tabulated for discussion.
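    The closed-form second-order equations referred to above take the standard manipulator form (a textbook statement of the structure, not an equation reproduced from the paper):

      M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + G(q) = \tau,

    where M(q) is the configuration-dependent inertia matrix, the term C(q, q-dot) q-dot collects the Coriolis and centrifugal (cross-product) contributions, G(q) contains the gravity torques, and tau is the vector of joint torques.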

  2. A polynomial chaos approach to the analysis of vehicle dynamics under uncertainty

    NASA Astrophysics Data System (ADS)

    Kewlani, Gaurav; Crawford, Justin; Iagnemma, Karl

    2012-05-01

    The ability of ground vehicles to quickly and accurately analyse their dynamic response to a given input is critical to their safety and efficient autonomous operation. In field conditions, significant uncertainty is associated with terrain and/or vehicle parameter estimates, and this uncertainty must be considered in the analysis of vehicle motion dynamics. Here, polynomial chaos approaches that explicitly consider parametric uncertainty during modelling of vehicle dynamics are presented. They are shown to be computationally more efficient than the standard Monte Carlo scheme, and experimental results compared with the simulation results performed on ANVEL (a vehicle simulator) indicate that the method can be utilised for efficient and accurate prediction of vehicle motion in realistic scenarios.
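    To illustrate why a polynomial chaos expansion can be cheaper than brute-force Monte Carlo, the following one-dimensional sketch propagates a Gaussian uncertain parameter through a placeholder nonlinear response using Gauss-Hermite quadrature and probabilists' Hermite polynomials; the response function and truncation order are assumptions, not the vehicle-dynamics model of the paper.

      # Hedged sketch: 1-D polynomial chaos expansion for y = f(x), x ~ N(0, 1),
      # compared against plain Monte Carlo. f and the order are placeholders.
      import numpy as np
      from math import factorial
      from numpy.polynomial import hermite_e as He   # probabilists' Hermite

      f = lambda x: np.exp(0.3 * x) + 0.5 * x**2     # stand-in nonlinear response
      order = 5

      # Gauss-HermiteE quadrature (weight exp(-x^2/2)); normalize to the N(0,1) density
      nodes, weights = He.hermegauss(20)
      weights = weights / np.sqrt(2 * np.pi)

      # Spectral coefficients c_k = E[f(x) He_k(x)] / k!
      coeffs = []
      for k in range(order + 1):
          Hk = He.hermeval(nodes, [0] * k + [1])
          coeffs.append(np.sum(weights * f(nodes) * Hk) / factorial(k))

      mean_pce = coeffs[0]
      var_pce = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))

      x_mc = np.random.default_rng(0).normal(size=200_000)
      print("PCE mean/var:", mean_pce, var_pce)
      print("MC  mean/var:", f(x_mc).mean(), f(x_mc).var())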

  3. Heterogeneous delivering capability promotes traffic efficiency in complex networks

    NASA Astrophysics Data System (ADS)

    Zhu, Yan-Bo; Guan, Xiang-Min; Zhang, Xue-Jun

    2015-12-01

    Traffic is one of the most fundamental dynamical processes in networked systems. With homogeneous delivery capability across nodes, the global dynamic routing strategy proposed by Ling et al. [Phys. Rev. E 81, 016113 (2010)] makes good use of dynamic information during delivery and can thus reach a quite high network capacity. In this paper, building on the global dynamic routing strategy, we propose a heterogeneous delivery-capability allocation strategy for nodes on scale-free networks that takes node degree into account. We find that the network capacity, as well as other indices reflecting transportation efficiency, is further improved. Our work may be useful for the design of more efficient routing strategies in communication or transportation systems.

  4. Efficiently approximating the Pareto frontier: Hydropower dam placement in the Amazon basin

    USGS Publications Warehouse

    Wu, Xiaojian; Gomes-Selman, Jonathan; Shi, Qinru; Xue, Yexiang; Garcia-Villacorta, Roosevelt; Anderson, Elizabeth; Sethi, Suresh; Steinschneider, Scott; Flecker, Alexander; Gomes, Carla P.

    2018-01-01

    Real-world problems are often not fully characterized by a single optimal solution, as they frequently involve multiple competing objectives; it is therefore important to identify the so-called Pareto frontier, which captures solution trade-offs. We propose a fully polynomial-time approximation scheme based on Dynamic Programming (DP) for computing a polynomially succinct curve that approximates the Pareto frontier to within an arbitrarily small ε > 0 on tree-structured networks. Given a set of objectives, our approximation scheme runs in time polynomial in the size of the instance and 1/ε. We also propose a Mixed Integer Programming (MIP) scheme to approximate the Pareto frontier. The DP and MIP Pareto frontier approaches have complementary strengths and are surprisingly effective. We provide empirical results showing that our methods outperform other approaches in efficiency and accuracy. Our work is motivated by a problem in computational sustainability concerning the proliferation of hydropower dams throughout the Amazon basin. Our goal is to support decision-makers in evaluating impacted ecosystem services on the full scale of the Amazon basin. Our work is general and can be applied to approximate the Pareto frontier of a variety of multiobjective problems on tree-structured networks.
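    The heart of such an approximation scheme is a "round and merge" step that snaps objective values onto a geometric grid so that only polynomially many representatives survive when partial solutions are combined up the tree. The toy sketch below shows that step for two maximized objectives; the candidate sets and epsilon are invented, not the river-network instances of the paper.

      # Hedged sketch: epsilon-grid rounding while merging two partial Pareto sets.
      import itertools
      import math

      EPS = 0.1   # approximation parameter

      def grid_key(point):
          # Snap each objective onto the geometric grid (1 + EPS)^k so only
          # polynomially many cells, hence representatives, can survive.
          return tuple(0 if v <= 0 else math.floor(math.log(v, 1 + EPS)) for v in point)

      def merge(front_a, front_b):
          cells = {}
          for (a1, a2), (b1, b2) in itertools.product(front_a, front_b):
              p = (a1 + b1, a2 + b2)           # combine two partial solutions
              key = grid_key(p)
              if key not in cells or p > cells[key]:
                  cells[key] = p               # keep one representative per cell
          return list(cells.values())

      child1 = [(5.0, 1.0), (3.0, 4.0), (1.0, 6.0)]
      child2 = [(2.0, 2.0), (4.0, 0.5)]
      print(merge(child1, child2))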

  5. Active model-based balancing strategy for self-reconfigurable batteries

    NASA Astrophysics Data System (ADS)

    Bouchhima, Nejmeddine; Schnierle, Marc; Schulte, Sascha; Birke, Kai Peter

    2016-08-01

    This paper describes a novel balancing strategy for self-reconfigurable batteries where the discharge and charge rates of each cell can be controlled. While much effort has been focused on improving the hardware architecture of self-reconfigurable batteries, energy equalization algorithms have not been systematically optimized in terms of maximizing the efficiency of the balancing system. Our approach addresses this optimization problem directly. We develop a balancing strategy for optimal control of the discharge rate of the battery cells. We first formulate cell balancing as a nonlinear optimal control problem, which is then modeled as a network program. Using dynamic programming techniques and MATLAB's vectorization features, we solve the optimal control problem by generating the optimal battery operation policy for a given drive cycle. The simulation results show that the proposed strategy efficiently balances the cells over the life of the battery, an advantage absent from conventional approaches. Our algorithm is shown to be robust when tested against influencing parameters varying over a wide spectrum on different drive cycles. Furthermore, owing to its short computation time and demonstrated low sensitivity to inaccurate power predictions, our strategy can be integrated into a real-time system.

  6. Inlet Flow Control and Prediction Technologies for Embedded Propulsion Systems

    NASA Technical Reports Server (NTRS)

    McMillan, Michelle L.; Mackie, Scott A.; Gissen, Abe; Vukasinovic, Bojan; Lakebrink, Matthew T.; Glezer, Ari; Mani, Mori; Mace, James L.

    2011-01-01

    Fail-safe, hybrid flow control (HFC) is a promising technology for meeting high-speed cruise efficiency, low-noise signature, and reduced fuel-burn goals for future Hybrid-Wing-Body (HWB) aircraft with embedded engines. This report details the development of HFC technology that enables improved inlet performance in HWB vehicles with highly integrated inlets and embedded engines without adversely affecting vehicle performance. In addition, new test techniques for evaluating Boundary-Layer-Ingesting (BLI)-inlet flow-control technologies developed and demonstrated through this program are documented, including the ability to generate a BLI-like inlet-entrance flow in a direct-connect wind-tunnel facility, as well as the use of D-optimal, statistically designed experiments to optimize test efficiency and enable interpretation of results. Validated improvements in numerical analysis tools and methods accomplished through this program are also documented, including Reynolds-Averaged Navier-Stokes CFD simulations of steady-state flow physics for baseline BLI-inlet diffuser flow, as well as that created by flow-control devices. Finally, numerical methods were employed in a ground-breaking attempt to directly simulate dynamic distortion. The advances in inlet technologies and prediction tools will help to meet and exceed "N+2" project goals for future HWB aircraft.

  7. DEVELOPMENT OF AGENTS AND PROCEDURES FOR DECONTAMINATION OF THE YANKEE REACTOR PRIMARY COOLANT SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, R.M.

    1959-03-01

    Developments relative to decontamination achieved under the Yankee Research and Development program are reported. The decontamination of a large test loop which had been used to conduct corrosion rate studies for the Yankee reactor program is described. The basic permanganate-citrate decontamination procedure suggested for application in Yankee reactor primary system cleanup was used. A study of the chemistry of this decontamination operation is presented, together with conclusions pertaining to the effectiveness of the solutions under the conditions studied. In an attempt to further improve the efficiency of the procedure, an additional series of static and dynamic tests was performed using contaminated sections of stainless steel tubing from the original S1W steam generator. Several variables in the process (reagent composition, contact time, temperature, and flow velocity) were studied. The changes in decontamination efficiency produced by these variations are discussed and compared with results obtained through the use of similar procedures. Based on the observations made, conclusions are drawn concerning the optimum conditions for this cleanup process, a new set of suggested basic permanganate-citrate decontamination instructions is presented, and recommendations are made concerning future studies involving this procedure. (auth)

  8. Auctions with Dynamic Populations: Efficiency and Revenue Maximization

    NASA Astrophysics Data System (ADS)

    Said, Maher

    We study a stochastic sequential allocation problem with a dynamic population of privately-informed buyers. We characterize the set of efficient allocation rules and show that a dynamic VCG mechanism is both efficient and periodic ex post incentive compatible; we also show that the revenue-maximizing direct mechanism is a pivot mechanism with a reserve price. We then consider sequential ascending auctions in this setting, both with and without a reserve price. We construct equilibrium bidding strategies in this indirect mechanism where bidders reveal their private information in every period, yielding the same outcomes as the direct mechanisms. Thus, the sequential ascending auction is a natural institution for achieving either efficient or optimal outcomes.

  9. Assessing program efficiency: a time and motion study of the Mental Health Emergency Care - Rural Access Program in NSW Australia.

    PubMed

    Saurman, Emily; Lyle, David; Kirby, Sue; Roberts, Russell

    2014-07-31

    The Mental Health Emergency Care-Rural Access Program (MHEC-RAP) is a telehealth solution providing specialist emergency mental health care to rural and remote communities across western NSW, Australia. This is the first time and motion (T&M) study to examine program efficiency and capacity for a telepsychiatry program. Clinical services are an integral aspect of the program accounting for 6% of all activities and 50% of the time spent conducting program activities, but half of this time is spent completing clinical paperwork. This finding emphasizes the importance of these services to program efficiency and the need to address variability of service provision to impact capacity. Currently, there is no efficiency benchmark for emergency telepsychiatry programs. Findings suggest that MHEC-RAP could increase its activity without affecting program responsiveness. T&M studies not only determine activity and time expenditure, but have a wider application assessing program efficiency by understanding, defining, and calculating capacity. T&M studies can inform future program development of MHEC-RAP and similar telehealth programs, both in Australia and overseas.

  10. Impact Properties of Metal Fan Containment Materials Being Evaluated for the High-Speed Civil Transport (HSCT)

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Under the Enabling Propulsion Materials (EPM) program - a partnership between NASA, Pratt & Whitney, and GE Aircraft Engines - the Materials and Structures Divisions of the NASA Lewis Research Center are involved in developing a fan-containment system for the High-Speed Civil Transport (HSCT). The program calls for a baseline system to be designed by the end of 1995, with subsequent testing of innovative concepts. Five metal candidate materials are currently being evaluated for the baseline system in the Structures Division's Ballistic Impact Facility. This facility was developed to provide the EPM program with cost-efficient and timely impact test data. At the facility, material specimens are impacted at speeds up to 350 m/sec by projectiles of various sizes and shapes to assess the specimens' ability to absorb energy and withstand impact. The tests can be conducted at either room or elevated temperatures. Posttest metallographic analysis is conducted to improve understanding of the failure modes. A dynamic finite element program is used to simulate the events and both guide the testing as well as aid in designing the fan-containment system.

  11. 13 CFR 101.500 - Small Business Energy Efficiency Program.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Small Business Energy Efficiency... ADMINISTRATION Small Business Energy Efficiency § 101.500 Small Business Energy Efficiency Program. (a) The.../energy, building on the Energy Star for Small Business Program, to assist small business concerns in...

  12. 13 CFR 101.500 - Small Business Energy Efficiency Program.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Small Business Energy Efficiency... ADMINISTRATION Small Business Energy Efficiency § 101.500 Small Business Energy Efficiency Program. (a) The.../energy, building on the Energy Star for Small Business Program, to assist small business concerns in...

  13. 13 CFR 101.500 - Small Business Energy Efficiency Program.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Small Business Energy Efficiency... ADMINISTRATION Small Business Energy Efficiency § 101.500 Small Business Energy Efficiency Program. (a) The.../energy, building on the Energy Star for Small Business Program, to assist small business concerns in...

  14. 13 CFR 101.500 - Small Business Energy Efficiency Program.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Small Business Energy Efficiency... ADMINISTRATION Small Business Energy Efficiency § 101.500 Small Business Energy Efficiency Program. (a) The.../energy, building on the Energy Star for Small Business Program, to assist small business concerns in...

  15. 13 CFR 101.500 - Small Business Energy Efficiency Program.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Small Business Energy Efficiency... ADMINISTRATION Small Business Energy Efficiency § 101.500 Small Business Energy Efficiency Program. (a) The.../energy, building on the Energy Star for Small Business Program, to assist small business concerns in...

  16. Model Energy Efficiency Program Impact Evaluation Guide

    EPA Pesticide Factsheets

    Find guidance on model approaches for calculating energy, demand, and emissions savings resulting from energy efficiency programs. It describes several standard approaches that can be used in order to make these programs more efficient.

  17. DROP: Detecting Return-Oriented Programming Malicious Code

    NASA Astrophysics Data System (ADS)

    Chen, Ping; Xiao, Hai; Shen, Xiaobin; Yin, Xinchun; Mao, Bing; Xie, Li

    Return-Oriented Programming (ROP) is a new technique that helps an attacker construct malicious code mounted on x86/SPARC executables without any function call at all. This technique means the ROP malicious code contains no instructions of its own, which distinguishes it from existing attacks. Moreover, it hides the malicious code within benign code. Thus, it circumvents approaches that prevent control-flow diversion outside legitimate regions (such as W ⊕ X) and most malicious-code scanning techniques (such as anti-virus scanners). However, ROP has intrinsic features that differ from normal program design: (1) it uses short instruction sequences ending in "ret", called gadgets, and (2) it executes these gadgets contiguously in specific memory space, such as the standard GNU libc. Based on these features of ROP malicious code, in this paper we present DROP, a tool focused on dynamically detecting ROP malicious code. Preliminary experimental results show that DROP can efficiently detect ROP malicious code, with no false positives or negatives.
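    Based only on the two features named above (short instruction sequences ending in "ret", executed contiguously from library address space), the toy sketch below illustrates the detection heuristic over a hypothetical dynamic instruction trace; the trace format, address ranges, and thresholds are invented and do not reflect DROP's actual implementation.

      # Hedged sketch: raise an alarm when too many consecutive short sequences
      # end in "ret" while control stays inside (hypothetical) libc space.
      LIBC_RANGE = (0xB7E00000, 0xB7F80000)   # hypothetical libc mapping
      MAX_GADGET_LEN = 5                      # "short" sequence of instructions
      MIN_CHAIN_LEN = 3                       # gadgets in a row before alarming

      def looks_like_rop(trace):
          """trace: list of (address, mnemonic) tuples from a dynamic run."""
          chain, since_ret = 0, 0
          for addr, mnemonic in trace:
              in_lib = LIBC_RANGE[0] <= addr < LIBC_RANGE[1]
              since_ret += 1
              if mnemonic == "ret":
                  if in_lib and since_ret <= MAX_GADGET_LEN:
                      chain += 1              # another short library gadget
                      if chain >= MIN_CHAIN_LEN:
                          return True
                  else:
                      chain = 0               # a normal, longer function returned
                  since_ret = 0
          return False

      benign = [(0x08048400 + i, "mov") for i in range(40)] + [(0x08048500, "ret")]
      ropish = [(0xB7E01000, "pop"), (0xB7E01001, "ret"),
                (0xB7E02000, "pop"), (0xB7E02001, "ret"),
                (0xB7E03000, "add"), (0xB7E03001, "ret")]
      print(looks_like_rop(benign), looks_like_rop(ropish))   # False True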

  18. Modelling parallel programs and multiprocessor architectures with AXE

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Fineman, Charles E.

    1991-01-01

    AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user-interface enables the user to model parallel programs and machines precisely and efficiently. Its quick turn-around time keeps the user interested and productive. AXE models multicomputers. The user may easily modify various architectural parameters including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players. Their use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen. These include CPU and message routing bottlenecks, and the dynamic status of the software.

  19. The MHOST finite element program: 3-D inelastic analysis methods for hot section components. Volume 1: Theoretical manual

    NASA Technical Reports Server (NTRS)

    Nakazawa, Shohei

    1991-01-01

    Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel concept of the mixed iterative solution technique for the efficient 3-D computations of turbine engine hot section components. The general framework of the variational formulation and the solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for the quasi-static, transient dynamic and buckling analyses. The global-local analysis procedure referred to as subelement refinement is developed in the framework of the mixed iterative solution and is presented in detail. The numerically integrated isoparametric elements implemented in the framework are discussed. Methods to filter certain parts of strain and project the element discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.

  20. Space storable propellant performance program coaxial injector characterization

    NASA Technical Reports Server (NTRS)

    Burick, R. J.

    1972-01-01

    An experimental program was conducted to characterize the circular coaxial injector concept for application with the space-storable gas/liquid propellant combination FLOX(82.6% F2)/CH4(g) at high pressure. The primary goal of the program was to obtain high characteristic velocity efficiency in conjunction with acceptable injector/chamber compatibility. A series of subscale (single element) cold flow and hot fire experiments was employed to establish design criteria for a 3000-lbf (sea level) engine operating at 500 psia. The subscale experiments characterized both high performance core elements and peripheral elements with enhanced injector/chamber compatibility. The full-scale injector which evolved from the study demonstrated a performance level of 99 percent of the theoretical shifting characteristic exhaust velocity with low chamber heat flux levels. A 44-second-duration firing demonstrated the durability of the injector. Parametric data are presented that are applicable for the design of circular, coaxial injectors that operate with injection dynamics (fuel and oxidizer velocity, etc.) similar to those employed in the work reported.

  1. A divide-and-conquer approach to determine the Pareto frontier for optimization of protein engineering experiments.

    PubMed

    He, Lu; Friedman, Alan M; Bailey-Kellogg, Chris

    2012-03-01

    In developing improved protein variants by site-directed mutagenesis or recombination, there are often competing objectives that must be considered in designing an experiment (selecting mutations or breakpoints): stability versus novelty, affinity versus specificity, activity versus immunogenicity, and so forth. Pareto optimal experimental designs make the best trade-offs between competing objectives. Such designs are not "dominated"; that is, no other design is better than a Pareto optimal design for one objective without being worse for another objective. Our goal is to produce all the Pareto optimal designs (the Pareto frontier), to characterize the trade-offs and suggest designs most worth considering, but to avoid explicitly considering the large number of dominated designs. To do so, we develop a divide-and-conquer algorithm, Protein Engineering Pareto FRontier (PEPFR), that hierarchically subdivides the objective space, using appropriate dynamic programming or integer programming methods to optimize designs in different regions. This divide-and-conquer approach is efficient in that the number of divisions (and thus calls to the optimizer) is directly proportional to the number of Pareto optimal designs. We demonstrate PEPFR with three protein engineering case studies: site-directed recombination for stability and diversity via dynamic programming, site-directed mutagenesis of interacting proteins for affinity and specificity via integer programming, and site-directed mutagenesis of a therapeutic protein for activity and immunogenicity via integer programming. We show that PEPFR is able to effectively produce all the Pareto optimal designs, discovering many more designs than previous methods. The characterization of the Pareto frontier provides additional insights into the local stability of design choices as well as global trends leading to trade-offs between competing criteria. Copyright © 2011 Wiley Periodicals, Inc.

  2. Optimal control of epidemic information dissemination over networks.

    PubMed

    Chen, Pin-Yu; Cheng, Shin-Ming; Chen, Kwang-Cheng

    2014-12-01

    Information dissemination control is of crucial importance to facilitate reliable and efficient data delivery, especially in networks consisting of time-varying links or heterogeneous links. Since the abstraction of information dissemination much resembles the spread of epidemics, epidemic models are utilized to characterize the collective dynamics of information dissemination over networks. From a systematic point of view, we aim to explore the optimal control policy for information dissemination given that the control capability is a function of its distribution time, which is a more realistic model in many applications. The main contributions of this paper are to provide an analytically tractable model for information dissemination over networks, to solve the optimal control signal distribution time for minimizing the accumulated network cost via dynamic programming, and to establish a parametric plug-in model for information dissemination control. In particular, we evaluate its performance in mobile and generalized social networks as typical examples.
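    The choice of control-signal distribution time can be illustrated as a small backward-induction dynamic program of the optimal-stopping kind: waiting lets the unwanted information keep spreading, while releasing the control signal earlier leaves it less time to be distributed and hence less capable. The spreading curve, cost terms, and horizon below are invented placeholders, not the paper's epidemic model.

      # Hedged sketch: backward-induction DP for when to release a control signal.
      import numpy as np

      T = 50                                   # decision horizon (time steps)
      infected = 1.0 / (1.0 + np.exp(-0.25 * (np.arange(T + 1) - 25)))  # toy spread

      running_cost = 2.0 * infected            # per-step cost of ongoing spread
      release_cost = 10.0 / (1.0 + np.arange(T + 1))   # cheaper with more lead time

      V = np.zeros(T + 1)
      act = np.zeros(T + 1, dtype=bool)
      V[T] = release_cost[T] + running_cost[T] # must release by the horizon
      for t in range(T - 1, -1, -1):
          wait = running_cost[t] + V[t + 1]
          stop = release_cost[t] + running_cost[t]
          V[t], act[t] = (stop, True) if stop <= wait else (wait, False)

      # Following the policy from t = 0, release at the first step marked True
      print("optimal release time:", int(np.argmax(act)), "expected cost:", round(V[0], 2))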

  3. Interactive Visualization of Complex Seismic Data and Models Using Bokeh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Chengping; Ammon, Charles J.; Maceira, Monica

    Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using the open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach places minimal requirements on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide an effective and efficient approach to exploring large data sets and models.
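    A minimal, self-contained example of the kind of interactive browser figure Bokeh produces is sketched below; the waveform is synthetic and is not one of the paper's seismic data sets.

      # Hedged sketch: an interactive Bokeh line plot of a synthetic waveform,
      # written to a standalone HTML page and opened in the default browser.
      import numpy as np
      from bokeh.plotting import figure, output_file, show

      t = np.linspace(0, 60, 3000)                              # seconds
      trace = np.exp(-0.05 * t) * np.sin(2 * np.pi * 1.5 * t)   # toy "seismogram"

      p = figure(title="Synthetic vertical-component trace",
                 x_axis_label="Time (s)", y_axis_label="Amplitude",
                 tools="pan,wheel_zoom,box_zoom,reset,save")
      p.line(t, trace, line_width=1)

      output_file("trace.html")   # standalone HTML output
      show(p)                     # opens the page in a web browser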

  4. A vibration-based health monitoring program for a large and seismically vulnerable masonry dome

    NASA Astrophysics Data System (ADS)

    Pecorelli, M. L.; Ceravolo, R.; De Lucia, G.; Epicoco, R.

    2017-05-01

    Vibration-based health monitoring of monumental structures must rely on efficient and, as far as possible, automatic modal analysis procedures. Relatively low excitation energy provided by traffic, wind and other sources is usually sufficient to detect structural changes, as those produced by earthquakes and extreme events. Above all, in-operation modal analysis is a non-invasive diagnostic technique that can support optimal strategies for the preservation of architectural heritage, especially if complemented by model-driven procedures. In this paper, the preliminary steps towards a fully automated vibration-based monitoring of the world’s largest masonry oval dome (internal axes of 37.23 by 24.89 m) are presented. More specifically, the paper reports on signal treatment operations conducted to set up the permanent dynamic monitoring system of the dome and to realise a robust automatic identification procedure. Preliminary considerations on the effects of temperature on dynamic parameters are finally reported.

  5. Dynamics of attentional deployment during saccadic programming.

    PubMed

    Castet, Eric; Jeanjean, Sébastien; Montagnini, Anna; Laugier, Danièle; Masson, Guillaume S

    2006-03-03

    The dynamics of attentional deployment before saccade execution was studied with a dual-task paradigm. Observers made a horizontal saccade whose direction was indicated by a symbolic precue and had to discriminate the orientation of a Gabor patch displayed at different delays after the precue (but before saccade onset). The patch location relative to the saccadic target was indicated to observers before each block. Therefore, on each trial, observers were informed simultaneously about the respective absolute locations of the saccadic and perceptual targets. The main result is that orientational acuity improved over a period of 150-200 ms after the precue onset at the saccadic target location, where overall performance is best, and at distant locations. This effect is due to attentional factors rather than to an alerting effect. It is also dependent on the efficiency of the temporal masks displayed before and after the Gabor patches.

  6. Interactive Visualization of Complex Seismic Data and Models Using Bokeh

    DOE PAGES

    Chai, Chengping; Ammon, Charles J.; Maceira, Monica; ...

    2018-02-14

    Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using the open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach places minimal requirements on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide an effective and efficient approach to exploring large data sets and models.

  7. Dislocation dynamics in non-convex domains using finite elements with embedded discontinuities

    NASA Astrophysics Data System (ADS)

    Romero, Ignacio; Segurado, Javier; LLorca, Javier

    2008-04-01

    The standard strategy developed by Van der Giessen and Needleman (1995 Modelling Simul. Mater. Sci. Eng. 3 689) to simulate dislocation dynamics in two-dimensional finite domains was modified to account for the effect of dislocations leaving the crystal through a free surface in the case of arbitrary non-convex domains. The new approach incorporates the displacement jumps across the slip segments of the dislocations that have exited the crystal within the finite element analysis carried out to compute the image stresses on the dislocations due to the finite boundaries. This is done in a simple computationally efficient way by embedding the discontinuities in the finite element solution, a strategy often used in the numerical simulation of crack propagation in solids. Two academic examples are presented to validate and demonstrate the extended model and its implementation within a finite element program is detailed in the appendix.

  8. Singlet-to-triplet intermediates and triplet exciton dynamics in pentacene thin films

    NASA Astrophysics Data System (ADS)

    Thorsmolle, Verner; Korber, Michael; Obergfell, Emanuel; Kuhlman, Thomas; Campbell, Ian; Crone, Brian; Taylor, Antoinette; Averitt, Richard; Demsar, Jure

    Singlet-to-triplet fission in organic semiconductors is a spin-conserving multiexciton process in which one spin-zero singlet excitation is converted into two spin-one triplet excitations on an ultrafast timescale. Current scientific interest into this carrier multiplication process is largely driven by prospects of enhancing the efficiency in photovoltaic applications by generating two long-lived triplet excitons by one photon. The fission process is known to involve intermediate states, known as correlated triplet pairs, with an overall singlet character, before being interchanged into uncorrelated triplets. Here we use broadband femtosecond real-time spectroscopy to study the excited state dynamics in pentacene thin films, elucidating the fission process and the role of intermediate triplet states. VKT and AJT acknowledge support by the LDRD program at Los Alamos National Laboratory and the Department of Energy, Grant No. DE-FG02-04ER118. MK, MO and JD acknowledge support by the Alexander von Humboldt Foundation.

  9. Optomechanical study and optimization of cantilever plate dynamics

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    1995-06-01

    Optimum dynamic characteristics of an aluminum cantilever plate containing holes of different sizes and located at arbitrary positions on the plate are studied computationally and experimentally. The objective function of this optimization is the minimization/maximization of the natural frequencies of the plate in terms of such design variables as the sizes and locations of the holes. The optimization process is performed using the finite element method and mathematical programming techniques in order to obtain the natural frequencies and the optimum conditions of the plate, respectively. The modal behavior of the resultant optimal plate layout is studied experimentally through the use of holographic interferometry techniques. Comparisons of the computational and experimental results show that good agreement between theory and test is obtained. The comparisons also show that the combined, or hybrid, use of experimental and computational techniques is complementary and proves to be a very efficient tool for performing optimization studies of mechanical components.

  10. Portable Parallel Programming for the Dynamic Load Balancing of Unstructured Grid Applications

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Das, Sajal K.; Harvey, Daniel; Oliker, Leonid

    1999-01-01

    The ability to dynamically adapt an unstructured grid (or mesh) is a powerful tool for solving computational problems with evolving physical features; however, an efficient parallel implementation is rather difficult, particularly from the viewpoint of portability across various multiprocessor platforms. We address this problem by developing PLUM, an automatic and architecture-independent framework for adaptive numerical computations in a message-passing environment. Portability is demonstrated by comparing performance on an SP2, an Origin2000, and a T3E, without any code modifications. We also present a general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication pattern, with the goal of providing a global view of system loads across processors. Experiments on an SP2 and an Origin2000 demonstrate the portability of our approach, which achieves superb load balance at the cost of minimal extra overhead.

  11. A new procedure for dynamic adaption of three-dimensional unstructured grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger

    1993-01-01

    A new procedure is presented for the simultaneous coarsening and refinement of three-dimensional unstructured tetrahedral meshes. This algorithm allows for localized grid adaption that is used to capture aerodynamic flow features such as vortices and shock waves in helicopter flowfield simulations. The mesh-adaption algorithm is implemented in the C programming language and uses a data structure consisting of a series of dynamically-allocated linked lists. These lists allow the mesh connectivity to be rapidly reconstructed when individual mesh points are added and/or deleted. The algorithm allows the mesh to change in an anisotropic manner in order to efficiently resolve directional flow features. The procedure has been successfully implemented on a single processor of a Cray Y-MP computer. Two sample cases are presented involving three-dimensional transonic flow. Computed results show good agreement with conventional structured-grid solutions for the Euler equations.
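
    As an illustration of the linked-list bookkeeping mentioned above, a toy Python sketch of adding and deleting mesh points without renumbering could look like the following; the original adaptation code is written in C and is far more elaborate.

      # Toy sketch of linked-list bookkeeping for mesh points (illustrative only;
      # the original mesh-adaptation code is written in C and much richer).
      class Node:
          def __init__(self, point_id):
              self.point_id = point_id
              self.prev = None
              self.next = None

      class PointList:
          """Doubly linked list so points can be inserted or removed in O(1)."""
          def __init__(self):
              self.head = None

          def insert(self, node):
              node.next = self.head
              if self.head is not None:
                  self.head.prev = node
              self.head = node

          def remove(self, node):
              if node.prev is not None:
                  node.prev.next = node.next
              else:
                  self.head = node.next
              if node.next is not None:
                  node.next.prev = node.prev

      # refinement adds a point, coarsening deletes one, without renumbering the rest
      points = PointList()
      a, b = Node(0), Node(1)
      points.insert(a)
      points.insert(b)
      points.remove(a)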

  12. [The investigation into dynamics of depression level and the quality of life in the patients after myocardial infarction under the influence of the program of physical rehabilitation].

    PubMed

    Belikova, N A; Indyka, S Ya

    2016-01-01

    The evaluation of the psychological condition of the patients who survived myocardial infarction and its correction taking into consideration the peculiar features of the individual reaction to the disease are the indispensable components of physical rehabilitation. The present article was designed to report the results of the study on the influence of the authors' physical rehabilitation program on the prevalence of depression and the life quality characteristics of the patients treated after myocardial infarction during the follow-up period. The patients of the main group (n=30) were enrolled in the original physical rehabilitation program. Those comprising the group of comparison (n=30) were given a course of rehabilitation in accordance with the scheme that had been recommended by the leading scientists and generally accepted in the Ukraine for the patients recovering after myocardial infarction under conditions of the out-patient clinics, spa and health resort facilities or convalescent centers. The study has demonstrated that the patients of both groups exhibited positive dynamics of their clinical condition (e.g. the decrease in the number of depressed subjects); however, this tendency was more pronounced in the main group where the number of the patients experiencing depression decreased by 61% at the end of the observation period (р<0,05). The analysis of the causes of anxiety associated with this pathology in the individual patients has demonstrated that the main factors responsible for the deterioration of the quality of life were the necessity of treatment, the limitations on the everyday physical activity, and the feeling of emotional tension. Moreover, the positive dynamics of the characteristics being evaluated was documented in the patients of the main group which gives reason to conclude that the program of physical rehabilitation proposed by the authors for the treatment of the patients after myocardial infarction is highly efficient during the follow-up period. Suffice it to say that 23 (76,7%) patients of the main group did not consider their lives as of poor quality by the end of the study period (р<0,01). There were only 18 such patients in the control group (р<0,05). The results of the present study provide a basis for recommending the proposed authors' program of physical rehabilitation for the patients treated after myocardial infarction with the emphasis on the necessity to do special dynamic exercises for the cervical and thoraco-cervical spine segments to be supplemented by the relevant educational program.

  13. Interface COMSOL-PHREEQC (iCP), an efficient numerical framework for the solution of coupled multiphysics and geochemistry

    NASA Astrophysics Data System (ADS)

    Nardi, Albert; Idiart, Andrés; Trinchero, Paolo; de Vries, Luis Manuel; Molinero, Jorge

    2014-08-01

    This paper presents the development, verification and application of an efficient interface, denoted as iCP, which couples two standalone simulation programs: the general-purpose Finite Element framework COMSOL Multiphysics® and the geochemical simulator PHREEQC. The main goal of the interface is to maximize the synergies between the aforementioned codes, providing a numerical platform that can efficiently simulate a wide range of multiphysics problems coupled with geochemistry. iCP is written in Java and uses the IPhreeqc C++ dynamic library and the COMSOL Java-API. Given the large computational requirements of the aforementioned coupled models, special emphasis has been placed on numerical robustness and efficiency. To this end, the geochemical reactions are solved in parallel by balancing the computational load over multiple threads. First, a benchmark exercise is used to test the reliability of iCP regarding flow and reactive transport. Then, a large scale thermo-hydro-chemical (THC) problem is solved to show the code capabilities. The results of the verification exercise are successfully compared with those obtained using PHREEQC, and the application case demonstrates the scalability of a large-scale model, at least up to 32 threads.
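
    The per-cell parallel geochemistry step described above can be caricatured with a thread pool; this is an illustration only (iCP itself is Java driving the IPhreeqc C++ library), with a fake reaction function standing in for an IPhreeqc call.

      # Caricature of solving per-cell geochemistry in parallel over a thread pool.
      # Illustration only: iCP is Java calling IPhreeqc, not Python, and the
      # "chemistry" below is a placeholder function.
      from concurrent.futures import ThreadPoolExecutor

      def react_cell(cell_state):
          # stand-in for an equilibrium/kinetics calculation on one mesh cell
          temperature, concentration = cell_state
          return concentration * (1.0 - 0.01 * temperature)   # fake chemistry

      cells = [(25.0 + i % 10, 1.0) for i in range(1000)]      # (T, c) per mesh cell

      with ThreadPoolExecutor(max_workers=32) as pool:         # e.g. up to 32 threads
          updated = list(pool.map(react_cell, cells))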

  14. Ecological Research Division Theoretical Ecology Program. [Contains abstracts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1990-10-01

    This report presents the goals of the Theoretical Ecology Program and abstracts of research in progress. Abstracts cover both theoretical research that began as part of the terrestrial ecology core program and new projects funded by the theoretical program begun in 1988. Projects have been clustered into four major categories: Ecosystem dynamics; landscape/scaling dynamics; population dynamics; and experiment/sample design.

  15. Dynamic switching enables efficient bacterial colonization in flow.

    PubMed

    Kannan, Anerudh; Yang, Zhenbin; Kim, Minyoung Kevin; Stone, Howard A; Siryaporn, Albert

    2018-05-22

    Bacteria colonize environments that contain networks of moving fluids, including digestive pathways, blood vasculature in animals, and the xylem and phloem networks in plants. In these flow networks, bacteria form distinct biofilm structures that have an important role in pathogenesis. The physical mechanisms that determine the spatial organization of bacteria in flow are not understood. Here, we show that the bacterium P. aeruginosa colonizes flow networks using a cyclical process that consists of surface attachment, upstream movement, detachment, movement with the bulk flow, and surface reattachment. This process, which we have termed dynamic switching, distributes bacterial subpopulations upstream and downstream in flow through two phases: movement on surfaces and cellular movement via the bulk. The model equations that describe dynamic switching are identical to those that describe dynamic instability, a process that enables microtubules in eukaryotic cells to search space efficiently to capture chromosomes. Our results show that dynamic switching enables bacteria to explore flow networks efficiently, which maximizes dispersal and colonization and establishes the organizational structure of biofilms. A number of eukaryotic and mammalian cells also exhibit movement in two phases in flow, which suggests that dynamic switching is a modality that enables efficient dispersal for a broad range of cell types.

  16. Effect of Initial Microstructure on the Microstructural Evolution and Joint Efficiency of a WE43 Alloy During Friction Stir Welding

    DTIC Science & Technology

    2013-04-01

    Subject terms: friction stir welding, strain rate, dynamic recrystallization, joint efficiency, stir zone (SZ). Only fragments of the abstract are recoverable from this record: "... to maximize joint efficiency"; "The initial microstructure plays an important role in ..."; "... eutectic Mg17Al12 phase. Park et al. [7] demonstrated the importance of texture and related it to the mechanical properties of an AZ61 alloy."

  17. Computer program system for dynamic simulation and stability analysis of passive and actively controlled spacecraft. Volume 1. Theory

    NASA Technical Reports Server (NTRS)

    Bodley, C. S.; Devers, D. A.; Park, C. A.

    1975-01-01

    A theoretical development and associated digital computer program system is presented. The dynamic system (spacecraft) is modeled as an assembly of rigid and/or flexible bodies not necessarily in a topological tree configuration. The computer program system may be used to investigate total system dynamic characteristics including interaction effects between rigid and/or flexible bodies, control systems, and a wide range of environmental loadings. Additionally, the program system may be used for design of attitude control systems and for evaluation of total dynamic system performance including time domain response and frequency domain stability analyses. Volume 1 presents the theoretical developments including a description of the physical system, the equations of dynamic equilibrium, discussion of kinematics and system topology, a complete treatment of momentum wheel coupling, and a discussion of gravity gradient and environmental effects. Volume 2 is a program users' guide and includes a description of the overall digital program code, individual subroutines, and the required program input and generated program output. Volume 3 presents the results of selected demonstration problems that illustrate all program system capabilities.

  18. Three-dimensional interactive Molecular Dynamics program for the study of defect dynamics in crystals

    NASA Astrophysics Data System (ADS)

    Patriarca, M.; Kuronen, A.; Robles, M.; Kaski, K.

    2007-01-01

    The study of crystal defects and the complex processes underlying their formation and time evolution has motivated the development of the program ALINE for interactive molecular dynamics experiments. This program couples a molecular dynamics code to a Graphical User Interface and runs on a UNIX-X11 Window System platform with the MOTIF library, which is contained in many standard Linux releases. ALINE is written in C, thus giving the user the possibility to modify the source code, and, at the same time, provides an effective and user-friendly framework for numerical experiments, in which the main parameters can be interactively varied and the system visualized in various ways. We illustrate the main features of the program through some examples of detection and dynamical tracking of point-defects, linear defects, and planar defects, such as stacking faults in lattice-mismatched heterostructures. Program summaryTitle of program:ALINE Catalogue identifier:ADYJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADYJ_v1_0 Program obtainable from: CPC Program Library, Queen University of Belfast, N. Ireland Computer for which the program is designed and others on which it has been tested: Computers:DEC ALPHA 300, Intel i386 compatible computers, G4 Apple Computers Installations:Laboratory of Computational Engineering, Helsinki University of Technology, Helsinki, Finland Operating systems under which the program has been tested:True64 UNIX, Linux-i386, Mac OS X 10.3 and 10.4 Programming language used:Standard C and MOTIF libraries Memory required to execute with typical data:6 Mbytes but may be larger depending on the system size No. of lines in distributed program, including test data, etc.:16 901 No. of bytes in distributed program, including test data, etc.:449 559 Distribution format:tar.gz Nature of physical problem:Some phenomena involving defects take place inside three-dimensional crystals at times which can be hardly predicted. For this reason they are difficult to detect and track even within numerical experiments, especially when one is interested in studying their dynamical properties and time evolution. Furthermore, traditional simulation methods require the storage of a huge amount of data which in turn may imply a long work for their analysis. Method of solution:Simplifications of the simulation work described above strongly depend also on the computer performance. It has now become possible to realize some of such simplifications thanks to the real possibility of using interactive programs. The solution proposed here is based on the development of an interactive graphical simulation program both for avoiding large storage of data and the subsequent elaboration and analysis as well as for visualizing and tracking many phenomena inside three-dimensional samples. However, the full computational power of traditional simulation programs may not be available in general in programs with graphical user interfaces, due to their interactive nature. Nevertheless interactive programs can still be very useful for detecting processes difficult to visualize, restricting the range or making a fine tuning of the parameters, and tailoring the faster programs toward precise targets. Restrictions on the complexity of the problem:The restrictions on the applicability of the program are related to the computer resources available. The graphical interface and interactivity demand computational resources that depend on the particular numerical simulation to be performed. 
To preserve a balance between speed and resources, the choice of the number of atoms to be simulated is critical. With an average current computer, simulations of systems with more than 10^5 atoms may not be easily feasible in an interactive scheme. Another restriction is related to the fact that the program was originally designed to simulate systems in the solid phase, so that problems in the simulation may occur if some particular physical quantities are computed beyond the melting point. Typical running time: It depends on the machine architecture, system size, and user needs. Unusual features of the program: Besides the window in which the system is represented in real space, an additional graphical window presenting the real-time distribution histogram for different physical variables (such as kinetic or potential energy) is included. Such a tool is very useful for demonstrative numerical experiments for teaching purposes as well as for research, e.g., for detecting and tracking crystal defects. The program includes an initial condition builder, an interactive display of the simulation, and a set of tools that allow the user to filter the information through different physical quantities, either displayed in real time or printed in the output files, and to perform an efficient search of the interesting regions of parameter space.

  19. A Specification for a Godunov-type Eulerian 2-D Hydrocode, Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nystrom, William D; Robey, Jonathan M

    2012-05-01

    The purpose of this code specification is to describe an algorithm for solving the Euler equations of hydrodynamics in a 2D rectangular region in sufficient detail to allow a software developer to produce an implementation on their target platform using their programming language of choice without requiring detailed knowledge and experience in the field of computational fluid dynamics. It should be possible for a software developer who is proficient in the programming language of choice and is knowledgeable of the target hardware to produce an efficient implementation of this specification if they also possess a thorough working knowledge of parallel programming and have some experience in scientific programming using fields and meshes. On modern architectures, it will be important to focus on issues related to the exploitation of the fine-grained parallelism and data locality present in this algorithm. This specification aims to make that task easier by presenting the essential details of the algorithm in a systematic and language-neutral manner while also avoiding the inclusion of implementation details that would likely be specific to a particular type of programming paradigm or platform architecture.
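
    The specification itself is not reproduced in this record; purely as an illustration of the flavor of algorithm it describes, a first-order Godunov-type update for one-dimensional linear advection (not the 2D Euler scheme the specification defines) is sketched below.

      # Illustrative first-order Godunov-type update for 1D linear advection
      # u_t + a u_x = 0 with a > 0. This is NOT the 2D Euler scheme of the
      # specification; it only shows the flux-differencing structure of such codes.
      import numpy as np

      nx, a, cfl = 200, 1.0, 0.9
      dx = 1.0 / nx
      dt = cfl * dx / a
      x = (np.arange(nx) + 0.5) * dx
      u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)   # square pulse initial data

      for _ in range(100):
          flux = a * u[:-1]                             # upwind (Godunov) interface flux for a > 0
          u[1:-1] -= dt / dx * (flux[1:] - flux[:-1])   # conservative update, fixed boundary cells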

  20. Charge Generation Dynamics in Efficient All-Polymer Solar Cells: Influence of Polymer Packing and Morphology.

    PubMed

    Gautam, Bhoj R; Lee, Changyeon; Younts, Robert; Lee, Wonho; Danilov, Evgeny; Kim, Bumjoon J; Gundogdu, Kenan

    2015-12-23

    All-polymer solar cells exhibit rapid progress in power conversion efficiency (PCE) from 2 to 7.7% over the past few years. While this improvement is primarily attributed to efficient charge transport and balanced mobility between the carriers, not much is known about the charge generation dynamics in these systems. Here we measured exciton relaxation and charge separation dynamics using ultrafast spectroscopy in polymer/polymer blends with different molecular packing and morphology. These measurements indicate that preferential face-on configuration with intermixed nanomorphology increases the charge generation efficiency. In fact, there is a direct quantitative correlation between the free charge population in the ultrafast time scales and the external quantum efficiency, suggesting not only the transport but also charge generation is key for the design of high performance all polymer solar cells.

  1. Coupled information diffusion--pest dynamics models predict delayed benefits of farmer cooperation in pest management programs.

    PubMed

    Rebaudo, François; Dangles, Olivier

    2011-10-01

    Worldwide, the theory and practice of agricultural extension system have been dominated for almost half a century by Rogers' "diffusion of innovation theory". In particular, the success of integrated pest management (IPM) extension programs depends on the effectiveness of IPM information diffusion from trained farmers to other farmers, an important assumption which underpins funding from development organizations. Here we developed an innovative approach through an agent-based model (ABM) combining social (diffusion theory) and biological (pest population dynamics) models to study the role of cooperation among small-scale farmers to share IPM information for controlling an invasive pest. The model was implemented with field data, including learning processes and control efficiency, from large scale surveys in the Ecuadorian Andes. Our results predict that although cooperation had short-term costs for individual farmers, it paid in the long run as it decreased pest infestation at the community scale. However, the slow learning process placed restrictions on the knowledge that could be generated within farmer communities over time, giving rise to natural lags in IPM diffusion and applications. We further showed that if individuals learn from others about the benefits of early prevention of new pests, then educational effort may have a sustainable long-run impact. Consistent with models of information diffusion theory, our results demonstrate how an integrated approach combining ecological and social systems would help better predict the success of IPM programs. This approach has potential beyond pest management as it could be applied to any resource management program seeking to spread innovations across populations.
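
    A toy version of such a coupled model, with made-up diffusion and pest-growth parameters rather than the field-calibrated values used in the study, can be sketched as follows.

      # Toy coupling of IPM-information diffusion and pest dynamics
      # (made-up parameters; the published ABM is far richer and field-calibrated).
      import random

      n_farmers, steps = 100, 50
      p_share, control_efficiency, growth = 0.05, 0.6, 0.3
      informed = [False] * n_farmers
      informed[0] = True                      # one trained farmer seeds the diffusion
      pest = [1.0] * n_farmers                # relative infestation per farm

      for _ in range(steps):
          # information spreads through random contacts (diffusion-of-innovation flavor)
          for i in range(n_farmers):
              if not informed[i] and random.random() < p_share * sum(informed) / n_farmers:
                  informed[i] = True
          # pests grow everywhere, but informed farmers suppress part of the growth
          for i in range(n_farmers):
              pest[i] *= (1 + growth)
              if informed[i]:
                  pest[i] *= (1 - control_efficiency)

      print(sum(informed), sum(pest) / n_farmers)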

  2. MIROS: A Hybrid Real-Time Energy-Efficient Operating System for the Resource-Constrained Wireless Sensor Nodes

    PubMed Central

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Shi, Hongling; Gholami, Khalid El

    2014-01-01

    Operating system (OS) technology is significant for the proliferation of the wireless sensor network (WSN). With an outstanding OS; the constrained WSN resources (processor; memory and energy) can be utilized efficiently. Moreover; the user application development can be served soundly. In this article; a new hybrid; real-time; memory-efficient; energy-efficient; user-friendly and fault-tolerant WSN OS MIROS is designed and implemented. MIROS implements the hybrid scheduler and the dynamic memory allocator. Real-time scheduling can thus be achieved with low memory consumption. In addition; it implements a mid-layer software EMIDE (Efficient Mid-layer Software for User-Friendly Application Development Environment) to decouple the WSN application from the low-level system. The application programming process can consequently be simplified and the application reprogramming performance improved. Moreover; it combines both the software and the multi-core hardware techniques to conserve the energy resources; improve the node reliability; as well as achieve a new debugging method. To evaluate the performance of MIROS; it is compared with the other WSN OSes (TinyOS; Contiki; SOS; openWSN and mantisOS) from different OS concerns. The final evaluation results prove that MIROS is suitable to be used even on the tight resource-constrained WSN nodes. It can support the real-time WSN applications. Furthermore; it is energy efficient; user friendly and fault tolerant. PMID:25248069

  3. MIROS: a hybrid real-time energy-efficient operating system for the resource-constrained wireless sensor nodes.

    PubMed

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Shi, Hongling; El Gholami, Khalid

    2014-09-22

    Operating system (OS) technology is significant for the proliferation of the wireless sensor network (WSN). With an outstanding OS; the constrained WSN resources (processor; memory and energy) can be utilized efficiently. Moreover; the user application development can be served soundly. In this article; a new hybrid; real-time; memory-efficient; energy-efficient; user-friendly and fault-tolerant WSN OS MIROS is designed and implemented. MIROS implements the hybrid scheduler and the dynamic memory allocator. Real-time scheduling can thus be achieved with low memory consumption. In addition; it implements a mid-layer software EMIDE (Efficient Mid-layer Software for User-Friendly Application Development Environment) to decouple the WSN application from the low-level system. The application programming process can consequently be simplified and the application reprogramming performance improved. Moreover; it combines both the software and the multi-core hardware techniques to conserve the energy resources; improve the node reliability; as well as achieve a new debugging method. To evaluate the performance of MIROS; it is compared with the other WSN OSes (TinyOS; Contiki; SOS; openWSN and mantisOS) from different OS concerns. The final evaluation results prove that MIROS is suitable to be used even on the tight resource-constrained WSN nodes. It can support the real-time WSN applications. Furthermore; it is energy efficient; user friendly and fault tolerant.

  4. The Cost of Saving Electricity Through Energy Efficiency Programs Funded by Utility Customers: 2009–2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Ian M.; Goldman, Charles A.; Murphy, Sean

    The average cost to utilities to save a kilowatt-hour (kWh) in the United States is 2.5 cents, according to the most comprehensive assessment to date of the cost performance of energy efficiency programs funded by electricity customers. These costs are similar to those documented earlier. Cost-effective efficiency programs help ensure electricity system reliability at the most affordable cost as part of utility planning and implementation activities for resource adequacy. Building on prior studies, Berkeley Lab analyzed the cost performance of 8,790 electricity efficiency programs between 2009 and 2015 for 116 investor-owned utilities and other program administrators in 41 states. The Berkeley Lab database includes programs representing about three-quarters of total spending on electricity efficiency programs in the United States.

  5. Linked-List-Based Multibody Dynamics (MBDyn) Engine

    NASA Technical Reports Server (NTRS)

    MacLean, John; Brain, Thomas; Wuiocho, Leslie; Huynh, An; Ghosh, Tushar

    2012-01-01

    This new release of MBDyn is a software engine that calculates the dynamic states of kinematic, rigid, or flexible multibody systems. An MBDyn multibody system may consist of multiple groups of articulated chains, trees, or closed-loop topologies. Transient topologies are handled through conservation of energy and momentum. The solution for rigid-body systems is exact, and several configurable levels of nonlinear term fidelity are available for flexible dynamics systems. The algorithms have been optimized for efficiency and can be used for both non-real-time (NRT) and real-time (RT) simulations. Interfaces are currently compatible with NASA's Trick Simulation Environment. This new release represents a significant advance in capability and ease of use. The two most significant new additions are an application programming interface (API) that clarifies and simplifies use of MBDyn, and a linked-list infrastructure that allows a single MBDyn instance to propagate an arbitrary number of interacting groups of multibody topologies. MBDyn calculates state and state derivative vectors for integration using an external integration routine. A Trick-compatible interface is provided for initialization, data logging, integration, and input/output.

  6. Dynamic Characteristics of a Simple Brayton Cryocycle

    NASA Astrophysics Data System (ADS)

    Kutzschbach, A.; Kauschke, M.; Haberstroh, Ch.; Quack, H.

    2006-04-01

    The goal of the overall program is to develop a dynamic numerical model of helium refrigerators and the associated cooling systems based on commercial simulation software. The aim is to give system designers a tool to search for optimum control strategies during the construction phase of the refrigerator with the help of a plant "simulator". In a first step, a simple Brayton refrigerator has been investigated, which consists of a compressor, an after-cooler, a counter-current heat exchanger, a turboexpander and a heat source. Operating modes are "refrigeration" and "liquefaction". Whereas for the steady state design only component efficiencies are needed and mass and energy balances have to be calculated, for the dynamic calculation one needs also the thermal masses and the helium inventory. Transient mass and energy balances have to be formulated for many small elements and then solved simultaneously for all elements. Starting point of the simulation of the Brayton cycle is the steady state operation at design conditions. The response of the system to step and cyclic changes of the refrigeration or liquefaction rate are calculated and characterized.

  7. Efficient Operation of a Multi-purpose Reservoir in Chile: Integration of Economic Water Value for Irrigation and Hydropower

    NASA Astrophysics Data System (ADS)

    Olivares, M. A.; Gonzalez Cabrera, J. M., Sr.; Moreno, R.

    2016-12-01

    Operation of hydropower reservoirs in Chile is prescribed by an Independent Power System Operator. This study proposes a methodology that integrates power grid operations planning with basin-scale multi-use reservoir operations planning. The aim is to efficiently manage a multi-purpose reservoir in which hydroelectric generation competes with other water uses, most notably irrigation. Hydropower and irrigation are competing water uses due to a seasonality mismatch. Currently, the operation of multi-purpose reservoirs with substantial power capacity is prescribed as the result of a grid-wide cost-minimization model that takes irrigation requirements as constraints. We propose advancing toward the economic co-optimization of reservoir water use for irrigation and hydropower at the basin level by explicitly introducing the economic value of water for irrigation, represented by a demand function for irrigation water. The proposed methodology uses the solution of a long-term grid-wide operations planning model, a stochastic dual dynamic program (SDDP), to obtain the marginal benefit function for water use in hydropower. This marginal benefit corresponds to the energy price in the power grid as a function of water availability in the reservoir and the hydrologic scenarios. This function captures the technical and economic aspects of operating a hydropower reservoir within the power grid and is generated from the dual variable of the power-balance constraint, the optimal reservoir operation, and the hydrologic scenarios used in the SDDP. The economic values of water for irrigation and hydropower are then integrated into a basin-scale stochastic dynamic program, from which stored-water value functions are derived. These value functions are then used to re-optimize reservoir operations under several inflow scenarios.
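
    A stripped-down backward stochastic dynamic program over discretized storage conveys the structure of the basin-scale step; the benefit functions and inflow scenarios below are hypothetical placeholders for the irrigation demand curve and the grid-derived energy value described above.

      # Stripped-down backward SDP over discretized reservoir storage.
      # Benefit functions, inflow scenarios, and all numbers are hypothetical.
      import numpy as np

      storages = np.linspace(0.0, 100.0, 21)          # discretized storage states
      inflows = [(10.0, 0.5), (30.0, 0.5)]            # (inflow, probability) scenarios
      T = 12                                           # monthly stages

      def stage_benefit(release, month):
          irrigation = 2.0 * np.sqrt(release)                        # concave irrigation value
          energy_price = 1.0 + 0.5 * np.cos(2 * np.pi * month / 12)  # seasonal price proxy
          return irrigation + energy_price * 0.1 * release           # plus hydropower value

      value = np.zeros(len(storages))                  # terminal value function
      for month in reversed(range(T)):
          new_value = np.empty_like(value)
          for i, s in enumerate(storages):
              best = -np.inf
              # releases capped at s + 10 so the balance is feasible for both inflows
              for release in np.linspace(0.0, s + 10.0, 16):
                  expected = 0.0
                  for inflow, prob in inflows:
                      s_next = np.clip(s + inflow - release, storages[0], storages[-1])
                      expected += prob * np.interp(s_next, storages, value)
                  best = max(best, stage_benefit(release, month) + expected)
              new_value[i] = best
          value = new_value                            # stored-water value at this stage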

  8. Mixed Integer Programming and Heuristic Scheduling for Space Communication

    NASA Technical Reports Server (NTRS)

    Lee, Charles H.; Cheung, Kar-Ming

    2013-01-01

    An optimal planning and scheduling approach was created for a communication network in which the nodes communicate at the highest possible rates while meeting the mission requirements and operational constraints. The planning and scheduling problem was formulated in the framework of Mixed Integer Programming (MIP); a special penalty function was introduced to convert the MIP problem into a continuous optimization problem, and the resulting constrained optimization problem was solved using heuristic optimization. The communication network consists of space and ground assets, with the link dynamics between any two assets varying with respect to time, distance, and telecom configurations. One asset could be communicating with another at very high data rates at one time, while at other times communication is impossible because the asset is inaccessible from the network due to planetary occultation. Based on the network's geometric dynamics and link capabilities, the start time, end time, and link configuration of each view period are selected to maximize the communication efficiency within the network. Mathematical formulations for the constrained mixed integer optimization problem were derived, and efficient analytical and numerical techniques were developed to find the optimal solution. By setting up the problem using MIP, the search space for the optimization problem is reduced significantly, thereby speeding up the solution process. The ratio of the problem dimension of the traditional method to that of the proposed formulation is approximately of order N (single) to 2*N (arraying), where N is the number of receiving antennas of a node. By introducing the special penalty function, the MIP problem, with its non-differentiable cost function and nonlinear constraints, can be converted into a continuous-variable problem that can be solved.
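
    The penalty idea can be illustrated on a made-up two-link toy problem (not the actual scheduling model): relax the binary link-selection variables to the unit interval and add a term that vanishes only at integral values.

      # Toy illustration of the penalty idea: relax binary link-selection variables
      # x_i in {0,1} to [0,1] and add mu * x * (1 - x) to push the relaxed solution
      # toward integrality. Objective and numbers are made up, not the scheduler's.
      import numpy as np
      from scipy.optimize import minimize

      rates = np.array([5.0, 3.0])        # data rate of each candidate link
      mu = 50.0                           # penalty weight on non-integrality

      def objective(x):
          throughput = rates @ x
          penalty = mu * np.sum(x * (1 - x))        # zero only at x_i in {0, 1}
          one_link = 100.0 * max(0.0, x.sum() - 1)  # soft constraint: pick at most one link
          return -throughput + penalty + one_link

      res = minimize(objective, x0=[0.4, 0.2], bounds=[(0, 1), (0, 1)])
      print(np.round(res.x))              # tends toward the integral choice of the better link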

  9. Evaluating Thermodynamic Integration Performance of the New Amber Molecular Dynamics Package and Assess Potential Halogen Bonds of Enoyl-ACP Reductase (FabI) Benzimidazole Inhibitors

    PubMed Central

    Su, Pin-Chih; Johnson, Michael E.

    2015-01-01

    Thermodynamic integration (TI) can provide accurate binding free energy insights in a lead optimization program, but its high computational expense has limited its usage. In the effort of developing an efficient and accurate TI protocol for FabI inhibitors lead optimization program, we carefully compared TI with different Amber molecular dynamics (MD) engines (sander and pmemd), MD simulation lengths, the number of intermediate states and transformation steps, and the Lennard-Jones and Coulomb Softcore potentials parameters in the one-step TI, using eleven benzimidazole inhibitors in complex with Francisella tularensis enoyl acyl reductase (FtFabI). To our knowledge, this is the first study to extensively test the new AMBER MD engine, pmemd, on TI and compare the parameters of the Softcore potentials in the one-step TI in a protein-ligand binding system. The best performing model, the one-step pmemd TI, using 6 intermediate states and 1 ns MD simulations, provides better agreement with experimental results (RMSD = 0.52 kcal/mol) than the best performing implicit solvent method, QM/MM-GBSA from our previous study (RMSD = 3.00 kcal/mol), while maintaining similar efficiency. Briefly, we show the optimized TI protocol to be highly accurate and affordable for the FtFabI system. This approach can be implemented in a larger scale benzimidazole scaffold lead optimization against FtFabI. Lastly, the TI results here also provide structure-activity relationship insights, and suggest the para-halogen in benzimidazole compounds might form a weak halogen bond with FabI, which is a well-known halogen bond favoring enzyme. PMID:26666582
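
    The core quadrature behind TI, integrating the ensemble average of dU/dλ over the coupling parameter, can be sketched with made-up window averages; six windows mirror the six intermediate states mentioned above, but the numbers are not from the Amber runs.

      # Sketch of the thermodynamic-integration estimate:
      # Delta G = integral over lambda of <dU/dlambda>.
      # The window averages below are fabricated placeholders, not Amber output.
      import numpy as np

      lambdas = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
      dudl = np.array([12.3, 8.1, 4.4, 1.2, -2.0, -5.5])   # <dU/dlambda>, kcal/mol (fake)

      delta_g = np.trapz(dudl, lambdas)    # trapezoidal quadrature over the windows
      print(f"Delta G ~ {delta_g:.2f} kcal/mol")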

  10. Evaluating thermodynamic integration performance of the new amber molecular dynamics package and assess potential halogen bonds of enoyl-ACP reductase (FabI) benzimidazole inhibitors.

    PubMed

    Su, Pin-Chih; Johnson, Michael E

    2016-04-05

    Thermodynamic integration (TI) can provide accurate binding free energy insights in a lead optimization program, but its high computational expense has limited its usage. In the effort of developing an efficient and accurate TI protocol for FabI inhibitors lead optimization program, we carefully compared TI with different Amber molecular dynamics (MD) engines (sander and pmemd), MD simulation lengths, the number of intermediate states and transformation steps, and the Lennard-Jones and Coulomb Softcore potentials parameters in the one-step TI, using eleven benzimidazole inhibitors in complex with Francisella tularensis enoyl acyl reductase (FtFabI). To our knowledge, this is the first study to extensively test the new AMBER MD engine, pmemd, on TI and compare the parameters of the Softcore potentials in the one-step TI in a protein-ligand binding system. The best performing model, the one-step pmemd TI, using 6 intermediate states and 1 ns MD simulations, provides better agreement with experimental results (RMSD = 0.52 kcal/mol) than the best performing implicit solvent method, QM/MM-GBSA from our previous study (RMSD = 3.00 kcal/mol), while maintaining similar efficiency. Briefly, we show the optimized TI protocol to be highly accurate and affordable for the FtFabI system. This approach can be implemented in a larger scale benzimidazole scaffold lead optimization against FtFabI. Lastly, the TI results here also provide structure-activity relationship insights, and suggest the parahalogen in benzimidazole compounds might form a weak halogen bond with FabI, which is a well-known halogen bond favoring enzyme. © 2015 Wiley Periodicals, Inc.

  11. An Assessment Model for Energy Efficiency Program Planning in Electric Utilities: Case of the Pacific of Northwest U.S.A

    NASA Astrophysics Data System (ADS)

    Iskin, Ibrahim

    Energy efficiency stands out with its potential to address a number of challenges that today's electric utilities face, including increasing and changing electricity demand, shrinking operating capacity, and decreasing system reliability and flexibility. Being the least cost and least risky alternative, the share of energy efficiency programs in utilities' energy portfolios has been on the rise since the 1980s, and their increasing importance is expected to continue in the future. Despite holding great promise, the ability to determine and invest in only the most promising program alternatives plays a key role in the successful use of energy efficiency as a utility-wide resource. This issue becomes even more significant considering the availability of a vast number of potential energy efficiency programs, the rapidly changing business environment, and the existence of multiple stakeholders. This dissertation introduces hierarchical decision modeling as the framework for energy efficiency program planning in electric utilities. The model focuses on the assessment of emerging energy efficiency programs and proposes to bridge the gap between technology screening and cost/benefit evaluation practices. This approach is expected to identify emerging technology alternatives which have the highest potential to pass cost/benefit ratio testing procedures and contribute to the effectiveness of decision practices in energy efficiency program planning. The model also incorporates rank order analysis and sensitivity analysis for testing the robustness of results from different stakeholder perspectives and future uncertainties in an attempt to enable more informed decision-making practices. The model was applied to the case of 13 high priority emerging energy efficiency program alternatives identified in the Pacific Northwest, U.S.A. The results of this study reveal that energy savings potential is the most important program management consideration in selecting emerging energy efficiency programs. Market dissemination potential and program development and implementation potential are the second and third most important, whereas ancillary benefits potential is the least important program management consideration. The results imply that program value considerations, comprised of energy savings potential and ancillary benefits potential; and program feasibility considerations, comprised of program development and implementation potential and market dissemination potential, have almost equal impacts on assessment of emerging energy efficiency programs. Considering the overwhelming number of value-focused studies and the few feasibility-focused studies in the literature, this finding clearly shows that feasibility-focused studies are greatly understudied. The hierarchical decision model developed in this dissertation is generalizable. Thus, other utilities or power systems can adopt the research steps employed in this study as guidelines and conduct similar assessment studies on emerging energy efficiency programs of their interest.

  12. A Global Review of Incentive Programs to Accelerate Energy-Efficient Appliances and Equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de la Rue du Can, Stephane; Phadke, Amol; Leventis, Greg

    Incentive programs are an essential policy tool to move the market toward energy-efficient products. They offer a favorable complement to mandatory standards and labeling policies by accelerating the market penetration of energy-efficient products above equipment standard requirements and by preparing the market for increased future mandatory requirements. They sway purchase decisions and in some cases production decisions and retail stocking decisions toward energy-efficient products. Incentive programs are structured according to their regulatory environment, the way they are financed, by how the incentive is targeted, and by who administers them. This report categorizes the main elements of incentive programs, using case studies from the Major Economies Forum to illustrate their characteristics. To inform future policy and program design, it seeks to recognize design advantages and disadvantages through a qualitative overview of the variety of programs in use around the globe. Examples range from rebate programs administered by utilities under an Energy-Efficiency Resource Standards (EERS) regulatory framework (California, USA) to the distribution of Eco-Points that reward customers for buying efficient appliances under a government recovery program (Japan). We found that evaluations have demonstrated that financial incentive programs have greater impact when they target highly efficient technologies that have a small market share. We also found that the benefits and drawbacks of different program design aspects depend on the market barriers addressed, the target equipment, and the local market context and that no program design surpasses the others. The key to successful program design and implementation is a thorough understanding of the market and effective identification of the most important local factors hindering the penetration of energy-efficient technologies.

  13. 77 FR 19408 - Dynamic Mobility Applications and Data Capture Management Programs; Notice of Public Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-30

    ... DEPARTMENT OF TRANSPORTATION Dynamic Mobility Applications and Data Capture Management Programs...) Intelligent Transportation System Joint Program Office (ITS JPO) will host a free public meeting to provide stakeholders an update on the Data Capture and Management (DCM) and Dynamic Mobility Applications (DMA...

  14. Multibody dynamics model building using graphical interfaces

    NASA Technical Reports Server (NTRS)

    Macala, Glenn A.

    1989-01-01

    In recent years, the extremely laborious task of manually deriving equations of motion for the simulation of multibody spacecraft dynamics has largely been eliminated. Instead, the dynamicist now works with commonly available general-purpose dynamics simulation programs that generate the equations of motion either explicitly or implicitly via computer codes. The user interface to these programs has predominantly been via input data files, each with its own required format and peculiarities, causing errors and frustration during program setup. Recent progress on a more natural method of data input for dynamics programs, the graphical interface, is described.

  15. Earth and ocean dynamics program

    NASA Technical Reports Server (NTRS)

    Vonbun, F. O.

    1976-01-01

    The objectives and requirements of the Earth and Ocean Dynamics Programs are outlined along with major goals and experiments. Spaceborne as well as ground systems needed to accomplish program goals are listed and discussed along with program accomplishments.

  16. Boolean network identification from perturbation time series data combining dynamics abstraction and logic programming.

    PubMed

    Ostrowski, M; Paulevé, L; Schaub, T; Siegel, A; Guziolowski, C

    2016-11-01

    Boolean networks (and more general logic models) are useful frameworks to study signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time-points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that must be satisfied by the dynamics of a Boolean network to be consistent with a discretized time series trace. Based on this condition, we use Answer Set Programming to compute an over-approximation of the set of Boolean networks which fit best with experimental data and provide the corresponding encodings. Combined with model-checking approaches, we end up with a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7min of computation. We quantified the gain in our method predictions precision compared to learning approaches based on static data. Finally, as an application, our method proposes erroneous time-points in the time series data with respect to the optimal learned logic models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
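
    A toy consistency check of a candidate Boolean network against a binarized trace, assuming synchronous updates, conveys the flavor of the problem; it is a deliberate simplification, not the paper's ASP encoding or its exact necessary condition.

      # Toy check of a candidate Boolean network against a binarized time series,
      # assuming synchronous updates. Simplified for illustration; the paper's ASP
      # encoding uses a weaker necessary condition over richer dynamics.
      def consistent(update_funcs, trace):
          """update_funcs: dict node -> function(state dict) -> bool.
          trace: list of state dicts at successive discretized time points."""
          for before, after in zip(trace, trace[1:]):
              predicted = {n: f(before) for n, f in update_funcs.items()}
              if predicted != after:
                  return False
          return True

      # hypothetical 3-node network: A is an input, B is activated by A,
      # C requires A and the absence of B
      funcs = {
          "A": lambda s: s["A"],
          "B": lambda s: s["A"],
          "C": lambda s: s["A"] and not s["B"],
      }
      trace = [
          {"A": True, "B": False, "C": False},
          {"A": True, "B": True, "C": True},
          {"A": True, "B": True, "C": False},
      ]
      print(consistent(funcs, trace))   # True for this hypothetical trace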

  17. Multiscale Hy3S: hybrid stochastic simulation for supercomputers.

    PubMed

    Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N

    2006-02-24

    Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data. We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
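
    For the jump-Markov part that such hybrid schemes combine with continuous descriptions, the classic exact stochastic simulation step looks like the following toy birth-death example (a plain SSA, not Hy3S code).

      # Plain Gillespie SSA for a birth-death process, illustrating the jump-Markov
      # ingredient that hybrid methods such as those in Hy3S combine with
      # continuous descriptions. Not Hy3S code; rate constants are arbitrary.
      import random

      k_birth, k_death = 5.0, 0.1   # production and degradation rate constants
      x, t, t_end = 0, 0.0, 100.0

      while t < t_end:
          a1 = k_birth                      # propensity of X -> X + 1
          a2 = k_death * x                  # propensity of X -> X - 1
          a0 = a1 + a2
          t += random.expovariate(a0)       # exponential waiting time with rate a0
          if random.random() * a0 < a1:
              x += 1
          else:
              x -= 1

      print(x)   # fluctuates around k_birth / k_death = 50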

  18. Fluid dynamics computer programs for NERVA turbopump

    NASA Technical Reports Server (NTRS)

    Brunner, J. J.

    1972-01-01

    During the design of the NERVA turbopump, numerous computer programs were developed for the analyses of fluid dynamic problems within the machine. Program descriptions, example cases, users instructions, and listings for the majority of these programs are presented.

  19. MBGD update 2013: the microbial genome database for exploring the diversity of microbial world.

    PubMed

    Uchiyama, Ikuo; Mihara, Motohiro; Nishide, Hiroyo; Chiba, Hirokazu

    2013-01-01

    The microbial genome database for comparative analysis (MBGD, available at http://mbgd.genome.ad.jp/) is a platform for microbial genome comparison based on orthology analysis. As its unique feature, MBGD allows users to conduct orthology analysis among any specified set of organisms; this flexibility allows MBGD to adapt to a variety of microbial genomic studies. Reflecting the huge diversity of the microbial world, the number of microbial genome projects has now grown to several thousand. To efficiently explore the diversity of the entire body of microbial genomic data, MBGD now provides summary pages for pre-calculated ortholog tables among various taxonomic groups. For some closely related taxa, MBGD also provides conserved synteny information (core genome alignment) pre-calculated using the CoreAligner program. In addition, an efficient incremental updating procedure can create an extended ortholog table by adding genomes to the default ortholog table generated from the representative set of genomes. Combined with the functionality for dynamic orthology calculation over any specified set of organisms, MBGD is an efficient and flexible tool for exploring microbial genome diversity.

  20. The International Atomic Energy Agency software package for the analysis of scintigraphic renal dynamic studies: a tool for the clinician, teacher, and researcher.

    PubMed

    Zaknun, John J; Rajabi, Hossein; Piepsz, Amy; Roca, Isabel; Dondi, Maurizio

    2011-01-01

    Under the auspices of the International Atomic Energy Agency, a new-generation, platform-independent, and x86-compatible software package was developed for the analysis of scintigraphic renal dynamic imaging studies. It provides nuclear medicine professionals cost-free access to the most recent developments in the field. The software package is a step forward towards harmonization and standardization. Embedded functionalities render it a suitable tool for education, research, and for receiving distant expert's opinions. Another objective of this effort is to allow introducing clinically useful parameters of drainage, including normalized residual activity and outflow efficiency. Furthermore, it provides an effective teaching tool for young professionals who are being introduced to dynamic kidney studies by selected teaching case studies. The software facilitates a better understanding through practically approaching different variables and settings and their effect on the numerical results. An effort was made to introduce instruments of quality assurance at the various levels of the program's execution, including visual inspection and automatic detection and correction of patient's motion, automatic placement of regions of interest around the kidneys, cortical regions, and placement of reproducible background region on both primary dynamic and on postmicturition studies. The user can calculate the differential renal function through 2 independent methods, the integral or the Rutland-Patlak approaches. Standardized digital reports, storage and retrieval of regions of interest, and built-in database operations allow the generation and tracing of full image reports and of numerical outputs. The software package is undergoing quality assurance procedures to verify the accuracy and the interuser reproducibility with the final aim of launching the program for use by professionals and teaching institutions worldwide. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. A digital computer program for the dynamic interaction simulation of controls and structure (DISCOS), volume 1

    NASA Technical Reports Server (NTRS)

    Bodley, C. S.; Devers, A. D.; Park, A. C.; Frisch, H. P.

    1978-01-01

    A theoretical development and associated digital computer program system for the dynamic simulation and stability analysis of passive and actively controlled spacecraft are presented. The dynamic system (spacecraft) is modeled as an assembly of rigid and/or flexible bodies not necessarily in a topological tree configuration. The computer program system is used to investigate total system dynamic characteristics, including interaction effects between rigid and/or flexible bodies, control systems, and a wide range of environmental loadings. In addition, the program system is used for designing attitude control systems and for evaluating total dynamic system performance, including time domain response and frequency domain stability analyses.

  2. Program of Research in Structures and Dynamics

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The Structures and Dynamics Program was first initiated in 1972 with the following two major objectives: to provide a basic understanding and working knowledge of some key areas pertinent to structures, solid mechanics, and dynamics technology including computer aided design; and to provide a comprehensive educational and research program at the NASA Langley Research Center leading to advanced degrees in the structures and dynamics areas. During the operation of the program the research work was done in support of the activities of both the Structures and Dynamics Division and the Loads and Aeroelasticity Division. During the period of 1972 to 1986 the Program provided support for two full-time faculty members, one part-time faculty member, three postdoctoral fellows, one research engineer, eight programmers, and 28 graduate research assistants. The faculty and staff of the program have published 144 papers and reports, and made 70 presentations at national and international meetings, describing their research findings. In addition, they organized and helped in the organization of 10 workshops and national symposia in the structures and dynamics areas. The graduate research assistants and the students enrolled in the program have written 20 masters theses and 2 doctoral dissertations. The overall progress is summarized.

  3. UmUTracker: A versatile MATLAB program for automated particle tracking of 2D light microscopy or 3D digital holography data

    NASA Astrophysics Data System (ADS)

    Zhang, Hanqing; Stangner, Tim; Wiklund, Krister; Rodriguez, Alvaro; Andersson, Magnus

    2017-10-01

    We present a versatile and fast MATLAB program (UmUTracker) that automatically detects and tracks particles by analyzing video sequences acquired by either light microscopy or digital in-line holographic microscopy. Our program detects the 2D lateral positions of particles with an algorithm based on the isosceles triangle transform, and reconstructs their 3D axial positions by a fast implementation of the Rayleigh-Sommerfeld model using a radial intensity profile. To validate the accuracy and performance of our program, we first track the 2D position of polystyrene particles using bright field and digital holographic microscopy. Second, we determine the 3D particle position by analyzing synthetic and experimentally acquired holograms. Finally, to highlight the full program features, we profile the microfluidic flow in a 100 μm high flow chamber. This result agrees with computational fluid dynamic simulations. On a regular desktop computer UmUTracker can detect, analyze, and track multiple particles at 5 frames per second for a template size of 201 ×201 in a 1024 × 1024 image. To enhance usability and to make it easy to implement new functions we used object-oriented programming. UmUTracker is suitable for studies related to: particle dynamics, cell localization, colloids and microfluidic flow measurement. Program Files doi : http://dx.doi.org/10.17632/fkprs4s6xp.1 Licensing provisions : Creative Commons by 4.0 (CC by 4.0) Programming language : MATLAB Nature of problem: 3D multi-particle tracking is a common technique in physics, chemistry and biology. However, in terms of accuracy, reliable particle tracking is a challenging task since results depend on sample illumination, particle overlap, motion blur and noise from recording sensors. Additionally, the computational performance is also an issue if, for example, a computationally expensive process is executed, such as axial particle position reconstruction from digital holographic microscopy data. Versatile robust tracking programs handling these concerns and providing a powerful post-processing option are significantly limited. Solution method: UmUTracker is a multi-functional tool to extract particle positions from long video sequences acquired with either light microscopy or digital holographic microscopy. The program provides an easy-to-use graphical user interface (GUI) for both tracking and post-processing that does not require any programming skills to analyze data from particle tracking experiments. UmUTracker first conduct automatic 2D particle detection even under noisy conditions using a novel circle detector based on the isosceles triangle sampling technique with a multi-scale strategy. To reduce the computational load for 3D tracking, it uses an efficient implementation of the Rayleigh-Sommerfeld light propagation model. To analyze and visualize the data, an efficient data analysis step, which can for example show 4D flow visualization using 3D trajectories, is included. Additionally, UmUTracker is easy to modify with user-customized modules due to the object-oriented programming style Additional comments: Program obtainable from https://sourceforge.net/projects/umutracker/

  4. Bellman’s GAP—a language and compiler for dynamic programming in sequence analysis

    PubMed Central

    Sauthoff, Georg; Möhl, Mathias; Janssen, Stefan; Giegerich, Robert

    2013-01-01

    Motivation: Dynamic programming is ubiquitous in bioinformatics. Developing and implementing non-trivial dynamic programming algorithms is often error prone and tedious. Bellman’s GAP is a new programming system, designed to ease the development of bioinformatics tools based on the dynamic programming technique. Results: In Bellman’s GAP, dynamic programming algorithms are described in a declarative style by tree grammars, evaluation algebras and products formed thereof. This bypasses the design of explicit dynamic programming recurrences and yields programs that are free of subscript errors, modular and easy to modify. The declarative modules are compiled into C++ code that is competitive to carefully hand-crafted implementations. This article introduces the Bellman’s GAP system and its language, GAP-L. It then demonstrates the ease of development and the degree of re-use by creating variants of two common bioinformatics algorithms. Finally, it evaluates Bellman’s GAP as an implementation platform of ‘real-world’ bioinformatics tools. Availability: Bellman’s GAP is available under GPL license from http://bibiserv.cebitec.uni-bielefeld.de/bellmansgap. This Web site includes a repository of re-usable modules for RNA folding based on thermodynamics. Contact: robert@techfak.uni-bielefeld.de Supplementary information: Supplementary data are available at Bioinformatics online PMID:23355290
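
    For readers unfamiliar with the kind of recurrence that Bellman's GAP abstracts away, the sketch below hand-codes a classic sequence-analysis dynamic program (global alignment with simple unit scores) in Python. It is purely illustrative: GAP-L expresses such algorithms declaratively via tree grammars and evaluation algebras, and compiles them to C++ rather than to anything like this.

    ```python
    def global_alignment_score(a, b, match=1, mismatch=-1, gap=-1):
        """Needleman-Wunsch global alignment: the style of DP recurrence that
        declarative systems like Bellman's GAP generate automatically."""
        n, m = len(a), len(b)
        # dp[i][j] = best score for aligning a[:i] with b[:j]
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * gap
        for j in range(1, m + 1):
            dp[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                score = match if a[i - 1] == b[j - 1] else mismatch
                dp[i][j] = max(dp[i - 1][j - 1] + score,  # (mis)match
                               dp[i - 1][j] + gap,        # gap in b
                               dp[i][j - 1] + gap)        # gap in a
        return dp[n][m]

    print(global_alignment_score("GATTACA", "GCATGCU"))
    ```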

  5. Use of a Computer Language in Teaching Dynamic Programming. Final Report.

    ERIC Educational Resources Information Center

    Trimble, C. J.; And Others

    Most optimization problems of any degree of complexity must be solved using a computer. In the teaching of dynamic programming courses, it is often desirable to use a computer in problem solution. The solution process involves conceptual formulation and computational solution. Generalized computer codes for dynamic programming problem solution…

  6. A Fully Reconfigurable Low-Noise Biopotential Sensing Amplifier With 1.96 Noise Efficiency Factor.

    PubMed

    Tzu-Yun Wang; Min-Rui Lai; Twigg, Christopher M; Sheng-Yu Peng

    2014-06-01

    A fully reconfigurable biopotential sensing amplifier utilizing floating-gate transistors is presented in this paper. By using the complementary differential pairs along with the current reuse technique, the theoretical limit for the noise efficiency factor of the proposed amplifier is below 1.5. Without consuming any extra power, floating-gate transistors are employed to program the low-frequency cutoff corner of the amplifier and to implement the common-mode feedback. A concept proving prototype chip was designed and fabricated in a 0.35 μm CMOS process occupying 0.17 mm² silicon area. With a supply voltage of 2.5 V, the measured midband gain is 40.7 dB and the measured input-referred noise is 2.8 μVrms. The chip was tested under several configurations with the amplifier bandwidth being programmed to 100 Hz, 1 kHz, and 10 kHz. The measured noise efficiency factors in these bandwidth settings are 1.96, 2.01, and 2.25, respectively, which are among the best numbers reported to date. The measured common-mode rejection and the supply rejection are above 70 dB. When the bandwidth is configured to be 10 kHz, the dynamic range measured at 1 kHz is 60 dB with total harmonic distortion less than 0.1%. The proposed amplifier is also demonstrated by recording electromyography (EMG), electrocardiography (ECG), electrooculography (EOG), and electroencephalography (EEG) signals from human bodies.

  7. Newly emerging resource efficiency manager programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, S.; Howell, C.

    1997-12-31

    Many facilities in the northwest such as K-12 schools, community colleges, and military installations are implementing resource-efficiency awareness programs. These programs are generally referred to as resource efficiency manager (REM) or resource conservation manager (RCM) programs. Resource efficiency management is a systems approach to managing a facility's energy, water, and solid waste. Its aim is to reduce utility budgets by focusing on behavioral changes, maintenance and operation procedures, resource accounting, education and training, and a comprehensive awareness campaign that involves everyone in the organization.

  8. Interfacing comprehensive rotorcraft analysis with advanced aeromechanics and vortex wake models

    NASA Astrophysics Data System (ADS)

    Liu, Haiying

    This dissertation describes three aspects of the comprehensive rotorcraft analysis. First, a physics-based methodology for the modeling of hydraulic devices within multibody-based comprehensive models of rotorcraft systems is developed. This newly proposed approach can predict the fully nonlinear behavior of hydraulic devices, and pressure levels in the hydraulic chambers are coupled with the dynamic response of the system. The proposed hydraulic device models are implemented in a multibody code and calibrated by comparing their predictions with test bench measurements for the UH-60 helicopter lead-lag damper. Predicted peak damping forces were found to be in good agreement with measurements, while the model did not predict the entire time history of damper force to the same level of accuracy. The proposed model evaluates relevant hydraulic quantities such as chamber pressures, orifice flow rates, and pressure relief valve displacements. This model could be used to design lead-lag dampers with desirable force and damping characteristics. The second part of this research is in the area of computational aeroelasticity, in which an interface between computational fluid dynamics (CFD) and computational structural dynamics (CSD) is established. This interface enables data exchange between CFD and CSD with the goal of achieving accurate airloads predictions. In this work, a loose coupling approach based on the delta-airloads method is developed in a finite-element method based multibody dynamics formulation, DYMORE. To validate this aerodynamic interface, a CFD code, OVERFLOW-2, is loosely coupled with a CSD program, DYMORE, to compute the airloads of different flight conditions for Sikorsky UH-60 aircraft. This loose coupling approach has good convergence characteristics. The predicted airloads are found to be in good agreement with the experimental data, although not for all flight conditions. In addition, the tight coupling interface between the CFD program, OVERFLOW-2, and the CSD program, DYMORE, is also established. The ability to accurately capture the wake structure around a helicopter rotor is crucial for rotorcraft performance analysis. In the third part of this thesis, a new representation of the wake vortex structure based on Non-Uniform Rational B-Spline (NURBS) curves and surfaces is proposed to develop an efficient model for prescribed and free wakes. NURBS curves and surfaces are able to represent complex shapes with remarkably little data. The proposed formulation has the potential to reduce the computational cost associated with the use of Helmholtz's law and the Biot-Savart law when calculating the induced flow field around the rotor. An efficient free-wake analysis will considerably decrease the computational cost of comprehensive rotorcraft analysis, making the approach more attractive to routine use in industrial settings.

  9. An Efficiency Comparison of MBA Programs: Top 10 versus Non-Top 10

    ERIC Educational Resources Information Center

    Hsu, Maxwell K.; James, Marcia L.; Chao, Gary H.

    2009-01-01

    The authors compared the cohort group of the top-10 MBA programs in the United States with their lower-ranking counterparts on their value-added efficiency. The findings reveal that the top-10 MBA programs in the United States are associated with statistically higher average "technical and scale efficiency" and "scale efficiency", but not with a…

  10. Efficient digital implementation of a conductance-based globus pallidus neuron and the dynamics analysis

    NASA Astrophysics Data System (ADS)

    Yang, Shuangming; Wei, Xile; Deng, Bin; Liu, Chen; Li, Huiyan; Wang, Jiang

    2018-03-01

    Balancing the biological plausibility of dynamical activities against computational efficiency is one of the challenging problems in computational neuroscience and neural system engineering. This paper proposes a set of efficient methods for the hardware realization of conductance-based neuron models with relevant dynamics, aiming to reproduce the biological behaviors with a low-cost implementation on a digital programmable platform; the methods can be applied to a wide range of conductance-based neuron models. Modified globus pallidus (GP) neuron models for efficient hardware implementation are presented to reproduce reliable pallidal dynamics, which decode the information of the basal ganglia and regulate movement-disorder-related voluntary activities. Implementation results on a field-programmable gate array (FPGA) demonstrate that the proposed techniques and models can reduce the resource cost significantly and reproduce the biological dynamics accurately. In addition, the biological behaviors under weak network coupling are explored on the proposed platform, and theoretical analysis is also carried out to investigate the biological characteristics of the structured pallidal oscillator and network. The implementation techniques provide an essential step towards large-scale neural networks for exploring the dynamical mechanisms in real time. Furthermore, the proposed methodology makes the FPGA-based system a powerful platform for the investigation of neurodegenerative diseases and real-time control of bio-inspired neuro-robotics.

  11. Optimal dynamic control of invasions: applying a systematic conservation approach.

    PubMed

    Adams, Vanessa M; Setterfield, Samantha A

    2015-06-01

    The social, economic, and environmental impacts of invasive plants are well recognized. However, these variable impacts are rarely accounted for in the spatial prioritization of funding for weed management. We examine how current spatially explicit prioritization methods can be extended to identify optimal budget allocations to both eradication and control measures of invasive species to minimize the costs and likelihood of invasion. Our framework extends recent approaches to systematic prioritization of weed management to account for multiple values that are threatened by weed invasions with a multi-year dynamic prioritization approach. We apply our method to the northern portion of the Daly catchment in the Northern Territory, which has significant conservation values that are threatened by gamba grass (Andropogon gayanus), a highly invasive species recognized by the Australian government as a Weed of National Significance (WONS). We interface Marxan, a widely applied conservation planning tool, with a dynamic biophysical model of gamba grass to optimally allocate funds to eradication and control programs under two budget scenarios comparing maximizing gain (MaxGain) and minimizing loss (MinLoss) optimization approaches. The prioritizations support previous findings that a MinLoss approach is a better strategy when threats are more spatially variable than conservation values. Over a 10-year simulation period, we find that a MinLoss approach reduces future infestations by ~8% compared to MaxGain in the constrained budget scenarios and ~12% in the unlimited budget scenarios. We find that due to the extensive current invasion and rapid rate of spread, allocating the annual budget to control efforts is more efficient than funding eradication efforts when there is a constrained budget. Under a constrained budget, applying the most efficient optimization scenario (control, MinLoss) reduces spread by ~27% compared to no control. Conversely, if the budget is unlimited, it is more efficient to fund eradication efforts, which reduces spread by ~65% compared to no control.

  12. Representing and reasoning about program in situation calculus

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Zhang, Ming-yi; Wu, Mao-nian; Xie, Gang

    2011-12-01

    Situation calculus is an expressive tool for modeling dynamical systems in artificial intelligence; changes in a dynamical world are represented naturally by the notions of action, situation and fluent in situation calculus. A program can be viewed as a discrete dynamical system, so it is possible to model programs with situation calculus. To model programs written in a small core programming language, CL, the notion of fluent is expanded to represent the value of an expression. Together with some functions returning the objects of interest from expressions, a basic action theory of CL programming is constructed. Under such a theory, properties of a program, such as correctness and termination, can be reasoned about.

  13. A Categorization of Dynamic Analyzers

    NASA Technical Reports Server (NTRS)

    Lujan, Michelle R.

    1997-01-01

    Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on the syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model and monitor programs provides a framework for understanding the program's dynamic behavior from different perspectives through analysis of the input/output data.

  14. Configuring Airspace Sectors with Approximate Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Gupta, Pramod

    2010-01-01

    In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
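
    The rollout idea described above is generic enough to sketch: score each candidate configuration by its immediate cost plus the cost of following the myopic heuristic for the rest of the horizon, then pick the cheapest. The Python below is schematic; workload_cost, reconfig_cost and heuristic are hypothetical callables standing in for the paper's cost model, not the authors' implementation.

    ```python
    def rollout_configuration(current, configs, horizon,
                              workload_cost, reconfig_cost, heuristic):
        """One-step rollout over candidate airspace configurations."""
        def tail_cost(cfg):
            # Cost of following the base (myopic) heuristic from step 1 to the horizon.
            total, prev = 0.0, cfg
            for t in range(1, horizon):
                nxt = heuristic(t)
                total += workload_cost(nxt, t) + reconfig_cost(prev, nxt)
                prev = nxt
            return total

        def total_cost(cfg):
            return workload_cost(cfg, 0) + reconfig_cost(current, cfg) + tail_cost(cfg)

        return min(configs, key=total_cost)
    ```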

  15. Dynamics in atomic signaling games.

    PubMed

    Fox, Michael J; Touri, Behrouz; Shamma, Jeff S

    2015-07-07

    We study an atomic signaling game under stochastic evolutionary dynamics. There are a finite number of players who repeatedly update from a finite number of available languages/signaling strategies. Players imitate the most fit agents with high probability or mutate with low probability. We analyze the long-run distribution of states and show that, for sufficiently small mutation probability, its support is limited to efficient communication systems. We find that this behavior is insensitive to the particular choice of evolutionary dynamic, a property that is due to the game having a potential structure with a potential function corresponding to average fitness. Consequently, the model supports conclusions similar to those found in the literature on language competition. That is, we show that efficient languages eventually predominate in the society while reproducing the empirical phenomenon of linguistic drift. The emergence of efficiency in the atomic case can be contrasted with results for non-atomic signaling games that establish the non-negligible possibility of convergence, under replicator dynamics, to states of unbounded efficiency loss. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Hydraulic dynamic analysis

    NASA Technical Reports Server (NTRS)

    Gale, R. L.; Nease, A. W.; Nelson, D. J.

    1978-01-01

    Computer program mathematically describes complete hydraulic systems to study their dynamic performance. Program employs subroutines that simulate components of hydraulic system, which are then controlled by main program. Program is useful to engineers working with detailed performance results of aircraft, spacecraft, or similar hydraulic systems.

  17. Trends in Programming Languages for Neuroscience Simulations

    PubMed Central

    Davison, Andrew P.; Hines, Michael L.; Muller, Eilif

    2009-01-01

    Neuroscience simulators allow scientists to express models in terms of biological concepts, without having to concern themselves with low-level computational details of their implementation. The expressiveness, power and ease-of-use of the simulator interface is critical in efficiently and accurately translating ideas into a working simulation. We review long-term trends in the development of programmable simulator interfaces, and examine the benefits of moving from proprietary, domain-specific languages to modern dynamic general-purpose languages, in particular Python, which provide neuroscientists with an interactive and expressive simulation development environment and easy access to state-of-the-art general-purpose tools for scientific computing. PMID:20198154

  18. Trends in programming languages for neuroscience simulations.

    PubMed

    Davison, Andrew P; Hines, Michael L; Muller, Eilif

    2009-01-01

    Neuroscience simulators allow scientists to express models in terms of biological concepts, without having to concern themselves with low-level computational details of their implementation. The expressiveness, power and ease-of-use of the simulator interface is critical in efficiently and accurately translating ideas into a working simulation. We review long-term trends in the development of programmable simulator interfaces, and examine the benefits of moving from proprietary, domain-specific languages to modern dynamic general-purpose languages, in particular Python, which provide neuroscientists with an interactive and expressive simulation development environment and easy access to state-of-the-art general-purpose tools for scientific computing.

  19. A decoupled recursive approach for constrained flexible multibody system dynamics

    NASA Technical Reports Server (NTRS)

    Lai, Hao-Jan; Kim, Sung-Soo; Haug, Edward J.; Bae, Dae-Sung

    1989-01-01

    A variational-vector calculus approach is employed to derive a recursive formulation for dynamic analysis of flexible multibody systems. Kinematic relationships for adjacent flexible bodies are derived in a companion paper, using a state vector notation that represents translational and rotational components simultaneously. Cartesian generalized coordinates are assigned for all body and joint reference frames to explicitly formulate deformation kinematics under the small-deformation assumption, and an efficient recursive flexible dynamics algorithm is developed. Dynamic analysis of a closed loop robot is performed to illustrate the efficiency of the algorithm.

  20. Mathematical modeling and simulation of a thermal system

    NASA Astrophysics Data System (ADS)

    Toropoc, Mirela; Gavrila, Camelia; Frunzulica, Rodica; Toma, Petrica D.

    2016-12-01

    The aim of the present paper is the conception of a mathematical model and simulation of a system formed by a heat exchanger for domestic hot water preparation, a storage tank for hot water, and a radiator, starting from the mathematical equations describing this system and developed using the Scilab-Xcos program. The model helps to determine the evolution in time of the hot water temperature, the return temperature in the primary circuit of the heat exchanger, the supply temperature in the secondary circuit, and the thermal power for heating and for hot water preparation to the consumer, respectively. In heating systems, heat exchangers have an important role and their performance influences the energy efficiency of the systems. At the same time, it is very important to follow the behavior of such systems in dynamic regimes. The Scilab-Xcos program can be utilized to follow the important parameters of the systems in different functioning scenarios.

  1. Operational efficiency subpanel advanced mission control

    NASA Technical Reports Server (NTRS)

    Friedland, Peter

    1990-01-01

    Herein, the term mission control will be taken quite broadly to include both ground and space based operations as well as the information infrastructure necessary to support such operations. Three major technology areas related to advanced mission control are examined: (1) Intelligent Assistance for Ground-Based Mission Controllers and Space-Based Crews; (2) Autonomous Onboard Monitoring, Control and Fault Detection Isolation and Reconfiguration; and (3) Dynamic Corporate Memory Acquired, Maintained, and Utilized During the Entire Vehicle Life Cycle. The current state of the art in space operations is surveyed both within NASA and externally for each of the three technology areas, and major objectives are discussed from a user point of view for technology development. Ongoing NASA and other governmental programs are described. An analysis of major research issues and current holes in the program is provided. Several recommendations are presented for enhancing the technology development and insertion process to create advanced mission control environments.

  2. [Dynamics of tooth decay prevalence in children receiving long-term preventive program in school dental facilities].

    PubMed

    Avraamova, O G; Kulazhenko, T V; Gabitova, K F

    2016-01-01

    The paper presents the assessment of tooth decay prevalence in clinically homogeneous groups of children receiving a long-term preventive program (PP) in school dental facilities. Five-year PPs were introduced in clinical practice in 2 Moscow schools. Preventive treatment was performed by a dental hygienist. The results show that systematic preventive treatment in school dental offices, starting from elementary school, allows reducing dental caries incidence by 46-53% and stabilizing the incidence of caries complications. It should be mentioned, though, that analysis of individualized outcomes proves heterogeneity of study results despite equal conditions of the PP. Hence, early diagnostics and treatment of initial caries forms such as demineralization foci are potentially significant, especially in children with intensive tooth decay. Optimization of pediatric dentist and dental hygienist activity in school dental facilities is the main factor of caries prevention efficiency.

  3. Solar x ray astronomy rocket program

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The dynamics of the solar corona were studied through the imaging of large-scale coronal structures with the AS&E High Resolution Soft X ray Imaging Solar Sounding Rocket Payload. The proposal for this program outlined a plan of research based on the construction of a high sensitivity X ray telescope from the optical and electronic components of the previous flight of this payload (36.038CS). Specifically, the X ray sensitive CCD camera was to be placed in the prime focus of the grazing incidence X ray mirror. The improved quantum efficiency of the CCD detector (over the film which had previously been used) allows quantitative measurements of temperature and emission measure in regions of low X ray emission such as helmet streamers beyond 1.2 solar radii or coronal holes. Furthermore, the improved sensitivity of the CCD allows short exposures of bright objects to study unexplored temporal regimes of active region loop evolution.

  4. Turbine Blade and Endwall Heat Transfer Measured in NASA Glenn's Transonic Turbine Blade Cascade

    NASA Technical Reports Server (NTRS)

    Giel, Paul W.

    2000-01-01

    Higher operating temperatures increase the efficiency of aircraft gas turbine engines, but can also degrade internal components. High-pressure turbine blades just downstream of the combustor are particularly susceptible to overheating. Computational fluid dynamics (CFD) computer programs can predict the flow around the blades so that potential hot spots can be identified and appropriate cooling schemes can be designed. Various blade and cooling schemes can be examined computationally before any hardware is built, thus saving time and effort. Often though, the accuracy of these programs has been found to be inadequate for predicting heat transfer. Code and model developers need highly detailed aerodynamic and heat transfer data to validate and improve their analyses. The Transonic Turbine Blade Cascade was built at the NASA Glenn Research Center at Lewis Field to help satisfy the need for this type of data.

  5. Experiments on the properties of superfluid helium in zero gravity

    NASA Technical Reports Server (NTRS)

    Mason, P.; Collins, D.; Petrac, D.; Yang, L.; Edeskuty, F.; Williamson, K.

    1976-01-01

    The paper describes a research program designed to study the behavior of superfluid liquid helium in low and zero gravity in order to determine the properties which are critically important to its use as a stored cryogen for cooling scientific instruments aboard spacecraft for periods up to several months. The experiment program consists of a series of flights of an experiment package on a free-fall trajectory both on an aircraft and on a rocket. The objectives are to study thickness of thin films of helium as a function of acceleration, heat transfer in thin films, heat transfer across copper-liquid helium interfaces, fluid dynamics of bulk helium in high and low accelerations and under various conditions of rotations, alternate methods of separation of liquid and vapor phases and of efficient venting of the vapor, and undesirable thermomechanical oscillations in the vent pipes. Preliminary results from aircraft tests are discussed.

  6. Probabilistic Structural Analysis Theory Development

    NASA Technical Reports Server (NTRS)

    Burnside, O. H.

    1985-01-01

    The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and Space Shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer intensive relative to the finite element approach.

  7. Computer simulations of planetary accretion dynamics: Sensitivity to initial conditions

    NASA Technical Reports Server (NTRS)

    Isaacman, R.; Sagan, C.

    1976-01-01

    The implications and limitations of program ACRETE were tested. The program is a scheme based on Newtonian physics and accretion with unit sticking efficiency, devised to simulate the origin of the planets. The dependence of the results on a variety of radial and vertical density distribution laws, the ratio of gas to dust in the solar nebula, the total nebular mass, and the orbital eccentricity of the accreting grains was explored. Only for a small subset of conceivable cases are planetary systems closely like our own generated. Many models have tendencies towards one of two preferred configurations: multiple star systems, or planetary systems in which Jovian planets either have substantially smaller masses than in our system or are absent altogether. But for a wide range of cases recognizable planetary systems are generated - ranging from multiple star systems with accompanying planets, to systems with Jovian planets at several hundred AU, to single stars surrounded only by asteroids.

  8. An optimization model for energy generation and distribution in a dynamic facility

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1981-01-01

    An analytical model is described using linear programming for the optimum generation and distribution of energy demands among competing energy resources and different economic criteria. The model, which will be used as a general engineering tool in the analysis of the Deep Space Network ground facility, considers several essential decisions for better design and operation. The decisions sought for the particular energy application include: the optimum time to build an assembly of elements, inclusion of a storage medium of some type, and the size or capacity of the elements that will minimize the total life-cycle cost over a given number of years. The model, which is structured in multiple time divisions, employs the decomposition principle for large-size matrices, the branch-and-bound method in mixed-integer programming, and the revised simplex technique for efficient and economic computer use.
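
    As a toy illustration of the linear-programming core of such a model, the sketch below splits a fixed energy demand between two sources with different unit costs and capacities using scipy.optimize.linprog. The numbers are invented, and the real model in the record is multi-period and mixed-integer, so this gives only a flavor of the formulation.

    ```python
    from scipy.optimize import linprog

    # Decision variables: x0 = energy drawn from source A, x1 = from source B (kWh)
    c = [0.12, 0.08]                  # hypothetical unit costs ($/kWh)
    A_eq = [[1, 1]]                   # total generation must meet demand exactly
    b_eq = [1000]                     # hypothetical demand (kWh)
    bounds = [(0, 600), (0, 700)]     # hypothetical capacity limits per source

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    print(res.x, res.fun)             # optimal split and minimum total cost
    ```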

  9. User's guide to the western spruce budworm modeling system

    Treesearch

    Nicholas L. Crookston; J. J. Colbert; Paul W. Thomas; Katharine A. Sheehan; William P. Kemp

    1990-01-01

    The Budworm Modeling System is a set of four computer programs: The Budworm Dynamics Model, the Prognosis-Budworm Dynamics Model, the Prognosis-Budworm Damage Model, and the Parallel Processing-Budworm Dynamics Model. Input to the first three programs and the output produced are described in this guide. A guide to the fourth program will be published separately....

  10. Space station structures and dynamics test program

    NASA Technical Reports Server (NTRS)

    Moore, Carleton J.; Townsend, John S.; Ivey, Edward W.

    1987-01-01

    The design, construction, and operation of a low-Earth orbit space station poses unique challenges for development and implementation of new technology. The technology arises from the special requirement that the station be built and constructed to function in a weightless environment, where static loads are minimal and secondary to system dynamics and control problems. One specific challenge confronting NASA is the development of a dynamics test program for: (1) defining space station design requirements, and (2) identifying the characterizing phenomena affecting the station's design and development. A general definition of the space station dynamic test program, as proposed by MSFC, forms the subject of this report. The test proposal is a comprehensive structural dynamics program to be launched in support of the space station. The test program will help to define the key issues and/or problems inherent to large space structure analysis, design, and testing. Development of a parametric data base and verification of the math models and analytical analysis tools necessary for engineering support of the station's design, construction, and operation provide the impetus for the dynamics test program. The philosophy is to integrate dynamics into the design phase through extensive ground testing and analytical ground simulations of generic systems, prototype elements, and subassemblies. On-orbit testing of the station will also be used to define its capability.

  11. Differential results integrated with continuous and discrete gravity measurements between nearby stations

    NASA Astrophysics Data System (ADS)

    Xu, Weimin; Chen, Shi; Lu, Hongyan

    2016-04-01

    Integrated gravity is an efficient way of studying the spatial and temporal characteristics of dynamics and tectonics. Differential measurements based on continuous and discrete gravity observations are highly competitive, in terms of both efficiency and precision, with single-station results. The differential continuous gravity variation between nearby stations is based on observations from Scintrex g-Phone relative gravimeters at each station and is combined with repeated mobile relative measurements or absolute results to study regional integrated gravity changes. First, we preprocess the continuous records with the Tsoft software and calculate the theoretical earth tides and ocean tides with the "MT80TW" program using high-precision tidal parameters from "WPARICET". Atmospheric loading effects and complex drift are strictly accounted for in the procedure. Through the above steps we obtain the continuous gravity at every station and can calculate the continuous gravity variation between nearby stations, which is called the differential continuous gravity change. The differential results between related stations are then calculated from the repeated gravity measurements, which are carried out once or twice every year around the gravity stations; in this way we obtain the discrete gravity results between nearby stations. Finally, the continuous and discrete gravity results for the same related stations are combined, including the absolute gravity results if necessary, to obtain the regional integrated gravity changes. These differential gravity results are more accurate and effective for dynamical monitoring, studies of regional hydrologic effects, tectonic activity and other geodynamical research. The time-frequency characteristics of the continuous gravity results are discussed to ensure the accuracy and efficiency of the procedure.

  12. Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.

    NASA Astrophysics Data System (ADS)

    Elliott, William Dewey

    1995-01-01

    A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over several simulation timesteps. One MD application described here highlights the utility of including long range contributions to Lennard-Jones potential in constant pressure simulations. Another application shows the time dependence of long range forces in a multiple time step MD simulation.
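
    The core trick in this record, replacing the aggregate 1/r influence of a distant group of particles by a short series expansion about the group's center, can be illustrated in a few lines. The comparison below uses only the monopole and dipole terms for a single far-away cell; it is a didactic sketch of the multipole idea, not the MDMA or FMA algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    cluster = rng.uniform(-0.5, 0.5, size=(100, 3))   # source positions within one cell
    q = rng.uniform(0.5, 1.5, size=100)               # source "charges"
    target = np.array([20.0, 0.0, 0.0])               # distant evaluation point

    # Direct pairwise sum of q_i / |target - r_i|
    direct = np.sum(q / np.linalg.norm(target - cluster, axis=1))

    # Expansion of 1/r about the cell center: monopole + dipole terms
    center = cluster.mean(axis=0)
    d = target - center
    r = np.linalg.norm(d)
    monopole = q.sum() / r
    dipole = np.dot((q[:, None] * (cluster - center)).sum(axis=0), d) / r**3
    print(direct, monopole + dipole)   # the two agree closely at this separation
    ```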

  13. 77 FR 54839 - Energy Efficiency and Conservation Loan Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-06

    ... CFR Parts 1710, 1717, 1721, 1724, and 1730 RIN 0572-AC19 Energy Efficiency and Conservation Loan..., proposing policies and procedures for loan and guarantee financial assistance in support of energy efficiency programs (EE Programs) sponsored and implemented by electric utilities for the benefit of rural...

  14. Health dynamics: implications for efficiency and equity in priority setting.

    PubMed

    Hauck, Katharina; Tsuchiya, Aki

    2011-01-01

    Health dynamics are intertemporal fluctuations in health status of an individual or a group of individuals. It has been found in empirical studies of health inequalities that health dynamics can differ systematically across subgroups, even if prevalence measured at one point in time is the same. We explore the relevance of the concept of health dynamics in the context of cost-effectiveness analysis. Although economic evaluation takes health dynamics into account where they matter in terms of efficiency, we find that it fails to take into account the equity dimensions of health dynamics. In addition, the political implications of health dynamics may influence resource allocation decisions, possibly in opposing directions. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  15. Simulating the dynamic behavior of chain drive systems by advanced CAE programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, J.; Meyer, J.

    1996-09-01

    Due to the increased requirements for chain drive systems of 4-stroke internal combustion engines, CAE tools are necessary to design the optimum dynamic system. In comparison to models used in the past, the advantage of the new model CDD (Chain Drive Dynamics) is the capability of simulating the trajectory of each chain link around the drive system. Each chain link is represented by a mass with two degrees of freedom and is coupled to the next by a spring-damper element. The drive sprocket can be moved with a constant or non-constant speed. As in reality, the other sprockets are driven by the running chain and can be excited by torques. Due to these unique model features it is possible to calculate all vibration types of the chain, polygon effects and radial or angular vibrations of the sprockets very accurately. The model includes the detailed simulation of a mechanical or a hydraulic tensioner as well. The method is ready to be coupled to other detailed calculation models (e.g. valve train systems, crankshaft, etc.). The high efficiency of the tool predicting the dynamic and acoustic behavior of a chain drive system will be demonstrated in comparison to measurements.

  16. Dynamic systems and the role of evaluation: The case of the Green Communities project.

    PubMed

    Anzoise, Valentina; Sardo, Stefania

    2016-02-01

    The crucial role evaluation can play in the co-development of project design and its implementation will be addressed through the analysis of a case study, the Green Communities (GC) project, funded by the Italian Ministry of Environment within the EU Interregional Operational Program (2007-2013) "Renewable Energy and Energy Efficiency". The project's broader goals included an attempt to trigger a change in Italian local development strategies, especially for mountain and inland areas, which would be tailored to the real needs of communities, and based on a sustainable exploitation and management of the territorial assets. The goal was not achieved, and this paper addresses the issues of how GC could have been more effective in fostering a vision of change, and which design adaptations and evaluation procedures would have allowed the project to better cope with the unexpected consequences and resistances it encountered. The conclusions drawn are that projects should be conceived, designed and carried out as dynamic systems, inclusive of a dynamic and engaged evaluation enabling the generation of feedbacks loops, iteratively interpreting the narratives and dynamics unfolding within the project, and actively monitoring the potential of various relationships among project participants for generating positive social change. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Energy efficiency in nonprofit agencies: Creating effective program models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, M.A.; Prindle, B.; Scherr, M.I.

    Nonprofit agencies are a critical component of the health and human services system in the US. It has been clearly demonstrated by programs that offer energy efficiency services to nonprofits that, with minimal investment, they can reduce their energy consumption by ten to thirty percent. This energy conservation potential motivated the Department of Energy and Oak Ridge National Laboratory to conceive a project to help states develop energy efficiency programs for nonprofits. The purpose of the project was two-fold: (1) to analyze existing programs to determine which design and delivery mechanisms are particularly effective, and (2) to create model programs for states to follow in tailoring their own plans for helping nonprofits with energy efficiency programs. Twelve existing programs were reviewed, and three model programs were devised and put into operation. The model programs provide various forms of financial assistance to nonprofits and serve as a source of information on energy efficiency as well. After examining the results from the model programs (which are still on-going) and from the existing programs, several "replicability factors" were developed for use in the implementation of programs by other states. These factors -- some concrete and practical, others more generalized -- serve as guidelines for states devising programs based on their own particular needs and resources.

  18. A user's guide to the Flexible Spacecraft Dynamics and Control Program

    NASA Technical Reports Server (NTRS)

    Fedor, J. V.

    1984-01-01

    A guide to the use of the Flexible Spacecraft Dynamics Program (FSD) is presented covering input requirements, control words, orbit generation, spacecraft description and simulation options, and output definition. The program can be used in dynamics and control analysis as well as in orbit support of deployment and control of spacecraft. The program is applicable to inertially oriented spinning, Earth oriented or gravity gradient stabilized spacecraft. Internal and external environmental effects can be simulated.

  19. Reducing inhomogeneity in the dynamic properties of quantum dots via self-aligned plasmonic cavities

    NASA Astrophysics Data System (ADS)

    Demory, Brandon; Hill, Tyler A.; Teng, Chu-Hsiang; Deng, Hui; Ku, P. C.

    2018-01-01

    A plasmonic cavity is shown to greatly reduce the inhomogeneity of dynamic optical properties such as quantum efficiency and radiative lifetime of InGaN quantum dots. By using an open-top plasmonic cavity structure, which exhibits a large Purcell factor and antenna quantum efficiency, the resulting quantum efficiency distribution for the quantum dots narrows and is no longer limited by the quantum dot inhomogeneity. The standard deviation of the quantum efficiency can be reduced to 2% while maintaining the overall quantum efficiency at 70%, making InGaN quantum dots a viable candidate for high-speed quantum cryptography and random number generation applications.

  20. Reducing inhomogeneity in the dynamic properties of quantum dots via self-aligned plasmonic cavities.

    PubMed

    Demory, Brandon; Hill, Tyler A; Teng, Chu-Hsiang; Deng, Hui; Ku, P C

    2018-01-05

    A plasmonic cavity is shown to greatly reduce the inhomogeneity of dynamic optical properties such as quantum efficiency and radiative lifetime of InGaN quantum dots. By using an open-top plasmonic cavity structure, which exhibits a large Purcell factor and antenna quantum efficiency, the resulting quantum efficiency distribution for the quantum dots narrows and is no longer limited by the quantum dot inhomogeneity. The standard deviation of the quantum efficiency can be reduced to 2% while maintaining the overall quantum efficiency at 70%, making InGaN quantum dots a viable candidate for high-speed quantum cryptography and random number generation applications.

  1. User's guide for a computer program to analyze the LRC 16 ft transonic dynamics tunnel cable mount system

    NASA Technical Reports Server (NTRS)

    Barbero, P.; Chin, J.

    1973-01-01

    The theoretical derivation of the set of equations is discussed which is applicable to modeling the dynamic characteristics of aeroelastically-scaled models flown on the two-cable mount system in a 16 ft transonic dynamics tunnel. The computer program provided for the analysis is also described. The program calculates model trim conditions as well as 3 DOF longitudinal and lateral/directional dynamic conditions for various flying cable and snubber cable configurations. Sample input and output are included.

  2. Bistability of Evolutionary Stable Vaccination Strategies in the Reinfection SIRI Model.

    PubMed

    Martins, José; Pinto, Alberto

    2017-04-01

    We use the reinfection SIRI epidemiological model to analyze the impact of education programs and vaccine scares on individuals' decisions to vaccinate or not. The presence of reinfection gives rise to the novel existence of three Nash equilibria for the same level of the morbidity relative risk, instead of a single Nash equilibrium as occurs in the SIR model studied by Bauch and Earn (PNAS 101:13391-13394, 2004). The existence of three Nash equilibria, with two of them being evolutionary stable, introduces two scenarios with relevant and opposite features for the same level of the morbidity relative risk: the low-vaccination scenario corresponding to the evolutionary stable vaccination strategy, where individuals will vaccinate with a low probability; and the high-vaccination scenario corresponding to the evolutionary stable vaccination strategy, where individuals will vaccinate with a high probability. We introduce the evolutionary vaccination dynamics for the SIRI model and we prove that it is bistable. The bistability of the evolutionary dynamics indicates that the damage provoked by false scares on the perceived vaccination morbidity risks can be much higher and much more persistent than in the SIR model. Furthermore, for vaccination education programs to be efficient, they need to implement a mechanism to suddenly increase the vaccination coverage level.
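
    A minimal numerical sketch of the SIRI compartmental structure underlying this analysis (susceptible, infected, recovered, with reinfection of the recovered class at a reduced rate) is given below, integrated with forward Euler. The parameter values and the vaccinated fraction p are hypothetical, and the game-theoretic layer of the paper is omitted entirely.

    ```python
    import numpy as np

    def siri_prevalence(p=0.6, beta=0.4, sigma=0.2, gamma=0.1, T=400.0, dt=0.1):
        """SIRI dynamics with a fraction p assumed vaccinated (placed in R).
        sigma * beta is the reduced reinfection rate of recovered individuals."""
        I0 = 1e-3
        S, I, R = 1.0 - p - I0, I0, p
        history = []
        for _ in range(int(T / dt)):
            new_inf = beta * S * I
            re_inf = sigma * beta * R * I          # reinfection from the recovered class
            dS = -new_inf
            dI = new_inf + re_inf - gamma * I
            dR = gamma * I - re_inf
            S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
            history.append(I)
        return np.array(history)

    print(siri_prevalence().max())   # peak prevalence for this hypothetical parameter set
    ```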

  3. Influence of typical faults over the dynamic behavior of pantograph-catenary contact force in electric rail transport

    NASA Astrophysics Data System (ADS)

    Rusu-Anghel, S.; Ene, A.

    2017-05-01

    The quality of electric energy capture and also the equipment operational safety depend essentially on the technical state of the contact line (CL). The present method for determining the technical state of the CL, based on advance programming of inspections, is no longer efficient, due to faults that can occur in areas not covered by the program; therefore, they cannot be remediated. A different management method is expected for the repair and maintenance of the CL, based on its real state, which must therefore be very well known. In this paper a new method for determining the faults in the CL is described. It is based on the analysis of the variation of the pantograph-CL contact force in the dynamic regime. Using mathematical modelling and also experimental tests, it was established that each type of fault is able to generate 'signatures' in the contact force diagram. The identification of these signatures can be accomplished by an informatics system which will provide the fault location, its type and also, in the future, the probable evolution of the CL technical state. The measurement of the contact force is realized in an optical manner using a railway inspection trolley which has appropriate equipment. The analysis of the desired parameters can be accomplished in real time by a data acquisition system, based on dedicated software.

  4. Sequential Multiple Assignment Randomized Trial (SMART) with Adaptive Randomization for Quality Improvement in Depression Treatment Program

    PubMed Central

    Chakraborty, Bibhas; Davidson, Karina W.

    2015-01-01

    Summary: An implementation study is an important tool for deploying state-of-the-art treatments from clinical efficacy studies into a treatment program, with the dual goals of learning about effectiveness of the treatments and improving the quality of care for patients enrolled into the program. In this article, we deal with the design of a treatment program of dynamic treatment regimens (DTRs) for patients with depression post acute coronary syndrome. We introduce a novel adaptive randomization scheme for a sequential multiple assignment randomized trial of DTRs. Our approach adapts the randomization probabilities to favor treatment sequences having comparatively superior Q-functions used in Q-learning. The proposed approach addresses three main concerns of an implementation study: it allows incorporation of historical data or opinions, it includes randomization for learning purposes, and it aims to improve care via adaptation throughout the program. We demonstrate how to apply our method to design a depression treatment program using data from a previous study. By simulation, we illustrate that the inputs from historical data are important for the program performance measured by the expected outcomes of the enrollees, but also show that the adaptive randomization scheme is able to compensate for poorly specified historical inputs by improving patient outcomes within a reasonable horizon. The simulation results also confirm that the proposed design allows efficient learning of the treatments by alleviating the curse of dimensionality. PMID:25354029
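
    The adaptive randomization idea, biasing allocation probabilities toward treatment sequences with higher estimated Q-values, can be sketched compactly. The softmax weighting and running-mean Q estimates below are illustrative assumptions, not the estimators specified in the trial design.

    ```python
    import numpy as np

    def allocation_probs(q_estimates, temperature=1.0):
        """Softmax over current Q estimates: better-performing treatment
        sequences receive higher randomization probabilities."""
        z = np.asarray(q_estimates, dtype=float) / temperature
        w = np.exp(z - z.max())
        return w / w.sum()

    def update_q(q, counts, arm, outcome):
        """Running-mean update of the Q estimate for the chosen arm."""
        counts[arm] += 1
        q[arm] += (outcome - q[arm]) / counts[arm]

    q, counts = np.zeros(3), np.zeros(3)
    rng = np.random.default_rng(1)
    true_means = [0.3, 0.5, 0.7]                  # hypothetical mean outcomes per sequence
    for _ in range(200):
        arm = rng.choice(3, p=allocation_probs(q, temperature=0.3))
        update_q(q, counts, arm, rng.normal(true_means[arm], 0.1))
    print(allocation_probs(q, temperature=0.3))   # allocation now favors the best sequence
    ```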

  5. Frequency Domain Computer Programs for Prediction and Analysis of Rail Vehicle Dynamics : Volume 2. Appendixes

    DOT National Transportation Integrated Search

    1975-12-01

    Frequency domain computer programs developed or acquired by TSC for the analysis of rail vehicle dynamics are described in two volumes. Volume 2 contains program listings including subroutines for the four TSC frequency domain programs described in V...

  6. 78 FR 34089 - Revision of a Currently Approved Information Collection for the Energy Efficiency and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-06

    ... Approved Information Collection for the Energy Efficiency and Conservation Block Grant Program Status... guidance concerning the Energy Efficiency and Conservation Block Grant (EECBG) Program is available for... Conservation Block Grant (EECBG) Program Status Report''; (3) Type of Review: Revision of currently approved...

  7. Organizational determinants of efficiency and effectiveness in mental health partial care programs.

    PubMed Central

    Schinnar, A P; Kamis-Gould, E; Delucia, N; Rothbard, A B

    1990-01-01

    The use of partial care as a treatment modality for mentally ill patients, particularly the chronically mentally ill, has greatly increased. However, research into what constitutes a "good" program has been scant. This article reports on an evaluation study of staff productivity, cost efficiency, and service effectiveness of adult partial care programs carried out in New Jersey in fiscal year 1984/1985. Five program performance indexes are developed based on comparisons of multiple measures of resources, service activities, and client outcomes. These are used to test various hypotheses regarding the effect of organizational and fiscal variables on partial care program efficiency and effectiveness. The four issues explored are: auspices, organizational complexity, service mix, and fiscal control by the state. These were found to explain about half of the variance in program performance. In addition, partial care programs demonstrating midlevel performance with regard to productivity and efficiency were observed to be the most effective, implying a possible optimal level of efficiency at which effectiveness is maximized. PMID:2113046

  8. Dynamic ocean management increases the efficiency and efficacy of fisheries management.

    PubMed

    Dunn, Daniel C; Maxwell, Sara M; Boustany, Andre M; Halpin, Patrick N

    2016-01-19

    In response to the inherent dynamic nature of the oceans and continuing difficulty in managing ecosystem impacts of fisheries, interest in the concept of dynamic ocean management, or real-time management of ocean resources, has accelerated in the last several years. However, scientists have yet to quantitatively assess the efficiency of dynamic management over static management. Of particular interest is how scale influences effectiveness, both in terms of how it reflects underlying ecological processes and how this relates to potential efficiency gains. Here, we address the empirical evidence gap and further the ecological theory underpinning dynamic management. We illustrate, through the simulation of closures across a range of spatiotemporal scales, that dynamic ocean management can address previously intractable problems at scales associated with coactive and social patterns (e.g., competition, predation, niche partitioning, parasitism, and social aggregations). Furthermore, it can significantly improve the efficiency of management: as the resolution of the closures used increases (i.e., as the closures become more targeted), the percentage of target catch forgone or displaced decreases, the reduction ratio (bycatch/catch) increases, and the total time-area required to achieve the desired bycatch reduction decreases. In the scenario examined, coarser scale management measures (annual time-area closures and monthly full-fishery closures) would displace up to four to five times the target catch and require 100-200 times more square kilometer-days of closure than dynamic measures (grid-based closures and move-on rules). To achieve similar reductions in juvenile bycatch, the fishery would forgo or displace between USD 15-52 million in landings using a static approach over a dynamic management approach.
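
    A move-on rule of the kind simulated here can be stated very compactly: close a grid cell for a fixed period whenever its observed bycatch-to-catch ratio crosses a threshold, and let expired closures reopen. The cell indexing, threshold and closure length below are hypothetical placeholders, not values from the study.

    ```python
    def update_closures(closures, observations, day, threshold=0.2, closure_days=14):
        """Grid-based move-on rule.
        closures: dict mapping cell -> day on which its closure expires.
        observations: dict mapping cell -> (bycatch, catch) observed on `day`."""
        # Reopen cells whose closures have expired
        closures = {cell: end for cell, end in closures.items() if end > day}
        for cell, (bycatch, catch) in observations.items():
            if catch > 0 and bycatch / catch > threshold:
                closures[cell] = day + closure_days
        return closures

    closures = update_closures({}, {(12, 7): (30, 100), (3, 4): (2, 90)}, day=0)
    print(closures)   # only the high-bycatch cell (12, 7) is closed
    ```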

  9. Dynamic Ocean Management Increases the Efficiency and Efficacy of Fisheries Management

    NASA Astrophysics Data System (ADS)

    Dunn, D. C.; Maxwell, S.; Boustany, A. M.; Halpin, P. N.

    2016-12-01

    In response to the inherent dynamic nature of the oceans and continuing difficulty in managing ecosystem impacts of fisheries, interest in the concept of dynamic ocean management, or real-time management of ocean resources, has accelerated in the last several years. However, scientists have yet to quantitatively assess the efficiency of dynamic management over static management. Of particular interest is how scale influences effectiveness, both in terms of how it reflects underlying ecological processes and how this relates to potential efficiency gains. In this presentation, we attempt to address both the empirical evidence gap and further the ecological theory underpinning dynamic management. We illustrate, through the simulation of closures across a range of spatiotemporal scales, that dynamic ocean management can address previously intractable problems at scales associated with coactive and social patterns (e.g., competition, predation, niche partitioning, parasitism and social aggregations). Further, it can significantly improve the efficiency of management: as the resolution of the individual closures used increases (i.e., as the closures become more targeted) the percent of target catch forgone or displaced decreases, the reduction ratio (bycatch/catch) increases, and the total time-area required to achieve the desired bycatch reduction decreases. The coarser management measures (annual time-area closures and monthly full fishery closures) affected up to 4-5x the target catch and required 100-200x the time-area of the dynamic measures (grid-based closures and move-on rules). To achieve similar reductions in juvenile bycatch, the fishery would forgo or displace between USD 15-52 million in landings using a static approach over a dynamic management approach.

  10. Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
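
    As a concrete illustration of the style of dynamic programming the package applies to tree decompositions, the sketch below (not INDDGO code) solves the maximum weighted independent set problem on an ordinary rooted tree, the treewidth-1 special case: each node's table records the best achievable weight with the node excluded or included. The tree and weights are invented.

        # toy rooted tree given as an adjacency (children) list, with node weights
        tree = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}
        weight = {0: 4, 1: 2, 2: 3, 3: 5, 4: 1, 5: 6}

        def mwis(node):
            """Return (best weight with `node` excluded, best weight with `node` included)."""
            excl, incl = 0, weight[node]
            for child in tree[node]:
                c_excl, c_incl = mwis(child)
                excl += max(c_excl, c_incl)   # child free to be in or out
                incl += c_excl                # child must be out if the parent is in
            return excl, incl

        print(max(mwis(0)))                   # 16: the set {0, 3, 4, 5}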

  11. Hamilton-Jacobi-Bellman equations and approximate dynamic programming on time scales.

    PubMed

    Seiffertt, John; Sanyal, Suman; Wunsch, Donald C

    2008-08-01

    The time scales calculus is a key emerging area of mathematics due to its potential use in a wide variety of multidisciplinary applications. We extend this calculus to approximate dynamic programming (ADP). The core backward induction algorithm of dynamic programming is extended from its traditional discrete case to all isolated time scales. Hamilton-Jacobi-Bellman equations, the solution of which is the fundamental problem in the field of dynamic programming, are motivated and proven on time scales. By drawing together the calculus of time scales and the applied area of stochastic control via ADP, we have connected two major fields of research.
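
    The discrete starting point that the paper generalizes is ordinary backward induction for a finite-horizon Markov decision problem. The sketch below shows that classic special case on an invented toy MDP; the extension to arbitrary isolated time scales is not attempted here.

        import numpy as np

        n_states, n_actions, horizon = 3, 2, 5
        rng = np.random.default_rng(0)
        reward = rng.uniform(0, 1, size=(n_states, n_actions))                 # r(s, a)
        trans = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # P(s' | s, a)

        V = np.zeros(n_states)                    # terminal value V_T = 0
        policy = np.zeros((horizon, n_states), dtype=int)
        for t in reversed(range(horizon)):
            Q = reward + trans @ V                # Q_t(s, a) = r(s, a) + E[V_{t+1}(s')]
            policy[t] = Q.argmax(axis=1)
            V = Q.max(axis=1)                     # Bellman backup

        print("V_0 =", np.round(V, 3))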

  12. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.

    PubMed

    Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-09-18

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed, which converges to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency than the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
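
    The last step of the pipeline above, Dinkelbach's method for fractional programs, repeatedly maximizes f(x) - lambda*g(x) and resets lambda to the achieved ratio f(x)/g(x) until the parametric optimum reaches zero. The sketch below runs that iteration on an invented one-dimensional rate/power-style ratio, not on the paper's beamforming problem.

        import numpy as np
        from scipy.optimize import minimize_scalar

        f = lambda x: np.log1p(x)          # "rate"-like concave numerator (invented)
        g = lambda x: 1.0 + 0.5 * x        # "power"-like affine denominator (invented)

        lam, tol = 0.0, 1e-9
        for _ in range(50):
            # inner parametric problem: max_x f(x) - lam * g(x) on [0, 4]
            res = minimize_scalar(lambda x: -(f(x) - lam * g(x)), bounds=(0, 4), method="bounded")
            x_star, F = res.x, -res.fun
            if F < tol:                    # f(x*) - lam g(x*) ~ 0  =>  lam is the optimal ratio
                break
            lam = f(x_star) / g(x_star)    # Dinkelbach update

        print(f"optimal ratio ~ {lam:.4f} at x ~ {x_star:.3f}")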

  13. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System

    PubMed Central

    Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-01-01

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed, which converges to a stationary point of the above problem. Finally, Dinkelbach’s algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency than the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme. PMID:28927019

  14. Dynamic graph cuts for efficient inference in Markov Random Fields.

    PubMed

    Kohli, Pushmeet; Torr, Philip H S

    2007-12-01

    In this paper we present a fast new fully dynamic algorithm for the st-mincut/max-flow problem. We show how this algorithm can be used to efficiently compute MAP solutions for certain dynamically changing MRF models in computer vision, such as image segmentation. Specifically, given the solution of the max-flow problem on a graph, the dynamic algorithm efficiently computes the maximum flow in a modified version of the graph. The time it takes is roughly proportional to the total amount of change in the edge weights of the graph. Our experiments show that, when the number of changes in the graph is small, the dynamic algorithm is significantly faster than the best known static graph cut algorithm. We test the performance of our algorithm on one particular problem: the object-background segmentation problem for video. The application of our algorithm is not limited to this problem; the algorithm is generic and can be used to yield similar improvements in many other cases that involve dynamic change.
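
    For context, the static construction that the dynamic algorithm accelerates is the usual graph-cut encoding of a binary MRF: unary costs become terminal edge capacities, pairwise smoothness terms become edges between neighbouring pixels, and a minimum s-t cut yields the MAP labelling. The sketch below builds such a graph for an invented four-pixel "image" and solves it from scratch with networkx; the paper's contribution, reusing the previous flow after edge-weight changes, is not shown.

        import networkx as nx

        pixels = [0.9, 0.8, 0.4, 0.1]       # invented foreground likelihoods
        lam = 0.5                           # smoothness weight between neighbours

        G = nx.DiGraph()
        for i, p in enumerate(pixels):
            G.add_edge("s", i, capacity=p)          # cut this edge -> pixel labelled background, cost p
            G.add_edge(i, "t", capacity=1.0 - p)    # cut this edge -> pixel labelled foreground, cost 1-p
        for i in range(len(pixels) - 1):            # pairwise (Potts) terms
            G.add_edge(i, i + 1, capacity=lam)
            G.add_edge(i + 1, i, capacity=lam)

        cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
        labels = [1 if i in source_side else 0 for i in range(len(pixels))]
        print("energy =", round(cut_value, 3), "labels =", labels)   # expect [1, 1, 0, 0]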

  15. A new version of Scilab software package for the study of dynamical systems

    NASA Astrophysics Data System (ADS)

    Bordeianu, C. C.; Felea, D.; Beşliu, C.; Jipa, Al.; Grossu, I. V.

    2009-11-01

    This work presents a new version of a software package for the study of chaotic flows, maps and fractals [1]. The codes were written using Scilab, a software package for numerical computations providing a powerful open computing environment for engineering and scientific applications. Scilab provides various functions for ordinary differential equation solving, the Fast Fourier Transform, autocorrelation, and excellent 2D and 3D graphical capabilities. The chaotic behavior of the nonlinear dynamical systems is analyzed using phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropy. Various well-known examples are implemented, and users may insert their own ODEs or iterative equations.
    New version program summary. Program title: Chaos v2.0. Catalogue identifier: AEAP_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAP_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 1275. No. of bytes in distributed program, including test data, etc.: 7135. Distribution format: tar.gz. Programming language: Scilab 5.1.1. Scilab 5.1.1 should be installed before running the program; information about the installation can be found at http://wiki.scilab.org/howto/install/windows. Computer: PC-compatible running Scilab on MS Windows or Linux. Operating system: Windows XP, Linux. RAM: below 150 Megabytes. Classification: 6.2. Catalogue identifier of previous version: AEAP_v1_0. Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 788. Does the new version supersede the previous version?: Yes.
    Nature of problem: Any physical model containing linear or nonlinear ordinary differential equations (ODEs).
    Solution method: Numerical solution of ordinary differential equations for the study of chaotic flows; the chaotic behavior of the nonlinear dynamical system is analyzed using Poincare sections, phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropies. Numerical solution of iterative equations for the study of maps and fractals.
    Reasons for new version: The program has been updated to use the new version 5.1.1 of Scilab with new graphical capabilities [2]. Moreover, new use cases have been added which make the handling of the program easier and more efficient.
    Summary of revisions: A new use case concerning coupled predator-prey models has been added [3]. Three new use cases concerning fractals (Sierpinski gasket, Barnsley's Fern and Tree) have been added [3]. The graphical user interface (GUI) of the program has been reconstructed to include the new use cases. The program has been updated to use Scilab 5.1.1 with the new graphical capabilities.
    Additional comments: The program package contains 12 subprograms. interface.sce is the graphical user interface (GUI) that permits the choice of a routine, as follows: 1.sci - Lorenz dynamical system; 2.sci - Chua dynamical system; 3.sci - Rössler dynamical system; 4.sci - Henon map; 5.sci - Lyapunov exponents for the Lorenz dynamical system; 6.sci - Lyapunov exponent for the logistic map; 7.sci - Shannon entropy for the logistic map; 8.sci - Coupled predator-prey model; 1f.sci - Sierpinski gasket; 2f.sci - Barnsley's Fern; 3f.sci - Barnsley's Tree.
    Running time: 10 to 20 seconds for problems that do not involve Lyapunov exponent calculations; 60 to 1000 seconds for problems that involve high-order ODEs, Lyapunov exponent calculations, or fractals.
    References: C.C. Bordeianu, C. Besliu, Al. Jipa, D. Felea, I.V. Grossu, Comput. Phys. Comm. 178 (2008) 788. S. Campbell, J.P. Chancelier, R. Nikoukhah, Modeling and Simulation in Scilab/Scicos, Springer, 2006. R.H. Landau, M.J. Paez, C.C. Bordeianu, A Survey of Computational Physics, Introductory Computational Science, Princeton University Press, 2008.
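
    As a small illustration of one of the quantities the package computes (routine 6.sci), the sketch below estimates the Lyapunov exponent of the logistic map, in Python rather than Scilab, as the long-run average of ln|f'(x)| along an orbit; parameter values are chosen only for illustration.

        import numpy as np

        def lyapunov_logistic(r, x0=0.3, n_transient=1000, n_iter=100_000):
            x = x0
            for _ in range(n_transient):          # discard the transient
                x = r * x * (1.0 - x)
            acc = 0.0
            for _ in range(n_iter):
                x = r * x * (1.0 - x)
                acc += np.log(abs(r * (1.0 - 2.0 * x)))   # ln|f'(x)| for f(x) = r x (1 - x)
            return acc / n_iter

        for r in (3.2, 3.5, 4.0):
            print(f"r = {r}: lambda ~ {lyapunov_logistic(r):+.3f}")
        # negative in the periodic windows (r = 3.2, 3.5); about ln 2 ~ +0.693 at r = 4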

  16. Evaluation of programs to improve complementary feeding in infants and young children.

    PubMed

    Frongillo, Edward A

    2017-10-01

    Evaluation of complementary feeding programs is needed to enhance knowledge on what works, to document responsible use of resources, and for advocacy. Evaluation is done during program conceptualization and design, implementation, and determination of effectiveness. This paper explains the role of evaluation in the advancement of complementary feeding programs, presenting concepts and methods and illustrating them through examples. Planning and investments for evaluations should occur from the beginning of the project life cycle. Essential to evaluation is articulation of a program theory on how change would occur and what program actions are required for change. Analysis of program impact pathways makes explicit the dynamic connections in the program theory and accounts for contextual factors that could influence program effectiveness. Evaluating implementation functioning is done through addressing questions about needs, coverage, provision, and utilization using information obtained from process evaluation, operations research, and monitoring. Evaluating effectiveness is done through assessing impact, efficiency, coverage, process, and causality. Plausibility designs ask whether the program seemed to have an effect above and beyond external influences, often using a nonrandomized control group and baseline and end line measures. Probability designs ask whether there was an effect using a randomized control group. Evaluations may not be able to use randomization, particularly for programs implemented at a large scale. Plausibility designs, innovative designs, or innovative combinations of designs sometimes are best able to provide useful information. Further work is needed to develop practical designs for evaluation of large-scale country programs on complementary feeding. © 2017 John Wiley & Sons Ltd.

  17. Quiet, Efficient Fans for Spaceflight: An Overview of NASA's Technology Development Plan

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle

    2010-01-01

    A Technology Development Plan to improve the aerodynamic and acoustic performance of spaceflight fans has been submitted to NASA's Exploration Technology Development Program. The plan describes a research program intended to make broader use of the technology developed at NASA Glenn to increase the efficiency and reduce the noise of aircraft engine fans. The goal is to develop a set of well-characterized government-owned fans nominally suited for spacecraft ventilation and cooling systems. NASA's Exploration Life Support community will identify design point conditions for the fans in this study. Computational Fluid Dynamics codes will be used in the design and analysis process. The fans will be built and used in a series of tests. Data from aerodynamic and acoustic performance tests will be used to validate performance predictions. These performance maps will also be entered into a database to help spaceflight fan system developers make informed design choices. Velocity measurements downstream of fan rotor blades and stator vanes will also be collected and used for code validation. Details of the fan design, analysis, and testing will be publicly reported. With access to fan geometry and test data, the small fan industry can independently evaluate design and analysis methods and work towards improvement.

  18. Energy Efficiency Finance Programs: Use Case Analysis to Define Data Needs and Guidelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Peter; Larsen, Peter; Kramer, Chris

    There are over 200 energy efficiency loan programs—across 49 U.S. states—administered by utilities, state/local government agencies, or private lenders. This distributed model has led to significant variation in program design and implementation practices including how data is collected and used. The challenge of consolidating and aggregating data across independently administered programs has been illustrated by a recent pilot of an open source database for energy efficiency financing program data. This project was led by the Environmental Defense Fund (EDF), the Investor Confidence Project, the Clean Energy Finance Center (CEFC), and the University of Chicago. This partnership discussed data collection practices with a number of existing energy efficiency loan programs and identified four programs that were suitable and willing to participate in the pilot database (Diamond 2014). The partnership collected information related to ~12,000 loans with an aggregate value of ~$100M across the four programs. Of the 95 data fields collected across the four programs, 30 fields were common between two or more programs and only seven data fields were common across all programs. The results of that pilot study illustrate the inconsistencies in current data definition and collection practices among energy efficiency finance programs and may contribute to certain barriers.

  19. AACSD: An atomistic analyzer for crystal structure and defects

    NASA Astrophysics Data System (ADS)

    Liu, Z. R.; Zhang, R. F.

    2018-01-01

    We have developed an efficient command-line program named AACSD (Atomistic Analyzer for Crystal Structure and Defects) for the post-analysis of atomic configurations generated by various atomistic simulation codes. The program implements not only the traditional filter methods such as the excess potential energy (EPE), the centrosymmetry parameter (CSP), the common neighbor analysis (CNA), the common neighborhood parameter (CNP), the bond angle analysis (BAA), and the neighbor distance analysis (NDA), but also newly developed ones including the modified centrosymmetry parameter (m-CSP), the orientation imaging map (OIM), and the local crystallographic orientation (LCO). The newly proposed OIM and LCO methods have been extended to all three crystal structures: face centered cubic, body centered cubic, and hexagonal close packed. More specifically, AACSD can easily be used for the atomistic analysis of metallic nanocomposites, with each phase analyzed independently, which provides a unique pathway to capture the dynamic evolution of various defects on the fly. In this paper, we provide not only a thorough overview of the various theoretical methods and their implementation in the AACSD program, but also critical evaluations, specific tests, and applications demonstrating the capability of the program for each functionality.
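
    For illustration, the sketch below computes one of the traditional filters listed above, the centrosymmetry parameter, for an ideal fcc first neighbour shell; it uses the common simplification of summing the N/2 smallest |R_i + R_j|^2 pair values, which may differ in detail from the AACSD implementation.

        import itertools
        import numpy as np

        def centrosymmetry(neighbor_vectors):
            vecs = np.asarray(neighbor_vectors, dtype=float)
            n = len(vecs)                                   # e.g. 12 for fcc
            pair_vals = sorted(
                float(np.sum((vecs[i] + vecs[j]) ** 2))
                for i, j in itertools.combinations(range(n), 2)
            )
            return sum(pair_vals[: n // 2])                 # N/2 most nearly opposite pairs

        # ideal fcc first shell (lattice constant 1): the 12 neighbours of type (+-1/2, +-1/2, 0)
        fcc = []
        for ax1, ax2 in ((0, 1), (0, 2), (1, 2)):
            for s1 in (0.5, -0.5):
                for s2 in (0.5, -0.5):
                    v = np.zeros(3)
                    v[ax1], v[ax2] = s1, s2
                    fcc.append(v)

        print(len(fcc), round(centrosymmetry(fcc), 6))      # 12 neighbours, CSP = 0 for a perfect lattice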

  20. ColoNav: patient navigation for colorectal cancer screening in deprived areas - Study protocol.

    PubMed

    Allary, C; Bourmaud, A; Tinquaut, F; Oriol, M; Kalecinski, J; Dutertre, V; Lechopier, N; Pommier, M; Benoist, Y; Rousseau, S; Regnier, V; Buthion, V; Chauvin, F

    2016-07-07

    The mass colorectal cancer screening program was implemented in France in 2008, targeting 16 million people aged between 50 and 74. Current uptake is insufficient, and the participation rate is even lower among the underserved population, increasing health inequalities within the health care system. Patient navigation programs have proved effective in promoting access to cancer screening and diagnosis. The purpose of the study is to assess the implementation of a patient navigation intervention that has been described in another cultural environment and another health care system. The main objective of the program is to increase the colorectal cancer screening participation rate among the deprived population through the intervention of a navigator who promotes the Fecal Occult Blood Test (FOBT) and complementary exams. We performed a multisite cluster randomized controlled trial with three groups (one experimental group and two control groups) over 18 months. The study aims to give a better understanding of the barriers to participation in colorectal cancer screening among underserved populations. If the project proves cost-effective, it could create momentum for peer-based approaches that could be extended to other cancer screening programs and other chronic diseases. NCT02369757.

  1. A program for the calculation of paraboloidal-dish solar thermal power plant performance

    NASA Technical Reports Server (NTRS)

    Bowyer, J. M., Jr.

    1985-01-01

    A program capable of calculating the design-point and quasi-steady-state annual performance of a paraboloidal-concentrator solar thermal power plant without energy storage was written for a programmable calculator equipped with a suitable printer. The power plant may be located at any site for which a histogram of annual direct normal insolation is available. Inputs required by the program are the aperture area and the design and annual efficiencies of the concentrator; the intercept factor and apparent efficiency of the power conversion subsystem and a polynomial representation of its normalized part-load efficiency; the efficiency of the electrical generator or alternator; the efficiency of the electric power conditioning and transport subsystem; and the fractional parasitic losses for the plant. Losses to auxiliaries associated with each individual module are to be deducted when the power conversion subsystem efficiencies are calculated. Outputs provided by the program are the system design efficiency, the annualized receiver efficiency, the annualized power conversion subsystem efficiency, the total annual direct normal insolation received per unit area of concentrator aperture, and the system annual efficiency.
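
    A hedged sketch of the kind of calculation described, annual electrical output accumulated over a direct-normal-insolation histogram, is given below. The efficiency values, the histogram, and the part-load polynomial are invented placeholders, and the structure is a simplification of the original calculator program.

        A_aperture  = 90.0      # m^2
        eta_conc    = 0.90      # concentrator (annual) efficiency
        intercept   = 0.97      # receiver intercept factor
        eta_pcs_des = 0.30      # power-conversion subsystem efficiency at the design point
        eta_gen     = 0.93      # generator/alternator efficiency
        eta_cond    = 0.96      # power conditioning and transport efficiency
        parasitics  = 0.05      # fractional parasitic losses
        DNI_design  = 1.0       # kW/m^2 at the design point

        part_load = lambda f: max(0.0, -0.2 + 1.5 * f - 0.3 * f**2)   # invented normalised part-load curve

        # annual DNI histogram: (bin centre in kW/m^2, hours per year at that level) -- invented
        histogram = [(0.3, 400), (0.5, 600), (0.7, 900), (0.9, 1100), (1.0, 500)]

        energy_kwh = 0.0
        for dni, hours in histogram:
            q_thermal = dni * A_aperture * eta_conc * intercept            # kW delivered to the engine
            frac = dni / DNI_design                                        # load fraction
            p_elec = q_thermal * eta_pcs_des * part_load(frac) * eta_gen * eta_cond * (1 - parasitics)
            energy_kwh += p_elec * hours

        annual_dni = sum(d * h for d, h in histogram)                      # kWh/m^2/yr
        print(f"annual output ~ {energy_kwh:,.0f} kWh, system annual efficiency ~ "
              f"{energy_kwh / (annual_dni * A_aperture):.3f}")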

  2. A Simulation Program for Dynamic Infrared (IR) Spectra

    ERIC Educational Resources Information Center

    Zoerb, Matthew C.; Harris, Charles B.

    2013-01-01

    A free program for the simulation of dynamic infrared (IR) spectra is presented. The program simulates the spectrum of two exchanging IR peaks based on simple input parameters. Larger systems can be simulated with minor modifications. The program is available as an executable program for PCs or can be run in MATLAB on any operating system. Source…
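
    One standard way to model two exchanging peaks, which may or may not match the published program's internals, is a two-site kinetic (Bloch-McConnell-type) matrix whose resolvent gives the absorption lineshape. The Python sketch below (the original program runs in MATLAB) uses invented frequencies, widths, rates, and populations; varying the exchange rates moves the doublet between the slow-exchange and coalesced limits.

        import numpy as np

        w_a, w_b   = -10.0, 10.0          # site frequencies, relative to the band centre (invented)
        gamma      = 1.0                  # intrinsic half-width of each site (invented)
        k_ab, k_ba = 12.0, 12.0           # exchange rates a->b and b->a (invented)
        p = np.array([k_ba, k_ab]) / (k_ab + k_ba)      # equilibrium populations

        Omega = np.diag([w_a, w_b])
        Gamma = np.diag([gamma, gamma])
        K = np.array([[-k_ab,  k_ba],
                      [ k_ab, -k_ba]])
        Lam = 1j * Omega - Gamma + K      # free-induction decay: dG/dt = Lam G, G(0) = p

        w_axis = np.linspace(-40.0, 40.0, 2001)
        ones = np.ones(2)
        spectrum = np.array([np.real(ones @ np.linalg.solve(1j * w * np.eye(2) - Lam, p))
                             for w in w_axis])
        print("spectrum maximum at", w_axis[spectrum.argmax()])   # lower the rates to see two separate peaks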

  3. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    NASA Astrophysics Data System (ADS)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations, despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow fine-tuning for efficiency. The challenge lies in coupling this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages, so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  4. Refractive Secondary Concentrators for Solar Thermal Applications

    NASA Technical Reports Server (NTRS)

    Wong, Wayne A.; Macosko, Robert P.

    1999-01-01

    The NASA Glenn Research Center is developing technologies that utilize solar energy for various space applications including electrical power conversion, thermal propulsion, and furnaces. Common to all of these applications is the need for highly efficient solar concentration systems. An effort is underway to develop the innovative single crystal refractive secondary concentrator, which uses refraction and total internal reflection to efficiently concentrate and direct solar energy. The refractive secondary offers very high throughput efficiencies (greater than 90%), and when used in combination with advanced primary concentrators, enables very high concentration ratios (10,000 to 1) and very high temperatures (greater than 2000 K). Presented is an overview of the refractive secondary concentrator development effort at the NASA Glenn Research Center, including optical design and analysis techniques, thermal modeling capabilities, crystal materials characterization testing, optical coatings evaluation, and component testing. Also presented is a discussion of potential future activity and technical issues yet to be resolved. Much of the work performed to date has been in support of the NASA Marshall Space Flight Center's Solar Thermal Propulsion Program. The many benefits of a refractive secondary concentrator that enable efficient, high temperature thermal propulsion system designs apply equally well to other solar applications including furnaces and power generation systems such as solar dynamics, concentrated thermal photovoltaics, and thermionics.

  5. DPOI: Distributed software system development platform for ocean information service

    NASA Astrophysics Data System (ADS)

    Guo, Zhongwen; Hu, Keyong; Jiang, Yongguo; Sun, Zhaosui

    2015-02-01

    Ocean information management is of great importance as it has been employed in many areas of ocean science and technology. However, the developments of Ocean Information Systems (OISs) often suffer from low efficiency because of repetitive work and continuous modifications caused by dynamic requirements. In this paper, the basic requirements of OISs are analyzed first, and then a novel platform DPOI is proposed to improve development efficiency and enhance software quality of OISs by providing off-the-shelf resources. In the platform, the OIS is decomposed hierarchically into a set of modules, which can be reused in different system developments. These modules include the acquisition middleware and data loader that collect data from instruments and files respectively, the database that stores data consistently, the components that support fast application generation, the web services that make the data from distributed sources syntactical by use of predefined schemas and the configuration toolkit that enables software customization. With the assistance of the development platform, the software development needs no programming and the development procedure is thus accelerated greatly. We have applied the development platform in practical developments and evaluated its efficiency in several development practices and different development approaches. The results show that DPOI significantly improves development efficiency and software quality.

  6. Development and application of a crossbreeding simulation model for goat production systems in tropical regions.

    PubMed

    Tsukahara, Y; Oishi, K; Hirooka, H

    2011-12-01

    A deterministic simulation model was developed to estimate biological production efficiency and to evaluate goat crossbreeding systems under tropical conditions. The model involves 5 production systems: pure indigenous, first filial generations (F1), backcross (BC), composite breeds of F1 (CMP(F1)), and BC (CMP(BC)). The model first simulates growth, reproduction, lactation, and energy intakes of a doe and a kid on a 1-d time step at the individual level and thereafter the outputs are integrated into the herd dynamics program. The ability of the model to simulate individual performances was tested under a base situation. The simulation results represented daily BW changes, ME requirements, and milk yield and the estimates were within the range of published data. Two conventional goat production scenarios (an intensive milk production scenario and an integrated goat and oil palm production scenario) in Malaysia were examined. The simulation results of the intensive milk production scenario showed the greater production efficiency of the CMP(BC) and CMP(F1) systems and decreased production efficiency of the F1 and BC systems. The results of the integrated goat and oil palm production scenario showed that the production efficiency and stocking rate were greater for the indigenous goats than for the crossbreeding systems.

  7. The Dynamics of Language Program Direction. Issues in Language Program Direction: A Series of Annual Volumes.

    ERIC Educational Resources Information Center

    Benseler, David P., Ed.

    This collection of papers begins with "Introduction: The Dynamics of Successful Leadership in Foreign Language Programs," then features the following: "The Undergraduate Program: Autonomy and Empowerment" (Wilga M. Rivers); "TA Supervision: Are We Preparing a Future Professoriate?" (Cathy Pons); "Applied Scholarship…

  8. Dynamic optimization of metabolic networks coupled with gene expression.

    PubMed

    Waldherr, Steffen; Oyarzún, Diego A; Bockmayr, Alexander

    2015-01-21

    The regulation of metabolic activity by tuning enzyme expression levels is crucial to sustain cellular growth in changing environments. Metabolic networks are often studied at steady state using constraint-based models and optimization techniques. However, metabolic adaptations driven by changes in gene expression cannot be analyzed by steady state models, as these do not account for temporal changes in biomass composition. Here we present a dynamic optimization framework that integrates the metabolic network with the dynamics of biomass production and composition. An approximation by a timescale separation leads to a coupled model of quasi-steady state constraints on the metabolic reactions, and differential equations for the substrate concentrations and biomass composition. We propose a dynamic optimization approach to determine reaction fluxes for this model, explicitly taking into account enzyme production costs and enzymatic capacity. In contrast to the established dynamic flux balance analysis, our approach allows predicting dynamic changes in both the metabolic fluxes and the biomass composition during metabolic adaptations. Discretization of the optimization problems leads to a linear program that can be efficiently solved. We applied our algorithm in two case studies: a minimal nutrient uptake network, and an abstraction of core metabolic processes in bacteria. In the minimal model, we show that the optimized uptake rates reproduce the empirical Monod growth for bacterial cultures. For the network of core metabolic processes, the dynamic optimization algorithm predicted commonly observed metabolic adaptations, such as a diauxic switch with a preference ranking for different nutrients, re-utilization of waste products after depletion of the original substrate, and metabolic adaptation to an impending nutrient depletion. These examples illustrate how dynamic adaptations of enzyme expression can be predicted solely from an optimization principle. Copyright © 2014 Elsevier Ltd. All rights reserved.
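
    The quasi-steady-state building block referred to above is a flux-balance linear program: maximize a flux of interest subject to S v = 0 and bounds on the fluxes. The sketch below solves one such LP for an invented three-metabolite toy network with scipy; the paper's framework additionally couples a sequence of such problems to differential equations for substrates and biomass composition, which is not shown here.

        import numpy as np
        from scipy.optimize import linprog

        # columns: uptake, catabolism, respiration, biomass; rows: metabolites A, B, E (invented network)
        S = np.array([
            [1, -1,  0,  0],    # A: taken up, consumed by catabolism
            [0,  1, -1, -1],    # B: precursor made by catabolism, respired or built into biomass
            [0,  2,  3, -5],    # E: "energy" produced, consumed by biomass synthesis
        ])
        bounds = [(0, 10), (0, None), (0, None), (0, None)]   # uptake capped at 10

        res = linprog(c=[0, 0, 0, -1],                        # maximise the biomass flux
                      A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
        print("fluxes [uptake, catabolism, respiration, biomass]:", np.round(res.x, 3))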

  9. Improving Efficiency of Passive RFID Tag Anti-Collision Protocol Using Dynamic Frame Adjustment and Optimal Splitting.

    PubMed

    Memon, Muhammad Qasim; He, Jingsha; Yasir, Mirza Ammar; Memon, Aasma

    2018-04-12

    Radio frequency identification is a wireless communication technology that enables data gathering and identity recognition from any tagged object. Collisions produced during wireless communication lead to a variety of problems, including an unwanted number of iterations, reader-induced idle slots, and computational complexity in estimating and recognizing the number of tags. In this work, dynamic frame adjustment and optimal splitting are employed together in the proposed algorithm. In the dynamic frame adjustment method, the length of each frame is based on the quantity of tags, to yield optimal efficiency. The optimal splitting method shortens idle slots by using an optimal value of the splitting level M_opt (with M > 2) to vary slot sizes and minimize the identification time spent on idle slots. The proposed algorithm offers the advantages that it avoids cumbersome estimation of the number of tags and that the number of tags has no effect on its efficiency. Our experimental results show that, using the proposed algorithm, the efficiency curve remains constant as the number of tags varies from 50 to 450, resulting in an overall theoretical gain in efficiency of 0.032 over a system efficiency of 0.441, thus outperforming both the dynamic binary tree slotted ALOHA (DBTSA) and binary splitting protocols.
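
    For orientation, the Monte Carlo sketch below simulates the baseline that such schemes improve on: dynamic framed slotted ALOHA in which each frame length is set equal to the number of still-unidentified tags. It illustrates why the baseline efficiency sits near 1/e and is roughly flat in the number of tags; the paper's optimal-splitting refinement is not implemented here.

        import numpy as np

        def identify_all(n_tags, rng):
            slots_used, remaining = 0, n_tags
            while remaining > 0:
                frame = max(remaining, 1)                       # frame length = unread-tag estimate
                choices = rng.integers(0, frame, size=remaining)
                counts = np.bincount(choices, minlength=frame)
                remaining -= int(np.sum(counts == 1))           # singleton slots are successful reads
                slots_used += frame
            return slots_used

        rng = np.random.default_rng(1)
        for n in (50, 200, 450):
            trials = [identify_all(n, rng) for _ in range(200)]
            print(f"{n} tags: efficiency ~ {n / np.mean(trials):.3f}")   # roughly 0.35-0.37, flat in n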

  10. New developments in water efficiency

    NASA Astrophysics Data System (ADS)

    Gregg, Tony T.; Dewees, Amanda; Gross, Drema; Hoffman, Bill; Strub, Dan; Watson, Matt

    2006-10-01

    An overview of significant new developments in water efficiency is presented in this paper. The areas covered will be legislative, regulatory, new programs or program wrinkles, new products, and new studies on the effectiveness of conservation programs. Examples include state and local level efficiency regulations in Texas; the final results of the national submetering study for apartments in the US; the US effort to adopt the IWA protocols for leak detection; new water efficient commercial products such as ET irrigation controllers, new models of efficient clothes washers, and innovative toilet designs.

  11. The NASA Aircraft Energy Efficiency Program

    NASA Technical Reports Server (NTRS)

    Klineberg, J. M.

    1978-01-01

    The objective of the NASA Aircraft Energy Efficiency Program is to accelerate the development of advanced technology for more energy-efficient subsonic transport aircraft. This program will have application to current transport derivatives in the early 1980s and to all-new aircraft of the late 1980s and early 1990s. Six major technology projects were defined that could result in fuel savings in commercial aircraft: (1) Engine Component Improvement, (2) Energy Efficient Engine, (3) Advanced Turboprops, (4) Energy Efficiency Transport (aerodynamically speaking), (5) Laminar Flow Control, and (6) Composite Primary Structures.

  12. Global Dynamic Modeling of Space-Geodetic Data

    NASA Technical Reports Server (NTRS)

    Bird, Peter

    1995-01-01

    The proposal had outlined a year for program conversion, a year for testing and debugging, and two years for numerical experiments, and we kept to that schedule. In the first (partial) year, the author designed a finite element for isostatic thin-shell deformation on a sphere, derived all of its algebraic and stiffness properties, and embedded it in a new finite element code that derives its basic solution strategy (and some critical subroutines) from earlier flat-Earth codes. The author also designed and programmed a new fault element to represent faults along plate boundaries, wrote a preliminary version of a spherical graphics program for the display of output, tested the new code for accuracy on individual model plates, made estimates of the computer-time/cost efficiency of the code for whole-Earth grids (which were reasonable), and finally converted an interactive graphical grid-designer program from Cartesian to spherical geometry to permit the beginning of serious modeling. For reasons of cost efficiency, the models are isostatic and do not consider the local effects of unsupported loads or bending stresses. The requirements are: (1) the ability to represent rigid rotation on a sphere; (2) the ability to represent a spatially uniform strain-rate tensor in the limit of small elements; and (3) continuity of velocity across all element boundaries. The author designed a 3-node triangular shell element with two different sets of basis functions to represent (vector) velocity and all other (scalar) variables. Such elements can be shown to converge to the formulas for plane triangles in the limit of small size, but can also be applied to cover any area smaller than a hemisphere. The difficult volume integrals involved in computing the stiffness of such elements are performed numerically using 7 Gauss integration points on the surface of the sphere, beneath each of which a vertical integral is performed using about 100 points.
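
    For reference, the sketch below implements the standard 7-point, degree-5 Gauss rule on a planar reference triangle, the flat analogue of the 7-point surface quadrature mentioned above; the spherical version additionally needs the surface Jacobian of the element mapping, which is not included.

        import numpy as np

        def triangle_rule():
            s = np.sqrt(15.0)
            a1, w1 = (6.0 + s) / 21.0, (155.0 + s) / 1200.0
            a2, w2 = (6.0 - s) / 21.0, (155.0 - s) / 1200.0
            pts = [(1/3, 1/3, 1/3, 9/40)]
            for a, w in ((a1, w1), (a2, w2)):
                b = 1.0 - 2.0 * a
                pts += [(b, a, a, w), (a, b, a, w), (a, a, b, w)]
            return pts                        # barycentric coordinates and weights (weights sum to 1)

        def integrate(f, v0, v1, v2):
            """Integrate f(x, y) over the planar triangle with vertices v0, v1, v2."""
            v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
            area = 0.5 * abs((v1[0] - v0[0]) * (v2[1] - v0[1]) - (v1[1] - v0[1]) * (v2[0] - v0[0]))
            return area * sum(w * f(*(l0 * v0 + l1 * v1 + l2 * v2))
                              for l0, l1, l2, w in triangle_rule())

        # exact for polynomials up to degree 5: the integral of x^2*y over the unit triangle is 1/60
        print(integrate(lambda x, y: x**2 * y, (0, 0), (1, 0), (0, 1)))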

  13. QoS support for end users of I/O-intensive applications using shared storage systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Marion Kei; Zhang, Xuechen; Jiang, Song

    2011-01-19

    I/O-intensive applications are becoming increasingly common on today's high-performance computing systems. While performance of compute-bound applications can be effectively guaranteed with techniques such as space sharing or QoS-aware process scheduling, it remains a challenge to meet QoS requirements for end users of I/O-intensive applications using shared storage systems because it is difficult to differentiate I/O services for different applications with individual quality requirements. Furthermore, it is difficult for end users to accurately specify performance goals to the storage system using I/O-related metrics such as request latency or throughput. As access patterns, request rates, and the system workload change in time, a fixed I/O performance goal, such as bounds on throughput or latency, can be expensive to achieve and may not lead to meaningful performance guarantees such as bounded program execution time. We propose a scheme supporting end users' QoS goals, specified in terms of program execution time, in shared storage environments. We automatically translate the users' performance goals into instantaneous I/O throughput bounds using a machine learning technique, and use dynamically determined service time windows to efficiently meet the throughput bounds. We have implemented this scheme in the PVFS2 parallel file system and have conducted an extensive evaluation. Our results show that this scheme can satisfy realistic end-user QoS requirements by making highly efficient use of the I/O resources. The scheme seeks to balance programs' attainment of QoS requirements and saves as much of the remaining I/O capacity as possible for best-effort programs.
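
    As a deliberately simple stand-in for the machine-learning translation step described above, the sketch below fits a runtime-versus-throughput model to invented profiling samples and inverts it to turn a user's execution-time goal into an I/O throughput bound. The model form, the data, and the numbers are all assumptions, not the paper's method.

        import numpy as np

        # profiled (I/O throughput in MB/s, observed runtime in s) pairs -- invented
        samples = np.array([(20, 310), (40, 170), (60, 128), (80, 105), (120, 82)], dtype=float)
        tput, runtime = samples[:, 0], samples[:, 1]

        # least-squares fit of the toy model: runtime ~ a + b / throughput
        A = np.column_stack([np.ones_like(tput), 1.0 / tput])
        (a, b), *_ = np.linalg.lstsq(A, runtime, rcond=None)

        target_runtime = 100.0                       # user's QoS goal in seconds
        tput_bound = b / (target_runtime - a)        # invert the fitted model
        print(f"need sustained I/O throughput >= {tput_bound:.1f} MB/s")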

  14. Programming PHREEQC calculations with C++ and Python a comparative study

    USGS Publications Warehouse

    Charlton, Scott R.; Parkhurst, David L.; Muller, Mike

    2011-01-01

    The new IPhreeqc module provides an application programming interface (API) to facilitate coupling of other codes with the U.S. Geological Survey geochemical model PHREEQC. Traditionally, loose coupling of PHREEQC with other applications required methods to create PHREEQC input files, start external PHREEQC processes, and process PHREEQC output files. IPhreeqc eliminates most of this effort by providing direct access to PHREEQC capabilities through a component object model (COM), a library, or a dynamically linked library (DLL). Input and calculations can be specified through internally programmed strings, and all data exchange between an application and the module can occur in computer memory. This study compares simulations programmed in C++ and Python that are tightly coupled with IPhreeqc modules to the traditional simulations that are loosely coupled to PHREEQC. The study compares performance, quantifies effort, and evaluates lines of code and the complexity of the design. The comparisons show that IPhreeqc offers a more powerful and simpler approach for incorporating PHREEQC calculations into transport models and other applications that need to perform PHREEQC calculations. The IPhreeqc module facilitates the design of coupled applications and significantly reduces run times. Even a moderate knowledge of one of the supported programming languages allows more efficient use of PHREEQC than the traditional loosely coupled approach.

  15. 77 FR 31756 - Energy Conservation Program: Alternative Efficiency Determination Methods and Alternative Rating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-30

    ...-AC46 Energy Conservation Program: Alternative Efficiency Determination Methods and Alternative Rating Methods: Public Meeting AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy... regulations authorizing the use of alternative methods of determining energy efficiency or energy consumption...

  16. 75 FR 32177 - Energy Efficiency Program for Consumer Products: Commonwealth of Massachusetts Petition for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-07

    ... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy [Docket Number EERE-BT-PET-0024] Energy Efficiency Program for Consumer Products: Commonwealth of Massachusetts Petition for Exemption From Federal Preemption of Massachusetts' Energy Efficiency Standard for Residential Non...

  17. MCPB.py: A Python Based Metal Center Parameter Builder.

    PubMed

    Li, Pengfei; Merz, Kenneth M

    2016-04-25

    MCPB.py, a Python-based metal center parameter builder, has been developed to build force fields for the simulation of metal complexes employing the bonded model approach. It has an optimized code structure, with far fewer required steps than the previously developed MCPB program. It supports various AMBER force fields and more than 80 metal ions. A series of parametrization schemes to derive force constants and charge parameters are available within the program. We give two examples (one metalloprotein example and one organometallic compound example), indicating the program's ability to build reliable force fields for different metal-ion-containing complexes. The original version was released with AmberTools15. It is provided via the GNU General Public License v3.0 (GNU_GPL_v3) agreement and is free to download and distribute. MCPB.py provides a bridge between quantum mechanical calculations and molecular dynamics simulation software packages, thereby enabling the modeling of metal ion centers. It offers an entry into simulating metal ions in a number of situations by providing an efficient way for researchers to handle the vagaries and difficulties associated with metal ion modeling.

  18. Optimal Energy Consumption Analysis of Natural Gas Pipeline

    PubMed Central

    Liu, Enbin; Li, Changjun; Yang, Yi

    2014-01-01

    There are many compressor stations along long-distance natural gas pipelines. Natural gas can be transported using different boot programs and import pressures, combined with temperature control parameters. Moreover, different transport methods have correspondingly different energy consumptions. At present, the operating parameters of many pipelines are determined empirically by dispatchers, resulting in high energy consumption. This practice does not abide by energy reduction policies. Therefore, based on a full understanding of the actual needs of pipeline companies, we introduce production unit consumption indicators to establish an objective function for achieving the goal of lowering energy consumption. By using a dynamic programming method for solving the model and preparing calculation software, we can ensure that the solution process is quick and efficient. Using established optimization methods, we analyzed the energy savings for the XQ gas pipeline. By optimizing the boot program, the import station pressure, and the temperature parameters, we achieved the optimal energy consumption. By comparison with the measured energy consumption, the pipeline now has the potential to reduce energy consumption by 11 to 16 percent. PMID:24955410
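
    The sketch below illustrates the dynamic-programming idea in miniature: stage by stage (station by station), it chooses a discharge pressure from a discretized set so as to minimize an invented compression-energy proxy, subject to a fixed pressure drop along each segment and a minimum suction pressure. The pressure-drop and energy models and all numbers are placeholders, not the paper's pipeline model.

        import numpy as np

        levels = np.arange(5.0, 8.01, 0.25)       # candidate discharge pressures (MPa), invented
        n_stations = 4
        drop = 1.5                                # pressure lost along each segment (MPa), invented
        p_min_suction = 4.0                       # minimum allowed suction pressure (MPa)

        def energy(p_in, p_out):
            """Invented compression-energy proxy, increasing with the pressure ratio."""
            return np.inf if p_out < p_in else (p_out / p_in) ** 0.7 - 1.0

        best = {5.0: 0.0}                         # DP state: suction pressure entering the next station
        for _ in range(n_stations):
            new_best = {}
            for p_in, cost in best.items():
                for p_out in levels:
                    p_next = p_out - drop         # suction pressure at the following station
                    if p_next < p_min_suction:
                        continue
                    c = cost + energy(p_in, p_out)
                    if c < new_best.get(p_next, np.inf):
                        new_best[p_next] = c
            best = new_best

        print(f"minimum total energy proxy over {n_stations} stations: {min(best.values()):.3f}")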

  19. Loci-STREAM Version 0.9

    NASA Technical Reports Server (NTRS)

    Wright, Jeffrey; Thakur, Siddharth

    2006-01-01

    Loci-STREAM is an evolving computational fluid dynamics (CFD) software tool for simulating possibly chemically reacting, possibly unsteady flows in diverse settings, including rocket engines, turbomachines, oil refineries, etc. Loci-STREAM implements a pressure- based flow-solving algorithm that utilizes unstructured grids. (The benefit of low memory usage by pressure-based algorithms is well recognized by experts in the field.) The algorithm is robust for flows at all speeds from zero to hypersonic. The flexibility of arbitrary polyhedral grids enables accurate, efficient simulation of flows in complex geometries, including those of plume-impingement problems. The present version - Loci-STREAM version 0.9 - includes an interface with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library for access to enhanced linear-equation-solving programs therein that accelerate convergence toward a solution. The name "Loci" reflects the creation of this software within the Loci computational framework, which was developed at Mississippi State University for the primary purpose of simplifying the writing of complex multidisciplinary application programs to run in distributed-memory computing environments including clusters of personal computers. Loci has been designed to relieve application programmers of the details of programming for distributed-memory computers.

  20. Dynamic Cytology and Transcriptional Regulation of Rice Lamina Joint Development

    PubMed Central

    2017-01-01

    Rice (Oryza sativa) leaf angle is determined by lamina joint and is an important agricultural trait determining leaf erectness and, hence, the photosynthesis efficiency and grain yield. Genetic studies reveal a complex regulatory network of lamina joint development; however, the morphological changes, cytological transitions, and underlying transcriptional programming remain to be elucidated. A systemic morphological and cytological study reveals a dynamic developmental process and suggests a common but distinct regulation of the lamina joint. Successive and sequential cell division and expansion, cell wall thickening, and programmed cell death at the adaxial or abaxial sides form the cytological basis of the lamina joint, and the increased leaf angle results from the asymmetric cell proliferation and elongation. Analysis of the gene expression profiles at four distinct developmental stages ranging from initiation to senescence showed that genes related to cell division and growth, hormone synthesis and signaling, transcription (transcription factors), and protein phosphorylation (protein kinases) exhibit distinct spatiotemporal patterns during lamina joint development. Phytohormones play crucial roles by promoting cell differentiation and growth at early stages or regulating the maturation and senescence at later stages, which is consistent with the quantitative analysis of hormones at different stages. Further comparison with the gene expression profile of leaf inclination1, a mutant with decreased auxin and increased leaf angle, indicates the coordinated effects of hormones in regulating lamina joint. These results reveal a dynamic cytology of rice lamina joint that is fine-regulated by multiple factors, providing informative clues for illustrating the regulatory mechanisms of leaf angle and plant architecture. PMID:28500269
