Sample records for concurrency control algorithm

  1. Concurrency control for transactions with priorities

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith

    1989-01-01

    Priority inversion occurs when a process is delayed by the actions of another process with less priority. With atomic transactions, the concurrency control mechanism can cause delays, and without taking priorities into account can be a source of priority inversion. Three traditional concurrency control algorithms are extended so that they are free from unbounded priority inversion.
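
    The paper's specific extensions are not reproduced in this record, but one standard way to bound priority inversion in lock-based concurrency control is a wound-wait rule: a higher-priority transaction aborts ("wounds") a lower-priority lock holder instead of queueing behind it. A minimal sketch in Python (all names hypothetical; this is not Marzullo's algorithm):

```python
import threading

class Txn:
    def __init__(self, tid, priority):
        self.tid, self.priority = tid, priority
        self.aborted = False          # a wounded txn must notice this and release

class PriorityLock:
    """Single exclusive lock with a wound-wait rule: a high-priority
    requester wounds a lower-priority holder rather than waiting behind it."""
    def __init__(self):
        self._cv = threading.Condition()
        self._holder = None

    def acquire(self, txn):
        with self._cv:
            while self._holder is not None:
                if txn.priority > self._holder.priority:
                    self._holder.aborted = True     # wound the low-priority holder
                self._cv.wait(timeout=0.1)          # holder releases when it aborts
                if txn.aborted:
                    raise RuntimeError(f"txn {txn.tid} wounded")
            self._holder = txn

    def release(self, txn):
        with self._cv:
            if self._holder is txn:
                self._holder = None
                self._cv.notify_all()
```

    Because a high-priority requester never queues behind an unbounded chain of low-priority holders, the inversion it can suffer is bounded by the time a wounded holder needs to notice the flag and release.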

  2. Concurrency control for transactions with priorities

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith

    1989-01-01

    Priority inversion occurs when a process is delayed by the actions of another process with less priority. With atomic transactions, the concurrency control mechanism can cause delays, and without taking priorities into account can be a source of priority inversion. In this paper, three traditional concurrency control algorithms are extended so that they are free from unbounded priority inversion.

  3. Distributed Database Control and Allocation. Volume 2. Performance Analysis of Concurrency Control Algorithms.

    DTIC Science & Technology

    1983-10-01

    Performance Analysis of Concurrency Control Algorithms. Computer Corporation of America. Wente K. Lin, Philip A. Bernstein, Nathan Goodman and Jerry Nolte. Approved for public release. This report has been reviewed by the RADC Public Affairs Office (PA) and is releasable to the National Technical Information Service (NTIS). At NTIS it will be releasable to the general public, including foreign nations. RADC-TR-83-226, Vol II (of three) has been reviewed and is ...

  4. Issues in Real-Time Data Management.

    DTIC Science & Technology

    1991-07-01

    2. Multiversion concurrency control [5] interprets write operations as the creation of new versions of the items (in contrast to the update-in-place approach) ... features of optimistic (deferred writing, delayed selection of serialization order) and multiversion concurrency control. They do not present any ... "Multiversion Concurrency Control - Theory and Algorithms". ACM Transactions on Database Systems 8, 4 (December 1983), 465-484. 6. Buchman, A. P.

  5. Petri net model for analysis of concurrently processed complex algorithms

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1986-01-01

    This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.
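
    For background, a marked graph is a Petri net in which each place has exactly one input and one output transition, and a transition fires by consuming a token from every input place and producing one on every output place. A minimal firing-rule sketch (the three-transition net below is an illustrative example, not the paper's model):

```python
# Minimal marked-graph (Petri net) simulator: a transition is enabled when
# every input place holds at least one token; firing moves tokens along.
places = {"p1": 1, "p2": 0, "p3": 0}          # initial marking (tokens per place)
transitions = {                                # name -> (input places, output places)
    "t1": (["p1"], ["p2"]),
    "t2": (["p2"], ["p3"]),
    "t3": (["p3"], ["p1"]),
}

def enabled(t):
    ins, _ = transitions[t]
    return all(places[p] > 0 for p in ins)

def fire(t):
    ins, outs = transitions[t]
    for p in ins:  places[p] -= 1              # consume input tokens
    for p in outs: places[p] += 1              # produce output tokens

for step in range(6):                          # fire any enabled transition
    t = next(t for t in transitions if enabled(t))
    fire(t)
    print(step, t, places)
```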

  6. Improving generalized inverted index lock wait times

    NASA Astrophysics Data System (ADS)

    Borodin, A.; Mirvoda, S.; Porshnev, S.; Ponomareva, O.

    2018-01-01

    Concurrent operations on tree-like data structures are a cornerstone of any database system. Concurrent operations are intended to improve read/write performance and are usually implemented via some form of locking. Deadlock-free methods of concurrency control are known as tree locking protocols. These protocols provide basic operations (verbs) and an algorithm (ways of invoking the operations) for applying them to any tree-like data structure. These algorithms operate on data managed by a storage engine, and storage engines differ widely among RDBMS implementations. In this paper, we discuss a tree locking protocol implementation for the Generalized Inverted Index (GIN) applied to the multiversion concurrency control (MVCC) storage engine inside the PostgreSQL RDBMS. We then introduce improvements to the locking protocol and provide usage statistics from an evaluation of our improvement in a very high load environment at one of the world's largest IT companies.
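
    The record does not spell out the protocol, but a classic deadlock-free tree-locking discipline is lock coupling ("crabbing"): take the child's lock before releasing the parent's, so the traversal always holds a lock on the path. A minimal sketch (illustrative, not the GIN protocol itself):

```python
import threading

class Node:
    def __init__(self, key, children=None):
        self.key = key
        self.children = children or []
        self.lock = threading.Lock()

def descend(root, choose_child):
    """Lock coupling (crabbing): lock the child before releasing the
    parent, so no writer can slip in between the two levels."""
    node = root
    node.lock.acquire()
    while node.children:
        child = choose_child(node)
        child.lock.acquire()      # take the child's lock first...
        node.lock.release()       # ...then let go of the parent
        node = child
    return node                   # returned leaf is still locked by the caller
```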

  7. Generalized concurrence in boson sampling.

    PubMed

    Chin, Seungbeom; Huh, Joonsuk

    2018-04-17

    A fundamental question in linear optical quantum computing is to understand the origin of the quantum supremacy in the physical system. It is found that the multimode linear optical transition amplitudes are calculated through the permanents of transition operator matrices, which is a hard problem for classical simulations (the boson sampling problem). We can understand this problem by considering a quantum measure that directly determines the runtime for computing the transition amplitudes. In this paper, we suggest a quantum measure named the "Fock state concurrence sum" C_S, which is the summation over all members of "the generalized Fock state concurrence" (a measure analogous to the generalized concurrences of entanglement and coherence). By introducing generalized algorithms for computing the transition amplitudes of Fock state boson sampling with an arbitrary number of photons per mode, we show that the minimal classical runtime for all known algorithms depends directly on C_S. Therefore, we can state that the Fock state concurrence sum C_S behaves as a collective measure that controls the computational complexity of Fock state boson sampling. We expect that our observation on the role of the Fock state concurrence in the generalized algorithm for permanents will provide a unified viewpoint from which to interpret the quantum computing power of linear optics.
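
    The permanents referred to here can be computed classically with Ryser's inclusion-exclusion formula, whose exponential cost is the source of the hardness discussed above. A small illustrative implementation (not the paper's generalized algorithm):

```python
from itertools import combinations

def permanent(a):
    """Permanent via Ryser's formula, O(2^n * n^2):
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^{|S|} * prod_i sum_{j in S} a[i][j]."""
    n = len(a)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```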

  8. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1987-01-01

    The results of ongoing research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a spatial distributed computer environment is presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.

  9. Method for concurrent execution of primitive operations by dynamically assigning operations based upon computational marked graph and availability of data

    NASA Technical Reports Server (NTRS)

    Mielke, Roland V. (Inventor); Stoughton, John W. (Inventor)

    1990-01-01

    Computationally complex primitive operations of an algorithm are executed concurrently in a plurality of functional units under the control of an assignment manager. The algorithm is preferably defined as a computationally marked graph containing data status edges (paths) corresponding to each of the data flow edges. The assignment manager assigns primitive operations to the functional units and monitors completion of the primitive operations to determine data availability using the computational marked graph of the algorithm. All data accessing of the primitive operations is performed by the functional units independently of the assignment manager.

  10. Modeling and implementation of concurrent logic controllers with use of Petri nets, LSMs, and sequent calculus

    NASA Astrophysics Data System (ADS)

    Tkacz, J.; Bukowiec, A.; Doligalski, M.

    2017-08-01

    The paper presents a method for modeling and implementing concurrent controllers. Concurrent controllers are specified by Petri nets, which are then decomposed using a symbolic deduction method of analysis. Formal methods such as a sequent calculus system incorporating elements of Thelen's algorithm are used. As a result, linked state machines (LSMs) are obtained. Each FSM is implemented using methods of structural decomposition during the logic synthesis process. The method of multiple encoding of microinstructions is applied, which decreases the number of Boolean functions realized by the combinational part of the FSM. An additional decoder can be implemented with the use of memory blocks.

  11. NASA Workshop on Computational Structural Mechanics 1987, part 3

    NASA Technical Reports Server (NTRS)

    Sykes, Nancy P. (Editor)

    1989-01-01

    Computational Structural Mechanics (CSM) topics are explored. Algorithms and software for nonlinear structural dynamics, concurrent algorithms for transient finite element analysis, computational methods and software systems for dynamics and control of large space structures, and the use of multi-grid for structural analysis are discussed.

  12. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.
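
    As a minimal illustration of the decomposition idea (not the reported hypercube code), the sketch below splits an explicit finite-difference heat-equation solve across index blocks; on a real concurrent machine each block would live on its own processor and exchange one-cell halos with its neighbours each step:

```python
import numpy as np

# Explicit heat equation u_t = u_xx on [0, 1], split into subdomains; the
# "processors" here are index blocks updated in a loop, sharing halo cells
# through the common array (a stand-in for message passing).
nx, nproc, dt, dx = 64, 4, 1e-5, 1.0 / 63
u = np.sin(np.pi * np.linspace(0, 1, nx))           # initial condition
chunks = np.array_split(np.arange(nx), nproc)       # index range per "processor"

for step in range(1000):
    unew = u.copy()
    for idx in chunks:                               # each worker updates its slice
        lo, hi = idx[0], idx[-1]
        for i in range(max(lo, 1), min(hi, nx - 2) + 1):
            # neighbours u[i-1], u[i+1] may live in another worker's halo
            unew[i] = u[i] + dt / dx**2 * (u[i-1] - 2*u[i] + u[i+1])
    u = unew                                         # boundaries stay fixed (Dirichlet)
```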

  13. Retrospective Cost Adaptive Control with Concurrent Closed-Loop Identification

    NASA Astrophysics Data System (ADS)

    Sobolic, Frantisek M.

    Retrospective cost adaptive control (RCAC) is a discrete-time direct adaptive control algorithm for stabilization, command following, and disturbance rejection. RCAC is known to work on systems given minimal modeling information, namely the leading numerator coefficient and any nonminimum-phase (NMP) zeros of the plant transfer function. This information is normally needed a priori and is key in the development of the filter, also known as the target model, within the retrospective performance variable. A novel approach is developed to alleviate the need for prior modeling of both the leading coefficient of the plant transfer function and any NMP zeros. The extension to the RCAC algorithm is the concurrent optimization of both the target model and the controller coefficients. This optimization problem is quadratic in the target model and in the controller coefficients separately; however, it is not convex as a joint function of both variables, and therefore nonconvex optimization methods are needed. Finally, insights within RCAC, including intercalated injection between the controller numerator and denominator, unveil how RCAC fits a specific closed-loop transfer function to the target model. We exploit this interpretation by investigating several closed-loop identification architectures in order to extract this information for use in the target model.

  14. Concurrent design of an RTP chamber and advanced control system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spence, P.; Schaper, C.; Kermani, A.

    1995-12-31

    A concurrent-engineering approach is applied to the development of an axisymmetric rapid-thermal-processing (RTP) reactor and its associated temperature controller. Using a detailed finite-element thermal model as a surrogate for actual hardware, the authors have developed and tested a multi-input multi-output (MIMO) controller. Closed-loop simulations are performed by linking the control algorithm with the finite-element code. Simulations show that good temperature uniformity is maintained on the wafer during both steady and transient conditions. A numerical study shows the effect of ramp rate, feedback gain, sensor placement, and wafer-emissivity patterns on system performance.

  15. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such algorithms are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor with local program memory that communicates with a common global data memory. A new graph theoretic model called ATAMM, which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture, is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.

  16. High-throughput state-machine replication using software transactional memory.

    PubMed

    Zhao, Wenbing; Yang, William; Zhang, Honglei; Yang, Jack; Luo, Xiong; Zhu, Yueqin; Yang, Mary; Luo, Chaomin

    2016-11-01

    State-machine replication is a common way of constructing general purpose fault tolerance systems. To ensure replica consistency, requests must be executed sequentially according to some total order at all non-faulty replicas. Unfortunately, this could severely limit the system throughput. This issue has been partially addressed by identifying non-conflicting requests based on application semantics and executing these requests concurrently. However, identifying and tracking non-conflicting requests require intimate knowledge of application design and implementation, and a custom fault tolerance solution developed for one application cannot be easily adopted by other applications. Software transactional memory offers a new way of constructing concurrent programs. In this article, we present the mechanisms needed to retrofit existing concurrency control algorithms designed for software transactional memory for state-machine replication. The main benefit of using software transactional memory in state-machine replication is that general purpose concurrency control mechanisms can be designed without deep knowledge of application semantics. As such, new fault tolerance systems based on state-machine replication with excellent throughput can be easily designed and maintained. In this article, we introduce three different concurrency control mechanisms for state-machine replication using software transactional memory, namely, ordered strong strict two-phase locking, conventional timestamp-based multiversion concurrency control, and speculative timestamp-based multiversion concurrency control. Our experiments show that the speculative timestamp-based multiversion concurrency control mechanism has the best performance under all types of workload, while the conventional timestamp-based multiversion concurrency control offers the worst performance due to a high abort rate in the presence of even moderate contention between transactions. The ordered strong strict two-phase locking mechanism offers the simplest solution, with excellent performance in low contention workloads and fairly good performance in high contention workloads.
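
    As a toy illustration of the timestamp-based multiversion idea (not the article's replicated STM runtime), the sketch below lets a reader see a consistent snapshot without ever blocking writers:

```python
class MVStore:
    """Toy timestamp-ordered multiversion store: writers append
    (commit_ts, value) versions in commit order; a reader with snapshot
    timestamp ts sees the latest version with commit_ts <= ts."""
    def __init__(self):
        self.versions = {}                       # key -> [(commit_ts, value)]

    def write(self, key, value, commit_ts):
        # assumed to be called in increasing commit_ts order per key
        self.versions.setdefault(key, []).append((commit_ts, value))

    def read(self, key, snapshot_ts):
        visible = [v for ts, v in self.versions.get(key, []) if ts <= snapshot_ts]
        return visible[-1] if visible else None

s = MVStore()
s.write("x", "v1", commit_ts=10)
s.write("x", "v2", commit_ts=20)
print(s.read("x", snapshot_ts=15))   # -> v1 (snapshot precedes v2's commit)
```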

  17. High-throughput state-machine replication using software transactional memory

    PubMed Central

    Yang, William; Zhang, Honglei; Yang, Jack; Luo, Xiong; Zhu, Yueqin; Yang, Mary; Luo, Chaomin

    2017-01-01

    State-machine replication is a common way of constructing general purpose fault tolerance systems. To ensure replica consistency, requests must be executed sequentially according to some total order at all non-faulty replicas. Unfortunately, this could severely limit the system throughput. This issue has been partially addressed by identifying non-conflicting requests based on application semantics and executing these requests concurrently. However, identifying and tracking non-conflicting requests require intimate knowledge of application design and implementation, and a custom fault tolerance solution developed for one application cannot be easily adopted by other applications. Software transactional memory offers a new way of constructing concurrent programs. In this article, we present the mechanisms needed to retrofit existing concurrency control algorithms designed for software transactional memory for state-machine replication. The main benefit of using software transactional memory in state-machine replication is that general purpose concurrency control mechanisms can be designed without deep knowledge of application semantics. As such, new fault tolerance systems based on state-machine replication with excellent throughput can be easily designed and maintained. In this article, we introduce three different concurrency control mechanisms for state-machine replication using software transactional memory, namely, ordered strong strict two-phase locking, conventional timestamp-based multiversion concurrency control, and speculative timestamp-based multiversion concurrency control. Our experiments show that the speculative timestamp-based multiversion concurrency control mechanism has the best performance under all types of workload, while the conventional timestamp-based multiversion concurrency control offers the worst performance due to a high abort rate in the presence of even moderate contention between transactions. The ordered strong strict two-phase locking mechanism offers the simplest solution, with excellent performance in low contention workloads and fairly good performance in high contention workloads. PMID:29075049

  18. Suboptimal Scheduling in Switched Systems With Continuous-Time Dynamics: A Least Squares Approach.

    PubMed

    Sardarmehni, Tohid; Heydari, Ali

    2018-06-01

    Two approximate solutions for optimal control of switched systems with autonomous subsystems and continuous-time dynamics are presented. The first solution formulates a policy iteration (PI) algorithm for the switched systems with recursive least squares. To reduce the computational burden imposed by the PI algorithm, a second solution, called single-loop PI, is presented. Online and concurrent training algorithms are discussed for implementing each solution. Finally, the effectiveness of the presented algorithms is evaluated through numerical simulations.
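
    The paper's least-squares PI for switched systems is not reproduced here, but the underlying policy-iteration loop is easy to show on a tabular MDP; everything below (sizes, random dynamics) is an illustrative assumption:

```python
import numpy as np

# Tabular policy iteration on a toy MDP: alternate exact policy evaluation
# (solve a linear system) with greedy policy improvement until stable.
nS, nA, gamma = 3, 2, 0.9
P = np.random.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] -> next-state dist
R = np.random.rand(nS, nA)                            # immediate rewards
pi = np.zeros(nS, dtype=int)

for _ in range(50):
    # evaluation: V = (I - gamma * P_pi)^-1 R_pi
    Ppi = P[np.arange(nS), pi]
    Rpi = R[np.arange(nS), pi]
    V = np.linalg.solve(np.eye(nS) - gamma * Ppi, Rpi)
    # improvement: greedy with respect to Q(s, a) = R + gamma * P V
    Q = R + gamma * P @ V
    new_pi = Q.argmax(axis=1)
    if np.array_equal(new_pi, pi):
        break
    pi = new_pi
print("optimal policy:", pi)
```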

  19. Distributed Database Control and Allocation. Volume 1. Frameworks for Understanding Concurrency Control and Recovery Algorithms.

    DTIC Science & Technology

    1983-10-01

    ... an Abort_i, it forwards the operation directly to the recovery system. When the recovery system acknowledges that the operation has been processed ... Abort_i: write T_i into the abort list, then undo all of T_i's writes by reading their before-images from the audit trail and writing them back into the stable database. [Ack] Then delete T_i from the active list. Restart: process Abort_i for each T_i on the active list. [Ack] In this algorithm

  20. Fast-kick-off monotonically convergent algorithm for searching optimal control fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Sheng-Lun; Ho, Tak-San; Rabitz, Herschel

    2011-09-15

    This Rapid Communication presents a fast-kick-off search algorithm for quickly finding optimal control fields in state-to-state transition probability control problems, especially those with poorly chosen initial control fields. The algorithm is based on a recently formulated monotonically convergent scheme [T.-S. Ho and H. Rabitz, Phys. Rev. E 82, 026703 (2010)]. Specifically, the local temporal refinement of the control field at each iteration is weighted by a fractional inverse power of the instantaneous overlap of the backward-propagating wave function, associated with the target state and the control field from the previous iteration, and the forward-propagating wave function, associated with the initial state and the concurrently refining control field. Extensive numerical simulations for controls of vibrational transitions and ultrafast electron tunneling show that the new algorithm not only greatly improves the search efficiency but also is able to attain good monotonic convergence quality when further frequency constraints are required. The algorithm is particularly effective when the corresponding control dynamics involves a large number of energy levels or ultrashort control pulses.

  1. Computational complexities and storage requirements of some Riccati equation solvers

    NASA Technical Reports Server (NTRS)

    Utku, Senol; Garba, John A.; Ramesh, A. V.

    1989-01-01

    The linear optimal control problem of an nth-order time-invariant dynamic system with a quadratic performance functional is usually solved by the Hamilton-Jacobi approach. This leads to the solution of the differential matrix Riccati equation with a terminal condition. The bulk of the computation for the optimal control problem is related to the solution of this equation. There are various algorithms in the literature for solving the matrix Riccati equation. However, computational complexities and storage requirements as a function of numbers of state variables, control variables, and sensors are not available for all these algorithms. In this work, the computational complexities and storage requirements for some of these algorithms are given. These expressions show the immensity of the computational requirements of the algorithms in solving the Riccati equation for large-order systems such as the control of highly flexible space structures. The expressions are also needed to compute the speedup and efficiency of any implementation of these algorithms on concurrent machines.
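
    For orientation, the differential matrix Riccati equation mentioned here can be integrated backward from its terminal condition; a minimal dense sketch (explicit Euler on an illustrative system) shows where the O(n^3) matrix products that dominate the complexity counts come from:

```python
import numpy as np

def riccati_backward(A, B, Q, R, PT, T, steps):
    """Integrate the matrix Riccati ODE
        -dP/dt = A'P + PA - P B R^-1 B' P + Q,   P(T) = PT
    backward from the terminal condition with explicit Euler. Each step
    costs a handful of n x n matrix products, the dominant expense for
    large-order systems."""
    dt = T / steps
    P = PT.copy()
    Rinv = np.linalg.inv(R)
    for _ in range(steps):
        rhs = A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q
        P = P + dt * rhs          # stepping from t to t - dt
    return P

# Double integrator with zero terminal cost; over a long horizon P(0)
# approaches the algebraic Riccati solution [[sqrt(3), 1], [1, sqrt(3)]].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(riccati_backward(A, B, np.eye(2), np.eye(1), np.zeros((2, 2)), T=20.0, steps=20000))
```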

  2. Finite elements and the method of conjugate gradients on a concurrent processor

    NASA Technical Reports Server (NTRS)

    Lyzenga, G. A.; Raefsky, A.; Hager, G. H.

    1985-01-01

    An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90 percent for sufficiently large problems.
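
    The conjugate-gradient iteration at the core of this solver is compact; a sequential sketch follows (in the paper's setting the matrix-vector product and the dot products are distributed across the hypercube processors and reduced globally):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(max_iter or len(b)):
        Ap = A @ p                # the distributed matrix-vector product
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r            # a global reduction on a parallel machine
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # approx [0.0909, 0.6364]
```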

  3. Finite elements and the method of conjugate gradients on a concurrent processor

    NASA Technical Reports Server (NTRS)

    Lyzenga, G. A.; Raefsky, A.; Hager, B. H.

    1984-01-01

    An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90% for sufficiently large problems.

  4. Implementation of a partitioned algorithm for simulation of large CSI problems

    NASA Technical Reports Server (NTRS)

    Alvin, Kenneth F.; Park, K. C.

    1991-01-01

    The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.

  5. An Adaptive Tradeoff Algorithm for Multi-issue SLA Negotiation

    NASA Astrophysics Data System (ADS)

    Son, Seokho; Sim, Kwang Mong

    Since participants in a Cloud may be independent bodies, mechanisms are necessary for resolving different preferences in leasing Cloud services. Whereas there are currently mechanisms that support service-level agreement negotiation, there is little or no negotiation support for concurrent price and timeslot negotiation for Cloud service reservations. For concurrent price and timeslot negotiation, a tradeoff algorithm to generate and evaluate proposals consisting of a price and a timeslot is necessary. The contribution of this work is thus to design an adaptive tradeoff algorithm for a multi-issue negotiation mechanism. The tradeoff algorithm, referred to as "adaptive burst mode", is designed to increase negotiation speed and total utility and to reduce computational load by adaptively generating concurrent sets of proposals. The empirical results obtained from simulations carried out using a testbed suggest that with the concurrent price and timeslot negotiation mechanism and adaptive tradeoff algorithm: 1) both agents achieve the best performance in terms of negotiation speed and utility; and 2) the number of evaluations of each proposal is comparatively lower than in a previous scheme (burst-N).
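
    As a hedged sketch of the burst idea (function names and utility weights are illustrative, not the paper's exact adaptive burst mode), one can generate a concurrent set of price/timeslot proposals that all sit on the same iso-utility level:

```python
# Hypothetical sketch: a "burst" of proposals with equal aggregate utility,
# trading price utility against timeslot utility, U = w_p*u_p + w_t*u_t.
def burst_proposals(target_utility, timeslots, w_price=0.5, w_time=0.5):
    proposals = []
    for slot, u_time in timeslots:             # u_time in [0, 1] per slot
        u_price = (target_utility - w_time * u_time) / w_price
        if 0.0 <= u_price <= 1.0:              # keep only feasible price utilities
            proposals.append({"slot": slot, "price_utility": round(u_price, 3)})
    return proposals

slots = [("9am", 1.0), ("1pm", 0.6), ("5pm", 0.2)]
print(burst_proposals(0.7, slots))             # concurrent iso-utility proposals
```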

  6. Control Synthesis for a Class of Hybrid Systems Subject to Configuration-Based Safety Constraints

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Lin, Feng; Meyer, George

    1997-01-01

    We examine a class of hybrid systems which we call Composite Hybrid Machines (CHM's) that consists of the concurrent (and partially synchronized) operation of Elementary Hybrid Machines (EHM's). Legal behavior, specified by a set of illegal configurations that the CHM may not enter, is to be achieved by the concurrent operation of the CHM with a suitably designed legal controller. In the present paper we focus on the problem of synthesizing a legal controller, whenever such a controller exists. More specifically, we address the problem of synthesizing the minimally restrictive legal controller. A controller is minimally restrictive if, when composed to operate concurrently with another legal controller, it will never interfere with the operation of the other controller and, therefore, can be composed to operate concurrently with any other controller that may be designed to achieve liveness specifications or optimality requirements without the need to reinvestigate or reverify legality of the composite controller. We confine our attention to a special class of CHM's where system dynamics is rate-limited and legal guards are conjunctions or disjunctions of atomic formulas in the dynamic variables (of the type x ≤ x₀ or x ≥ x₀). We present an algorithm for synthesis of the minimally restrictive legal controller. We demonstrate our approach by synthesizing a minimally restrictive controller for a steam boiler (the verification of which recently received a great deal of attention).

  7. Comparing, optimizing, and benchmarking quantum-control algorithms in a unifying programming framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Machnes, S.; Institute for Theoretical Physics, University of Ulm, D-89069 Ulm; Sander, U.

    2011-08-15

    For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e., piecewise constant control amplitudes, iteratively into an optimized shape. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods, which update all controls concurrently, and Krotov-type methods, which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient MATLAB-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods as well as subspace choices. Open-source code including examples is made available at http://qlib.info.

  8. Control architecture for an adaptive electronically steerable flash lidar and associated instruments

    NASA Astrophysics Data System (ADS)

    Ruppert, Lyle; Craner, Jeremy; Harris, Timothy

    2014-09-01

    An Electronically Steerable Flash Lidar (ESFL), developed by Ball Aerospace & Technologies Corporation, allows real-time adaptive control of configuration and data-collection strategy based on recent or concurrent observations and changing situations. This paper reviews, at a high level, some of the algorithms and control architecture built into ESFL. Using ESFL as an example, it also discusses the merits and utility of such adaptable instruments in Earth-system studies.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elmagarmid, A.K.

    The availability of distributed data bases is directly affected by the timely detection and resolution of deadlocks. Consequently, mechanisms are needed to make deadlock detection algorithms resilient to failures. Presented first is a centralized algorithm that allows transactions to have multiple requests outstanding. Next, a new distributed deadlock detection algorithm (DDDA) is presented, using a global detector (GD) to detect global deadlocks and local detectors (LDs) to detect local deadlocks. This algorithm essentially identifies transaction-resource interactions that may cause global (multisite) deadlocks. Third, a deadlock detection algorithm utilizing a transaction-wait-for (TWF) graph is presented. It is a fully disjoint algorithm that allows multiple outstanding requests. The proposed algorithm can achieve improved overall performance by using multiple disjoint controllers coupled with the two-phase property while maintaining the simplicity of centralized schemes. Fourth, an algorithm that combines deadlock detection and avoidance is given. This algorithm uses concurrent transaction controllers and resource coordinators to achieve maximum distribution. The language of CSP is used to describe this algorithm. Finally, two efficient deadlock resolution protocols are given along with some guidelines to be used in choosing a transaction for abortion.
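
    The TWF-graph approach reduces deadlock detection to cycle detection; a minimal centralized sketch (not the report's distributed algorithm) is:

```python
def find_deadlock(wait_for):
    """Detect a cycle in a transaction-wait-for (TWF) graph by DFS;
    a cycle means deadlock, and one member is chosen as the victim."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wait_for}

    def dfs(t, path):
        color[t] = GRAY
        path.append(t)
        for u in wait_for.get(t, []):
            if color.get(u) == GRAY:                 # back edge -> cycle found
                return path[path.index(u):]
            if color.get(u, WHITE) == WHITE:
                cycle = dfs(u, path)
                if cycle:
                    return cycle
        path.pop()
        color[t] = BLACK
        return None

    for t in list(wait_for):
        if color[t] == WHITE:
            cycle = dfs(t, [])
            if cycle:
                return cycle
    return None

# T1 waits for T2, T2 for T3, T3 for T1: a classic (multisite) deadlock
print(find_deadlock({"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}))
```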

  10. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    Research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a special distributed computer environment is presented. This model is identified by the acronym ATAMM which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.

  11. The use of a prescription drug monitoring program to develop algorithms to identify providers with unusual prescribing practices for controlled substances.

    PubMed

    Ringwalt, Christopher; Schiro, Sharon; Shanahan, Meghan; Proescholdbell, Scott; Meder, Harold; Austin, Anna; Sachdeva, Nidhi

    2015-10-01

    The misuse, abuse and diversion of controlled substances have reached epidemic proportions in the United States. Contributing to this problem are providers who over-prescribe these substances. Using one state's prescription drug monitoring program, we describe a series of metrics we developed to identify providers manifesting unusual and uncustomary prescribing practices. We then present the results of a preliminary effort to assess the concurrent validity of these algorithms, using death records from the state's vital records database pertaining to providers who wrote prescriptions to patients who then died of a medication or drug overdose within 30 days. The metrics manifesting the strongest concurrent validity with providers identified from these records related to those who co-prescribed benzodiazepines (e.g., valium) and high levels of opioid analgesics (e.g., oxycodone), as well as those who wrote temporally overlapping prescriptions. We conclude with a discussion of a variety of uses to which these metrics may be put, as well as problems and opportunities related to their use.
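
    One of the metrics described, temporally overlapping prescriptions, reduces to an interval-overlap test; a hypothetical sketch (field names invented for illustration, not the paper's actual metric definitions):

```python
from datetime import date, timedelta

def overlapping(prescriptions):
    """prescriptions: list of (drug, start_date, days_supply); returns
    drug pairs whose dispensing windows overlap in time."""
    windows = [(d, s, s + timedelta(days=n)) for d, s, n in prescriptions]
    flagged = []
    for i in range(len(windows)):
        for j in range(i + 1, len(windows)):
            d1, s1, e1 = windows[i]
            d2, s2, e2 = windows[j]
            if s1 < e2 and s2 < e1:        # standard interval-overlap test
                flagged.append((d1, d2))
    return flagged

rx = [("oxycodone", date(2015, 3, 1), 30), ("diazepam", date(2015, 3, 10), 30)]
print(overlapping(rx))   # co-prescribed with overlapping windows -> flagged
```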

  12. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  13. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  14. Methodologies and systems for heterogeneous concurrent computing

    NASA Technical Reports Server (NTRS)

    Sunderam, V. S.

    1994-01-01

    Heterogeneous concurrent computing is gaining increasing acceptance as an alternative or complementary paradigm to multiprocessor-based parallel processing as well as to conventional supercomputing. While algorithmic and programming aspects of heterogeneous concurrent computing are similar to their parallel processing counterparts, system issues, partitioning and scheduling, and performance aspects are significantly different. In this paper, we discuss critical design and implementation issues in heterogeneous concurrent computing, and describe techniques for enhancing its effectiveness. In particular, we highlight the system level infrastructures that are required, aspects of parallel algorithm development that most affect performance, system capabilities and limitations, and tools and methodologies for effective computing in heterogeneous networked environments. We also present recent developments and experiences in the context of the PVM system and comment on ongoing and future work.

  15. Evaluation of concurrent priority queue algorithms. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Q.

    1991-02-01

    The priority queue is a fundamental data structure that is used in a large variety of parallel algorithms, such as multiprocessor scheduling and parallel best-first search of state-space graphs. This thesis addresses the design and experimental evaluation of two novel concurrent priority queues: a parallel Fibonacci heap and a concurrent priority pool, and compares them with the concurrent binary heap. The parallel Fibonacci heap is based on the sequential Fibonacci heap, which is theoretically the most efficient data structure for sequential priority queues. This scheme not only preserves the efficient operation time bounds of its sequential counterpart, but also has very low contention by distributing locks over the entire data structure. The experimental results show its linearly scalable throughput and speedup up to as many processors as tested (currently 18). A concurrent access scheme for a doubly linked list is described as part of the implementation of the parallel Fibonacci heap. The concurrent priority pool is based on the concurrent B-tree and the concurrent pool. The concurrent priority pool has the highest throughput among the priority queues studied. Like the parallel Fibonacci heap, the concurrent priority pool scales linearly up to as many processors as tested. The priority queues are evaluated in terms of throughput and speedup. Some applications of concurrent priority queues such as the vertex cover problem and the single source shortest path problem are tested.
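
    For contrast with the thesis's scalable designs, the simplest concurrent priority queue is a binary heap behind one global lock, which is exactly the contention bottleneck the parallel Fibonacci heap and priority pool are built to avoid; a minimal sketch:

```python
import heapq
import threading

class LockedHeap:
    """Coarse-grained locked binary heap: a correctness baseline in which
    one global lock serializes every insert and delete-min, so throughput
    cannot scale with the number of processors."""
    def __init__(self):
        self._heap = []
        self._lock = threading.Lock()

    def insert(self, priority, item):
        with self._lock:
            heapq.heappush(self._heap, (priority, item))

    def delete_min(self):
        with self._lock:
            return heapq.heappop(self._heap) if self._heap else None
```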

  16. The Caltech Concurrent Computation Program - Project description

    NASA Technical Reports Server (NTRS)

    Fox, G.; Otto, S.; Lyzenga, G.; Rogstad, D.

    1985-01-01

    The Caltech Concurrent Computation Program, which studies basic issues in computational science, is described. The research builds on initial work where novel concurrent hardware, the necessary systems software to use it, and twenty significant scientific implementations running on the initial 32-, 64-, and 128-node hypercube machines were constructed. A major goal of the program will be to extend this work into new disciplines and more complex algorithms including general packages that decompose arbitrary problems in major application areas. New high-performance concurrent processors with up to 1024 nodes, over a gigabyte of memory and multigigaflop performance are being constructed. The implementations cover a wide range of problems in areas such as high energy and astrophysics, condensed matter, chemical reactions, plasma physics, applied mathematics, geophysics, simulation, CAD for VLSI, graphics and image processing. The products of the research program include the concurrent algorithms, hardware, systems software, and complete program implementations.

  17. Parallel Algorithms and Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robey, Robert W.

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
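
    As a small example of one of the named patterns, an inclusive prefix scan in the Hillis-Steele style takes log2(n) sweeps, each of which would be a fully concurrent vector step on a parallel machine:

```python
def prefix_scan(xs):
    """Inclusive prefix scan (running sums), Hillis-Steele style:
    each sweep combines every element with the one d positions back,
    and all combinations within a sweep are independent (concurrent)."""
    a = list(xs)
    d = 1
    while d < len(a):
        a = [a[i] if i < d else a[i - d] + a[i] for i in range(len(a))]
        d *= 2
    return a

print(prefix_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [3, 4, 11, 11, 15, 16, 22, 25]
```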

  18. Group implicit concurrent algorithms in nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Ortiz, M.; Sotelino, E. D.

    1989-01-01

    During the 70's and 80's, considerable effort was devoted to developing efficient and reliable time stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problems are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of applications are those which accurately integrate the low frequency content of the response without necessitating the resolution of the high frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithms development area in recent years has been the advent of parallel computers with multiprocessing capabilities. So, this work is mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time stepping procedures which lend themselves to an efficient implementation in concurrent machines. Some features of the new computer architecture are summarized. A brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit algorithms is introduced and analyzed. The numerical simulation shows that GI algorithms hold considerable promise for application in coarse grain as well as medium grain parallel computers.

  19. An Adaptive Flow Solver for Air-Borne Vehicles Undergoing Time-Dependent Motions/Deformations

    NASA Technical Reports Server (NTRS)

    Singh, Jatinder; Taylor, Stephen

    1997-01-01

    This report describes a concurrent Euler flow solver for flows around complex 3-D bodies. The solver is based on a cell-centered finite volume methodology on 3-D unstructured tetrahedral grids. In this algorithm, spatial discretization for the inviscid convective term is accomplished using an upwind scheme. A localized reconstruction is done for flow variables which is second order accurate. Evolution in time is accomplished using an explicit three-stage Runge-Kutta method which has second order temporal accuracy. This is adapted for concurrent execution using another proven methodology based on concurrent graph abstraction. This solver operates on heterogeneous network architectures. These architectures may include a broad variety of UNIX workstations and PCs running Windows NT, symmetric multiprocessors and distributed-memory multi-computers. The unstructured grid is generated using commercial grid generation tools. The grid is automatically partitioned using a concurrent algorithm based on heat diffusion. This results in memory requirements that are inversely proportional to the number of processors. The solver uses automatic granularity control and resource management techniques both to balance load and communication requirements, and deal with differing memory constraints. These ideas are again based on heat diffusion. Results are subsequently combined for visualization and analysis using commercial CFD tools. Flow simulation results are demonstrated for a constant section wing at subsonic, transonic, and a supersonic case. These results are compared with experimental data and numerical results of other researchers. Performance results are under way for a variety of network topologies.

  20. Building a generalized distributed system model

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.

    1993-01-01

    The key elements in the 1992-93 period of the project are the following: (1) extensive use of the simulator to implement and test - concurrency control algorithms, interactive user interface, and replica control algorithms; and (2) investigations into the applicability of data and process replication in real-time systems. In the 1993-94 period of the project, we intend to accomplish the following: (1) concentrate on efforts to investigate the effects of data and process replication on hard and soft real-time systems - especially we will concentrate on the impact of semantic-based consistency control schemes on a distributed real-time system in terms of improved reliability, improved availability, better resource utilization, and reduced missed task deadlines; and (2) use the prototype to verify the theoretically predicted performance of locking protocols, etc.

  1. Concurrent design of composite materials and structures considering thermal conductivity constraints

    NASA Astrophysics Data System (ADS)

    Jia, J.; Cheng, W.; Long, K.

    2017-08-01

    This article introduces thermal conductivity constraints into concurrent design. The influence of thermal conductivity on macrostructure and orthotropic composite material is extensively investigated using the minimum mean compliance as the objective function. To simultaneously control the amounts of different phase materials, a given mass fraction is applied in the optimization algorithm. Two phase materials are assumed to compete with each other to be distributed during the process of maximizing stiffness and thermal conductivity when the mass fraction constraint is small, where phase 1 has superior stiffness and thermal conductivity whereas phase 2 has a superior ratio of stiffness to density. The effective properties of the material microstructure are computed by a numerical homogenization technique, in which the effective elasticity matrix is applied to macrostructural analyses and the effective thermal conductivity matrix is applied to the thermal conductivity constraint. To validate the effectiveness of the proposed optimization algorithm, several three-dimensional illustrative examples are provided and the features under different boundary conditions are analysed.

  2. Control algorithm implementation for a redundant degree of freedom manipulator

    NASA Technical Reports Server (NTRS)

    Cohan, Steve

    1991-01-01

    This project's purpose is to develop and implement control algorithms for a kinematically redundant robotic manipulator. The manipulator is being developed concurrently by Odetics Inc., under internal research and development funding. This SBIR contract supports algorithm conception, development, and simulation, as well as software implementation and integration with the manipulator hardware. The Odetics Dexterous Manipulator is a lightweight, high strength, modular manipulator being developed for space and commercial applications. It has seven fully active degrees of freedom, is electrically powered, and is fully operational in 1 G. The manipulator consists of five self-contained modules. These modules join via simple quick-disconnect couplings and self-mating connectors which allow rapid assembly/disassembly for reconfiguration, transport, or servicing. Each joint incorporates a unique drive train design which provides zero backlash operation, is insensitive to wear, and is single fault tolerant to motor or servo amplifier failure. The sensing system is also designed to be single fault tolerant. Although the initial prototype is not space qualified, the design is well-suited to meeting space qualification requirements. The control algorithm design approach is to develop a hierarchical system with well defined access and interfaces at each level. The high level endpoint/configuration control algorithm transforms manipulator endpoint position/orientation commands to joint angle commands, providing task space motion. At the same time, the kinematic redundancy is resolved by controlling the configuration (pose) of the manipulator, using several different optimizing criteria. The center level of the hierarchy servos the joints to their commanded trajectories using both linear feedback and model-based nonlinear control techniques. The lowest control level uses sensed joint torque to close torque servo loops, with the goal of improving the manipulator's dynamic behavior. The control algorithms are subjected to a dynamic simulation before implementation.
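
    The endpoint/configuration split described here is commonly realized with the pseudoinverse-plus-null-space rate law qdot = J⁺ xdot + (I − J⁺J) z; the sketch below is a generic version of that law (not necessarily the Odetics implementation):

```python
import numpy as np

def redundant_rates(J, xdot, z):
    """Resolved-rate control of a redundant arm:
    qdot = J^+ xdot + (I - J^+ J) z, where the second term moves the
    joints inside the Jacobian null space (changing pose, not the tip)."""
    Jp = np.linalg.pinv(J)                    # Moore-Penrose pseudoinverse
    null_proj = np.eye(J.shape[1]) - Jp @ J   # projector onto the null space
    return Jp @ xdot + null_proj @ z

J = np.random.rand(6, 7)                      # 6-DOF task, 7-DOF arm (illustrative)
qdot = redundant_rates(J, xdot=np.zeros(6), z=np.ones(7))
print(J @ qdot)                               # ~0: null-space motion is task-invisible
```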

  3. A survey on the design of multiprocessing systems for artificial intelligence applications

    NASA Technical Reports Server (NTRS)

    Wah, Benjamin W.; Li, Guo Jie

    1989-01-01

    Some issues in designing computers for artificial intelligence (AI) processing are discussed. These issues are divided into three levels: the representation level, the control level, and the processor level. The representation level deals with the knowledge and methods used to solve the problem and the means to represent it. The control level is concerned with the detection of dependencies and parallelism in the algorithmic and program representations of the problem, and with the synchronization and scheduling of concurrent tasks. The processor level addresses the hardware and architectural components needed to evaluate the algorithmic and program representations. Solutions for the problems of each level are illustrated by a number of representative systems. Design decisions in existing projects on AI computers are classed into top-down, bottom-up, and middle-out approaches.

  4. Analysis and design of algorithm-based fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. Sukumaran

    1990-01-01

    An important consideration in the design of high performance multiprocessor systems is to ensure the correctness of the results computed in the presence of transient and intermittent failures. Concurrent error detection and correction have been applied to such systems in order to achieve reliability. Algorithm Based Fault Tolerance (ABFT) was suggested as a cost-effective concurrent error detection scheme. The research was motivated by the complexity involved in the analysis and design of ABFT systems. To that end, a matrix-based model was developed and, based on that, algorithms for both the design and analysis of ABFT systems are formulated. These algorithms are less complex than the existing ones. In order to reduce the complexity further, a hierarchical approach is developed for the analysis of large systems.
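
    A classic ABFT example in the spirit of this work is the Huang-Abraham checksum scheme for matrix multiplication, where checksum rows and columns let errors in the product be detected concurrently with the computation; a minimal sketch:

```python
import numpy as np

def abft_matmul(A, B):
    """Checksum-based fault detection for C = A @ B: append a column-sum
    row to A and a row-sum column to B; the product then carries checksums
    that any single corrupted entry of C will violate."""
    Ac = np.vstack([A, A.sum(axis=0)])                 # column-checksum matrix
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])  # row-checksum matrix
    C = Ac @ Br
    data = C[:-1, :-1]                                 # the actual product A @ B
    row_ok = np.allclose(C[-1, :-1], data.sum(axis=0)) # verify column sums
    col_ok = np.allclose(C[:-1, -1], data.sum(axis=1)) # verify row sums
    return data, row_ok and col_ok

A, B = np.random.rand(3, 3), np.random.rand(3, 3)
C, ok = abft_matmul(A, B)
print(ok)                                              # True unless a fault corrupted C
```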

  5. Detection of Intracranial Signatures of Interictal Epileptiform Discharges from Concurrent Scalp EEG.

    PubMed

    Spyrou, Loukianos; Martín-Lopez, David; Valentín, Antonio; Alarcón, Gonzalo; Sanei, Saeid

    2016-06-01

    Interictal epileptiform discharges (IEDs) are transient neural electrical activities that occur in the brain of patients with epilepsy. A problem with the inspection of IEDs from the scalp electroencephalogram (sEEG) is that for a subset of epileptic patients, there are no visually discernible IEDs on the scalp, rendering such inspection ineffective, both for detection purposes and algorithm evaluation. On the other hand, intracranially placed electrodes yield a much higher incidence of visible IEDs as compared to concurrent scalp electrodes. In this work, we utilize concurrent scalp and intracranial EEG (iEEG) from a group of temporal lobe epilepsy (TLE) patients with a low number of scalp-visible IEDs. The aim is to determine whether, by considering the timing information of the IEDs from iEEG, the resulting concurrent sEEG contains enough information for the IEDs to be reliably distinguished from non-IED segments. We develop an automatic detection algorithm which is tested in a leave-subject-out fashion, where each test subject's detection algorithm is based on the other patients' data. The algorithm obtained a [Formula: see text] accuracy in recognizing scalp IED from non-IED segments, with [Formula: see text] accuracy when trained and tested on the same subject. Also, it was able to identify non-scalp-visible IED events for most patients with a low number of false positive detections. Our results represent a proof of concept that IED information for TLE patients is contained in scalp EEG even if the IEDs are not visually identifiable, and also that between-subject differences in IED topology and shape are small enough that a generic algorithm can be used.
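
    The leave-subject-out evaluation described here pairs naturally with scikit-learn's LeaveOneGroupOut; the sketch below uses synthetic features and a stand-in classifier (the paper's actual detector and features are not reproduced):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

# Leave-subject-out evaluation: each patient is scored by a model trained
# only on the other patients' segments. Features and labels are synthetic.
X = np.random.randn(200, 16)                # 200 EEG segments, 16 features
y = np.random.randint(0, 2, 200)            # 1 = IED, 0 = non-IED (synthetic)
groups = np.random.randint(0, 10, 200)      # patient id for each segment

logo = LeaveOneGroupOut()
for train, test in logo.split(X, y, groups):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    acc = clf.score(X[test], y[test])
    print(f"patient {groups[test][0]}: accuracy {acc:.2f}")
```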

  6. Advanced Motor Control Test Facility for NASA GRC Flywheel Energy Storage System Technology Development Unit

    NASA Technical Reports Server (NTRS)

    Kenny, Barbara H.; Kascak, Peter E.; Hofmann, Heath; Mackin, Michael; Santiago, Walter; Jansen, Ralph

    2001-01-01

    This paper describes the flywheel test facility developed at the NASA Glenn Research Center with particular emphasis on the motor drive components and control. A four-pole permanent magnet synchronous machine, suspended on magnetic bearings, is controlled with a field orientation algorithm. A discussion of the estimation of the rotor position and speed from a "once around signal" is given. The elimination of small dc currents by using a concurrent stationary frame current regulator is discussed and demonstrated. Initial experimental results are presented showing the successful operation and control of the unit at speeds up to 20,000 rpm.

  7. RACER: Effective Race Detection Using AspectJ

    NASA Technical Reports Server (NTRS)

    Bodden, Eric; Havelund, Klaus

    2008-01-01

    Programming errors occur frequently in large software systems, and even more so if these systems are concurrent. In the past, researchers have developed specialized programs to aid programmers in detecting concurrent programming errors such as deadlocks, livelocks, starvation and data races. In this work we propose a language extension to the aspect-oriented programming language AspectJ, in the form of three new built-in pointcuts, lock(), unlock() and maybeShared(), which allow programmers to monitor program events where locks are granted or handed back, and where values are accessed that may be shared amongst multiple Java threads. We decide thread-locality using a static thread-local objects analysis developed by others. Using the three new primitive pointcuts, researchers can directly implement efficient monitoring algorithms to detect concurrent programming errors online. As an example, we present a new algorithm which we call RACER, an adaptation of the well-known ERASER algorithm to the memory model of Java. We implemented the new pointcuts as an extension to the AspectBench Compiler, implemented the RACER algorithm using this language extension, and then applied the algorithm to the NASA K9 Rover Executive. Our experiments showed our implementation to be very effective. In the Rover Executive, RACER finds 70 data races; only one of these races was previously known. We further applied the algorithm to two other multi-threaded programs written by Computer Science researchers, in which we found races as well.
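
    For reference, the ERASER lockset idea that RACER adapts maintains, for each shared variable v, the set C(v) of locks held on every access to v; if the intersection ever becomes empty, no single lock protects v. A minimal sketch:

```python
# Lockset (ERASER-style) race detection: refine C(v) by intersecting it
# with the locks held at each access; an empty C(v) means no common lock
# protects v, i.e. a potential data race.
candidate = {}                              # variable -> candidate lockset C(v)

def on_access(var, locks_held):
    if var not in candidate:
        candidate[var] = set(locks_held)    # first access initializes C(v)
    else:
        candidate[var] &= set(locks_held)   # refine by intersection
    if not candidate[var]:
        print(f"potential data race on {var!r}")

on_access("x", {"L1", "L2"})   # thread 1 holds L1 and L2
on_access("x", {"L2"})         # thread 2 holds L2 -> C(x) = {L2}, still ok
on_access("x", {"L3"})         # no common lock -> race reported
```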

  8. Mechatronics by Analogy and Application to Legged Locomotion

    NASA Astrophysics Data System (ADS)

    Ragusila, Victor

    A new design methodology for mechatronic systems, dubbed Mechatronics by Analogy (MbA), is introduced and applied to designing a leg mechanism. The new methodology argues that by establishing a similarity relation between a complex system and a number of simpler models it is possible to design the former using the analysis and synthesis means developed for the latter. The methodology provides a framework for concurrent engineering of complex systems while maintaining the transparency of the system behaviour through making formal analogies between the system and those with more tractable dynamics. The application of the MbA methodology to the design of a monopod robot leg, called the Linkage Leg, is also studied. A series of simulations shows that the dynamic behaviour of the Linkage Leg is similar to that of a combination of a double pendulum and a spring-loaded inverted pendulum, based on which the system kinematic, dynamic, and control parameters can be designed concurrently. The first stage of Mechatronics by Analogy is a method of extracting significant features of system dynamics through simpler models. The goal is to determine a set of simpler mechanisms with similar dynamic behaviour to that of the original system in various phases of its motion. A modular bond-graph representation of the system is determined, and subsequently simplified using two simplification algorithms. The first algorithm determines the relevant dynamic elements of the system for each phase of motion, and the second algorithm finds the simple mechanism described by the remaining dynamic elements. In addition to greatly simplifying the controller for the system, using simpler mechanisms with similar behaviour provides a greater insight into the dynamics of the system. This is seen in the second stage of the new methodology, which concurrently optimizes the simpler mechanisms together with a control system based on their dynamics. Once the optimal configuration of the simpler system is determined, the original mechanism is optimized such that its dynamic behaviour is analogous. It is shown that, if this analogy is achieved, the control system designed based on the simpler mechanisms can be directly implemented on the more complex system, and their dynamic behaviours are close enough for the system performance to be effectively the same. Finally it is shown that, for the employed objective of fast legged locomotion, the proposed methodology achieves a better design than Reduction-by-Feedback, a competing methodology that uses control layers to simplify the dynamics of the system.

  9. Proceedings of USC (University of Southern California) Workshop on VLSI (Very Large Scale Integration) & Modern Signal Processing, held at Los Angeles, California on 1-3 November 1982

    DTIC Science & Technology

    1983-11-15

    Concurrent Algorithms", A. Cremers , Dortmund University, West Germany, and T. Hibbard, JPL, Pasadena, CA 64 "An Overview of Signal Representations in...n O f\\ n O P- A -> Problem-oriented specification of concurrent algorithms Armin B. Cremers and Thomas N. Hibbard Preliminary version September...1982 s* Armin B. Cremers Computer Science Department University of Dortmund P.O. Box 50 05 00 D-4600 Dortmund 50 Fed. Rep. Germany

  10. Modeling and optimum time performance for concurrent processing

    NASA Technical Reports Server (NTRS)

    Mielke, Roland R.; Stoughton, John W.; Som, Sukhamoy

    1988-01-01

    The development of a new graph theoretic model for describing the relation between a decomposed algorithm and its execution in a data flow environment is presented. Called ATAMM, the model consists of a set of Petri net marked graphs useful for representing decision-free algorithms having large-grained, computationally complex primitive operations. Performance time measures which determine computing speed and throughput capacity are defined, and the ATAMM model is used to develop lower bounds for these times. A concurrent processing operating strategy for achieving optimum time performance is presented and illustrated by example.

  11. Multigrid methods with space–time concurrency

    DOE PAGES

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...

    2017-10-06

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  12. Multigrid methods with space–time concurrency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  13. Model for the design of distributed data bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ram, S.

    This research focuses on developing a model to solve the File Allocation Problem (FAP). The model integrates two major design issues, namely concurrency control and data distribution. The central-node locking mechanism is incorporated in developing a nonlinear integer programming model. Two solution algorithms are proposed, one of which was implemented in FORTRAN V. The allocation of data bases and programs is examined using this heuristic, and several decision rules were formulated based on its results. A second, more comprehensive heuristic based on the knapsack problem was proposed; its development and implementation have been left as a topic for future research.
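
    The paper's knapsack-based heuristic is only proposed, but a knapsack-style allocation can be sketched generically: order data items by benefit per unit size and greedily place each on a node with sufficient remaining capacity. Everything below (function name, benefit model) is a hypothetical illustration, not the model from the paper.

    ```python
    def allocate_files(files, capacity):
        """Greedy knapsack-style heuristic for a simplified File Allocation Problem.

        files:    list of (name, size, access_benefit) tuples
        capacity: dict mapping node -> remaining storage capacity
        Returns a mapping name -> node, placing high benefit-density files first.
        """
        placement = {}
        for name, size, benefit in sorted(files, key=lambda f: f[2] / f[1], reverse=True):
            # Place on the feasible node with the most remaining capacity.
            feasible = [n for n, cap in capacity.items() if cap >= size]
            if feasible:
                node = max(feasible, key=lambda n: capacity[n])
                placement[name] = node
                capacity[node] -= size
        return placement

    print(allocate_files([("db1", 5, 10), ("db2", 3, 9)], {"A": 8, "B": 4}))
    ```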

  14. The Raid distributed database system

    NASA Technical Reports Server (NTRS)

    Bhargava, Bharat; Riedl, John

    1989-01-01

    Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.

  15. Concurrent approach for evolving compact decision rule sets

    NASA Astrophysics Data System (ADS)

    Marmelstein, Robert E.; Hammack, Lonnie P.; Lamont, Gary B.

    1999-02-01

    The induction of decision rules from data is important to many disciplines, including artificial intelligence and pattern recognition. To improve the state of the art in this area, we introduced the genetic rule and classifier construction environment (GRaCCE). It was previously shown that GRaCCE consistently evolved decision rule sets from data which were significantly more compact than those produced by other methods (such as decision tree algorithms). The primary disadvantage of GRaCCE, however, is its relatively poor run-time execution performance. In this paper, a concurrent version of the GRaCCE architecture is introduced, which improves the efficiency of the original algorithm. A prototype of the algorithm is tested on an in-house parallel processor configuration and the results are discussed.

  16. Concurrent computation of attribute filters on shared memory parallel machines.

    PubMed

    Wilkinson, Michael H F; Gao, Hui; Hesselink, Wim H; Jonker, Jan-Eppo; Meijster, Arnold

    2008-10-01

    Morphological attribute filters have not previously been parallelized, mainly because they are both global and non-separable. We propose a parallel algorithm that achieves efficient parallelism for a large class of attribute filters, including attribute openings, closings, thinnings and thickenings, based on Salembier's Max-Trees and Min-Trees. The image or volume is first partitioned into multiple slices. We then compute the Max-Tree of each slice using any sequential Max-Tree algorithm. Subsequently, the Max-Trees of the slices are merged to obtain the Max-Tree of the image. A C implementation yielded good speed-ups on both a 16-processor MIPS 14000 parallel machine and a dual-core Opteron-based machine. It is shown that the speed-up of the parallel algorithm is a direct measure of the gain with respect to the sequential algorithm used. Furthermore, the concurrent algorithm shows a speed gain of up to 72 percent on a single-core processor, due to reduced cache thrashing.
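
    The split-compute-merge pattern of the algorithm can be sketched in a few lines. The per-slice work and the boundary merge below are trivial stand-ins (real Max-Tree construction and merging are considerably more involved); only the concurrency skeleton reflects the described approach.

    ```python
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def build_slice_summary(slice_):
        # Stand-in for a sequential Max-Tree builder on one slice.
        return {"max": float(slice_.max()), "pixels": int(slice_.size)}

    def merge_summaries(a, b):
        # Stand-in for merging two adjacent slices' Max-Trees at their boundary.
        return {"max": max(a["max"], b["max"]), "pixels": a["pixels"] + b["pixels"]}

    def parallel_filter(volume, n_workers=4):
        slices = np.array_split(volume, n_workers, axis=0)
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            parts = list(pool.map(build_slice_summary, slices))
        result = parts[0]
        for part in parts[1:]:
            result = merge_summaries(result, part)
        return result

    if __name__ == "__main__":
        print(parallel_filter(np.random.rand(64, 64, 64)))
    ```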

  17. AthenaMT: upgrading the ATLAS software framework for the many-core world with multi-threading

    NASA Astrophysics Data System (ADS)

    Leggett, Charles; Baines, John; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; van Gemmeren, Peter; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; ATLAS Collaboration

    2017-10-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single-threaded design has been recognized for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run 2. After concluding a rigorous requirements phase, in which many design components were examined in detail, ATLAS has begun the migration to a new data-flow-driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread-unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread-safe Algorithms. In this paper we report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying handling of features such as event and time-dependent data, asynchronous callbacks, metadata, integration with the online High Level Trigger for partial processing in certain regions of interest, concurrent I/O, as well as ensuring thread safety of core services. We also report on upgrading the framework to handle Algorithms that are fully re-entrant.

  18. Accelerated Dimension-Independent Adaptive Metropolis

    DOE PAGES

    Chen, Yuxin; Keyes, David E.; Law, Kody J.; ...

    2016-10-27

    This work describes improvements from algorithmic and architectural means to black-box Bayesian inference over high-dimensional parameter spaces. The well-known adaptive Metropolis (AM) algorithm [33] is extended herein to scale asymptotically uniformly with respect to the underlying parameter dimension for Gaussian targets, by respecting the variance of the target. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional (with dimension d ≥ 1000) targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this GPU implementation exhibits a factor of four improvement versus a competitive CPU-based Intel MKL parallel version alone. Strong scaling to concurrent chains is exhibited, through a combination of longer time per sample batch (weak scaling) and yet fewer necessary samples to convergence. The algorithm performance is illustrated on several Gaussian and non-Gaussian target examples, in which the dimension may be in excess of one thousand.
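
    For orientation, here is the baseline adaptive Metropolis scheme (Haario et al.) that DIAM builds on: the proposal covariance is adapted from the running sample covariance of the chain. The dimension-independent rescaling, GPU acceleration, and chain synchronization of DIAM are not shown; this is only the textbook starting point.

    ```python
    import numpy as np

    def adaptive_metropolis(log_target, x0, n_steps=5000, eps=1e-6):
        d = len(x0)
        sd = 2.38 ** 2 / d                     # classic AM scaling factor
        x = np.asarray(x0, dtype=float)
        lp = log_target(x)
        cov = np.eye(d)
        samples = [x.copy()]
        for i in range(n_steps):
            y = np.random.multivariate_normal(x, sd * (cov + eps * np.eye(d)))
            lpy = log_target(y)
            if np.log(np.random.rand()) < lpy - lp:   # Metropolis accept/reject
                x, lp = y, lpy
            samples.append(x.copy())
            if i > 2 * d:                             # adapt after a warm-up
                cov = np.cov(np.asarray(samples).T)
        return np.asarray(samples)

    chain = adaptive_metropolis(lambda z: -0.5 * z @ z, np.zeros(2), n_steps=2000)
    print(chain.mean(axis=0))   # should be near the origin for this target
    ```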

  19. Accelerated Dimension-Independent Adaptive Metropolis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yuxin; Keyes, David E.; Law, Kody J.

    This work describes improvements from algorithmic and architectural means to black-box Bayesian inference over high-dimensional parameter spaces. The well-known adaptive Metropolis (AM) algorithm [33] is extended herein to scale asymptotically uniformly with respect to the underlying parameter dimension for Gaussian targets, by respecting the variance of the target. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional (with dimension d ≥ 1000) targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this GPU implementation exhibits a factor of four improvement versus a competitive CPU-based Intel MKL parallel version alone. Strong scaling to concurrent chains is exhibited, through a combination of longer time per sample batch (weak scaling) and yet fewer necessary samples to convergence. The algorithm performance is illustrated on several Gaussian and non-Gaussian target examples, in which the dimension may be in excess of one thousand.

  20. Reliability and concurrent validity of the Microsoft Xbox One Kinect for assessment of standing balance and postural control.

    PubMed

    Clark, Ross A; Pua, Yong-Hao; Oliveira, Cristino C; Bower, Kelly J; Thilarajah, Shamala; McGaw, Rebekah; Hasanki, Ksaniel; Mentiplay, Benjamin F

    2015-07-01

    The Microsoft Kinect V2 for Windows, also known as the Xbox One Kinect, includes new and potentially far improved depth and image sensors which may increase its accuracy for assessing postural control and balance. The aim of this study was to assess the concurrent validity and reliability of kinematic data recorded using a marker-based three-dimensional motion analysis (3DMA) system and the Kinect V2 during a variety of static and dynamic balance assessments. Thirty healthy adults performed two sessions, separated by one week, consisting of static standing balance tests under different visual (eyes open vs. closed) and support (single-limb vs. double-limb) conditions, and dynamic balance tests consisting of forward and lateral reach and an assessment of limits of stability. Marker coordinate and joint angle data were concurrently recorded using the Kinect V2 skeletal tracking algorithm and the 3DMA system. Task-specific outcome measures from each system on Days 1 and 2 were compared. Concurrent validity of trunk angle data during the dynamic tasks and of anterior-posterior range and path length in the static balance tasks was excellent (Pearson's r > 0.75). In contrast, concurrent validity for medial-lateral range and path length was poor to modest for all trials except single-leg eyes-closed balance. Within-device test-retest reliability was variable; however, the results were generally comparable between devices. In conclusion, the Kinect V2 has the potential to be used as a reliable and valid tool for the assessment of some aspects of balance performance. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Using Block-local Atomicity to Detect Stale-value Concurrency Errors

    NASA Technical Reports Server (NTRS)

    Artho, Cyrille; Havelund, Klaus; Biere, Armin

    2004-01-01

    Data races do not cover all kinds of concurrency errors. This paper presents a data-flow-based technique to find stale-value errors, which are not found by low-level and high-level data race algorithms. Stale values denote copies of shared data where the copy is no longer synchronized. The algorithm to detect such values works as a consistency check that does not require any assumptions or annotations of the program. It has been implemented as a static analysis in JNuke. The analysis is sound and requires only a single execution trace if implemented as a run-time checking algorithm. Being based on an analysis of Java bytecode, it encompasses the full program semantics, including arbitrarily complex expressions. Related techniques are more complex and more prone to over-reporting.
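
    The consistency check can be pictured as follows: a local copy of a shared field is created at some lock-nesting depth, and any use of that copy after the enclosing synchronized block has been exited is flagged. This is a much simplified, hypothetical sketch of the idea (the actual JNuke analysis works on Java bytecode and the full program semantics).

    ```python
    class StaleValueChecker:
        """Simplified block-local atomicity check over an execution trace."""

        def __init__(self):
            self.depth = {}    # thread -> current lock-nesting depth
            self.copies = {}   # (thread, local var) -> depth at which it was read

        def on_lock(self, t):
            self.depth[t] = self.depth.get(t, 0) + 1

        def on_unlock(self, t):
            self.depth[t] = self.depth.get(t, 0) - 1

        def on_copy(self, t, local):
            # A local variable receives a copy of a shared field.
            self.copies[(t, local)] = self.depth.get(t, 0)

        def on_use(self, t, local):
            born = self.copies.get((t, local))
            if born is not None and self.depth.get(t, 0) < born:
                print(f"stale value: {local!r} used by {t} outside its block")

    c = StaleValueChecker()
    c.on_lock("T1"); c.on_copy("T1", "tmp"); c.on_unlock("T1")
    c.on_use("T1", "tmp")   # copy outlived its synchronized block -> flagged
    ```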

  2. The PlusCal Algorithm Language

    NASA Astrophysics Data System (ADS)

    Lamport, Leslie

    Algorithms are different from programs and should not be described with programming languages. The only simple alternative to programming languages has been pseudo-code. PlusCal is an algorithm language that can be used right now to replace pseudo-code, for both sequential and concurrent algorithms. It is based on the TLA+ specification language, and a PlusCal algorithm is automatically translated to a TLA+ specification that can be checked with the TLC model checker and reasoned about formally.

  3. Comparative study of popular objective functions for damping power system oscillations in multimachine system.

    PubMed

    Islam, Naz Niamul; Hannan, M A; Shareef, Hussain; Mohamed, Azah; Salam, M A

    2014-01-01

    Power oscillation damping controllers are designed in a linearized model with heuristic optimization techniques. Selection of the objective function is crucial for damping controller design by optimization algorithms. In this research, a comparative analysis has been carried out to evaluate the effectiveness of popular objective functions used in power system oscillation damping. A two-stage lead-lag damping controller by means of power system stabilizers is optimized using the differential search algorithm for different objective functions. Linearized model simulations are performed to compare the performance of the dominant modes, and the nonlinear model is then used to evaluate the damping performance over power system oscillations. All the simulations are conducted on a two-area four-machine power system to provide a detailed analysis. The results show that the multiobjective D-shaped function is an effective objective function in terms of moving unstable and lightly damped electromechanical modes into the stable region. The D-shaped function thus improves overall system damping and concurrently enhances power system reliability.

  4. Fault Diagnosis System of Wind Turbine Generator Based on Petri Net

    NASA Astrophysics Data System (ADS)

    Zhang, Han

    Petri nets are an important tool for modeling and analysis of discrete event dynamic systems, with a strong ability to handle concurrent and non-deterministic phenomena. To date, Petri nets used in wind turbine fault diagnosis have not been deployed in an actual system. This article combines existing fuzzy Petri net algorithms, builds a wind turbine control system simulation based on a Siemens S7-1200 PLC, and provides a MATLAB GUI interface for migrating the system to different platforms.

  5. Neural signal processing and closed-loop control algorithm design for an implanted neural recording and stimulation system.

    PubMed

    Hamilton, Lei; McConley, Marc; Angermueller, Kai; Goldberg, David; Corba, Massimiliano; Kim, Louis; Moran, James; Parks, Philip D; Sang Chin; Widge, Alik S; Dougherty, Darin D; Eskandar, Emad N

    2015-08-01

    A fully autonomous intracranial device is built to continually record neural activities in different parts of the brain, process these sampled signals, decode features that correlate to behaviors and neuropsychiatric states, and use these features to deliver brain stimulation in a closed-loop fashion. In this paper, we describe the sampling and stimulation aspects of such a device. We first describe the signal processing algorithms of two unsupervised spike sorting methods. Next, we describe the LFP time-frequency analysis and the features derived from the two spike sorting methods. The first spike sorting method is a novel approach to constructing a dictionary learning algorithm in a Compressed Sensing (CS) framework, with a joint prediction scheme to determine the class of neural spikes; the second is a modified OSort algorithm implemented in a distributed system optimized for power efficiency. Sorted spikes and time-frequency analysis of LFP signals can then be used to generate derived features (including cross-frequency coupling and spike-field coupling). We then show how these derived features can be used in the design and development of novel decode and closed-loop control algorithms that are optimized to apply deep brain stimulation based on a patient's neuropsychiatric state. For the control algorithm, we define the state vector as representative of a patient's impulsivity, avoidance, inhibition, etc. Controller parameters are optimized to apply stimulation based on the state vector's current state as well as its historical values. The overall algorithm and software design for our implantable neural recording and stimulation system uses an innovative, adaptable, and reprogrammable architecture that enables advancement of the state of the art in closed-loop neural control while also meeting the challenges of system power constraints and concurrent development with ongoing scientific research designed to define brain network connectivity and neural network dynamics that vary at the individual patient level and vary over time.

  6. Statistics based sampling for controller and estimator design

    NASA Astrophysics Data System (ADS)

    Tenne, Dirk

    The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation is threefold and addresses the topics of nonlinear estimation, target tracking, and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher-order accuracy. The so-called unscented transformation has been extended to capture higher-order moments. Furthermore, higher-order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three-dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three-dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing Covariance Intersection. This combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight-line maneuvers. The third part of this dissertation addresses the design of controllers that incorporate knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points calculated by the unscented transformation. This set of points is used to design robust controllers which minimize a statistical performance measure of the plant over the domain of uncertainty, consisting of a combination of the mean and variance. The proposed technique is illustrated on three benchmark problems. The first relates to the design of prefilters for linear and nonlinear spring-mass-dashpot systems, and the second applies a feedback controller to a hovering helicopter. Lastly, the statistically robust controller design is applied to a concurrent feed-forward/feedback controller structure for a high-speed, low-tension tape drive.

  7. Robust extraction of basis functions for simultaneous and proportional myoelectric control via sparse non-negative matrix factorization

    NASA Astrophysics Data System (ADS)

    Lin, Chuang; Wang, Binghui; Jiang, Ning; Farina, Dario

    2018-04-01

    Objective. This paper proposes a novel simultaneous and proportional multiple degree of freedom (DOF) myoelectric control method for active prostheses. Approach. The approach is based on non-negative matrix factorization (NMF) of surface EMG signals with the inclusion of sparseness constraints. By applying a sparseness constraint to the control signal matrix, it is possible to extract the basis information from arbitrary movements (quasi-unsupervised approach) for multiple DOFs concurrently. Main Results. In online testing based on target hitting, able-bodied subjects reached a greater throughput (TP) when using sparse NMF (SNMF) than with classic NMF or with linear regression (LR). Accordingly, the completion time (CT) was shorter for SNMF than NMF or LR. The same observations were made in two patients with unilateral limb deficiencies. Significance. The addition of sparseness constraints to NMF allows for a quasi-unsupervised approach to myoelectric control with superior results with respect to previous methods for the simultaneous and proportional control of multi-DOF. The proposed factorization algorithm allows robust simultaneous and proportional control, is superior to previous supervised algorithms, and, because of minimal supervision, paves the way to online adaptation in myoelectric control.
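
    A common way to add a sparseness constraint to NMF is an L1 penalty on the activation matrix inside multiplicative updates. The sketch below is a generic penalized NMF of that kind, not the specific factorization from the paper.

    ```python
    import numpy as np

    def sparse_nmf(V, rank, sparsity=0.1, n_iter=200, eps=1e-9):
        """Multiplicative-update NMF, V ~ W @ H, with an L1 penalty on H."""
        rng = np.random.default_rng(0)
        n, m = V.shape
        W = rng.random((n, rank))
        H = rng.random((rank, m))
        for _ in range(n_iter):
            # The L1 penalty on H appears in the denominator of its update.
            H *= (W.T @ V) / (W.T @ W @ H + sparsity + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    V = np.abs(np.random.default_rng(1).random((16, 40)))
    W, H = sparse_nmf(V, rank=3)
    print(np.linalg.norm(V - W @ H))   # reconstruction error
    ```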

  8. Direct adaptive control of a PUMA 560 industrial robot

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Lee, Thomas; Delpech, Michel

    1989-01-01

    The implementation and experimental validation of a new direct adaptive control scheme on a PUMA 560 industrial robot is described. The testbed facility consists of a Unimation PUMA 560 six-jointed robot and controller, and a DEC MicroVAX II computer which hosts the Robot Control C Library software. The control algorithm is implemented on the MicroVAX which acts as a digital controller for the PUMA robot, and the Unimation controller is effectively bypassed and used merely as an I/O device to interface the MicroVAX to the joint motors. The control algorithm for each robot joint consists of an auxiliary signal generated by a constant-gain Proportional plus Integral plus Derivative (PID) controller, and an adaptive position-velocity (PD) feedback controller with adjustable gains. The adaptive independent joint controllers compensate for the inter-joint couplings and achieve accurate trajectory tracking without the need for the complex dynamic model and parameter values of the robot. Extensive experimental results on PUMA joint control are presented to confirm the feasibility of the proposed scheme, in spite of strong interactions between joint motions. Experimental results validate the capabilities of the proposed control scheme. The control scheme is extremely simple and computationally very fast for concurrent processing with high sampling rates.
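
    The per-joint control law can be sketched as a fixed-gain PID auxiliary signal plus a PD term whose gains adapt online. The gain values and the simple gradient-style adaptation rule below are illustrative assumptions, not the paper's exact adaptation law.

    ```python
    class AdaptiveJointController:
        """One joint: fixed-gain PID auxiliary signal + adaptive PD feedback."""

        def __init__(self, kp=50.0, ki=5.0, kd=2.0,
                     gamma_p=0.01, gamma_d=0.001, dt=0.001):
            self.kp, self.ki, self.kd = kp, ki, kd       # fixed PID gains
            self.kp_a = self.kd_a = 0.0                  # adaptive PD gains
            self.gamma_p, self.gamma_d = gamma_p, gamma_d
            self.dt, self.integral = dt, 0.0

        def update(self, q_des, q, qd_des, qd):
            e, ed = q_des - q, qd_des - qd
            self.integral += e * self.dt
            # Constant-gain PID auxiliary signal.
            aux = self.kp * e + self.ki * self.integral + self.kd * ed
            # Illustrative gradient-style gain adaptation driven by the error.
            self.kp_a += self.gamma_p * e * e * self.dt
            self.kd_a += self.gamma_d * e * ed * self.dt
            return aux + self.kp_a * e + self.kd_a * ed
    ```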

  9. Fully decentralized estimation and control for a modular wheeled mobile robot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mutambara, A.G.O.; Durrant-Whyte, H.F.

    2000-06-01

    In this paper, the problem of fully decentralized data fusion and control for a modular wheeled mobile robot (WMR) is addressed. This is a vehicle system with nonlinear kinematics, distributed multiple sensors, and nonlinear sensor models. The problem is solved by applying fully decentralized estimation and control algorithms based on the extended information filter. This is achieved by deriving a modular, decentralized kinematic model by using plane motion kinematics to obtain the forward and inverse kinematics for a generalized simple wheeled vehicle. This model is then used in the decentralized estimation and control algorithms. WMR estimation and control are thus obtained locally using reduced-order models. When communication of information between nodes is carried out after every measurement (full-rate communication), the estimates and control signals obtained at each node are equivalent to those obtained by a corresponding centralized system. Transputer architecture is used as the basis for hardware and software design, as it supports the extensive communication and concurrency requirements that characterize modular and decentralized systems. The advantages of a modular WMR vehicle include scalability, application flexibility, low prototyping costs, and high reliability.

  10. ALGORITHM OF CARDIO COMPLEX DETECTION AND SORTING FOR PROCESSING THE DATA OF CONTINUOUS CARDIO SIGNAL MONITORING.

    PubMed

    Krasichkov, A S; Grigoriev, E B; Nifontov, E M; Shapovalov, V V

    The paper presents an algorithm of cardio complex classification as part of processing the data of continuous cardiac monitoring. R-wave detection concurrently with cardio complex sorting is discussed. The core of this approach is the use of prior information about cardio complex forms, segmental structure, and degree of kindness. Results of testing the sorting algorithm are provided.

  11. Multiple concurrent recursive least squares identification with application to on-line spacecraft mass-property identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward (Inventor)

    2006-01-01

    The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications, one for each said group, are run, treating the other unknown parameters appearing in each regression equation as if they were known perfectly, with said values provided by the recursive least squares estimation of the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.
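
    In code, each group gets its own ordinary RLS estimator; the regressor for one group is built by plugging in the latest estimates from the others. Below is a standard RLS update for a single group (the alternation across groups and the spacecraft-specific regressions are not shown).

    ```python
    import numpy as np

    class GroupRLS:
        """Recursive least squares for one linearly-isolated parameter group."""

        def __init__(self, n, lam=0.99):
            self.theta = np.zeros(n)       # current parameter estimate
            self.P = 1e3 * np.eye(n)       # estimate covariance
            self.lam = lam                 # forgetting factor

        def step(self, phi, y):
            # phi is formed using the other groups' most recent estimates,
            # treated here as if they were known perfectly.
            k = self.P @ phi / (self.lam + phi @ self.P @ phi)
            self.theta = self.theta + k * (y - phi @ self.theta)
            self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
            return self.theta
    ```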

  12. Accuracy of a class of concurrent algorithms for transient finite element analysis

    NASA Technical Reports Server (NTRS)

    Ortiz, Michael; Sotelino, Elisa D.; Nour-Omid, Bahram

    1988-01-01

    The accuracy of a new class of concurrent procedures for transient finite element analysis is examined. A phase error analysis is carried out which shows that wave retardation leading to unacceptable loss of accuracy may occur if a Courant condition based on the dimensions of the subdomains is violated. Numerical tests suggest that this Courant condition is conservative for typical structural applications and may lead to a marked increase in accuracy as the number of subdomains is increased. Theoretical speed-up ratios are derived which suggest that the algorithms under consideration can be expected to exhibit a performance superior to that of globally implicit methods when implemented on parallel machines.

  13. An Elegant Sufficiency: Load-Aware Differentiated Scheduling of Data Transfers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kettimuthu, Rajkumar; Vardoyan, Gayane; Agrawal, Gagan

    2015-11-15

    We investigate the file transfer scheduling problem, where transfers among different endpoints must be scheduled to maximize pertinent metrics. We propose two new algorithms that exploit the fact that the aggregate bandwidth obtained over a network or at a storage system tends to increase with the number of concurrent transfers, but only up to a certain limit. The first algorithm, SEAL, uses runtime information and data-driven models to approximate system load and adapt transfer schedules and concurrency so as to maximize performance while avoiding saturation. We implement this algorithm using GridFTP as the transfer protocol and evaluate it using real transfer logs in a production WAN environment. Results show that SEAL can improve average slowdowns and turnaround times by up to 25% and worst-case slowdown and turnaround times by up to 50%, compared with the best-performing baseline scheme. Our second algorithm, STEAL, further leverages user-supplied categorization of transfers as either "interactive" (requiring immediate processing) or "batch" (less time-critical). Results show that STEAL reduces the average slowdown of interactive transfers by 63% compared to the best-performing baseline and by 21% compared to SEAL. For batch transfers, compared to the best-performing baseline, STEAL improves by 18% the utilization of the bandwidth unused by interactive transfers. By elegantly ensuring a sufficient, but not excessive, allocation of concurrency to the right transfers, we significantly improve overall performance despite constraints.
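
    The core idea of adapting concurrency to avoid saturation can be sketched as a simple increase/back-off rule: raise the number of concurrent transfers while aggregate throughput still improves, and reduce it once throughput flattens or drops. SEAL's actual data-driven load model is more sophisticated; the thresholds below are hypothetical.

    ```python
    def adapt_concurrency(current, throughput_history, step=2, lo=1, hi=64):
        """Adjust transfer concurrency based on recent aggregate throughput."""
        if len(throughput_history) < 2:
            return min(current + step, hi)   # probe upward initially
        last, prev = throughput_history[-1], throughput_history[-2]
        if last > 1.05 * prev:               # still gaining from more streams
            return min(current + step, hi)
        if last < 0.95 * prev:               # past the saturation point
            return max(current - step, lo)
        return current                       # on the plateau: hold steady
    ```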

  14. Coverability graphs for a class of synchronously executed unbounded Petri net

    NASA Technical Reports Server (NTRS)

    Stotts, P. David; Pratt, Terrence W.

    1990-01-01

    After detailing a variant of the concurrent-execution rule for firing of maximal subsets, in which the simultaneous firing of conflicting transitions is prohibited, an algorithm is constructed for generating the coverability graph of a net executed under this synchronous firing rule. The omega insertion criteria in the algorithm are shown to be valid for any net on which the algorithm terminates. It is accordingly shown that the set of nets on which the algorithm terminates includes the 'conflict-free' class.

  15. Archaeological field survey automation: concurrent multisensor site mapping and automated analysis

    NASA Astrophysics Data System (ADS)

    Józefowicz, Mateusz; Sokolov, Oleksandr; Meszyński, Sebastian; Siemińska, Dominika; Kołosowski, Przemysław

    2016-04-01

    ABM SE develops mobile robots (rovers) used for analog research of Mars exploration missions. The rovers are all-terrain exploration platforms carrying third-party payloads: scientific instrumentation. The "Wisdom" ground-penetrating radar for the ExoMars mission has been tested on board, as well as an electrical resistivity module and other devices. The robot has operated in various environments, such as the Central European countryside, the Dachstein ice caves, and the Sahara, Morocco (controlled remotely via satellite from Toruń, Poland). Currently ABM SE works on a local and global positioning system for a Mars rover based on image and IMU data; this is performed under a project from ESA. In the next Mars rover missions a Mars GIS model will be built, including an acquired GPR profile, DEM, and regular image data, integrated into a concurrent 3D terrain model. It is proposed to use a similar approach in surveys of archaeological sites, especially those where solid architecture remains can be expected at shallow depths or are partially exposed. It is possible to deploy a rover that will concurrently map a selected site with GPR and 2D and 3D cameras to create a site model. The rover image processing algorithms are capable of automatically tracing distinctive features (such as exposed structure remains on a desert ground, differences in color of the ground, etc.) and of marking regularities on a created map. It is also possible to correlate the 3D map with an aerial photo taken at any angle to achieve interpretation synergy. Currently the algorithms are an interpretation aid and their results must be confirmed by a human. The advantages of a rover over traditional approaches, such as a manual cart or a drone, include: a) long hours of continuous work, or work in unfavorable environments such as high desert, frozen water pools, or large areas; b) concurrent multisensor data acquisition; c) working from the ground level, which enables capturing of sites obstructed from the air (trees); d) the possibility of controlling the platform from a remote location via satellite, with only a servicing person on site and the survey team operating from their office, globally. The method is under development. The team contributing to the project also includes: Oleksii Sokolov, Michał Koepke, Krzysztof Rydel, Michał Stypczyński, Maciej Ślęk, Łukasz Zapała, Michał Dąbrowski.

  16. Weather Radar Studies

    DTIC Science & Technology

    1988-03-31

    Excerpt (garbled in source): in addition to radar operation and data-collection activities, a large data-analysis effort has been under way in support of automatic wind-shear detection algorithm development. The remainder of the excerpt is table-of-contents residue covering data reduction and algorithm development: general-purpose software, Concurrent Computer systems, Sun workstations, radar data analysis (algorithm verification, other studies, translations, outside distributions), and Mesonet/LLWAS data analysis.

  17. A block-based algorithm for the solution of compressible flows in rotor-stator combinations

    NASA Technical Reports Server (NTRS)

    Akay, H. U.; Ecer, A.; Beskok, A.

    1990-01-01

    A block-based solution algorithm is developed for the solution of compressible flows in rotor-stator combinations. The method allows concurrent solution of multiple solution blocks on parallel machines. It also allows a time-averaged interaction at the stator-rotor interfaces. Numerical results are presented to illustrate the performance of the algorithm. The effect of the interaction between the stator and rotor is evaluated.

  18. Partial Storage Optimization and Load Control Strategy of Cloud Data Centers

    PubMed Central

    2015-01-01

    We present a novel approach to solve cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of the files from multiple cloud nodes. Partitions of the files, rather than the full files, are saved on the cloud, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve the performance and optimize the storage usage by providing DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner. PMID:25973444
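
    The dual-direction idea can be sketched as follows: each partition is fetched from two replica nodes at once, one streaming from the head and one from the tail. The fetch function below is a placeholder for the actual transport; the names and the meeting-in-the-middle detail are illustrative, not from the paper.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def fetch(node, partition, direction):
        # Placeholder for the real transfer call; "forward" streams from the
        # head of the partition, "backward" from the tail, meeting midway.
        return f"{partition}:{direction}@{node}"

    def dual_direction_download(partitions, replicas):
        """Fetch each partition concurrently from two of its replica nodes."""
        results = {}
        with ThreadPoolExecutor() as pool:
            futures = {}
            for p in partitions:
                a, b = replicas[p]   # two nodes holding this partition
                futures[pool.submit(fetch, a, p, "forward")] = (p, "head")
                futures[pool.submit(fetch, b, p, "backward")] = (p, "tail")
            for fut, (p, half) in futures.items():
                results.setdefault(p, {})[half] = fut.result()
        return results

    print(dual_direction_download(["p0"], {"p0": ("nodeA", "nodeB")}))
    ```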

  19. Partial storage optimization and load control strategy of cloud data centers.

    PubMed

    Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela

    2015-01-01

    We present a novel approach to solve cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of the files from multiple cloud nodes. Partitions of the files, rather than the full files, are saved on the cloud, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve the performance and optimize the storage usage by providing DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner.

  20. Design and Implementation of Parallel Algorithms

    DTIC Science & Technology

    1992-05-01

    Excerpt of references (garbled in source): Alon, N., Y. Azar, and Y. Ravid [1990]. "Universal sequences for complete graphs," SIAM J. Discrete Math. Alon, N., A. Bar-Noy, N. Linial, and D... (truncated). Klein, P., S. A. Plotkin, C. Stein, and E. Tardos [1991]. "Faster approximation algorithms for the unit capacity concurrent..." (truncated in source).

  1. Experience with a Genetic Algorithm Implemented on a Multiprocessor Computer

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.; Sobieszczanski-Sobieski, Jaroslaw

    2000-01-01

    Numerical experiments were conducted to find out the extent to which a Genetic Algorithm (GA) may benefit from a multiprocessor implementation, considering, on one hand, that analyses of individual designs in a population are independent of each other so that they may be executed concurrently on separate processors, and, on the other hand, that there are some operations in a GA that cannot be so distributed. The algorithm experimented with was based on a Gaussian distribution rather than bit exchange in the GA reproductive mechanism, and the test case was a hub frame structure of up to 1080 design variables. The experimentation, engaging up to 128 processors, confirmed expectations of radical elapsed-time reductions compared with a conventional single-processor implementation. It also demonstrated that the time spent in the non-distributable parts of the algorithm and the attendant cross-processor communication may have a very detrimental effect on the efficient utilization of the multiprocessor machine and on the number of processors that can be used effectively in a concurrent manner. Three techniques were devised and tested to mitigate that effect, resulting in efficiency increasing to exceed 99 percent.
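
    The split between distributable and non-distributable work can be sketched as follows: fitness analyses are farmed out to a process pool, while selection and reproduction stay serial. The quadratic fitness and the Gaussian-perturbation reproduction below are illustrative stand-ins for the structural analysis and the paper's reproductive mechanism.

    ```python
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def fitness(design):
        # Stand-in for an independent structural analysis of one design.
        return -float(design @ design)

    def gaussian_ga(pop_size=32, n_vars=10, sigma=0.1, n_gen=50, workers=8):
        rng = np.random.default_rng(0)
        pop = rng.standard_normal((pop_size, n_vars))
        with ProcessPoolExecutor(max_workers=workers) as pool:
            for _ in range(n_gen):
                scores = list(pool.map(fitness, pop))     # concurrent analyses
                order = np.argsort(scores)[::-1]
                parents = pop[order[: pop_size // 2]]     # serial selection
                children = parents + sigma * rng.standard_normal(parents.shape)
                pop = np.vstack([parents, children])      # serial reproduction
        return pop[int(np.argmax([fitness(p) for p in pop]))]

    if __name__ == "__main__":
        print(gaussian_ga(n_gen=10))
    ```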

  2. 40 CFR 798.5460 - Rodent heritable translocation assays.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... treatment and control groups. (4) Control groups—(i) Concurrent controls. No concurrent positive or negative... control groups. Historical or concurrent controls shall be specified, as well as the randomization... SUBSTANCES CONTROL ACT (CONTINUED) HEALTH EFFECTS TESTING GUIDELINES Genetic Toxicity § 798.5460 Rodent...

  3. 40 CFR 798.5460 - Rodent heritable translocation assays.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... treatment and control groups. (4) Control groups—(i) Concurrent controls. No concurrent positive or negative... control groups. Historical or concurrent controls shall be specified, as well as the randomization... SUBSTANCES CONTROL ACT (CONTINUED) HEALTH EFFECTS TESTING GUIDELINES Genetic Toxicity § 798.5460 Rodent...

  4. 40 CFR 798.5460 - Rodent heritable translocation assays.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... treatment and control groups. (4) Control groups—(i) Concurrent controls. No concurrent positive or negative... control groups. Historical or concurrent controls shall be specified, as well as the randomization... SUBSTANCES CONTROL ACT (CONTINUED) HEALTH EFFECTS TESTING GUIDELINES Genetic Toxicity § 798.5460 Rodent...

  5. 40 CFR 798.5460 - Rodent heritable translocation assays.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... treatment and control groups. (4) Control groups—(i) Concurrent controls. No concurrent positive or negative... control groups. Historical or concurrent controls shall be specified, as well as the randomization... SUBSTANCES CONTROL ACT (CONTINUED) HEALTH EFFECTS TESTING GUIDELINES Genetic Toxicity § 798.5460 Rodent...

  6. 40 CFR 798.5460 - Rodent heritable translocation assays.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... treatment and control groups. (4) Control groups—(i) Concurrent controls. No concurrent positive or negative... control groups. Historical or concurrent controls shall be specified, as well as the randomization... SUBSTANCES CONTROL ACT (CONTINUED) HEALTH EFFECTS TESTING GUIDELINES Genetic Toxicity § 798.5460 Rodent...

  7. FACET - a "Flexible Artifact Correction and Evaluation Toolbox" for concurrently recorded EEG/fMRI data.

    PubMed

    Glaser, Johann; Beisteiner, Roland; Bauer, Herbert; Fischmeister, Florian Ph S

    2013-11-09

    In concurrent EEG/fMRI recordings, EEG data are impaired by fMRI gradient artifacts which exceed the EEG signal by several orders of magnitude. While several algorithms exist to correct the EEG data, these algorithms lack the flexibility to either leave out or add new steps. The open-source MATLAB toolbox FACET presented here is a modular toolbox for the fast and flexible correction and evaluation of imaging artifacts from concurrently recorded EEG datasets. It consists of an Analysis, a Correction and an Evaluation framework, allowing the user to choose from different artifact correction methods with various pre- and post-processing steps to form flexible combinations. The quality of the chosen correction approach can then be evaluated and compared to different settings. FACET was evaluated on a dataset provided with the FMRIB plugin for EEGLAB using two different correction approaches: Averaged Artifact Subtraction (AAS, Allen et al., NeuroImage 12(2):230-239, 2000) and the FMRI Artifact Slice Template Removal (FASTR, Niazy et al., NeuroImage 28(3):720-737, 2005). The obtained results were compared to the FASTR algorithm implemented in the EEGLAB plugin FMRIB. No differences were found between the FACET implementation of FASTR and the original algorithm across all gradient-artifact-relevant performance indices. The FACET toolbox not only provides facilities for all three modalities (data analysis, artifact correction, and evaluation and documentation of the results) but also offers an easily extendable framework for development and evaluation of new approaches.

  8. FACET – a “Flexible Artifact Correction and Evaluation Toolbox” for concurrently recorded EEG/fMRI data

    PubMed Central

    2013-01-01

    Background In concurrent EEG/fMRI recordings, EEG data are impaired by fMRI gradient artifacts which exceed the EEG signal by several orders of magnitude. While several algorithms exist to correct the EEG data, these algorithms lack the flexibility to either leave out or add new steps. The open-source MATLAB toolbox FACET presented here is a modular toolbox for the fast and flexible correction and evaluation of imaging artifacts from concurrently recorded EEG datasets. It consists of an Analysis, a Correction and an Evaluation framework, allowing the user to choose from different artifact correction methods with various pre- and post-processing steps to form flexible combinations. The quality of the chosen correction approach can then be evaluated and compared to different settings. Results FACET was evaluated on a dataset provided with the FMRIB plugin for EEGLAB using two different correction approaches: Averaged Artifact Subtraction (AAS, Allen et al., NeuroImage 12(2):230–239, 2000) and the FMRI Artifact Slice Template Removal (FASTR, Niazy et al., NeuroImage 28(3):720–737, 2005). The obtained results were compared to the FASTR algorithm implemented in the EEGLAB plugin FMRIB. No differences were found between the FACET implementation of FASTR and the original algorithm across all gradient-artifact-relevant performance indices. Conclusion The FACET toolbox not only provides facilities for all three modalities (data analysis, artifact correction, and evaluation and documentation of the results) but also offers an easily extendable framework for development and evaluation of new approaches. PMID:24206927

  9. A Survey of Singular Value Decomposition Methods and Performance Comparison of Some Available Serial Codes

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.

    2005-01-01

    This contractor report describes a performance comparison of available alternative complete Singular Value Decomposition (SVD) methods and implementations which are suitable for incorporation into point spread function deconvolution algorithms. The report also presents a survey of alternative algorithms, including partial SVDs, special-case SVDs, and others developed for concurrent processing systems.

  10. A Concurrent Implementation of the Cascade-Correlation Algorithm, Using the Time Warp Operating System

    NASA Technical Reports Server (NTRS)

    Springer, P.

    1993-01-01

    This paper discusses the method in which the Cascade-Correlation algorithm was parallelized in such a way that it could be run using the Time Warp Operating System (TWOS). TWOS is a special-purpose operating system designed to run parallel discrete event simulations with maximum efficiency on parallel or distributed computers.

  11. Maintaining and Enhancing Diversity of Sampled Protein Conformations in Robotics-Inspired Methods.

    PubMed

    Abella, Jayvee R; Moll, Mark; Kavraki, Lydia E

    2018-01-01

    The ability to efficiently sample structurally diverse protein conformations allows one to gain a high-level view of a protein's energy landscape. Algorithms from robot motion planning have been used for conformational sampling, and several of these algorithms promote diversity by keeping track of "coverage" in conformational space based on the local sampling density. However, large proteins present special challenges. In particular, larger systems require running many concurrent instances of these algorithms, but these algorithms can quickly become memory intensive because they typically keep previously sampled conformations in memory to maintain coverage estimates. In addition, robotics-inspired algorithms depend on defining useful perturbation strategies for exploring the conformational space, which is a difficult task for large proteins because such systems are typically more constrained and exhibit complex motions. In this article, we introduce two methodologies for maintaining and enhancing diversity in robotics-inspired conformational sampling. The first method addresses algorithms based on coverage estimates and leverages the use of a low-dimensional projection to define a global coverage grid that maintains coverage across concurrent runs of sampling. The second method is an automatic definition of a perturbation strategy through readily available flexibility information derived from B-factors, secondary structure, and rigidity analysis. Our results show a significant increase in the diversity of the conformations sampled for proteins consisting of up to 500 residues when applied to a specific robotics-inspired algorithm for conformational sampling. The methodologies presented in this article may be vital components for the scalability of robotics-inspired approaches.
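
    The first method, as described, keeps only visit counts over a grid in a low-dimensional projection, so coverage can be shared cheaply across concurrent runs without storing conformations. A minimal sketch, with the projector, bounds, and weighting scheme all assumed for illustration:

    ```python
    import numpy as np

    class CoverageGrid:
        """Global coverage grid over a low-dimensional projection of conformations."""

        def __init__(self, projector, lo, hi, cells_per_dim=32):
            self.projector = projector               # conformation -> R^k
            self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)
            self.n = cells_per_dim
            self.counts = {}                         # sparse cell -> visit count

        def _cell(self, conf):
            p = (self.projector(conf) - self.lo) / (self.hi - self.lo)
            return tuple(np.clip((p * self.n).astype(int), 0, self.n - 1))

        def record(self, conf):
            c = self._cell(conf)
            self.counts[c] = self.counts.get(c, 0) + 1

        def weight(self, conf):
            # Favor expansion from sparsely visited cells to enhance diversity.
            return 1.0 / (1 + self.counts.get(self._cell(conf), 0))

    grid = CoverageGrid(lambda x: x[:2], lo=[-1, -1], hi=[1, 1])
    grid.record(np.array([0.2, 0.3, 0.9]))
    print(grid.weight(np.array([0.2, 0.3, 0.1])))   # lower weight, already visited
    ```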

  12. A real-time expert system for self-repairing flight control

    NASA Technical Reports Server (NTRS)

    Gaither, S. A.; Agarwal, A. K.; Shah, S. C.; Duke, E. L.

    1989-01-01

    An integrated environment for specifying, prototyping, and implementing a self-repairing flight-control (SRFC) strategy is described. At an interactive workstation, the user can select paradigms such as rule-based expert systems, state-transition diagrams, and signal-flow graphs and hierarchically nest them, assign timing and priority attributes, establish blackboard-type communication, and specify concurrent execution on single or multiple processors. High-fidelity nonlinear simulations of aircraft and SRFC systems can be performed off-line, with the possibility of changing SRFC rules, inference strategies, and other heuristics to correct for control deficiencies. Finally, the off-line-generated SRFC can be transformed into highly optimized application-specific real-time C-language code. An application of this environment to the design of aircraft fault detection, isolation, and accommodation algorithms is presented in detail.

  13. Numerical algorithms for finite element computations on concurrent processors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1986-01-01

    The work of several graduate students that relates to the NASA grant is briefly summarized. One student has worked on a detailed analysis of the so-called ijk forms of Gaussian elimination and Cholesky factorization on concurrent processors. Another student has worked on the vectorization of the incomplete Cholesky conjugate gradient method on the CYBER 205. Two more students implemented various versions of Gaussian elimination and Cholesky factorization on the FLEX/32.

  14. Concurrent optimization of material spatial distribution and material anisotropy repartition for two-dimensional structures

    NASA Astrophysics Data System (ADS)

    Ranaivomiarana, Narindra; Irisarri, François-Xavier; Bettebghor, Dimitri; Desmorat, Boris

    2018-04-01

    An optimization methodology for concurrently finding the material spatial distribution and the material anisotropy repartition is proposed for orthotropic, linear, elastic two-dimensional membrane structures. The shape of the structure is parameterized by a density variable that determines the presence or absence of material. The polar method is used to parameterize a general orthotropic material by its elasticity tensor invariants by change of frame. A global structural stiffness maximization problem, written as a compliance minimization problem, is treated, and a volume constraint is applied. The compliance minimization can be recast as a double minimization of complementary energy. An extension of the alternate directions algorithm is proposed to solve the double minimization problem. The algorithm iterates between local minimizations in each element of the structure and global minimizations. Thanks to the polar method, the local minimizations are solved explicitly, providing analytical solutions. The global minimizations are performed with finite element calculations. The method is shown to be straightforward and efficient. Concurrent optimization of the density and anisotropy distribution of a cantilever beam and a bridge are presented.

  15. Combinatorial Optimization by Amoeba-Based Neurocomputer with Chaotic Dynamics

    NASA Astrophysics Data System (ADS)

    Aono, Masashi; Hirata, Yoshito; Hara, Masahiko; Aihara, Kazuyuki

    We demonstrate a computing system based on an amoeba of a true slime mold Physarum capable of producing rich spatiotemporal oscillatory behavior. Our system operates as a neurocomputer because an optical feedback control in accordance with a recurrent neural network algorithm leads the amoeba's photosensitive branches to search for a stable configuration concurrently. We show our system's capability of solving the traveling salesman problem. Furthermore, we apply various types of nonlinear time series analysis to the amoeba's oscillatory behavior in the problem-solving process. The results suggest that an individual amoeba might be characterized as a set of coupled chaotic oscillators.

  16. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods

    NASA Astrophysics Data System (ADS)

    Maximov, Ivan I.; Vinding, Mads S.; Tse, Desmond H. Y.; Nielsen, Niels Chr.; Shah, N. Jon

    2015-05-01

    There is an increasing need for the development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field-of-view imaging and MR spectroscopy. The 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim to demonstrate that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is in focus. We have chosen three popular OC algorithms: two gradient-based, concurrent methods using first- and second-order derivatives, respectively, and a third that belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom, and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach, as computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.

  17. A programmable two-qubit quantum processor in silicon.

    PubMed

    Watson, T F; Philips, S G J; Kawakami, E; Ward, D R; Scarlino, P; Veldhorst, M; Savage, D E; Lagally, M G; Friesen, Mark; Coppersmith, S N; Eriksson, M A; Vandersypen, L M K

    2018-03-29

    Now that it is possible to achieve measurement and control fidelities for individual quantum bits (qubits) above the threshold for fault tolerance, attention is moving towards the difficult task of scaling up the number of physical qubits to the large numbers that are needed for fault-tolerant quantum computing. In this context, quantum-dot-based spin qubits could have substantial advantages over other types of qubit owing to their potential for all-electrical operation and ability to be integrated at high density onto an industrial platform. Initialization, readout and single- and two-qubit gates have been demonstrated in various quantum-dot-based qubit representations. However, as seen with small-scale demonstrations of quantum computers using other types of qubit, combining these elements leads to challenges related to qubit crosstalk, state leakage, calibration and control hardware. Here we overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch-Jozsa algorithm and the Grover search algorithm, canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85-89 per cent and concurrences of 73-82 per cent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.
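
    The entanglement figure quoted, concurrence, has a closed form for two qubits (Wootters): from the spin-flipped density matrix one takes the square roots of the eigenvalues in decreasing order and computes C = max(0, λ1 − λ2 − λ3 − λ4). A short worked check on a Bell state, which has concurrence 1:

    ```python
    import numpy as np

    def concurrence(rho):
        """Wootters concurrence of a two-qubit density matrix (4x4)."""
        sy = np.array([[0, -1j], [1j, 0]])
        yy = np.kron(sy, sy)
        r = rho @ yy @ rho.conj() @ yy            # rho (sy x sy) rho* (sy x sy)
        lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(r).real)))[::-1]
        return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

    # |Phi+> = (|00> + |11>)/sqrt(2)
    v = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    bell = np.outer(v, v.conj())
    print(round(concurrence(bell), 6))            # -> 1.0
    ```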

  18. Automatic and Robust Delineation of the Fiducial Points of the Seismocardiogram Signal for Non-invasive Estimation of Cardiac Time Intervals.

    PubMed

    Khosrow-Khavar, Farzad; Tavakolian, Kouhyar; Blaber, Andrew; Menon, Carlo

    2016-10-12

    The purpose of this research was to design a delineation algorithm that could detect specific fiducial points of the seismocardiogram (SCG) signal with or without using the electrocardiogram (ECG) R-wave as the reference point. The detected fiducial points were used to estimate cardiac time intervals. Due to the complexity and sensitivity of the SCG signal, the algorithm was designed to robustly discard low-quality cardiac cycles, which are the ones that contain unrecognizable fiducial points. The algorithm was trained on a dataset containing 48,318 manually annotated cardiac cycles. It was then applied to three test datasets: 65 young healthy individuals (dataset 1), 15 individuals above 44 years old (dataset 2), and 25 patients with previous heart conditions (dataset 3). The algorithm accomplished high prediction accuracy with a root-mean-square error of less than 5 ms for all the test datasets. The algorithm's overall mean detection rates per individual recording (DRI) were 74, 68, and 42 percent for the three test datasets when concurrent ECG and SCG were used. For the standalone SCG case, the mean DRIs were 32, 14, and 21 percent. When the proposed algorithm was applied to concurrent ECG and SCG signals, the desired fiducial points of the SCG signal were successfully estimated with a high detection rate. For the standalone case, however, the algorithm achieved high prediction accuracy and detection rate only for the young individual dataset. The presented algorithm could be used for accurate and non-invasive estimation of cardiac time intervals.

  19. The Development of Design Guides for the Implementation of Multiprocessing Element Systems.

    DTIC Science & Technology

    1985-09-01

    Table-of-contents excerpt: Conclusions; 4. Implementation of CHILL signals communication primitives on a distributed system; 4.1 Architecture of a distributed system; 4.2 Algorithm for the SEND signal operation; 4.3 Algorithm for the... From the text: "...elements operating concurrently. Such Multi Processing-element Systems are clearly going to be complex and it is important that the designers of such..."

  20. Reinforcement interval type-2 fuzzy controller design by online rule generation and q-value-aided ant colony optimization.

    PubMed

    Juang, Chia-Feng; Hsu, Chia-Hung

    2009-12-01

    This paper proposes a new reinforcement-learning method using online rule generation and Q-value-aided ant colony optimization (ORGQACO) for fuzzy controller design. The fuzzy controller is based on an interval type-2 fuzzy system (IT2FS). The antecedent part in the designed IT2FS uses interval type-2 fuzzy sets to improve controller robustness to noise. There are initially no fuzzy rules in the IT2FS. The ORGQACO concurrently designs both the structure and parameters of an IT2FS. We propose an online interval type-2 rule generation method for the evolution of system structure and flexible partitioning of the input space. Consequent part parameters in an IT2FS are designed using Q-values and the reinforcement local-global ant colony optimization algorithm. This algorithm selects the consequent part from a set of candidate actions according to ant pheromone trails and Q-values, both of which are updated using reinforcement signals. The ORGQACO design method is applied to the following three control problems: 1) truck-backing control; 2) magnetic-levitation control; and 3) chaotic-system control. The ORGQACO is compared with other reinforcement-learning methods to verify its efficiency and effectiveness. Comparisons with type-1 fuzzy systems verify the noise robustness property of using an IT2FS.

  1. SOI layout decomposition for double patterning lithography on high-performance computer platforms

    NASA Astrophysics Data System (ADS)

    Verstov, Vladimir; Zinchenko, Lyudmila; Makarchuk, Vladimir

    2014-12-01

    In this paper, silicon-on-insulator layout decomposition algorithms for double patterning lithography on high-performance computing platforms are discussed. Our approach is based on the use of a contradiction graph and a modified concurrent breadth-first search algorithm. We evaluate our technique on the 45 nm Nangate Open Cell Library, including non-Manhattan geometry. Experimental results show that our soft computing algorithms decompose the layout successfully and that the minimal distance between polygons in the layout is increased.
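
    Double patterning decomposition is commonly cast as two-colouring of the conflict (contradiction) graph, which a breadth-first search does in linear time; an odd conflict cycle signals an infeasible layout. The sketch below illustrates that core idea only, not the authors' concurrent, non-Manhattan-aware implementation:

    from collections import deque

    def two_color(n, conflicts):
        """Assign each of n layout polygons to mask 0 or 1 so that no two
        conflicting (too-close) polygons share a mask; returns None if an
        odd conflict cycle makes the decomposition infeasible."""
        adj = [[] for _ in range(n)]
        for u, v in conflicts:
            adj[u].append(v)
            adj[v].append(u)
        color = [-1] * n
        for start in range(n):
            if color[start] != -1:
                continue
            color[start] = 0
            queue = deque([start])
            while queue:                      # breadth-first traversal
                u = queue.popleft()
                for v in adj[u]:
                    if color[v] == -1:
                        color[v] = 1 - color[u]
                        queue.append(v)
                    elif color[v] == color[u]:
                        return None           # odd cycle: needs a stitch or respacing
        return color

    # A chain of conflicts decomposes; a triangle of conflicts does not.
    print(two_color(4, [(0, 1), (1, 2), (2, 3)]))  # -> [0, 1, 0, 1]
    print(two_color(3, [(0, 1), (1, 2), (2, 0)]))  # -> None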

  2. Solving a mathematical model integrating unequal-area facilities layout and part scheduling in a cellular manufacturing system by a genetic algorithm.

    PubMed

    Ebrahimi, Ahmad; Kia, Reza; Komijan, Alireza Rashidi

    2016-01-01

    In this article, a novel integrated mixed-integer nonlinear programming model is presented for designing a cellular manufacturing system (CMS), considering machine layout and part scheduling problems simultaneously as interrelated decisions. The integrated CMS model is formulated to incorporate several design features including part due dates, material handling time, operation sequences, processing times, an intra-cell layout of unequal-area facilities, and part scheduling. The objective function is to minimize makespan, tardiness penalties, and the material handling costs of inter-cell and intra-cell movements. Two numerical examples are solved with the Lingo software to illustrate the results obtained by the incorporated features. In order to assess the effects and importance of integrating machine layout and part scheduling in designing a CMS, two approaches, sequential and concurrent, are investigated, and the improvement resulting from the concurrent approach is revealed. Also, due to the NP-hardness of the integrated model, an efficient genetic algorithm is designed. Computational results of this study indicate that the best solutions found by the GA are better than those found by B&B, in much less time, for both the sequential and concurrent approaches. Moreover, comparisons between the objective function values (OFVs) obtained by the sequential and concurrent approaches demonstrate that the OFV improvement is on average around 17% with the GA and 14% with B&B.

  3. Mitigation of adverse interactions in pairs of clinical practice guidelines using constraint logic programming.

    PubMed

    Wilk, Szymon; Michalowski, Wojtek; Michalowski, Martin; Farion, Ken; Hing, Marisela Mainegra; Mohapatra, Subhra

    2013-04-01

    We propose a new method to mitigate (identify and address) adverse interactions (drug-drug or drug-disease) that occur when a patient with comorbid diseases is managed according to two concurrently applied clinical practice guidelines (CPGs). A lack of methods to facilitate the concurrent application of CPGs severely limits their use in clinical practice and the development of such methods is one of the grand challenges for clinical decision support. The proposed method responds to this challenge. We introduce and formally define logical models of CPGs and other related concepts, and develop the mitigation algorithm that operates on these concepts. In the algorithm we combine domain knowledge encoded as interaction and revision operators using the constraint logic programming (CLP) paradigm. The operators characterize adverse interactions and describe revisions to logical models required to address these interactions, while CLP allows us to efficiently solve the logical models - a solution represents a feasible therapy that may be safely applied to a patient. The mitigation algorithm accepts two CPGs and available (likely incomplete) patient information. It reports whether mitigation has been successful or not, and on success it gives a feasible therapy and points at identified interactions (if any) together with the revisions that address them. Thus, we consider the mitigation algorithm as an alerting tool to support a physician in the concurrent application of CPGs that can be implemented as a component of a clinical decision support system. We illustrate our method in the context of two clinical scenarios involving a patient with duodenal ulcer who experiences an episode of transient ischemic attack. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems.

    PubMed

    Shen, Lili; Guo, Jiming; Wang, Lei

    2018-06-06

    The network real-time kinematic (RTK) technique can provide centimeter-level real-time positioning solutions and plays a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in the support of large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we proposed a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution with different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the data set, while the grid approaches are all predefined. The side-effects of the clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
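
    The record does not spell out the SOSC procedure itself; as a rough illustration of the idea, the sketch below clusters users online against a 10 km radius threshold, seeding a new cluster whenever no centre is close enough. The planar coordinates and running-mean update are simplifying assumptions:

    import numpy as np

    def online_cluster(points, radius_km=10.0):
        """Minimal online spatial clustering sketch: each incoming user joins
        the nearest existing cluster centre within radius_km, otherwise it
        seeds a new cluster. Coordinates are planar km for simplicity (real
        GNSS processing would use geodetic distances)."""
        centers, members = [], []
        for p in points:
            p = np.asarray(p, dtype=float)
            if centers:
                d = [np.linalg.norm(p - c) for c in centers]
                i = int(np.argmin(d))
                if d[i] <= radius_km:
                    members[i].append(p)                      # join nearest cluster
                    centers[i] = np.mean(members[i], axis=0)  # update its centre
                    continue
            centers.append(p)                                 # seed a new cluster
            members.append([p])
        return centers, members

    users = [(0, 0), (3, 4), (50, 50), (52, 49), (1, -2)]
    centers, members = online_cluster(users)
    print(len(centers), "clusters")   # -> 2 clusters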

  5. Direct infusion mass spectrometry metabolomics dataset: a benchmark for data processing and quality control

    PubMed Central

    Kirwan, Jennifer A; Weber, Ralf J M; Broadhurst, David I; Viant, Mark R

    2014-01-01

    Direct-infusion mass spectrometry (DIMS) metabolomics is an important approach for characterising molecular responses of organisms to disease, drugs and the environment. Increasingly large-scale metabolomics studies are being conducted, necessitating improvements in both bioanalytical and computational workflows to maintain data quality. This dataset represents a systematic evaluation of the reproducibility of a multi-batch DIMS metabolomics study of cardiac tissue extracts. It comprises twenty biological samples (cow vs. sheep) that were analysed repeatedly, in 8 batches across 7 days, together with a concurrent set of quality control (QC) samples. Data are presented from each step of the workflow and are available in MetaboLights. The strength of the dataset is that intra- and inter-batch variation can be corrected using QC spectra and the quality of this correction assessed independently using the repeatedly-measured biological samples. Originally designed to test the efficacy of a batch-correction algorithm, it will enable others to evaluate novel data processing algorithms. Furthermore, this dataset serves as a benchmark for DIMS metabolomics, derived using best-practice workflows and rigorous quality assessment. PMID:25977770

  6. 23 CFR 751.23 - Concurrent junkyard control and right-of-way projects.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 23 Highways 1 2014-04-01 2014-04-01 false Concurrent junkyard control and right-of-way projects. 751.23 Section 751.23 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RIGHT-OF-WAY AND ENVIRONMENT JUNKYARD CONTROL AND ACQUISITION § 751.23 Concurrent junkyard control and right-of...

  7. 23 CFR 751.23 - Concurrent junkyard control and right-of-way projects.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 23 Highways 1 2011-04-01 2011-04-01 false Concurrent junkyard control and right-of-way projects. 751.23 Section 751.23 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RIGHT-OF-WAY AND ENVIRONMENT JUNKYARD CONTROL AND ACQUISITION § 751.23 Concurrent junkyard control and right-of...

  8. 23 CFR 751.23 - Concurrent junkyard control and right-of-way projects.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 23 Highways 1 2012-04-01 2012-04-01 false Concurrent junkyard control and right-of-way projects. 751.23 Section 751.23 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RIGHT-OF-WAY AND ENVIRONMENT JUNKYARD CONTROL AND ACQUISITION § 751.23 Concurrent junkyard control and right-of...

  9. 23 CFR 751.23 - Concurrent junkyard control and right-of-way projects.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 23 Highways 1 2013-04-01 2013-04-01 false Concurrent junkyard control and right-of-way projects. 751.23 Section 751.23 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RIGHT-OF-WAY AND ENVIRONMENT JUNKYARD CONTROL AND ACQUISITION § 751.23 Concurrent junkyard control and right-of...

  10. A novel approach to quality improvement in a safety-net practice: concurrent peer review visits.

    PubMed

    Fiscella, Kevin; Volpe, Ellen; Winters, Paul; Brown, Melissa; Idris, Amna; Harren, Tricia

    2010-12-01

    Concurrent peer review visits are structured office visits conducted by clinician peers of the primary care clinician that are specifically designed to reduce competing demands, clinical inertia, and bias. We assessed whether a single concurrent peer review visit reduced clinical inertia and improved control of hypertension, hyperlipidemia, and diabetes among underserved patients. We conducted a randomized encouragement trial to evaluate concurrent peer review visits within a community health center. Seven hundred twenty-seven patients with hypertension, hyperlipidemia, and/or diabetes who were not at goal for systolic blood pressure (SBP), low-density lipoprotein cholesterol (LDL-C), and/or glycated hemoglobin (A1c) were randomly assigned to an invitation to participate in a concurrent peer review visit or to usual care. We compared changes in these measures using mixed models, and rates of therapeutic intensification during concurrent peer review visits with control visits. One hundred seventy-one patients completed a concurrent peer review visit. SBP improved significantly (p < .01) more among those completing concurrent peer review visits than among those who failed to respond to a concurrent peer review invitation or those randomized to usual care. No differences were seen for changes in LDL-C or A1c. Concurrent peer review visits were associated with statistically significantly greater clinician intensification of blood pressure (p < .001), lipid (p < .001), and diabetes (p < .005) treatment than control visits for patients in either the nonresponse group or the usual care group. Concurrent peer review visits represent a promising strategy for improving blood pressure control and therapeutic intensification in community health centers.

  11. Ad hoc cost analysis of the new gastrointestinal bleeding algorithm in patients with ventricular assist device.

    PubMed

    Hirose, Hitoshi; Sarosiek, Konrad; Cavarocchi, Nicholas C

    2014-01-01

    Gastrointestinal bleed (GIB) is a known complication in patients receiving nonpulsatile ventricular assist devices (VAD). Previously, we reported a new algorithm for the workup of GIB in VAD patients using deep bowel enteroscopy. In this new algorithm, patients underwent fewer procedures, received less transfusions, and took less time to make the diagnosis than the traditional GIB algorithm group. Concurrently, we reviewed the cost-effectiveness of this new algorithm compared with the traditional workup. The procedure charges for the diagnosis and treatment of each episode of GIB was ~ $2,902 in the new algorithm group versus ~ $9,013 in the traditional algorithm group (p < 0.0001). Following the new algorithm in VAD patients with GIB resulted in fewer transfusions and diagnostic tests while attaining a substantial cost savings per episode of bleeding.

  12. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)

    2000-01-01

    A new MDO method, BLISS, and two variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and are evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a system optimization into several subtask optimizations that may be executed concurrently, and a system optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited for exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including the local sensitivity analysis, local optimization, and response surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.
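
    The concurrency pattern described above, independent subtask optimizations coordinated by a system-level step, can be sketched with a process pool; the quadratic "disciplines" below are toy stand-ins, not the BLISS subproblems themselves:

    from concurrent.futures import ProcessPoolExecutor

    def optimize_subsystem(args):
        """Toy local optimisation: gradient descent on a*(x - b)**2,
        standing in for one discipline's subtask optimization."""
        a, b = args
        x = 0.0
        for _ in range(200):
            x -= 0.1 * 2.0 * a * (x - b)   # step along the negative gradient
        return x

    if __name__ == "__main__":
        subproblems = [(1.0, 2.0), (3.0, -1.0), (0.5, 4.0)]
        with ProcessPoolExecutor() as pool:   # subtasks execute concurrently
            local_optima = list(pool.map(optimize_subsystem, subproblems))
        # Trivial stand-in for the coordinating system-level optimization:
        print("subsystem optima:", local_optima)
        print("system variable :", sum(local_optima) / len(local_optima))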

  13. Coherence number as a discrete quantum resource

    NASA Astrophysics Data System (ADS)

    Chin, Seungbeom

    2017-10-01

    We introduce a discrete coherence monotone named the coherence number, which is a generalization of the coherence rank to mixed states. After defining the coherence number in a manner similar to that of the Schmidt number in entanglement theory, we present a necessary and sufficient condition of the coherence number for a coherent state to be converted to an entangled state of nonzero k concurrence (a member of the generalized concurrence family with 2 ≤ k ≤ d). As an application of the coherence number to a practical quantum system, Grover's search algorithm of N items is considered. We show that the coherence number remains N and falls abruptly when the success probability of a searching process becomes maximal. This phenomenon motivates us to analyze the depletion pattern of Cc(N) (the last member of the generalized coherence concurrence, nonzero when the coherence number is N), which turns out to be an optimal resource for the process since it is completely consumed to finish the searching task. The generalization of the original Grover algorithm with arbitrary (mixed) initial states is also discussed, which reveals the boundary condition for the coherence to be monotonically decreasing under the process.
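
    A quick way to reproduce the "abrupt fall at maximal success probability" picture is the standard amplitude formula for Grover search: after k iterations the success probability is sin^2((2k+1)θ) with sin θ = 1/√N. A short numeric check (the choice of N and the iteration range are arbitrary):

    import math

    # Locate the iteration count where Grover's success probability peaks,
    # which is where the text above says the coherence resource is consumed.
    N = 1 << 10                        # 1024 items, one marked
    theta = math.asin(1.0 / math.sqrt(N))
    probs = [math.sin((2 * k + 1) * theta) ** 2 for k in range(60)]
    k_best = max(range(len(probs)), key=probs.__getitem__)
    print("optimal iterations :", k_best)     # ~ (pi/4)*sqrt(N) ≈ 25
    print("success probability:", round(probs[k_best], 6))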

  14. A High-Level Language for Modeling Algorithms and Their Properties

    NASA Astrophysics Data System (ADS)

    Akhtar, Sabina; Merz, Stephan; Quinson, Martin

    Designers of concurrent and distributed algorithms usually express them using pseudo-code. In contrast, most verification techniques are based on more mathematically-oriented formalisms such as state transition systems. This conceptual gap contributes to hinder the use of formal verification techniques. Leslie Lamport introduced PlusCal, a high-level algorithmic language that has the "look and feel" of pseudo-code, but is equipped with a precise semantics and includes a high-level expression language based on set theory. PlusCal models can be compiled to TLA+ and verified using the TLC model checker.

  15. On a concurrent element-by-element preconditioned conjugate gradient algorithm for multiple load cases

    NASA Technical Reports Server (NTRS)

    Watson, Brian; Kamat, M. P.

    1990-01-01

    Element-by-element preconditioned conjugate gradient (EBE-PCG) algorithms have been advocated for use in parallel/vector processing environments as being superior to the conventional LDL^T decomposition algorithm for single load cases. Although there may be some advantages in using such algorithms for a single load case, when it comes to situations involving multiple load cases, the LDL^T decomposition algorithm would appear to be decidedly more cost-effective. The authors have outlined an EBE-PCG algorithm suitable for multiple load cases and compared its effectiveness to the highly efficient LDL^T decomposition scheme. The proposed algorithm offers almost no advantages over the LDL^T algorithm for the linear problems investigated on the Alliant FX/8. However, there may be some merit in the algorithm in solving nonlinear problems with load incrementation, but that remains to be investigated.
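
    For concreteness, a generic preconditioned conjugate gradient loop with a simple Jacobi (diagonal) preconditioner is sketched below, rerun once per right-hand side; this restart-per-load-case cost is exactly what the abstract contrasts with a reusable LDL^T factorization. The element-by-element preconditioner itself is not reproduced here:

    import numpy as np

    def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
        """Preconditioned conjugate gradients with a diagonal (Jacobi)
        preconditioner; an EBE preconditioner would replace M_inv_diag
        with an element-level operator."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv_diag * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv_diag * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    n = 50                                               # SPD model problem
    A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1))
    M_inv = 1.0 / np.diag(A)
    # Each load case restarts the iteration, unlike a reusable LDL^T factor.
    for rhs in [np.ones(n), np.linspace(0, 1, n)]:
        x = pcg(A, rhs, M_inv)
        print("residual:", np.linalg.norm(rhs - A @ x))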

  16. 23 CFR 751.23 - Concurrent junkyard control and right-of-way projects.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...-way projects. The State is encouraged to coordinate junkyard control and highway right-of-way projects. Expenses incurred in furtherance of concurrent projects shall be prorated between projects. ... 23 Highways 1 2010-04-01 2010-04-01 false Concurrent junkyard control and right-of-way projects...

  17. Simulator for concurrent processing data flow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.; Stoughton, John W.; Mielke, Roland R.

    1992-01-01

    A software simulator capable of simulating the execution of an algorithm graph on a given system under the Algorithm to Architecture Mapping Model (ATAMM) rules is presented. ATAMM is capable of modeling the execution of large-grained algorithms on distributed data flow architectures. Investigating the behavior and determining the performance of an ATAMM-based system requires the aid of software tools. The ATAMM Simulator presented here is capable of determining the performance of a system without having to build a hardware prototype. Case studies are performed on four algorithms to demonstrate the capabilities of the ATAMM Simulator. Simulated results are shown to be comparable to the experimental results of the Advanced Development Model System.

  18. Communications oriented programming of parallel iterative solutions of sparse linear systems

    NASA Technical Reports Server (NTRS)

    Patrick, M. L.; Pratt, T. W.

    1986-01-01

    Parallel algorithms are developed for a class of scientific computational problems by partitioning the problems into smaller problems which may be solved concurrently. The effectiveness of the resulting parallel solutions is determined by the amount and frequency of communication and synchronization and the extent to which communication can be overlapped with computation. Three different parallel algorithms for solving the same class of problems are presented, and their effectiveness is analyzed from this point of view. The algorithms are programmed using a new programming environment. Run-time statistics and experience obtained from the execution of these programs assist in measuring the effectiveness of these algorithms.

  19. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods.

    PubMed

    Maximov, Ivan I; Vinding, Mads S; Tse, Desmond H Y; Nielsen, Niels Chr; Shah, N Jon

    2015-05-01

    There is an increasing need for development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses with many applications of clinical relevance, e.g., reduced field-of-view imaging and MR spectroscopy. 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast numbers of controls and multiple constraints. With this study we aim to demonstrate that numerical optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is in focus. We have chosen three popular OC algorithms: two gradient-based, concurrent methods using first- and second-order derivatives, respectively, and a third that belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom and an in vivo human head. Taking the challenging experimental setup into consideration, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach when computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Towards Unmanned Systems for Dismounted Operations in the Canadian Forces

    DTIC Science & Technology

    2011-01-01

    Excerpt: "...LIDAR, and RADAR) and lower power/mass, passive imaging techniques such as structure from motion and simultaneous localisation and mapping (SLAM)... sensors and learning algorithms. Simultaneous localisation and mapping: SLAM algorithms concurrently estimate a robot pose and a map of unique... locations and vehicle pose are part of the SLAM state vector and are estimated in each update step. AISS developed a monocular camera-based SLAM..."

  1. The influence of omniscient technology on cryptography

    NASA Astrophysics Data System (ADS)

    Huang, Weihong; Li, Jian

    2009-07-01

    Scholars agree that concurrent algorithms are an interesting new topic in the field of cyberinformatics, and hackers worldwide concur. In fact, few end-users would disagree with the evaluation of architecture. We propose a Bayesian tool for harnessing massive multiplayer online role-playing games (FIRER), which we use to prove that the well-known ubiquitous algorithm for the improvement of wide-area networks by Karthik Lakshminarayanan is in Co-NP.

  2. Optimal Regulation of Virtual Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall Anese, Emiliano; Guggilam, Swaroop S.; Simonetto, Andrea

    This paper develops a real-time algorithmic framework for aggregations of distributed energy resources (DERs) in distribution networks to provide regulation services in response to transmission-level requests. Leveraging online primal-dual-type methods for time-varying optimization problems and suitable linearizations of the nonlinear AC power-flow equations, we believe this work establishes the system-theoretic foundation to realize the vision of distribution-level virtual power plants. The optimization framework controls the output powers of dispatchable DERs such that, in aggregate, they respond to automatic-generation-control and/or regulation-services commands. This is achieved while concurrently regulating voltages within the feeder and maximizing customers' and utility's performance objectives. Convergence and tracking capabilities are analytically established under suitable modeling assumptions. Simulations are provided to validate the proposed approach.
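
    The primal-dual machinery referenced above can be illustrated on a toy constrained problem: gradient descent on the Lagrangian in the primal variable, projected gradient ascent in the multiplier. This is a generic Arrow-Hurwicz-style sketch, not the paper's AC power-flow formulation:

    # Toy problem: minimise f(x) = (x - 3)^2 subject to g(x) = x - 1 <= 0,
    # whose optimum is x = 1 with KKT multiplier lambda = 4.
    f_grad = lambda x: 2.0 * (x - 3.0)
    g = lambda x: x - 1.0            # g(x) <= 0 encodes the constraint x <= 1
    g_grad = lambda x: 1.0

    x, lam, step = 0.0, 0.0, 0.05
    for _ in range(2000):
        x -= step * (f_grad(x) + lam * g_grad(x))   # primal descent step
        lam = max(0.0, lam + step * g(x))           # projected dual ascent step
    print(round(x, 4), round(lam, 4))               # -> ~1.0 and ~4.0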

  3. Adaptive control for accelerators

    DOEpatents

    Eaton, Lawrie E.; Jachim, Stephen P.; Natter, Eckard F.

    1991-01-01

    An adaptive feedforward control loop is provided to stabilize accelerator beam loading of the radio frequency field in an accelerator cavity during successive pulses of the beam into the cavity. A digital signal processor enables an adaptive algorithm to generate a feedforward error correcting signal functionally determined by the feedback error obtained by a beam pulse loading the cavity after the previous correcting signal was applied to the cavity. Each cavity feedforward correcting signal is successively stored in the digital processor and modified by the feedback error resulting from its application to generate the next feedforward error correcting signal. A feedforward error correcting signal is generated by the digital processor in advance of the beam pulse to enable a composite correcting signal and the beam pulse to arrive concurrently at the cavity.
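
    The pulse-to-pulse adaptation described in this patent abstract has the shape of an iterative-learning update: the stored feedforward correction is nudged by the error measured on each pulse, so a repeatable disturbance is learned out. A scalar toy model (all values illustrative, not the patent's cavity dynamics):

    disturbance = 1.0      # repeatable beam-loading droop (arbitrary units)
    gain = 0.5             # learning gain; 0 < gain < 2 converges here
    u = 0.0                # stored feedforward correction
    for pulse in range(8):
        error = disturbance - u       # residual error seen on this pulse
        u += gain * error             # refine the correction for the next pulse
        print(f"pulse {pulse}: residual error = {error:+.4f}")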

  4. Algorithms and software for nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.

    1989-01-01

    The objective of this research is to develop efficient methods for explicit time integration in nonlinear structural dynamics for computers which utilize both concurrency and vectorization. As a framework for these studies, the program WHAMS, which is described in Explicit Algorithms for the Nonlinear Dynamics of Shells (T. Belytschko, J. I. Lin, and C.-S. Tsay, Computer Methods in Applied Mechanics and Engineering, Vol. 42, 1984, pp 225 to 251), is used. There are two factors which make the development of efficient concurrent explicit time integration programs a challenge in a structural dynamics program: (1) the need for a variety of element types, which complicates the scheduling-allocation problem; and (2) the need for different time steps in different parts of the mesh, which is here called mixed delta t integration, so that a few stiff elements do not reduce the time steps throughout the mesh.

  5. Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2012-01-01

    With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.

  6. Reliability and validity of bilateral ankle accelerometer algorithms for activity recognition and walking speed after stroke.

    PubMed

    Dobkin, Bruce H; Xu, Xiaoyu; Batalin, Maxim; Thomas, Seth; Kaiser, William

    2011-08-01

    Outcome measures of mobility for large stroke trials are limited to timed walks for short distances in a laboratory, step counters and ordinal scales of disability and quality of life. Continuous monitoring and outcome measurements of the type and quantity of activity in the community would provide direct data about daily performance, including compliance with exercise and skills practice during routine care and clinical trials. Twelve adults with impaired ambulation from hemiparetic stroke and 6 healthy controls wore triaxial accelerometers on their ankles. Walking speed for repeated outdoor walks was determined by machine-learning algorithms and compared to a stopwatch calculation of speed for distances not known to the algorithm. The reliability of recognizing walking, exercise, and cycling by the algorithms was compared to activity logs. A high correlation was found between stopwatch-measured outdoor walking speed and algorithm-calculated speed (Pearson coefficient, 0.98; P=0.001) and for repeated measures of algorithm-derived walking speed (P=0.01). Bouts of walking >5 steps, variations in walking speed, cycling, stair climbing, and leg exercises were correctly identified during a day in the community. Compared to healthy subjects, those with stroke were, as expected, more sedentary and slower, and their gait revealed high paretic-to-unaffected leg swing ratios. Test-retest reliability and concurrent and construct validity are high for activity pattern-recognition Bayesian algorithms developed from inertial sensors. This ratio scale data can provide real-world monitoring and outcome measurements of lower extremity activities and walking speed for stroke and rehabilitation studies.

  7. Multiresolution molecular mechanics: Implementation and efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biyikli, Emre; To, Albert C., E-mail: albertto@pitt.edu

    Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called the Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction of the number of the atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3-8.5 times in efficiency, over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.

  8. Multiresolution molecular mechanics: Implementation and efficiency

    NASA Astrophysics Data System (ADS)

    Biyikli, Emre; To, Albert C.

    2017-01-01

    Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called the Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction of the number of the atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3-8.5 times in efficiency, over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.

  9. Accurate and consistent automatic seismocardiogram annotation without concurrent ECG.

    PubMed

    Laurin, A; Khosrow-Khavar, F; Blaber, A P; Tavakolian, Kouhyar

    2016-09-01

    Seismocardiography (SCG) is the measurement of vibrations in the sternum caused by the beating of the heart. Precise cardiac mechanical timings that are easily obtained from SCG are critically dependent on accurate identification of fiducial points. So far, SCG annotation has relied on concurrent ECG measurements. An algorithm capable of annotating SCG without the use of any other concurrent measurement was designed. We subjected 18 participants to graded lower body negative pressure. We collected ECG and SCG, obtained R peaks from the former, and annotated the latter by hand, using these identified peaks. We also annotated the SCG automatically. We compared the isovolumic moment timings obtained by hand to those obtained using our algorithm. Mean ± confidence interval of the percentage of accurately annotated cardiac cycles were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for levels of negative pressure 0, -20, -30, -40, and -50 mmHg. LF/HF ratios, the relative power of low-frequency variations to high-frequency variations in heart beat intervals, obtained from isovolumic moments were also compared to those obtained from R peaks. The mean differences ± confidence interval were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for increasing levels of negative pressure. The accuracy and consistency of the algorithm enables the use of SCG as a stand-alone heart monitoring tool in healthy individuals at rest, and could serve as a basis for an eventual application in pathological cases.

  10. A Comprehensive Two-Dimensional Retention Time Alignment Algorithm To Enhance Chemometric Analysis of Comprehensive Two-Dimensional Separation Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierce, Karisa M.; Wood, Lianna F.; Wright, Bob W.

    2005-12-01

    A comprehensive two-dimensional (2D) retention time alignment algorithm was developed using a novel indexing scheme. The algorithm is termed comprehensive because it functions to correct the entire chromatogram in both dimensions and it preserves the separation information in both dimensions. Although the algorithm is demonstrated by correcting comprehensive two-dimensional gas chromatography (GC x GC) data, the algorithm is designed to correct shifting in all forms of 2D separations, such as LC x LC, LC x CE, CE x CE, and LC x GC. This 2D alignment algorithm was applied to three different data sets composed of replicate GC x GC separations of (1) three 22-component control mixtures, (2) three gasoline samples, and (3) three diesel samples. The three data sets were collected using slightly different temperature or pressure programs to engender significant retention time shifting in the raw data and then demonstrate subsequent corrections of that shifting upon comprehensive 2D alignment of the data sets. Thirty 12-min GC x GC separations from three 22-component control mixtures were used to evaluate the 2D alignment performance (10 runs/mixture). The average standard deviation of the first column retention time improved 5-fold from 0.020 min (before alignment) to 0.004 min (after alignment). Concurrently, the average standard deviation of second column retention time improved 4-fold from 3.5 ms (before alignment) to 0.8 ms (after alignment). Alignment of the 30 control mixture chromatograms took 20 min. The quantitative integrity of the GC x GC data following 2D alignment was also investigated. The mean integrated signal was determined for all components in the three 22-component mixtures for all 30 replicates. The average percent difference in the integrated signal for each component before and after alignment was 2.6%. Singular value decomposition (SVD) was applied to the 22-component control mixture data before and after alignment to show the restoration of trilinearity to the data, since trilinearity benefits chemometric analysis. By applying comprehensive 2D retention time alignment to all three data sets (control mixtures, gasoline samples, and diesel samples), classification by principal component analysis (PCA) substantially improved, resulting in 100% accurate scores clustering.

  11. Adaptive Traffic Route Control in QoS Provisioning for Cognitive Radio Technology with Heterogeneous Wireless Systems

    NASA Astrophysics Data System (ADS)

    Yamamoto, Toshiaki; Ueda, Tetsuro; Obana, Sadao

    As one of the dynamic spectrum access technologies, “cognitive radio technology,” which aims to improve spectrum efficiency, has been studied. In cognitive radio networks, each node recognizes radio conditions and, according to them, optimizes its wireless communication routes. Cognitive radio systems integrate heterogeneous wireless systems not only by switching between them but also by aggregating and utilizing them simultaneously. The adaptive control of switchover use and concurrent use of various wireless systems will offer stable and flexible wireless communication. In this paper, we propose an adaptive traffic route control scheme that provides high quality of service (QoS) for cognitive radio technology, and examine the performance of the proposed scheme through field trials and computer simulations. The results of the field trials show that adaptive route control according to radio conditions improves user IP throughput by more than 20% and reduces one-way delay to less than 1/6 with the concurrent use of IEEE802.16 and IEEE802.11 wireless media. Moreover, the simulation results assuming hundreds of mobile terminals reveal that the number of users receiving the required QoS of voice over IP (VoIP) service and the total network throughput of FTP users more than double at the same time with the proposed algorithm. The proposed adaptive traffic route control scheme can enhance the performance of cognitive radio technologies by providing appropriate communication routes for various applications to satisfy their required QoS.

  12. Concurrent initialization for Bearing-Only SLAM.

    PubMed

    Munguía, Rodrigo; Grau, Antoni

    2010-01-01

    Simultaneous Localization and Mapping (SLAM) is perhaps the most fundamental problem to solve in robotics in order to build truly autonomous mobile robots. The sensors have a large impact on the algorithm used for SLAM. Early SLAM approaches focused on the use of range sensors such as sonar rings or lasers. However, cameras have become more and more widely used, because they yield a lot of information and are well adapted for embedded systems: they are light, cheap and power saving. Unlike range sensors, which provide range and angular information, a camera is a projective sensor which measures the bearing of image features. Therefore depth information (range) cannot be obtained in a single step. This fact has prompted the emergence of a new family of SLAM algorithms: the Bearing-Only SLAM methods, which mainly rely on special techniques for feature system-initialization in order to enable the use of bearing sensors (such as cameras) in SLAM systems. In this work a novel and robust method, called Concurrent Initialization, is presented which is inspired by having the complementary advantages of the Undelayed and Delayed methods that represent the most common approaches for addressing the problem. The key is to use concurrently two kinds of feature representations for both the undelayed and delayed stages of the estimation. The simulation results show that the proposed method surpasses the performance of previous schemes.

  13. A Comparison of PETSC Library and HPF Implementations of an Archetypal PDE Computation

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Keyes, David E.; Mehrotra, Piyush

    1997-01-01

    Two paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation: a nonlinear, structured-grid partial differential equation boundary value problem, using the same algorithm on the same hardware. Both paradigms, parallel libraries represented by Argonne's PETSC, and parallel languages represented by the Portland Group's HPF, are found to be easy to use for this problem class, and both are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required by the application programmer under either paradigm includes specification of the data partitioning (corresponding to a geometrically simple decomposition of the domain of the PDE). Programming in SPMD style for the PETSC library requires writing the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm, introducing concurrency through subdomain blocking (an effort similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Correctness and scalability are cross-validated on up to 32 nodes of an IBM SP2.

  14. Adapting a Navier-Stokes code to the ICL-DAP

    NASA Technical Reports Server (NTRS)

    Grosch, C. E.

    1985-01-01

    The results of an experiment are reported, i.e., to adapt a Navier-Stokes code, originally developed on a serial computer, to concurrent processing on the ICL Distributed Array Processor (DAP). The algorithm used in solving the Navier-Stokes equations is briefly described. The architecture of the DAP and DAP FORTRAN are also described. The modifications of the algorithm so as to fit the DAP are given and discussed. Finally, performance results are given and conclusions are drawn.

  15. Dynamically Reconfigurable Systolic Array Accelerators

    NASA Technical Reports Server (NTRS)

    Dasu, Aravind (Inventor); Barnes, Robert C. (Inventor)

    2014-01-01

    A polymorphic systolic array framework that works in conjunction with an embedded microprocessor on an FPGA and allows for dynamic and complementary scaling of the acceleration levels of two algorithms active concurrently on the FPGA. Use is made of systolic arrays and hardware-software co-design to obtain an efficient multi-application acceleration system. The flexible and simple framework allows hosting of a broader range of algorithms and is extendable to more complex applications in the area of aerospace embedded systems.

  16. Geo-information processing service composition for concurrent tasks: A QoS-aware game theory approach

    NASA Astrophysics Data System (ADS)

    Li, Haifeng; Zhu, Qing; Yang, Xiaoxia; Xu, Linrong

    2012-10-01

    Typical characteristics of remote sensing applications are concurrent tasks, such as those found in disaster rapid response. Existing composition approaches for geographical information processing service chains search for an optimal solution in what can be deemed a "selfish" way. This leads to conflicts amongst concurrent tasks and decreases the performance of all service chains. In this study, a non-cooperative game-based mathematical model to analyse the competitive relationships between tasks is proposed. A best response function is used to assure that each task maintains utility optimisation by considering the composition strategies of other tasks and quantifying conflicts between tasks. Based on this, an iterative algorithm that converges to a Nash equilibrium is presented, the aim being to provide good convergence and maximise the utilisation of all tasks under concurrent task conditions. Theoretical analyses and experiments showed that the newly proposed method, when compared to existing service composition methods, has better practical utility in all tasks.
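
    Best-response iteration of the kind described above can be sketched with a toy congestion game: each task picks one of two service instances and pays the load on its choice, and repeated best responses converge because such games admit a potential function. Names and payoffs are illustrative, not the paper's model:

    def best_response_dynamics(n_tasks=5, n_resources=2, max_rounds=100):
        """Each task repeatedly switches to the least-loaded resource if that
        strictly lowers its cost; a fixed point is a Nash equilibrium."""
        choice = [0] * n_tasks                       # all tasks start on resource 0
        for _ in range(max_rounds):
            changed = False
            for i in range(n_tasks):
                loads = [sum(1 for j, c in enumerate(choice) if c == r and j != i)
                         for r in range(n_resources)]
                best = min(range(n_resources), key=lambda r: loads[r])
                if loads[best] < loads[choice[i]]:   # strictly better -> switch
                    choice[i] = best
                    changed = True
            if not changed:                          # no task wants to deviate:
                return choice                        # Nash equilibrium reached
        return choice

    print(best_response_dynamics())   # -> [1, 1, 0, 0, 0]: a 2/3 split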

  17. Adaptive Highly Flexible Multifunctional Wings for Active and Passive Control and Energy Harvesting with Piezoelectric Materials

    NASA Astrophysics Data System (ADS)

    Tsushima, Natsuki

    The purpose of this dissertation is to develop an analytical framework to analyze highly flexible multifunctional wings with integral active and passive control and energy harvesting using piezoelectric transduction. Such multifunctional wings can be designed to enhance aircraft flight performance, especially to support long-endurance flights and to be adaptive to various flight conditions. This work also demonstrates the feasibility of the concept of piezoelectric multifunctional wings for the concurrent active control and energy harvesting to improve the aeroelastic performance of high-altitude long-endurance unmanned air vehicles. Functions of flutter suppression, gust alleviation, energy generation, and energy storage are realized for the performance improvement. The multifunctional wings utilize active and passive piezoelectric effects for the efficient adaptive control and energy harvesting. An energy storage with thin-film lithium-ion battery cells is designed for harvested energy accumulation. Piezoelectric effects are included in a strain-based geometrically nonlinear beam formulation for the numerical studies. The resulting structural dynamic equations are coupled with a finite-state unsteady aerodynamic formulation, allowing for piezoelectric energy harvesting and active actuation with the nonlinear aeroelastic system. This development helps to provide an integral electro-aeroelastic solution of concurrent active piezoelectric control and energy harvesting for wing vibrations, with the consideration of the geometrical nonlinear effects of slender multifunctional wings. A multifunctional structure for active actuation is designed by introducing anisotropic piezoelectric laminates. Linear quadratic regulator and linear quadratic Gaussian controllers are implemented for the active control of wing vibrations including post-flutter limit-cycle oscillations and gust perturbation. An adaptive control algorithm for gust perturbation is then developed. In this research, the active piezoelectric actuation is applied as the primary approach for flutter suppression, with energy harvesting, as a secondary passive approach, concurrently working to provide an additional damping effect on the wing vibration. The multifunctional wing also generates extra energy from residual wing vibration. This research presents a comprehensive approach for an effective flutter suppression and gust alleviation of highly flexible piezoelectric wings, while allowing to harvest the residual vibration energy. Numerical results with the multifunctional wing concept show the potential to improve the aircraft performance from both aeroelastic stability and energy consumption aspects.
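
    As a pointer to the control machinery named above, a minimal continuous-time LQR design for a single lightly damped structural mode is sketched below using SciPy's Riccati solver; the matrices are illustrative placeholders, not the dissertation's aeroelastic model:

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # One lightly damped mode, state x = [displacement, velocity], with a
    # hypothetical piezoelectric actuation input.
    wn, zeta = 2.0 * np.pi * 5.0, 0.02          # 5 Hz mode, 2% damping
    A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
    B = np.array([[0.0], [1.0]])
    Q = np.diag([wn**2, 1.0])                   # penalise strain energy + velocity
    R = np.array([[1e-3]])                      # cheap control

    P = solve_continuous_are(A, B, Q, R)        # Riccati solution
    K = np.linalg.solve(R, B.T @ P)             # optimal state-feedback gain
    print("gain K:", K)
    print("closed-loop poles:", np.linalg.eigvals(A - B @ K))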

  18. CUDASW++ 3.0: accelerating Smith-Waterman protein database search by coupling CPU and GPU SIMD instructions.

    PubMed

    Liu, Yongchao; Wirawan, Adrianto; Schmidt, Bertil

    2013-04-04

    The maximal sensitivity for local alignments makes the Smith-Waterman algorithm a popular choice for protein sequence database search based on pairwise alignment. However, the algorithm is compute-intensive due to a quadratic time complexity. Corresponding runtimes are further compounded by the rapid growth of sequence databases. We present CUDASW++ 3.0, a fast Smith-Waterman protein database search algorithm, which couples CPU and GPU SIMD instructions and carries out concurrent CPU and GPU computations. For the CPU computation, this algorithm employs SSE-based vector execution units as accelerators. For the GPU computation, we have investigated for the first time a GPU SIMD parallelization, which employs CUDA PTX SIMD video instructions to gain more data parallelism beyond the SIMT execution model. Moreover, sequence alignment workloads are automatically distributed over CPUs and GPUs based on their respective compute capabilities. Evaluation on the Swiss-Prot database shows that CUDASW++ 3.0 gains a performance improvement over CUDASW++ 2.0 of up to 2.9 and 3.2 times, with a maximum performance of 119.0 and 185.6 GCUPS, on a single-GPU GeForce GTX 680 and a dual-GPU GeForce GTX 690 graphics card, respectively. In addition, our algorithm has demonstrated significant speedups over other top-performing tools: SWIPE and BLAST+. CUDASW++ 3.0 is written in CUDA C++ and PTX assembly languages, targeting GPUs based on the Kepler architecture. This algorithm obtains significant speedups over its predecessor, CUDASW++ 2.0, by benefiting from the use of CPU and GPU SIMD instructions as well as the concurrent execution on CPUs and GPUs. The source code and the simulated data are available at http://cudasw.sourceforge.net.
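
    The recurrence that CUDASW++ parallelises is the classic Smith-Waterman dynamic programme. A plain serial sketch with a linear gap penalty (scoring values illustrative) makes the quadratic cost visible:

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        """Best local alignment score of strings a and b (linear gap model)."""
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                s = match if a[i - 1] == b[j - 1] else mismatch
                H[i][j] = max(0,                    # local alignment can restart
                              H[i - 1][j - 1] + s,  # substitution
                              H[i - 1][j] + gap,    # gap in b
                              H[i][j - 1] + gap)    # gap in a
                best = max(best, H[i][j])
        return best

    print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))  # local alignment score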

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shamis, Pavel; Graham, Richard L; Gorentla Venkata, Manjunath

    The scalability and performance of collective communication operations limit the scalability and performance of many scientific applications. This paper presents two new blocking and nonblocking Broadcast algorithms for communicators with arbitrary communication topology, and studies their performance. These algorithms benefit from increased concurrency and a reduced memory footprint, making them suitable for use on large-scale systems. Measuring small, medium, and large data Broadcasts on a Cray-XT5, using 24,576 MPI processes, the Cheetah algorithms outperform the native MPI on that system by 51%, 69%, and 9%, respectively, at the same process count. These results demonstrate an algorithmic approach to the implementation of the important class of collective communications, which is high performing, scalable, and also uses resources in a scalable manner.
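
    For background, the classic baseline that such Broadcast algorithms refine is the binomial-tree schedule, in which every rank that already holds the data forwards it to a rank a power-of-two away each round. The sketch below only prints that schedule; a real implementation would issue MPI sends and receives:

    def binomial_broadcast_schedule(n_ranks, root=0):
        """Print the send schedule of a binomial-tree broadcast from root 0:
        log2(n_ranks) rounds, doubling the set of informed ranks each round."""
        have = {root}
        r = 0
        while len(have) < n_ranks:
            step = 1 << r
            for src in sorted(have):          # snapshot of informed ranks
                dst = src + step
                if dst < n_ranks and dst not in have:
                    print(f"round {r}: rank {src} -> rank {dst}")
                    have.add(dst)
            r += 1

    binomial_broadcast_schedule(8)   # 3 rounds for 8 ranks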

  20. Preventing Run-Time Bugs at Compile-Time Using Advanced C++

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neswold, Richard

    When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used for examples in illustrating these techniques.

  1. An Element-Based Concurrent Partitioner for Unstructured Finite Element Meshes

    NASA Technical Reports Server (NTRS)

    Ding, Hong Q.; Ferraro, Robert D.

    1996-01-01

    A concurrent partitioner for partitioning unstructured finite element meshes on distributed memory architectures is developed. The partitioner uses an element-based partitioning strategy. Its main advantage over the more conventional node-based partitioning strategy is its modular programming approach to the development of parallel applications. The partitioner first partitions element centroids using a recursive inertial bisection algorithm. Elements and nodes then migrate according to the partitioned centroids, using a data request communication template for unpredictable incoming messages. Our scalable implementation is contrasted to a non-scalable implementation which is a straightforward parallelization of a sequential partitioner.
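
    One step of the recursive inertial bisection named above can be sketched compactly: project the element centroids onto their principal (largest-variance) axis and split at the median, which balances the two halves. The data here are synthetic:

    import numpy as np

    def inertial_bisect(centroids):
        """Split a set of element centroids into two balanced halves along
        the principal axis of their inertia (covariance) matrix; applying
        this recursively yields 2^k balanced element partitions."""
        c = centroids - centroids.mean(axis=0)   # centre the point cloud
        w, v = np.linalg.eigh(c.T @ c)           # inertia matrix eigenpairs
        axis = v[:, -1]                          # principal (longest) axis
        proj = c @ axis
        left = proj <= np.median(proj)           # median split -> load balance
        return left, ~left

    rng = np.random.default_rng(0)
    pts = rng.normal(size=(100, 2)) * np.array([5.0, 1.0])  # elongated cloud
    left, right = inertial_bisect(pts)
    print(left.sum(), right.sum())   # -> 50 50 elements per half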

  2. Minimizing inner product data dependencies in conjugate gradient iteration

    NASA Technical Reports Server (NTRS)

    Vanrosendale, J.

    1983-01-01

    The amount of concurrency available in conjugate gradient iteration is limited by the summations required in the inner product computations. The inner product of two vectors of length N requires time c log(N), if N or more processors are available. This paper describes an algebraic restructuring of the conjugate gradient algorithm which minimizes data dependencies due to inner product calculations. After an initial start-up, the new algorithm can perform a conjugate gradient iteration in time c log(log(N)).
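    To make the bottleneck concrete, here is the standard (unrestructured) conjugate gradient iteration; the two inner products per iteration, dot(r,r) and dot(p,Ap), are the global reductions whose c log(N) latency sits on the critical path. This is a sketch of the baseline algorithm, not Van Rosendale's reformulation.

    ```cpp
    // Standard CG for a symmetric positive definite operator A, passed as a
    // callable. The two dot() calls per iteration are the reductions that
    // serialize the parallel computation.
    #include <cstdio>
    #include <vector>

    using Vec = std::vector<double>;

    double dot(const Vec& x, const Vec& y) {
        double s = 0;
        for (size_t i = 0; i < x.size(); ++i) s += x[i] * y[i];
        return s;   // on P processors this is the log(P)-depth reduction
    }

    template <typename MatVec>
    Vec cg(MatVec A, Vec b, int iters) {
        Vec x(b.size(), 0.0), r = b, p = r;
        double rr = dot(r, r);                        // inner product #1
        for (int k = 0; k < iters && rr > 1e-20; ++k) {
            Vec Ap = A(p);
            double alpha = rr / dot(p, Ap);           // inner product #2
            for (size_t i = 0; i < x.size(); ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
            double rr_new = dot(r, r);
            double beta = rr_new / rr; rr = rr_new;
            for (size_t i = 0; i < p.size(); ++i) p[i] = r[i] + beta * p[i];
        }
        return x;
    }

    int main() {
        // Toy diagonal operator diag(1,2,3); exact solution is (1, 0.5, 0.333...).
        auto A = [](const Vec& v) {
            Vec y(v.size());
            for (size_t i = 0; i < v.size(); ++i) y[i] = double(i + 1) * v[i];
            return y;
        };
        Vec x = cg(A, Vec{1.0, 1.0, 1.0}, 10);
        std::printf("x = %.3f %.3f %.3f\n", x[0], x[1], x[2]);
    }
    ```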

  3. Interprocedural Analysis and the Verification of Concurrent Programs

    DTIC Science & Technology

    2009-01-01

    The single-source path expression (SSPE) problem is to compute a regular expression that represents paths(s, v) for all vertices v in the graph. The syntax of regular expressions is as follows: r ::= ∅ | ε | e | r1 ∪ r2 | r1.r2 | r*, where e stands for an edge in G. We can use any algorithm for SSPE to compute regular expressions for ... a closed representation of loops provides an exponential speedup. Tarjan's path-expression algorithm solves the SSPE problem efficiently. It uses ...

  4. Study of the mapping of Navier-Stokes algorithms onto multiple-instruction/multiple-data-stream computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.; Stevens, K.

    1984-01-01

    Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.

  5. Abstraction and Assume-Guarantee Reasoning for Automated Software Verification

    NASA Technical Reports Server (NTRS)

    Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.

    2004-01-01

    Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework and we show how COMFORT outperforms several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.

  6. A HIERARCHICAL STOCHASTIC MODEL OF LARGE SCALE ATMOSPHERIC CIRCULATION PATTERNS AND MULTIPLE STATION DAILY PRECIPITATION

    EPA Science Inventory

    A stochastic model of weather states and concurrent daily precipitation at multiple precipitation stations is described. Four algorithms are investigated for classification of daily weather states: k-means, fuzzy clustering, principal components, and principal components coupled with ...

  7. A hierarchical, automated target recognition algorithm for a parallel analog processor

    NASA Technical Reports Server (NTRS)

    Woodward, Gail; Padgett, Curtis

    1997-01-01

    A hierarchical approach is described for an automated target recognition (ATR) system, VIGILANTE, that uses a massively parallel, analog processor (3DANN). The 3DANN processor is capable of performing 64 concurrent inner products of size 1x4096 every 250 nanoseconds.

  8. Developing a novel hierarchical approach for multiscale structural reliability predictions for ultra-high consequence applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emery, John M.; Coffin, Peter; Robbins, Brian A.

    Microstructural variabilities are among the predominant sources of uncertainty in structural performance and reliability. We seek to develop efficient algorithms for multiscale calculations for polycrystalline alloys such as aluminum alloy 6061-T6 in environments where ductile fracture is the dominant failure mode. Our approach employs concurrent multiscale methods, but does not focus on their development. They are a necessary but not sufficient ingredient to multiscale reliability predictions. We have focused on how to efficiently use concurrent models for forward propagation because practical applications cannot include fine-scale details throughout the problem domain due to exorbitant computational demand. Our approach begins with a low-fidelity prediction at the engineering scale that is subsequently refined with multiscale simulation. The results presented in this report focus on plasticity and damage at the meso-scale, efforts to expedite Monte Carlo simulation with microstructural considerations, modeling aspects regarding geometric representation of grains and second-phase particles, and contrasting algorithms for scale coupling.
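    As a minimal illustration of the forward-propagation loop the report aims to expedite, the sketch below estimates a failure probability by plain Monte Carlo; the Gaussian strength/load limit-state function is an invented stand-in for the expensive multiscale simulation that each sample would actually require.

    ```cpp
    // Plain Monte Carlo estimate of a failure probability P[g(X) < 0].
    // In the report's setting, each evaluation of g() would be a costly
    // multiscale simulation, which is why expediting this loop matters.
    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(42);
        std::normal_distribution<double> strength(350.0, 25.0);  // hypothetical yield stress, MPa
        std::normal_distribution<double> load(250.0, 40.0);      // hypothetical applied stress, MPa

        const int N = 1'000'000;
        int failures = 0;
        for (int i = 0; i < N; ++i)
            if (strength(rng) - load(rng) < 0.0)  // limit state g(X) = strength - load
                ++failures;

        double pf = double(failures) / N;
        std::printf("P_f ~ %.4g (std. err. ~ %.2g)\n", pf, std::sqrt(pf * (1 - pf) / N));
    }
    ```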

  9. A split finite element algorithm for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1979-01-01

    An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.

  10. Active Learning Using Hint Information.

    PubMed

    Li, Chun-Liang; Ferng, Chun-Sung; Lin, Hsuan-Tien

    2015-08-01

    The abundance of real-world data and limited labeling budget calls for active learning, an important learning paradigm for reducing human labeling efforts. Many recently developed active learning algorithms consider both uncertainty and representativeness when making querying decisions. However, exploiting representativeness with uncertainty concurrently usually requires tackling sophisticated and challenging learning tasks, such as clustering. In this letter, we propose a new active learning framework, called hinted sampling, which takes both uncertainty and representativeness into account in a simpler way. We design a novel active learning algorithm within the hinted sampling framework with an extended support vector machine. Experimental results validate that the novel active learning algorithm can result in a better and more stable performance than that achieved by state-of-the-art algorithms. We also show that the hinted sampling framework allows improving another active learning algorithm designed from the transductive support vector machine.

  11. 40 CFR 798.5275 - Sex-linked recessive lethal test in drosophila melanogaster.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Control groups—(i) Concurrent controls. Concurrent positive and negative (vehicle) controls shall be... the negative (vehicle) control group shall be determined by the availability of appropriate laboratory... the appropriate control group will strongly influence the number of treated chromosomes that must be...

  12. 40 CFR 798.5275 - Sex-linked recessive lethal test in drosophila melanogaster.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Control groups—(i) Concurrent controls. Concurrent positive and negative (vehicle) controls shall be... the negative (vehicle) control group shall be determined by the availability of appropriate laboratory... the appropriate control group will strongly influence the number of treated chromosomes that must be...

  13. 40 CFR 798.5275 - Sex-linked recessive lethal test in drosophila melanogaster.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Control groups—(i) Concurrent controls. Concurrent positive and negative (vehicle) controls shall be... the negative (vehicle) control group shall be determined by the availability of appropriate laboratory... the appropriate control group will strongly influence the number of treated chromosomes that must be...

  14. 40 CFR 798.5275 - Sex-linked recessive lethal test in drosophila melanogaster.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Control groups—(i) Concurrent controls. Concurrent positive and negative (vehicle) controls shall be... the negative (vehicle) control group shall be determined by the availability of appropriate laboratory... the appropriate control group will strongly influence the number of treated chromosomes that must be...

  15. 40 CFR 798.5275 - Sex-linked recessive lethal test in drosophila melanogaster.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Control groups—(i) Concurrent controls. Concurrent positive and negative (vehicle) controls shall be... the negative (vehicle) control group shall be determined by the availability of appropriate laboratory... the appropriate control group will strongly influence the number of treated chromosomes that must be...

  16. Effect of concurrent walking and interlocutor distance on conversational speech intensity and rate in Parkinson's disease.

    PubMed

    McCaig, Cassandra M; Adams, Scott G; Dykstra, Allyson D; Jog, Mandar

    2016-01-01

    Previous studies have demonstrated a negative effect of concurrent walking and talking on gait in Parkinson's disease (PD) but there is limited information about the effect of concurrent walking on speech production. The present study examined the effect of sitting, standing, and three concurrent walking tasks (slow, normal, fast) on conversational speech intensity and speech rate in fifteen individuals with hypophonia related to idiopathic PD and fourteen age-equivalent controls. Interlocutor (talker-to-talker) distance effects and walking speed were also examined. Concurrent walking was found to produce a significant increase in speech intensity, relative to standing and sitting, in both the control and PD groups. Faster walking produced significantly greater speech intensity than slower walking. Concurrent walking had no effect on speech rate. Concurrent walking and talking produced significant reductions in walking speed in both the control and PD groups. In general, the results of the present study indicate that concurrent walking tasks and the speed of concurrent walking can have a significant positive effect on conversational speech intensity. These positive, "energizing" effects need to be given consideration in future attempts to develop a comprehensive model of speech intensity regulation and they may have important implications for the development of new evaluation and treatment procedures for individuals with hypophonia related to PD.

  17. Strategies for Energy Efficient Resource Management of Hybrid Programming Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dong; Supinski, Bronis de; Schulz, Martin

    2013-01-01

    Many scientific applications are programmed using hybrid programming models that use both message-passing and shared-memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared-memory or message-passing, in isolation. The potential solution space, thus the challenge, increases substantially when optimizing hybrid models since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74% on average and up to 13.8%) with some performance gain (up to 7.5%) or negligible performance loss.
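    The configuration-selection step can be pictured as a search over (thread count, frequency) pairs using the predictive models. In the sketch below, the time and energy predictors are placeholder formulas standing in for the paper's statistically fitted models, and the time budget is invented:

    ```cpp
    // Pick the (thread count, frequency) configuration with the lowest
    // *predicted* energy, subject to a time budget. The linear models here
    // are stand-ins for predictors fitted from hardware-counter statistics.
    #include <cstdio>
    #include <vector>

    struct Config { int threads; double ghz; };

    double predict_time(Config c)   { return 100.0 / (c.threads * c.ghz); }          // placeholder
    double predict_energy(Config c) { return predict_time(c) * (10 + 3 * c.threads * c.ghz * c.ghz); }

    int main() {
        std::vector<int> threads = {4, 8, 16};          // DCT choices
        std::vector<double> freqs = {1.2, 1.8, 2.4};    // DVFS choices, GHz
        double budget = 25.0, best_e = 1e300;
        Config best{};
        for (int t : threads)
            for (double f : freqs) {
                Config c{t, f};
                if (predict_time(c) <= budget && predict_energy(c) < best_e) {
                    best_e = predict_energy(c);
                    best = c;
                }
            }
        std::printf("chosen: %d threads @ %.1f GHz, predicted energy %.1f\n",
                    best.threads, best.ghz, best_e);
    }
    ```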

  18. Clinical value of concurrent radiochemotherapy in cervical cancer and comparison of ultrasonography findings before and after radiochemotherapy.

    PubMed

    Yan, W M; Li, X Z; Yu, Z L; Zhang, J; Sun, X G

    2015-04-17

    Herein, we investigated the clinical value of concurrent radiochemotherapy for patients with advanced cervical cancer and its effects on adverse clinical symptoms. Forty patients with cervical cancer were recruited from January 2011 to January 2014 for this study. Participants were randomly allocated into a test or control group, with 20 patients in each group. Patients in the test group were treated with concurrent radiochemotherapy, whereas patients in the control group received only traditional radiotherapy. At the end of the observation period, clinical efficacy in the two groups was compared. Patients were followed up for 2 years, and the rates of recurrence, survival, and complications were compared; ultrasonographic findings before and after radiotherapy were also correlated. Patients in the test group who received concurrent radiochemotherapy showed significantly higher clinical efficacy than the control group at the end of treatment cycles. After 2 years of follow-up, the rates of recurrence, mortality, and complications were all significantly lower in the test group than in the control group (P < 0.05). Comparison of ultrasonographic findings before and after radiochemotherapy showed that the size of the tumor was significantly smaller in patients after concurrent radiochemotherapy. Compared with traditional radiotherapy, concurrent radiochemotherapy significantly improved clinical outcomes in patients with advanced cervical cancer. Concurrent radiochemotherapy also enhanced the rate of survival and decreased the rate of relapse, with enhanced clinical safety and no significant side effects. Thus, concurrent radiochemotherapy can be more broadly applied in the treatment of advanced cervical cancer.

  19. Database searching and accounting of multiplexed precursor and product ion spectra from the data independent analysis of simple and complex peptide mixtures.

    PubMed

    Li, Guo-Zhong; Vissers, Johannes P C; Silva, Jeffrey C; Golick, Dan; Gorenstein, Marc V; Geromanos, Scott J

    2009-03-01

    A novel database search algorithm is presented for the qualitative identification of proteins over a wide dynamic range, both in simple and complex biological samples. The algorithm has been designed for the analysis of data originating from data independent acquisitions, whereby multiple precursor ions are fragmented simultaneously. Measurements used by the algorithm include retention time, ion intensities, charge state, and accurate masses on both precursor and product ions from LC-MS data. The search algorithm uses an iterative process whereby each iteration incrementally increases the selectivity, specificity, and sensitivity of the overall strategy. Increased specificity is obtained by utilizing a subset database search approach, whereby for each subsequent stage of the search, only those peptides from securely identified proteins are queried. Tentative peptide and protein identifications are ranked and scored by their relative correlation to a number of models of known and empirically derived physicochemical attributes of proteins and peptides. In addition, the algorithm utilizes decoy database techniques for automatically determining the false positive identification rates. The search algorithm has been tested by comparing the search results from a four-protein mixture, the same four-protein mixture spiked into a complex biological background, and a variety of other "system" type protein digest mixtures. The method was validated independently by data dependent methods, while concurrently relying on replication and selectivity. Comparisons were also performed with other commercially and publicly available peptide fragmentation search algorithms. The presented results demonstrate the ability to correctly identify peptides and proteins from data independent acquisition strategies with high sensitivity and specificity. They also illustrate a more comprehensive analysis of the samples studied, providing approximately 20% more protein identifications compared to a more conventional data directed approach using the same identification criteria, with a concurrent increase in both sequence coverage and the number of modified peptides.
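    The decoy-database step can be illustrated with a small target-decoy sketch: hits are ranked by score, and the false-discovery rate at each threshold is estimated from the ratio of decoy to target hits above it. The scores below are invented, and the simple ratio estimator is a generic choice rather than necessarily the one used by this algorithm.

    ```cpp
    // Target-decoy FDR estimation: walk down the ranked hit list and report
    // the decoy/target ratio at each score threshold.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Hit { double score; bool decoy; };

    int main() {
        std::vector<Hit> hits = {{9.1,false},{8.7,false},{8.2,false},{7.9,true},
                                 {7.5,false},{7.1,false},{6.8,true},{6.2,false}};
        std::sort(hits.begin(), hits.end(),
                  [](const Hit& a, const Hit& b) { return a.score > b.score; });
        int targets = 0, decoys = 0;
        for (const Hit& h : hits) {
            h.decoy ? ++decoys : ++targets;
            double fdr = targets ? double(decoys) / targets : 0.0;
            std::printf("score >= %.1f  estimated FDR ~ %.3f\n", h.score, fdr);
        }
    }
    ```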

  20. Control of force during rapid visuomotor force-matching tasks can be described by discrete time PID control algorithms.

    PubMed

    Dideriksen, Jakob Lund; Feeney, Daniel F; Almuklass, Awad M; Enoka, Roger M

    2017-08-01

    Force trajectories during rapid force-matching tasks involving isometric contractions vary substantially across individuals. In this study, we investigated if this variability can be explained by discrete time proportional, integral, derivative (PID) control algorithms with varying model parameters. To this end, we analyzed the pinch force trajectories of 24 subjects performing two rapid force-matching tasks with visual feedback. Both tasks involved isometric contractions to a target force of 10% maximal voluntary contraction. One task involved a single action (pinch) and the other required a double action (concurrent pinch and wrist extension). In total, 50,000 force trajectories were simulated with a computational neuromuscular model whose input was determined by a PID controller with different PID gains and frequencies at which the controller adjusted muscle commands. The goal was to find the best match between each experimental force trajectory and all simulated trajectories. It was possible to identify one realization of the PID controller that matched the experimental force produced during each task for most subjects (average index of similarity: 0.87 ± 0.12; 1 = perfect similarity). The similarities for both tasks were significantly greater than would be expected by chance (single action: p = 0.01; double action: p = 0.04). Furthermore, the identified control frequencies in the simulated PID controller with the greatest similarities decreased as task difficulty increased (single action: 4.0 ± 1.8 Hz; double action: 3.1 ± 1.3 Hz). Overall, the results indicate that discrete time PID controllers are realistic models for the neural control of force in rapid force-matching tasks involving isometric contractions.
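    For concreteness, a discrete-time PID controller of the general form fitted in the study is sketched below, driving a crude first-order plant toward the 10% MVC target. The gains, update rate, and plant model are invented for illustration and are not the fitted values.

    ```cpp
    // Discrete-time PID controller: output = Kp*e + Ki*sum(e*dt) + Kd*de/dt,
    // updated at a fixed rate from the force error.
    #include <cstdio>

    struct PID {
        double kp, ki, kd, dt;
        double integral = 0.0, prev_err = 0.0;
        double step(double target, double measured) {
            double err = target - measured;
            integral += err * dt;                  // rectangular integration
            double deriv = (err - prev_err) / dt;  // backward difference
            prev_err = err;
            return kp * err + ki * integral + kd * deriv;
        }
    };

    int main() {
        PID pid{1.0, 0.5, 0.05, 1.0 / 4.0};    // ~4 Hz update, near the fitted 3-4 Hz range
        double force = 0.0;                    // % MVC
        for (int k = 0; k < 12; ++k) {
            double u = pid.step(10.0, force);  // 10% MVC target, as in the tasks
            force += 0.5 * u;                  // crude stand-in for the muscle response
            std::printf("t=%4.2fs force=%5.2f %%MVC\n", (k + 1) * pid.dt, force);
        }
    }
    ```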

  1. A Framework for the Development of Scalable Heterogeneous Robot Teams with Dynamically Distributed Processing

    NASA Astrophysics Data System (ADS)

    Martin, Adrian

    As the applications of mobile robotics evolve it has become increasingly less practical for researchers to design custom hardware and control systems for each problem. This research presents a new approach to control system design that looks beyond end-of-lifecycle performance and considers control system structure, flexibility, and extensibility. Toward these ends the Control ad libitum philosophy is proposed, stating that to make significant progress in the real-world application of mobile robot teams the control system must be structured such that teams can be formed in real-time from diverse components. The Control ad libitum philosophy was applied to the design of the HAA (Host, Avatar, Agent) architecture: a modular hierarchical framework built with provably correct distributed algorithms. A control system for exploration and mapping, search and deploy, and foraging was developed to evaluate the architecture in three sets of hardware-in-the-loop experiments. First, the basic functionality of the HAA architecture was studied, specifically the ability to: a) dynamically form the control system, b) dynamically form the robot team, c) dynamically form the processing network, and d) handle heterogeneous teams. Secondly, the real-time performance of the distributed algorithms was tested, and proved effective for the moderate sized systems tested. Furthermore, the distributed Just-in-time Cooperative Simultaneous Localization and Mapping (JC-SLAM) algorithm demonstrated accuracy equal to or better than traditional approaches in resource starved scenarios, while reducing exploration time significantly. The JC-SLAM strategies are also suitable for integration into many existing particle filter SLAM approaches, complementing their unique optimizations. Thirdly, the control system was subjected to concurrent software and hardware failures in a series of increasingly complex experiments. Even with unrealistically high rates of failure the control system was able to successfully complete its tasks. The HAA implementation designed following the Control ad libitum philosophy proved to be capable of dynamic team formation and extremely robust against both hardware and software failure; and, due to the modularity of the system there is significant potential for reuse of assets and future extensibility. One future goal is to make the source code publicly available and establish a forum for the development and exchange of new agents.

  2. Concurrent Initialization for Bearing-Only SLAM

    PubMed Central

    Munguía, Rodrigo; Grau, Antoni

    2010-01-01

    Simultaneous Localization and Mapping (SLAM) is perhaps the most fundamental problem to solve in robotics in order to build truly autonomous mobile robots. The sensors have a large impact on the algorithm used for SLAM. Early SLAM approaches focused on the use of range sensors such as sonar rings or lasers. However, cameras have become more and more widely used, because they yield a lot of information and are well adapted for embedded systems: they are light, cheap and power saving. Unlike range sensors, which provide range and angular information, a camera is a projective sensor which measures the bearing of image features. Therefore depth information (range) cannot be obtained in a single step. This fact has prompted the emergence of a new family of SLAM algorithms: the Bearing-Only SLAM methods, which mainly rely on special techniques for feature initialization in order to enable the use of bearing sensors (such as cameras) in SLAM systems. In this work a novel and robust method, called Concurrent Initialization, is presented which is inspired by having the complementary advantages of the Undelayed and Delayed methods that represent the most common approaches for addressing the problem. The key is to use concurrently two kinds of feature representations for both the undelayed and delayed stages of the estimation. The simulation results show that the proposed method surpasses the performance of previous schemes. PMID:22294884

  3. Validation of the Saskatoon Falls Prevention Consortium's Falls Screening and Referral Algorithm

    PubMed Central

    Lawson, Sara Nicole; Zaluski, Neal; Petrie, Amanda; Arnold, Cathy; Basran, Jenny

    2013-01-01

    Purpose: To investigate the concurrent validity of the Saskatoon Falls Prevention Consortium's Falls Screening and Referral Algorithm (FSRA). Method: A total of 29 older adults (mean age 77.7 [SD 4.0] y) residing in an independent-living seniors' complex who met inclusion criteria completed a demographic questionnaire and the components of the FSRA and Berg Balance Scale (BBS). The FSRA consists of the Elderly Fall Screening Test (EFST) and the Multi-factor Falls Questionnaire (MFQ); it is designed to categorize individuals into low, moderate, or high fall-risk categories to determine appropriate management pathways. A predictive model for probability of fall risk, based on previous research, was used to determine concurrent validity of the FSRA. Results: The FSRA placed 79% of participants into the low-risk category, whereas the predictive model found the probability of fall risk to range from 0.04 to 0.74, with a mean of 0.35 (SD 0.25). No statistically significant correlation was found between the FSRA and the predictive model for probability of fall risk (Spearman's ρ=0.35, p=0.06). Conclusion: The FSRA lacks concurrent validity relative to a previously established model of fall risk and appears to over-categorize individuals into the low-risk group. Further research on the FSRA as an adequate tool to screen community-dwelling older adults for fall risk is recommended. PMID:24381379

  4. Global synchronization algorithms for the Intel iPSC/860

    NASA Technical Reports Server (NTRS)

    Seidel, Steven R.; Davis, Mark A.

    1992-01-01

    In a distributed memory multicomputer that has no global clock, global processor synchronization can only be achieved through software. Global synchronization algorithms are used in tridiagonal systems solvers, CFD codes, sequence comparison algorithms, and sorting algorithms. They are also useful for event simulation, debugging, and for solving mutual exclusion problems. For the Intel iPSC/860 in particular, global synchronization can be used to ensure the most effective use of the communication network for operations such as the shift, where each processor in a one-dimensional array or ring concurrently sends a message to its right (or left) neighbor. Three global synchronization algorithms are considered for the iPSC/860: the gsync() primitive provided by Intel, the PICL primitive sync0(), and a new recursive doubling synchronization (RDS) algorithm. The performance of these algorithms is compared to the performance predicted by communication models of both the long and forced message protocols. Measurements of the cost of shift operations preceded by global synchronization show that the RDS algorithm always synchronizes the nodes more precisely and costs only slightly more than the other two algorithms.
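    The recursive doubling idea pairs node i with node i XOR 2^k in round k, so that after log2(P) rounds every node has transitively heard from every other node. A minimal MPI barrier in this spirit is sketched below, assuming a power-of-two process count; this is an illustration of the pattern, not the paper's exact RDS implementation.

    ```cpp
    // Recursive-doubling barrier: each round exchanges a token with the
    // partner rank XOR 2^k. After log2(P) rounds all ranks are synchronized.
    #include <mpi.h>

    void rds_barrier(MPI_Comm comm) {
        int rank, nprocs;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &nprocs);   // assumed to be a power of two here
        char out = 0, in = 0;
        for (int mask = 1; mask < nprocs; mask <<= 1) {
            int peer = rank ^ mask;     // pairwise exchange partner this round
            MPI_Sendrecv(&out, 1, MPI_CHAR, peer, 0,
                         &in,  1, MPI_CHAR, peer, 0, comm, MPI_STATUS_IGNORE);
        }
    }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        rds_barrier(MPI_COMM_WORLD);
        MPI_Finalize();
    }
    ```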

  5. A Model-based Approach to Reactive Self-Configuring Systems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Nayak, P. Pandurang

    1996-01-01

    This paper describes Livingstone, an implemented kernel for a self-reconfiguring autonomous system, that is reactive and uses component-based declarative models. The paper presents a formal characterization of the representation formalism used in Livingstone, and reports on our experience with the implementation in a variety of domains. Livingstone's representation formalism achieves broad coverage of hybrid software/hardware systems by coupling the concurrent transition system models underlying concurrent reactive languages with the discrete qualitative representations developed in model-based reasoning. We achieve a reactive system that performs significant deductions in the sense/response loop by drawing on our past experience at building fast propositional conflict-based algorithms for model-based diagnosis, and by framing a model-based configuration manager as a propositional, conflict-based feedback controller that generates focused, optimal responses. Livingstone automates all these tasks using a single model and a single core deductive engine, thus making significant progress towards achieving a central goal of model-based reasoning. Livingstone, together with the HSTS planning and scheduling engine and the RAPS executive, has been selected as the core autonomy architecture for Deep Space One, the first spacecraft for NASA's New Millennium program.

  6. A programmable two-qubit quantum processor in silicon

    NASA Astrophysics Data System (ADS)

    Watson, T. F.; Philips, S. G. J.; Kawakami, E.; Ward, D. R.; Scarlino, P.; Veldhorst, M.; Savage, D. E.; Lagally, M. G.; Friesen, Mark; Coppersmith, S. N.; Eriksson, M. A.; Vandersypen, L. M. K.

    2018-03-01

    Now that it is possible to achieve measurement and control fidelities for individual quantum bits (qubits) above the threshold for fault tolerance, attention is moving towards the difficult task of scaling up the number of physical qubits to the large numbers that are needed for fault-tolerant quantum computing. In this context, quantum-dot-based spin qubits could have substantial advantages over other types of qubit owing to their potential for all-electrical operation and ability to be integrated at high density onto an industrial platform. Initialization, readout and single- and two-qubit gates have been demonstrated in various quantum-dot-based qubit representations. However, as seen with small-scale demonstrations of quantum computers using other types of qubit, combining these elements leads to challenges related to qubit crosstalk, state leakage, calibration and control hardware. Here we overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch–Jozsa algorithm and the Grover search algorithm—canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85–89 per cent and concurrences of 73–82 per cent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.
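    The concurrence figures quoted are presumably the standard Wootters concurrence of the tomographically reconstructed two-qubit density matrix; for reference, its definition is:

    ```latex
    % Wootters concurrence of a two-qubit state \rho (C = 1 for a perfect Bell state):
    C(\rho) = \max\{0,\; \lambda_1 - \lambda_2 - \lambda_3 - \lambda_4\},
    \qquad
    \lambda_i = \text{eigenvalues, in decreasing order, of }
    \sqrt{\sqrt{\rho}\,\tilde{\rho}\,\sqrt{\rho}},
    \quad
    \tilde{\rho} = (\sigma_y \otimes \sigma_y)\,\rho^{*}\,(\sigma_y \otimes \sigma_y).
    ```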

  7. 40 CFR 798.5500 - Differential growth inhibition of repair proficient and repair deficient bacteria: “Bacterial DNA...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., tissues or techniques may also be appropriate. (6) Control groups—(i) Concurrent controls. Concurrent positive, negative, and vehicle controls should be included in each assay. (ii) Negative controls. The... CONTROL ACT (CONTINUED) HEALTH EFFECTS TESTING GUIDELINES Genetic Toxicity § 798.5500 Differential growth...

  8. 40 CFR 798.5500 - Differential growth inhibition of repair proficient and repair deficient bacteria: “Bacterial DNA...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., tissues or techniques may also be appropriate. (6) Control groups—(i) Concurrent controls. Concurrent positive, negative, and vehicle controls should be included in each assay. (ii) Negative controls. The... CONTROL ACT (CONTINUED) HEALTH EFFECTS TESTING GUIDELINES Genetic Toxicity § 798.5500 Differential growth...

  9. 40 CFR 798.5395 - In vivo mammalian bone marrow cytogenetics tests: Micronucleus assay.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... five female and five male animals per experimental and control group shall be used. Thus, 10 animals...) Assignment to groups. Animals shall be randomized and assigned to treatment and control groups. (4) Control groups—(i) Concurrent controls. Concurrent positive and negative (vehicle) controls shall be included in...

  10. 40 CFR 798.5395 - In vivo mammalian bone marrow cytogenetics tests: Micronucleus assay.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... five female and five male animals per experimental and control group shall be used. Thus, 10 animals...) Assignment to groups. Animals shall be randomized and assigned to treatment and control groups. (4) Control groups—(i) Concurrent controls. Concurrent positive and negative (vehicle) controls shall be included in...

  11. 40 CFR 798.5395 - In vivo mammalian bone marrow cytogenetics tests: Micronucleus assay.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... five female and five male animals per experimental and control group shall be used. Thus, 10 animals...) Assignment to groups. Animals shall be randomized and assigned to treatment and control groups. (4) Control groups—(i) Concurrent controls. Concurrent positive and negative (vehicle) controls shall be included in...

  12. 40 CFR 798.5500 - Differential growth inhibition of repair proficient and repair deficient bacteria: “Bacterial DNA...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., tissues or techniques may also be appropriate. (6) Control groups—(i) Concurrent controls. Concurrent positive, negative, and vehicle controls should be included in each assay. (ii) Negative controls. The... CONTROL ACT (CONTINUED) HEALTH EFFECTS TESTING GUIDELINES Genetic Toxicity § 798.5500 Differential growth...

  13. 40 CFR 798.5500 - Differential growth inhibition of repair proficient and repair deficient bacteria: “Bacterial DNA...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., tissues or techniques may also be appropriate. (6) Control groups—(i) Concurrent controls. Concurrent positive, negative, and vehicle controls should be included in each assay. (ii) Negative controls. The... CONTROL ACT (CONTINUED) HEALTH EFFECTS TESTING GUIDELINES Genetic Toxicity § 798.5500 Differential growth...

  14. 40 CFR 798.5395 - In vivo mammalian bone marrow cytogenetics tests: Micronucleus assay.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... five female and five male animals per experimental and control group shall be used. Thus, 10 animals...) Assignment to groups. Animals shall be randomized and assigned to treatment and control groups. (4) Control groups—(i) Concurrent controls. Concurrent positive and negative (vehicle) controls shall be included in...

  15. 40 CFR 798.5395 - In vivo mammalian bone marrow cytogenetics tests: Micronucleus assay.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... five female and five male animals per experimental and control group shall be used. Thus, 10 animals...) Assignment to groups. Animals shall be randomized and assigned to treatment and control groups. (4) Control groups—(i) Concurrent controls. Concurrent positive and negative (vehicle) controls shall be included in...

  16. The Concept of Nondeterminism: Its Development and Implications for Teaching

    ERIC Educational Resources Information Center

    Armoni, Michal; Ben-Ari, Mordechai

    2009-01-01

    Nondeterminism is a fundamental concept in computer science that appears in various contexts such as automata theory, algorithms and concurrent computation. We present a taxonomy of the different ways that nondeterminism can be defined and used; the categories of the taxonomy are domain, nature, implementation, consistency, execution and…

  17. Hardware realization of an SVM algorithm implemented in FPGAs

    NASA Astrophysics Data System (ADS)

    Wiśniewski, Remigiusz; Bazydło, Grzegorz; Szcześniak, Paweł

    2017-08-01

    The paper proposes a technique for hardware realization of space vector modulation (SVM) of state function switching in a matrix converter (MC), oriented toward implementation in a single field programmable gate array (FPGA). In the MC, the SVM method is based on the instantaneous space-vector representation of input currents and output voltages. Traditional computation algorithms usually involve digital signal processors (DSPs), yet the MC requires a large number of power transistors (18 transistors with 18 independent PWM outputs) and non-standard positions of control pulses during the switching sequence. Recently, hardware implementations have become popular, since operations can be executed much faster and more efficiently owing to the nature of digital devices (especially their concurrency). In the paper, we propose a hardware algorithm for SVM computation. In contrast to existing techniques, the presented solution applies the COordinate Rotation DIgital Computer (CORDIC) method to solve the trigonometric operations. Furthermore, the arithmetic modules (that is, sub-devices) used for intermediate calculations, such as code converters and sector selectors (for output voltages and input currents), are presented in detail. The proposed technique has been implemented as a design described in the Verilog hardware description language. Preliminary results of the logic implementation targeting a Xilinx FPGA (specifically, a low-cost device from the Artix-7 family) are also presented.
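    A rotation-mode CORDIC iteration computes sine and cosine from shifts, adds, and a small table of arctangents, which is what makes it attractive for FPGA realization. The double-precision sketch below mirrors the fixed-point hardware datapath; it is a generic textbook CORDIC, not the paper's Verilog design.

    ```cpp
    // Rotation-mode CORDIC: rotate (1, 0) toward angle theta using only
    // additions and scalings by 2^-i (shifts in hardware), then undo the
    // accumulated gain. Valid for |theta| <= ~1.74 rad.
    #include <cmath>
    #include <cstdio>

    void cordic(double theta, int iters, double& c, double& s) {
        double x = 1.0, y = 0.0, K = 1.0;
        for (int i = 0; i < iters; ++i) {
            double d = (theta >= 0) ? 1.0 : -1.0;       // rotation direction
            double dx = y * std::ldexp(d, -i);          // y * d * 2^-i
            double dy = x * std::ldexp(d, -i);          // x * d * 2^-i
            x -= dx; y += dy;
            theta -= d * std::atan(std::ldexp(1.0, -i)); // angle table (a ROM in hardware)
            K *= 1.0 / std::sqrt(1.0 + std::ldexp(1.0, -2 * i)); // gain compensation
        }
        c = x * K; s = y * K;
    }

    int main() {
        double c, s;
        cordic(M_PI / 5, 24, c, s);
        std::printf("cos=%f sin=%f (libm: %f %f)\n", c, s, std::cos(M_PI/5), std::sin(M_PI/5));
    }
    ```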

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorier, Matthieu; Mubarak, Misbah; Ross, Rob

    Two-tiered direct network topologies such as Dragonflies have been proposed for future post-petascale and exascale machines, since they provide a high-radix, low-diameter, fast interconnection network. Such topologies call for redesigning MPI collective communication algorithms in order to attain the best performance. Yet as increasingly more applications share a machine, it is not clear how these topology-aware algorithms will react to interference with concurrent jobs accessing the same network. In this paper, we study three topology-aware broadcast algorithms, including one designed by ourselves. We evaluate their performance through event-driven simulation for small- and large-sized broadcasts (in terms of both data size and number of processes). We study the effect of different routing mechanisms on the topology-aware collective algorithms, as well as their sensitivity to network contention with other jobs. Our results show that while topology-aware algorithms dramatically reduce link utilization, their advantage in terms of latency is more limited.

  19. A Machine-Checked Proof of A State-Space Construction Algorithm

    NASA Technical Reports Server (NTRS)

    Catano, Nestor; Siminiceanu, Radu I.

    2010-01-01

    This paper presents the correctness proof of Saturation, an algorithm for generating state spaces of concurrent systems, implemented in the SMART tool. Unlike the Breadth First Search exploration algorithm, which is easy to understand and formalise, Saturation is a complex algorithm, employing a mutually-recursive pair of procedures that compute a series of non-trivial, nested local fixed points, corresponding to a chaotic fixed point strategy. A pencil-and-paper proof of Saturation exists, but a machine checked proof had never been attempted. The key element of the proof is the characterisation theorem of saturated nodes in decision diagrams, stating that a saturated node represents a set of states encoding a local fixed-point with respect to firing all events affecting only the node's level and levels below. For our purpose, we have employed the Prototype Verification System (PVS) for formalising the Saturation algorithm, its data structures, and for conducting the proofs.

  20. Research on a Method of Geographical Information Service Load Balancing

    NASA Astrophysics Data System (ADS)

    Li, Heyuan; Li, Yongxing; Xue, Zhiyong; Feng, Tao

    2018-05-01

    With the development of geographical information service technologies, achieving intelligent scheduling and highly concurrent access to geographical information service resources through load balancing is a focus of current research. This paper presents a dynamic load-balancing algorithm. In the algorithm, each type of geographical information service is matched with a corresponding server group; the RED algorithm is then combined with a double-threshold method to judge the load state of each server node; finally, requests are scheduled according to weighted probabilities over a given period. An experimental system built on a server cluster demonstrates the effectiveness of the proposed method.
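    A minimal sketch of such a scheduling step is shown below: nodes above an upper load threshold are excluded, and the remaining nodes are chosen with probability proportional to their spare capacity. The thresholds and loads are invented, and the RED-based load estimation is not modeled.

    ```cpp
    // Weighted probabilistic dispatch: overloaded nodes get weight 0, the
    // rest are picked with probability proportional to spare capacity.
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        std::vector<double> load = {0.35, 0.80, 0.55, 0.95};  // current load per node
        const double hi = 0.90;                               // upper threshold: do not schedule
        std::vector<double> weight(load.size());
        for (size_t i = 0; i < load.size(); ++i)
            weight[i] = load[i] < hi ? 1.0 - load[i] : 0.0;   // spare capacity as weight

        std::mt19937 rng(7);
        std::discrete_distribution<int> pick(weight.begin(), weight.end());
        for (int r = 0; r < 5; ++r)
            std::printf("request %d -> node %d\n", r, pick(rng));
    }
    ```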

  1. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix [A projected preconditioned conjugate gradient algorithm for computing a large eigenspace of a Hermitian matrix]

    DOE PAGES

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-02-25

    Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.

  2. 40 CFR 798.3320 - Combined chronic toxicity/oncogenicity.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... concurrent control for those groups not intended for early sacrifice. At least 40 rodents (20 females and 20 males) should be used for satellite dose group(s) and the satellite control group. The purpose of the... percent at the time of termination. (2) Control groups. (i) A concurrent control group (50 females and 50...

  3. 40 CFR 798.3320 - Combined chronic toxicity/oncogenicity.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... concurrent control for those groups not intended for early sacrifice. At least 40 rodents (20 females and 20 males) should be used for satellite dose group(s) and the satellite control group. The purpose of the... percent at the time of termination. (2) Control groups. (i) A concurrent control group (50 females and 50...

  4. 40 CFR 798.3320 - Combined chronic toxicity/oncogenicity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... concurrent control for those groups not intended for early sacrifice. At least 40 rodents (20 females and 20 males) should be used for satellite dose group(s) and the satellite control group. The purpose of the... percent at the time of termination. (2) Control groups. (i) A concurrent control group (50 females and 50...

  5. 40 CFR 798.3320 - Combined chronic toxicity/oncogenicity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... concurrent control for those groups not intended for early sacrifice. At least 40 rodents (20 females and 20 males) should be used for satellite dose group(s) and the satellite control group. The purpose of the... percent at the time of termination. (2) Control groups. (i) A concurrent control group (50 females and 50...

  6. 40 CFR 798.3320 - Combined chronic toxicity/oncogenicity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... concurrent control for those groups not intended for early sacrifice. At least 40 rodents (20 females and 20 males) should be used for satellite dose group(s) and the satellite control group. The purpose of the... percent at the time of termination. (2) Control groups. (i) A concurrent control group (50 females and 50...

  7. Risk factors for concurrent bacteremia in adult patients with dengue.

    PubMed

    Thein, Tun-Linn; Ng, Ee-Ling; Yeang, Ming S; Leo, Yee-Sin; Lye, David C

    2017-06-01

    Bacteremia in dengue may occur with common exposure to pathogens in association with severe organ impairment or severe dengue, which may result in death. Cohort studies identifying risk factors for concurrent bacteremia among patients with dengue are rare. We conducted a retrospective case-control study of adult patients with dengue who were admitted to the Department of Infectious Diseases at Tan Tock Seng Hospital, Singapore from 2004 to 2008. For each case of dengue with concurrent bacteremia (within the first 72 hours of admission), we selected four controls without bacteremia, who were matched on year of infection and dengue confirmation method. Conditional logistic regression was performed to identify risk factors for concurrent bacteremia. Among 9,553 patients with dengue, 29 (0.3%) had bacteremia. Eighteen of these patients (62.1%) had concurrent bacteremia. The predominant bacteria were Staphylococcus aureus, one of which was a methicillin-resistant strain. Dengue shock syndrome occurred more frequently and hospital stay was longer among cases than among controls. Three cases did not survive, whereas none of the controls died. In multivariate analysis, being critically ill at hospital presentation was independently associated with 15 times the likelihood of a patient with dengue having concurrent bacteremia. Concurrent bacteremia in adult patients with dengue is uncommon but presents atypically and results in more deaths and longer hospital stay. Given the associated mortality, collection of blood cultures and empiric antibiotic therapy may be considered in patients who are critically ill.

  8. Improved obstacle avoidance and navigation for an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Giri, Binod; Cho, Hyunsu; Williams, Benjamin C.; Tann, Hokchhay; Shakya, Bicky; Bharam, Vishal; Ahlgren, David J.

    2015-01-01

    This paper presents improvements made to the intelligence algorithms employed on Q, an autonomous ground vehicle, for the 2014 Intelligent Ground Vehicle Competition (IGVC). In 2012, the IGVC committee combined the formerly separate autonomous and navigation challenges into a single AUT-NAV challenge. In this new challenge, the vehicle is required to navigate through a grassy obstacle course and stay within the course boundaries (a lane of two white painted lines) that guide it toward a given GPS waypoint. Once the vehicle reaches this waypoint, it enters an open course where it is required to navigate to another GPS waypoint while avoiding obstacles. After reaching the final waypoint, the vehicle is required to traverse another obstacle course before completing the run. Q uses a modular parallel software architecture in which image processing, navigation, and sensor control algorithms run concurrently. A tuned navigation algorithm allows Q to smoothly maneuver through obstacle fields. For the 2014 competition, most revisions occurred in the vision system, which detects white lines and informs the navigation component. Barrel obstacles of various colors presented a new challenge for image processing: the previous color plane extraction algorithm would not suffice. To overcome this difficulty, laser range sensor data were overlaid on visual data. Q also participates in the Joint Architecture for Unmanned Systems (JAUS) challenge at IGVC. For 2014, significant updates were implemented: the JAUS component accepted a greater variety of messages and showed better compliance with the JAUS technical standard. With these improvements, Q secured second place in the JAUS competition.

  9. A data distributed parallel algorithm for ray-traced volume rendering

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Painter, James S.; Hansen, Charles D.; Krogh, Michael F.

    1993-01-01

    This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5, and networked workstations. This algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, is left intact. The processing nodes perform local ray tracing of their subvolume concurrently. No communication between processing units is needed during this locally ray-tracing process. A subimage is generated by each processing unit and the final image is obtained by compositing subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of our rendering algorithm and compositing method.
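    The compositing stage relies on the associative "over" operator, which is what allows subimages to be blended in any grouping as long as the depth order is respected. A single-pixel sketch with premultiplied alpha (illustrative; the paper's compositing is performed over full subimages in parallel):

    ```cpp
    // The "over" operator for premultiplied-alpha colors:
    // out = front + (1 - alpha_front) * back. Associative, so subimages can be
    // combined pairwise in any grouping that respects the depth order.
    #include <cstdio>

    struct RGBA { double r, g, b, a; };  // premultiplied alpha

    RGBA over(RGBA front, RGBA back) {
        double t = 1.0 - front.a;
        return {front.r + t * back.r, front.g + t * back.g,
                front.b + t * back.b, front.a + t * back.a};
    }

    int main() {
        RGBA sub[3] = {{0.2,0,0,0.2}, {0,0.3,0,0.3}, {0,0,0.5,0.5}};  // front to back
        RGBA out = sub[2];
        for (int i = 1; i >= 0; --i) out = over(sub[i], out);  // blend back to front
        std::printf("composited pixel: %.2f %.2f %.2f a=%.2f\n", out.r, out.g, out.b, out.a);
    }
    ```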

  10. Model-Based Reinforcement Learning under Concurrent Schedules of Reinforcement in Rodents

    ERIC Educational Resources Information Center

    Huh, Namjung; Jo, Suhyun; Kim, Hoseok; Sul, Jung Hoon; Jung, Min Whan

    2009-01-01

    Reinforcement learning theories postulate that actions are chosen to maximize a long-term sum of positive outcomes based on value functions, which are subjective estimates of future rewards. In simple reinforcement learning algorithms, value functions are updated only by trial-and-error, whereas they are updated according to the decision-maker's…
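    The trial-and-error update referred to is typically a delta rule that moves the value estimate of the chosen action toward the obtained reward. A minimal two-action sketch follows; the learning rate, reward probabilities, and purely random exploration are illustrative choices, not the study's fitted model.

    ```cpp
    // Delta-rule value update, V(a) += alpha * (r - V(a)), on a two-armed
    // bandit with Bernoulli rewards. Estimates converge toward the true
    // reward probabilities.
    #include <cstdio>
    #include <random>

    int main() {
        double V[2] = {0.0, 0.0};                       // value estimates per action
        const double alpha = 0.1, p[2] = {0.25, 0.75};  // true reward probabilities
        std::mt19937 rng(1);
        std::bernoulli_distribution reward0(p[0]), reward1(p[1]);
        std::uniform_int_distribution<int> choose(0, 1);
        for (int t = 0; t < 2000; ++t) {
            int a = choose(rng);                        // random exploration for simplicity
            double r = (a == 0 ? reward0(rng) : reward1(rng)) ? 1.0 : 0.0;
            V[a] += alpha * (r - V[a]);                 // move estimate toward outcome
        }
        std::printf("V = %.2f, %.2f (true p = %.2f, %.2f)\n", V[0], V[1], p[0], p[1]);
    }
    ```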

  11. A novel high-frequency encoding algorithm for image compression

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look-up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look-up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.

  12. The navigation system of the JPL robot

    NASA Technical Reports Server (NTRS)

    Thompson, A. M.

    1977-01-01

    The control structure of the JPL research robot and the operations of the navigation subsystem are discussed. The robot functions as a network of interacting concurrent processes distributed among several computers and coordinated by a central executive. The results of scene analysis are used to create a segmented terrain model in which surface regions are classified by traversability. The model is used by a path planning algorithm, PATH, which uses tree search methods to find the optimal path to a goal. In PATH, the search space is defined dynamically as a consequence of node testing. Maze-solving and the use of an associative data base for context dependent node generation are also discussed. Execution of a planned path is accomplished by a feedback guidance process with automatic error recovery.

  13. Structural optimization: Status and promise

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.

    Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)

  14. Efficient Approximation Algorithms for Weighted $b$-Matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Arif; Pothen, Alex; Mostofa Ali Patwary, Md.

    2016-01-01

    We describe a half-approximation algorithm, b-Suitor, for computing a b-Matching of maximum weight in a graph with weights on the edges. b-Matching is a generalization of the well-known Matching problem in graphs, where the objective is to choose a subset M of edges in the graph such that at most a specified number b(v) of edges in M are incident on each vertex v. Subject to this restriction we maximize the sum of the weights of the edges in M. We prove that the b-Suitor algorithm computes the same b-Matching as the one obtained by the greedy algorithm for the problem. We implement the algorithm on serial and shared-memory parallel processors, and compare its performance against a collection of approximation algorithms that have been proposed for the Matching problem. Our results show that the b-Suitor algorithm outperforms the Greedy and Locally Dominant edge algorithms by one to two orders of magnitude on a serial processor. The b-Suitor algorithm has a high degree of concurrency, and it scales well up to 240 threads on a shared-memory multiprocessor. The b-Suitor algorithm outperforms the Locally Dominant edge algorithm by a factor of fourteen on 16 cores of an Intel Xeon multiprocessor.
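    The greedy algorithm that b-Suitor provably matches is easy to state: scan edges in decreasing weight order and keep an edge if both endpoints still have residual capacity b(v). A serial sketch is shown below; b-Suitor computes the same matching but exposes far more concurrency via proposal-based "suitor" pointers, which are not modeled here.

    ```cpp
    // Greedy half-approximation for maximum-weight b-Matching: accept edges
    // in decreasing weight order while both endpoints have capacity left.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Edge { int u, v; double w; };

    int main() {
        std::vector<Edge> edges = {{0,1,4.0},{1,2,3.0},{0,2,2.5},{2,3,2.0},{1,3,1.0}};
        std::vector<int> b = {1, 2, 2, 1};          // per-vertex capacities b(v)
        std::sort(edges.begin(), edges.end(),
                  [](const Edge& x, const Edge& y) { return x.w > y.w; });
        double total = 0;
        for (const Edge& e : edges)
            if (b[e.u] > 0 && b[e.v] > 0) {         // both endpoints have room
                --b[e.u]; --b[e.v]; total += e.w;
                std::printf("keep (%d,%d) w=%.1f\n", e.u, e.v, e.w);
            }
        std::printf("total weight %.1f\n", total);
    }
    ```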

  15. Concurrent information affects response inhibition processes via the modulation of theta oscillations in cognitive control networks.

    PubMed

    Chmielewski, Witold X; Mückschel, Moritz; Dippel, Gabriel; Beste, Christian

    2016-11-01

    Inhibiting responses is a challenge where the outcome (partly) depends on the situational context. In everyday situations, response inhibition performance might be altered when irrelevant input is presented simultaneously with the information relevant for response inhibition. More specifically, irrelevant concurrent information may either brace or interfere with response-relevant information, depending on whether these inputs are redundant or conflicting. The aim of this study is to investigate the neurophysiological mechanisms and the network underlying such modulations, using EEG beamforming. The results show that in comparison to a baseline condition without concurrent information, response inhibition performance can be aggravated or facilitated by manipulating the extent of conflict via concurrent input. This depends on whether the requirement for cognitive control is high, as in conflicting trials, or low, as in redundant trials. In line with this, total theta frequency power decreases in a right-hemispheric orbitofrontal response inhibition network including the SFG, MFG, and SMA when concurrent redundant information facilitates response inhibition processes. Vice versa, theta activity in a left-hemispheric response inhibition network (i.e., SFG, MFG, and IFG) increases when conflicting concurrent information compromises response inhibition processes. We conclude that concurrent information bi-directionally shifts response inhibition performance and modulates the network architecture underlying theta oscillations, which signal different levels of the need for cognitive control.

  16. Writing abilities and the role of working memory in children with symptoms of attention deficit and hyperactivity disorder.

    PubMed

    Capodieci, Agnese; Serafini, Alice; Dessuki, Alice; Cornoldi, Cesare

    2018-02-20

    The writing abilities of children with ADHD symptoms were examined in a simple dictation task, and then in two conditions with concurrent verbal or visuospatial working memory (WM) loads. The children with ADHD symptoms generally made more spelling mistakes than controls, and the concurrent loads impaired their performance, but with partly different effects. The concurrent verbal WM task prompted an increase in the phonological errors, while the concurrent visuospatial WM task prompted more non-phonological errors, matching the Italian phonology, but not the Italian orthography. In the ADHD group, the children proving better able to cope with a concurrent verbal WM load had a better spelling performance too. The ADHD and control groups had a similar handwriting speed, but the former group's writing quality was poorer. Our results suggest that WM supports writing skills, and that children with ADHD symptoms have general writing difficulties, but strength in coping with concurrent verbal information may support their spelling performance.

  17. NASA Tech Briefs, June 2004

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Topics covered include: COTS MEMS Flow-Measurement Probes; Measurement of an Evaporating Drop on a Reflective Substrate; Airplane Ice Detector Based on a Microwave Transmission Line; Microwave/Sonic Apparatus Measures Flow and Density in Pipe; Reducing Errors by Use of Redundancy in Gravity Measurements; Membrane-Based Water Evaporator for a Space Suit; Compact Microscope Imaging System with Intelligent Controls; Chirped-Superlattice, Blocked-Intersubband QWIP; Charge-Dissipative Electrical Cables; Deep-Sea Video Cameras Without Pressure Housings; RFID and Memory Devices Fabricated Integrally on Substrates; Analyzing Dynamics of Cooperating Spacecraft; Spacecraft Attitude Maneuver Planning Using Genetic Algorithms; Forensic Analysis of Compromised Computers; Document Concurrence System; Managing an Archive of Images; MPT Prediction of Aircraft-Engine Fan Noise; Improving Control of Two Motor Controllers; Electro-deionization Using Micro-separated Bipolar Membranes; Safer Electrolytes for Lithium-Ion Cells; Rotating Reverse-Osmosis for Water Purification; Making Precise Resonators for Mesoscale Vibratory Gyroscopes; Robotic End Effectors for Hard-Rock Climbing; Improved Nutation Damper for a Spin-Stabilized Spacecraft; Exhaust Nozzle for a Multitube Detonative Combustion Engine; Arc-Second Pointer for Balloon-Borne Astronomical Instrument; Compact, Automated Centrifugal Slide-Staining System; Two-Armed, Mobile, Sensate Research Robot; Compensating for Effects of Humidity on Electronic Noses; Brush/Fin Thermal Interfaces; Multispectral Scanner for Monitoring Plants; Coding for Communication Channels with Dead-Time Constraints; System for Better Spacing of Airplanes En Route; Algorithm for Training a Recurrent Multilayer Perceptron; Orbiter Interface Unit and Early Communication System; White-Light Nulling Interferometers for Detecting Planets; and Development of Methodology for Programming Autonomous Agents.

  18. Multi-objective optimization of GENIE Earth system models.

    PubMed

    Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J

    2009-07-13

    The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.

  19. A self-testing dynamic RAM chip

    NASA Astrophysics Data System (ADS)

    You, Y.; Hayes, J. P.

    1985-02-01

    A novel approach to making very large dynamic RAM chips self-testing is presented. It is based on two main concepts: on-chip generation of regular test sequences with very high fault coverage, and concurrent testing of storage-cell arrays to reduce overall testing time. The failure modes of a typical 64 K RAM employing one-transistor cells are analyzed to identify their test requirements. A comprehensive test generation algorithm that can be implemented with minimal modification to a standard cell layout is derived. The self-checking peripheral circuits necessary to implement this testing algorithm are described, and the self-testing RAM is briefly evaluated.

  20. Sound source tracking device for telematic spatial sound field reproduction

    NASA Astrophysics Data System (ADS)

    Cardenas, Bruno

    This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the channels of an array of directional shotgun microphones. The amplitude differences are used to locate multiple performers and to spatially correct the reproduction of their voices, which are recorded at close distance with lavalier microphones and played back over a loudspeaker rendering system. In order to track multiple sound sources in parallel, the information gained from the lavalier microphones is used to estimate the signal-to-noise ratio between each performer and the concurrent performers.

  1. Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2002-01-01

    The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization lies primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential for efficient computation with very large numbers of concurrently operating processors.
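
    For reference, the core loop of a textbook particle swarm optimizer is compact. The sketch below minimizes a simple test function; it does not include the paper's improvements or any constraint handling, and the parameter values (inertia w, cognitive and social weights c1 and c2, bounds) are illustrative defaults.

```python
import random

# Generic particle swarm optimization for an unconstrained test function.
# All parameters are illustrative; this is not the paper's improved variant.

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:                 # update personal and global bests
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

print(pso(lambda x: sum(xi * xi for xi in x), dim=3))  # minimum near the origin
```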

  2. An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System

    NASA Astrophysics Data System (ADS)

    Helmy, Tarek; Fatai, Anifowose; Sallam, El-Sayed

    PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency in intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT on average over the other non-preemptive scheduling algorithms implemented in this paper.
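
    The evaluation idea can be illustrated with a toy simulation that compares first-come-first-served selection of the next ready process against randomized selection, reporting AWT and ATT. The burst times are made up and all jobs are assumed to arrive at time zero; this is not the PicOS implementation.

```python
import random

# Toy non-preemptive scheduling simulation: compare FCFS and randomized
# selection of the next ready job, reporting AWT and ATT. Burst times are
# illustrative and all jobs arrive at time zero.

def simulate(bursts, pick):
    ready, t, waits, turns = list(range(len(bursts))), 0, [], []
    while ready:
        i = pick(ready)            # policy chooses the next job to run
        ready.remove(i)
        waits.append(t)            # time spent waiting before starting
        t += bursts[i]
        turns.append(t)            # completion time = turn-around time here
    return sum(waits) / len(waits), sum(turns) / len(turns)

bursts = [8, 3, 12, 5, 2]
print("FCFS       AWT/ATT:", simulate(bursts, lambda r: r[0]))
print("Randomized AWT/ATT:", simulate(bursts, random.choice))
```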

  3. Modelling and simulating decision processes of linked lives: An approach based on concurrent processes and stochastic race.

    PubMed

    Warnke, Tom; Reinhardt, Oliver; Klabunde, Anna; Willekens, Frans; Uhrmacher, Adelinde M

    2017-10-01

    Individuals' decision processes play a central role in understanding modern migration phenomena and other demographic processes. Their integration into agent-based computational demography depends largely on suitable support by a modelling language. We are developing the Modelling Language for Linked Lives (ML3) to describe the diverse decision processes of linked lives succinctly in continuous time. The context of individuals is modelled by networks the individual is part of, such as family ties and other social networks. Central concepts, such as behaviour conditional on agent attributes, age-dependent behaviour, and stochastic waiting times, are tightly integrated in the language. Thereby, alternative decisions are modelled by concurrent processes that compete by stochastic race. Using a migration model, we demonstrate how this allows for compact description of complex decisions, here based on the Theory of Planned Behaviour. We describe the challenges for the simulation algorithm posed by stochastic race between multiple concurrent complex decisions.
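
    The stochastic race between concurrent decision processes can be made concrete with competing exponential waiting times: each alternative draws a waiting time from its own hazard rate, and the alternative that fires first determines the decision. The sketch below shows only this core mechanism under assumed rates; ML3 itself is a full modelling language and is not reproduced here.

```python
import random

# Stochastic race between concurrent alternatives in continuous time:
# each alternative draws an exponential waiting time from its rate, and
# the earliest firing wins. Rates below are illustrative.

def stochastic_race(rates):
    """rates: dict alternative -> hazard rate (events per unit time)."""
    draws = {alt: random.expovariate(rate) for alt, rate in rates.items()}
    winner = min(draws, key=draws.get)
    return winner, draws[winner]

# e.g. an agent racing 'migrate' against 'marry'
counts = {"migrate": 0, "marry": 0}
for _ in range(10000):
    winner, _ = stochastic_race({"migrate": 0.2, "marry": 0.5})
    counts[winner] += 1
print(counts)   # 'marry' should win about 5/7 of the races (0.5 / 0.7)
```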

  4. Effects of cacheing on multitasking efficiency and programming strategy on an ELXSI 6400

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montry, G.R.; Benner, R.E.

    1985-12-01

    The impact of a cache/shared memory architecture, and, in particular, the cache coherency problem, upon concurrent algorithm and program development is discussed. In this context, a simple set of programming strategies is proposed which streamlines code development and improves code performance when multitasking in a cache/shared memory or distributed memory environment.

  5. Concurrent Learning of Control in Multi agent Sequential Decision Tasks

    DTIC Science & Technology

    2018-04-17

    Concurrent Learning of Control in Multi-agent Sequential Decision Tasks. The overall objective of this project was to develop multi-agent reinforcement learning (MARL) approaches for intelligent agents to autonomously learn distributed control policies in decentralized, partially observable settings.

  6. Real time software for a heat recovery steam generator control system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valdes, R.; Delgadillo, M.A.; Chavez, R.

    1995-12-31

    This paper describes the development and successful implementation of real-time software for the Heat Recovery Steam Generator (HRSG) control system of a combined cycle power plant. The real-time software for the HRSG control system physically resides in a Control and Acquisition System (SAC), which is a component of a distributed control system (DCS). The SAC is a programmable controller. The DCS installed at the Gomez Palacio power plant in Mexico accomplishes the functions of logic, analog, and supervisory control. The DCS is based on microprocessors, and the architecture consists of workstations operating as a Man-Machine Interface (MMI), linked to SAC controllers by means of a communication system. The HRSG real-time software is composed of an operating system, drivers, dedicated computer programs, and application computer programs. The operating system used for the development of this software was the MultiTasking Operating System (MTOS). The application software developed at IIE for the HRSG control system basically consists of a set of digital algorithms for the regulation of the main process variables of the HRSG. By using the multitasking feature of MTOS, the algorithms are executed pseudo-concurrently. In this way, the application programs continuously use the resources of the operating system to perform their functions through a uniform service interface. The application software of the HRSG consists of three tasks, each of which has dedicated responsibilities. The drivers were developed for the handling of the hardware resources of the SAC controller, which in turn allows signal acquisition and data communication with the MMI. The dedicated programs were developed for hardware diagnostics, task initialization, access to the database, and fault tolerance. The application software and the dedicated software for the HRSG control system were developed using the C programming language due to its compactness, portability, and efficiency.

  7. Learning Multirobot Hose Transportation and Deployment by Distributed Round-Robin Q-Learning.

    PubMed

    Fernandez-Gauna, Borja; Etxeberria-Agiriano, Ismael; Graña, Manuel

    2015-01-01

    Multi-Agent Reinforcement Learning (MARL) algorithms face two main difficulties: the curse of dimensionality, and environment non-stationarity due to the independent learning processes carried out by the agents concurrently. In this paper we formalize and prove the convergence of a Distributed Round Robin Q-learning (D-RR-QL) algorithm for cooperative systems. The computational complexity of this algorithm increases linearly with the number of agents. Moreover, it eliminates environment non-stationarity by carrying out round-robin scheduling of action selection and execution. This learning scheme allows the implementation of Modular State-Action Vetoes (MSAV) in cooperative multi-agent systems, which speeds up learning convergence in over-constrained systems by vetoing state-action pairs that lead to undesired termination states (UTS) in the relevant state-action subspace. Each agent's local state-action value function learning is an independent process, including the MSAV policies. Coordination of locally optimal policies to obtain the globally optimal joint policy is achieved by a greedy selection procedure using message passing. We show that D-RR-QL improves over state-of-the-art approaches, such as Distributed Q-Learning, Team Q-Learning and Coordinated Reinforcement Learning, in a paradigmatic Linked Multi-Component Robotic System (L-MCRS) control problem: the hose transportation task. L-MCRS are over-constrained systems with many UTS induced by the interaction of the passive linking element and the active mobile robots.
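
    The round-robin idea can be illustrated on a toy cooperative game: agents take turns re-selecting their action while the others hold theirs fixed, so each learner faces a momentarily stationary situation during its turn. The payoff table and parameters below are invented, and the MSAV vetoes and message-passing coordination of D-RR-QL are not shown.

```python
import random

# Toy round-robin Q-learning on a two-agent coordination game: only one
# agent re-selects its action per step, so each agent learns against a
# momentarily stationary partner. Not the full D-RR-QL algorithm.

payoff = {("a", "a"): 0.0, ("a", "b"): 0.0, ("b", "a"): 0.0, ("b", "b"): 2.0}
actions = ["a", "b"]
Q = [{a: 0.0 for a in actions}, {a: 0.0 for a in actions}]  # one table per agent
joint = ["a", "a"]
alpha, eps = 0.1, 0.1

for step in range(20000):
    i = step % 2                                   # round-robin turn taking
    if random.random() < eps:
        joint[i] = random.choice(actions)          # explore
    else:
        joint[i] = max(actions, key=Q[i].get)      # exploit
    r = payoff[tuple(joint)]                       # shared cooperative reward
    Q[i][joint[i]] += alpha * (r - Q[i][joint[i]])

print(Q)   # both agents should end up preferring the coordinated action 'b'
```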

  8. Concurrent simulation of a parallel jaw end effector

    NASA Technical Reports Server (NTRS)

    Bynum, Bill

    1985-01-01

    A system of programs developed to aid in the design and development of the command/response protocol between a parallel jaw end effector and the strategic planner program controlling it is presented. The system executes concurrently with the LISP controlling program to generate a graphical image of the end effector that moves in approximately real time in response to commands sent from the controlling program. Concurrent execution of the simulation program is useful for revealing flaws in the communication command structure arising from the asynchronous nature of the message traffic between the end effector and the strategic planner. Software simulation helps to minimize the number of hardware changes necessary to the microprocessor driving the end effector because of changes in the communication protocol. The simulation of other actuator devices can be easily incorporated into the system of programs by using the underlying support that was developed for the concurrent execution of the simulation process and the communication between it and the controlling program.

  9. Self-controlled concurrent feedback facilitates the learning of the final approach phase in a fixed-base flight simulator.

    PubMed

    Huet, Michaël; Jacobs, David M; Camachon, Cyril; Goulon, Cedric; Montagne, Gilles

    2009-12-01

    This study (a) compares the effectiveness of different types of feedback for novices who learn to land a virtual aircraft in a fixed-base flight simulator and (b) analyzes the informational variables that learners come to use after practice. An extensive body of research exists concerning the informational variables that allow successful landing. In contrast, few studies have examined how the attention of pilots can be directed toward these sources of information. In this study, 15 participants were asked to land a virtual Cessna 172 on 245 trials while trying to follow the glide-slope area as accurately as possible. Three groups of participants practiced under different feedback conditions: with self-controlled concurrent feedback (the self-controlled group), with imposed concurrent feedback (the yoked group), or without concurrent feedback (the control group). The self-controlled group outperformed the yoked group, which in turn outperformed the control group. Removing or manipulating specific sources of information during transfer tests had different effects for different individuals. However, removing the cockpit from the visual scene had a detrimental effect on the performance of the majority of the participants. Self-controlled concurrent feedback helps learners to more quickly attune to the informational variables that allow them to control the aircraft during the approach phase. Knowledge concerning feedback schedules can be used for the design of optimal practice methods for student pilots, and knowledge about the informational variables used by expert performers has implications for the design of cockpits and runways that facilitate the detection of these variables.

  10. Fast prediction of RNA-RNA interaction using heuristic algorithm.

    PubMed

    Montaseri, Soheila

    2015-01-01

    Interaction between two RNA molecules plays a crucial role in many medical and biological processes such as gene expression regulation. In this process, an RNA molecule prohibits the translation of another RNA molecule by establishing stable interactions with it. Several algorithms have been developed to predict the structure of RNA-RNA interactions, and high computational time is a common challenge in many of them. In this context, a heuristic method is introduced to accurately predict the interaction between two RNAs based on minimum free energy (MFE). This algorithm uses a few dot matrices for finding the secondary structure of each RNA and the binding sites between the two RNAs. Furthermore, a parallel version of this method is presented. We describe the algorithm's concurrency and parallelism for a multicore chip. The proposed algorithm has been evaluated on datasets including CopA-CopT, R1inv-R2inv, Tar-Tar*, DIS-DIS, and IncRNA54-RepZ in Escherichia coli bacteria. The method has high validity and efficiency, and it runs in low computational time in comparison to other approaches.

  11. Amoeba-inspired Tug-of-War algorithms for exploration-exploitation dilemma in extended Bandit Problem.

    PubMed

    Aono, Masashi; Kim, Song-Ju; Hara, Masahiko; Munakata, Toshinori

    2014-03-01

    The true slime mold Physarum polycephalum, a single-celled amoeboid organism, is capable of efficiently allocating a constant amount of intracellular resource to its pseudopod-like branches that best fit the environment where dynamic light stimuli are applied. Inspired by this resource allocation process, the authors formulated a concurrent search algorithm, called the Tug-of-War (TOW) model, for maximizing the profit in the multi-armed Bandit Problem (BP). A player (gambler) of the BP should decide as quickly and accurately as possible which slot machine to invest in out of the N machines and faces an "exploration-exploitation dilemma." The dilemma is a trade-off between the speed and accuracy of the decision making, which are conflicting objectives. The TOW model maintains a constant intracellular resource volume while collecting environmental information by concurrently expanding and shrinking its branches. The conservation law entails a nonlocal correlation among the branches, i.e., a volume increment in one branch is immediately compensated by volume decrement(s) in the other branch(es). Owing to this nonlocal correlation, the TOW model can efficiently manage the dilemma. In this study, we extend the TOW model to apply it to a stretched variant of BP, the Extended Bandit Problem (EBP), which is a problem of selecting the best M-tuple of the N machines. We demonstrate that the extended TOW model exhibits better performance for 2-tuple-3-machine and 2-tuple-4-machine instances of EBP compared with the extended versions of well-known algorithms for BP, the ϵ-Greedy and SoftMax algorithms, particularly in terms of its short-term decision-making capability, which is essential for the survival of the amoeba in a hostile environment.

  12. Hebbian self-organizing integrate-and-fire networks for data clustering.

    PubMed

    Landis, Florian; Ott, Thomas; Stoop, Ruedi

    2010-01-01

    We propose a Hebbian learning-based data clustering algorithm using spiking neurons. The algorithm is capable of distinguishing between clusters and noisy background data and finds an arbitrary number of clusters of arbitrary shape. These properties render the approach particularly useful for visual scene segmentation into arbitrarily shaped homogeneous regions. We present several application examples, and in order to highlight the advantages and the weaknesses of our method, we systematically compare the results with those from standard methods such as the k-means and Ward's linkage clustering. The analysis demonstrates that the clustering ability of the proposed algorithm is more powerful than that of the two competing methods, and that its time complexity is more modest than that of its generally used strongest competitor.

  13. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree-structured barriers show good performance when synchronizing fixed-length work, while linear self-scheduled barriers show better performance when synchronizing fixed-length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments that support these conclusions, performed on an eighteen-processor Flex/32 shared-memory multiprocessor, are detailed.
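
    A linear (central, sense-reversing) barrier of the kind compared here can be sketched in a few lines; logarithmic variants replace the single shared counter with a tree of such rendezvous points. The sketch below is illustrative only and uses a condition variable rather than the busy-waiting typical of the original hardware.

```python
import threading

# A minimal sense-reversing central (linear) barrier. Each thread flips its
# local sense; the last arrival resets the counter and releases everyone.

class CentralBarrier:
    def __init__(self, n):
        self.n = n
        self.count = 0
        self.sense = False
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            my_sense = not self.sense
            self.count += 1
            if self.count == self.n:        # last arrival releases everyone
                self.count = 0
                self.sense = my_sense
                self.cond.notify_all()
            else:
                while self.sense != my_sense:
                    self.cond.wait()

def worker(bar, ident, out):
    for phase in range(3):
        out.append((phase, ident))          # "work" for this phase
        bar.wait()                          # synchronize before the next phase

bar, out = CentralBarrier(4), []
threads = [threading.Thread(target=worker, args=(bar, i, out)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(out))
```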

  14. A Hybrid Procedural/Deductive Executive for Autonomous Spacecraft

    NASA Technical Reports Server (NTRS)

    Pell, Barney; Gamble, Edward B.; Gat, Erann; Kessing, Ron; Kurien, James; Millar, William; Nayak, P. Pandurang; Plaunt, Christian; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    The New Millennium Remote Agent (NMRA) will be the first AI system to control an actual spacecraft. The spacecraft domain places a strong premium on autonomy and requires dynamic recoveries and robust concurrent execution, all in the presence of tight real-time deadlines, changing goals, scarce resource constraints, and a wide variety of possible failures. To achieve this level of execution robustness, we have integrated a procedural executive based on generic procedures with a deductive model-based executive. A procedural executive provides sophisticated control constructs such as loops, parallel activity, locks, and synchronization which are used for robust schedule execution, hierarchical task decomposition, and routine configuration management. A deductive executive provides algorithms for sophisticated state inference and optimal failure recovery planning. The integrated executive enables designers to code knowledge via a combination of procedures and declarative models, yielding a rich modeling capability suitable to the challenges of real spacecraft control. The interface between the two executives ensures both that recovery sequences are smoothly merged into high-level schedule execution and that a high degree of reactivity is retained to effectively handle additional failures during recovery.

  15. Informational and linguistic analysis of large genomic sequence collections via efficient Hadoop cluster algorithms.

    PubMed

    Ferraro Petrillo, Umberto; Roscigno, Gianluca; Cattaneo, Giuseppe; Giancarlo, Raffaele

    2018-06-01

    Information theoretic and compositional/linguistic analysis of genomes have a central role in bioinformatics, even more so since the associated methodologies are becoming very valuable also for epigenomic and meta-genomic studies. The kernel of those methods is based on the collection of k-mer statistics, i.e. how many times each k-mer in {A,C,G,T}^k occurs in a DNA sequence. Although this problem is computationally very simple and efficiently solvable on a conventional computer, the sheer amount of data available now in applications demands resorting to parallel and distributed computing. Indeed, algorithms of this type have been developed to collect k-mer statistics in the realm of genome assembly. However, they are so specialized to this domain that they do not extend easily to the computation of informational and linguistic indices, concurrently on sets of genomes. Following the well-established approach in many disciplines, with growing success also in bioinformatics, of resorting to MapReduce and Hadoop to deal with 'Big Data' problems, we present KCH, the first set of MapReduce algorithms able to perform concurrent informational and linguistic analysis of large collections of genomic sequences on a Hadoop cluster. The benchmarking of KCH that we provide indicates that it is quite effective and versatile. It is also competitive with respect to the parallel and distributed algorithms highly specialized to k-mer statistics collection for genome assembly problems. In conclusion, KCH is a much needed addition to the growing number of algorithms and tools that use MapReduce for bioinformatics core applications. The software, including instructions for running it over Amazon AWS, as well as the datasets, is available at http://www.di-srv.unisa.it/KCH. Supplementary data are available at Bioinformatics online.
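
    The kernel being distributed is k-mer counting, from which informational indices follow directly; a serial sketch is below. In the Hadoop setting, the counting loop would become the map phase emitting (k-mer, 1) pairs and the summation the reduce phase. The toy sequence and the entropy index shown are illustrative, not the paper's benchmarks.

```python
import math
from collections import Counter

# Serial sketch of the k-mer statistics kernel: count occurrences of each
# k-mer, then derive an informational index (empirical k-mer entropy).

def kmer_counts(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_entropy(seq, k):
    counts = kmer_counts(seq, k)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

seq = "ACGTACGTGGTACCA"                      # toy sequence
print(kmer_counts(seq, 3).most_common(3))
print(round(kmer_entropy(seq, 3), 3))
```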

  16. Preoperative chemoradiotherapy with capecitabine versus protracted infusion 5-fluorouracil for rectal cancer: A matched-pair analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Prajnan; Lin, Edward H.; Bhatia, Sumita

    2006-12-01

    Purpose: To retrospectively compare the acute toxicity, pathologic response, relapse rates, and survival in rectal cancer patients treated with preoperative radiotherapy (RT) and either concurrent capecitabine or concurrent protracted infusion 5-fluorouracil (5-FU). Methods: Between June 2001 and February 2004, 89 patients with nonmetastatic rectal adenocarcinoma were treated with preoperative RT and concurrent capecitabine, followed by mesorectal excision. These patients were individually matched by clinical T and N stage (as determined by endoscopic ultrasound and CT scans) with 89 control patients treated with preoperative RT and concurrent protracted infusion 5-FU between September 1997 and August 2002. Results: In each group, 5 patients (6%) had Grade 3-4 toxicity during chemoradiotherapy. The pathologic complete response rate was 21% with capecitabine and 12% with protracted infusion 5-FU (p = 0.19). Of the 89 patients in the capecitabine group and 89 in the 5-FU group, 46 (52%) and 55 (62%), respectively, had downstaging of the T stage after chemoradiotherapy (p = 0.20). The estimated 3-year local control (p = 0.15), distant control (p = 0.86), and overall survival (p = 0.12) rates were 94.4%, 86.3%, and 89.8% for patients treated with capecitabine and 98.6%, 86.6%, and 96.4% for patients treated with protracted infusion 5-FU, respectively. Conclusion: Preoperative concurrent capecitabine and concurrent protracted infusion 5-FU were both well tolerated, with similar, low rates of Grade 3-4 acute toxicity. No significant differences were seen in the pathologic response, local and distant recurrence, or overall survival among patients treated with preoperative RT and concurrent capecitabine compared with those treated with RT and concurrent protracted infusion 5-FU.

  17. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare relational and non-relational (NoSQL) database systems in their ability to store, recover, query, and persist standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database), each in three different sizes, were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which were performed on them. Comparable results available in the literature were also considered. Both relational and non-relational NoSQL database systems show almost linear growth in query execution time. However, they show very different linear slopes, the former being much steeper than the two latter. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. EHR extract visualization and editing are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends very much on each particular situation and specific problem.

  18. Parallel State Space Construction for a Model Checking Based on Maximality Semantics

    NASA Astrophysics Data System (ADS)

    El Abidine Bouneb, Zine; Saīdouni, Djamel Eddine

    2009-03-01

    The main limiting factor of the model checker integrated in the concurrency verification environment FOCOVE [1, 2], which uses the maximality-based labeled transition system (denoted MLTS) as a true concurrency model [3, 4], is currently the amount of available physical memory. Many techniques have been developed to reduce the size of a state space. An interesting technique among them is the alpha equivalence reduction. A distributed memory execution environment offers yet another choice. The main contribution of this paper is to show that the parallel state space construction algorithm proposed in [5], which is based on interleaving semantics using LTS as the semantic model, may be easily adapted to a distributed implementation of the alpha equivalence reduction for maximality-based labeled transition systems.

  19. Relative Reinforcer Rates and Magnitudes Do Not Control Concurrent Choice Independently

    ERIC Educational Resources Information Center

    Elliffe, Douglas; Davison, Michael; Landon, Jason

    2008-01-01

    One assumption of the matching approach to choice is that different independent variables control choice independently of each other. We tested this assumption for reinforcer rate and magnitude in an extensive parametric experiment. Five pigeons responded for food reinforcement on switching-key concurrent variable-interval variable-interval…

  20. Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing.

    PubMed

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2008-08-01

    This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.

  1. Transition Flight Control Room Automation

    NASA Technical Reports Server (NTRS)

    Welborn, Curtis Ray

    1990-01-01

    The Workstation Prototype Laboratory is currently working on a number of projects which we feel can have a direct impact on ground operations automation. These projects include: The Fuel Cell Monitoring System (FCMS), which will monitor and detect problems with the fuel cells on the Shuttle. FCMS will use a combination of rules (forward/backward) and multi-threaded procedures which run concurrently with the rules, to implement the malfunction algorithms of the EGIL flight controllers. The combination of rule-based reasoning and procedural reasoning allows us to more easily map the malfunction algorithms into a real-time system implementation. A graphical computation language (AGCOMPL). AGCOMPL is an experimental prototype to determine the benefits and drawbacks of using a graphical language to design computations (algorithms) to work on Shuttle or Space Station telemetry and trajectory data. The design of a system which will allow a model of an electrical system, including telemetry sensors, to be configured on the screen graphically using previously defined electrical icons. This electrical model would then be used to generate rules and procedures for detecting malfunctions in the electrical components of the model. A generic message management (GMM) system. GMM is being designed as a message management system for real-time applications which send advisory messages to a user. The primary purpose of GMM is to reduce the risk of overloading a user with information when multiple failures occur and to assist the developer in devising an explanation facility. The emphasis of our work is to develop practical tools and techniques, while determining the feasibility of a given approach, including identification of appropriate software tools to support research, application and tool building activities.

  2. Transition flight control room automation

    NASA Technical Reports Server (NTRS)

    Welborn, Curtis Ray

    1990-01-01

    The Workstation Prototype Laboratory is currently working on a number of projects which can have a direct impact on ground operations automation. These projects include: (1) The fuel cell monitoring system (FCMS), which will monitor and detect problems with the fuel cells on the shuttle. FCMS will use a combination of rules (forward/backward) and multithreaded procedures, which run concurrently with the rules, to implement the malfunction algorithms of the EGIL flight controllers. The combination of rule-based reasoning and procedural reasoning allows us to more easily map the malfunction algorithms into a real-time system implementation. (2) A graphical computation language (AGCOMPL) is an experimental prototype to determine the benefits and drawbacks of using a graphical language to design computations (algorithms) to work on shuttle or space station telemetry and trajectory data. (3) The design of a system will allow a model of an electrical system, including telemetry sensors, to be configured on the screen graphically using previously defined electrical icons. This electrical model would then be used to generate rules and procedures for detecting malfunctions in the electrical components of the model. (4) A generic message management (GMM) system is being designed for real-time applications which send advisory messages to a user. The primary purpose of GMM is to reduce the risk of overloading a user with information when multiple failures occur and to assist the developer in devising an explanation facility. The emphasis of our work is to develop practical tools and techniques, including identification of appropriate software tools to support research, application, and tool building activities, while determining the feasibility of a given approach.

  3. PSO (Particle Swarm Optimization) for Interpretation of Magnetic Anomalies Caused by Simple Geometrical Structures

    NASA Astrophysics Data System (ADS)

    Essa, Khalid S.; Elhussein, Mahmoud

    2018-04-01

    A new efficient approach to estimating the parameters that control the source dimensions from magnetic anomaly profile data, based on the PSO (particle swarm optimization) algorithm, is presented. The PSO algorithm is applied to interpreting magnetic anomaly profile data using a new formula for isolated sources embedded in the subsurface. The model parameters estimated here are the depth of the body, the amplitude coefficient, the angle of effective magnetization, the shape factor, and the horizontal coordinates of the source. The model parameters recovered by the present technique, in particular the depths of the buried structures, were observed to be in excellent agreement with the true parameters. The root mean square (RMS) error is used as the criterion for the misfit between the observed and computed anomalies. Inversion of noise-free synthetic data, of noisy synthetic data containing different levels of random noise (5, 10, 15 and 20%) as well as multiple structures, and of two real field datasets from the USA and Egypt demonstrates the viability of the approach. The final estimates of the different parameters match those given in the published literature and the geologic results.

  4. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    NASA Astrophysics Data System (ADS)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

    2017-08-01

    Molecular dynamics simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules, but they are limited by the time scale barrier. That is, we may not obtain properties efficiently because we need to run microsecond or longer simulations using femtosecond time steps. To overcome this time scale barrier, we can use the weighted ensemble (WE) method, a powerful enhanced sampling method that efficiently samples thermodynamic and kinetic properties. However, the WE method requires an appropriate partitioning of phase space into discrete macrostates, which can be problematic when we have a high-dimensional collective space or when little is known a priori about the molecular system. Hence, we developed a new WE-based method, called the "Concurrent Adaptive Sampling (CAS) algorithm," to tackle these issues. The CAS algorithm is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and adaptive macrostates to enhance the sampling in the high-dimensional space. This is especially useful for systems in which we do not know what the right reaction coordinates are, in which case we can use many collective variables to sample conformations and pathways. In addition, a clustering technique based on the committor function is used to accelerate sampling of the slowest process in the molecular system. In this paper, we introduce the new method and show results from two-dimensional models and bio-molecules, specifically penta-alanine and a triazine trimer.
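
    The bookkeeping at the heart of any weighted-ensemble style method can be sketched as a split/merge resampling step that conserves total weight within bins. The sketch below assumes fixed bins and a target of two walkers per bin; the CAS algorithm's adaptive macrostates and committor-based clustering are not reproduced.

```python
import random
from collections import defaultdict

# One weighted-ensemble style resampling step: split heavy walkers and
# merge light ones within each bin, conserving total statistical weight.

TARGET = 2   # assumed walkers per bin (illustrative)

def resample(walkers, bin_of):
    """walkers: list of (position, weight). Returns a new walker list."""
    bins = defaultdict(list)
    for pos, w in walkers:
        bins[bin_of(pos)].append((pos, w))
    new = []
    for members in bins.values():
        while len(members) < TARGET:                 # split the heaviest walker
            members.sort(key=lambda x: x[1])
            pos, w = members.pop()
            members += [(pos, w / 2), (pos, w / 2)]
        while len(members) > TARGET:                 # merge the two lightest
            members.sort(key=lambda x: x[1])
            (p1, w1), (p2, w2) = members.pop(0), members.pop(0)
            keep = p1 if random.random() < w1 / (w1 + w2) else p2
            members.append((keep, w1 + w2))          # weight is conserved
        new += members
    return new

walkers = [(0.1, 0.5), (0.15, 0.3), (0.9, 0.2)]
walkers = resample(walkers, bin_of=lambda x: int(x * 2))  # bins of width 0.5
print(walkers, "total weight:", sum(w for _, w in walkers))
```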

  5. DoD Key Technologies Plan

    DTIC Science & Technology

    1992-07-01

    methodologies; software performance analysis; software testing; and concurrent languages. Finally, efforts in algorithms, which are primarily designed to upgrade... These codes provide a powerful research tool for testing new concepts and designs prior to experimental implementation. DoE's laser program has also... development, and specially designed production facilities. World leadership in both non-fluorinated and fluorinated materials resides in the U.S., but Japan

  6. Real-Time Aggressive Image Data Compression

    DTIC Science & Technology

    1990-03-31

    implemented with higher degrees of modularity, concurrency, and higher levels of machine intelligence, thereby providing higher data-throughput rates... Project Summary. Project Title: Real-Time Aggressive Image Data Compression. Principal Investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu. Institution... Summary: The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression (with a compression

  7. Context Aware Routing Management Architecture for Airborne Networks

    DTIC Science & Technology

    2012-03-22

    awareness, increased survivability, higher operation tempo, greater lethality, improved speed of command, and a certain degree of self-synchronization [35]... first two sets of experiments. This error model simulates deviations from predetermined routes as well as variations in signal strength for radio... routes computed using the Maximum Concurrent Multi-Commodity Flow algorithm are not susceptible to rapid topology variations induced by noise.

  8. Experiments with Test Case Generation and Runtime Analysis

    NASA Technical Reports Server (NTRS)

    Artho, Cyrille; Drusinsky, Doron; Goldberg, Allen; Havelund, Klaus; Lowry, Mike; Pasareanu, Corina; Rosu, Grigore; Visser, Willem; Koga, Dennis (Technical Monitor)

    2003-01-01

    Software testing is typically an ad hoc process where human testers manually write many test inputs and expected test results, perhaps automating their execution in a regression suite. This process is cumbersome and costly. This paper reports preliminary results on an approach to further automate this process. The approach consists of combining automated test case generation, based on systematically exploring the program's input domain, with runtime analysis, where execution traces are monitored and verified against temporal logic specifications, or analyzed using advanced algorithms for detecting concurrency errors such as data races and deadlocks. The approach suggests generating specifications dynamically per input instance rather than statically once-and-for-all. The paper describes experiments with variants of this approach in the context of two examples, a planetary rover controller and a spacecraft fault protection system.
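
    As one concrete instance of the kind of concurrency analysis mentioned, a lockset-style race check (in the style of the classic Eraser algorithm; not necessarily the tool used in the paper) intersects the sets of locks held at each access to a shared variable and flags an empty intersection as a potential data race. A toy sketch:

```python
# Lockset-style data-race check: for each shared variable, intersect the
# locks held at every access; an empty intersection flags a potential race.
# The access trace below is invented for illustration.

candidate_locks = {}    # variable -> locks held at every access so far

def on_access(var, locks_held):
    if var not in candidate_locks:
        candidate_locks[var] = set(locks_held)
    else:
        candidate_locks[var] &= set(locks_held)
    if not candidate_locks[var]:
        print(f"potential race on {var!r}: no common lock protects it")

# A trace of (variable, locks held) events, e.g. from instrumented threads:
trace = [("balance", {"L1"}), ("balance", {"L1"}), ("balance", {"L2"})]
for var, locks in trace:
    on_access(var, locks)
```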

  9. A ranking method for the concurrent learning of compounds with various activity profiles.

    PubMed

    Dörr, Alexander; Rosenbaum, Lars; Zell, Andreas

    2015-01-01

    In this study, we present an SVM-based ranking algorithm for the concurrent learning of compounds with different activity profiles and their varying prioritization. To this end, a specific labeling of each compound was elaborated in order to infer virtual screening models against multiple targets. We compared the method with several state-of-the-art SVM classification techniques that are capable of inferring multi-target screening models on three chemical datasets (cytochrome P450s, dehydrogenases, and a trypsin-like protease dataset), each containing three different biological targets. The experiments show that ranking-based algorithms offer increased performance for single- and multi-target virtual screening. Moreover, compounds that do not completely fulfill the desired activity profile are still ranked higher than decoys or compounds with an entirely undesired profile, compared to other multi-target SVM methods. SVM-based ranking methods constitute a valuable approach for virtual screening in multi-target drug design. The utilization of such methods is most helpful when dealing with compounds with various activity profiles and when finding many ligands with an already perfectly matching activity profile is not to be expected.

  10. A novel heterogeneous algorithm to simulate multiphase flow in porous media on multicore CPU-GPU systems

    NASA Astrophysics Data System (ADS)

    McClure, J. E.; Prins, J. F.; Miller, C. T.

    2014-07-01

    Multiphase flow implementations of the lattice Boltzmann method (LBM) are widely applied to the study of porous medium systems. In this work, we construct a new variant of the popular "color" LBM for two-phase flow in which a three-dimensional, 19-velocity (D3Q19) lattice is used to compute the momentum transport solution while a three-dimensional, seven-velocity (D3Q7) lattice is used to compute the mass transport solution. Based on this formulation, we implement a novel heterogeneous GPU-accelerated algorithm in which the mass transport solution is computed by multiple shared-memory CPU cores programmed using OpenMP, while a concurrent solution of the momentum transport is performed on a GPU. The heterogeneous solution is demonstrated to provide a speedup of 2.6× compared to the multicore CPU solution and 1.8× compared to the GPU solution, owing to the concurrent utilization of both CPU and GPU bandwidths. Furthermore, we verify that the proposed formulation provides an accurate physical representation of multiphase flow processes and demonstrate that the approach can be applied to perform heterogeneous simulations of two-phase flow in porous media using a typical GPU-accelerated workstation.

  11. A unifying model of concurrent spatial and temporal modularity in muscle activity.

    PubMed

    Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien

    2014-02-01

    Modularity in the central nervous system (CNS), i.e., the brain capability to generate a wide repertoire of movements by combining a small number of building blocks ("modules"), is thought to underlie the control of movement. Numerous studies reported evidence for such a modular organization by identifying invariant muscle activation patterns across various tasks. However, previous studies relied on decompositions differing in both the nature and dimensionality of the identified modules. Here, we derive a single framework that encompasses all influential models of muscle activation modularity. We introduce a new model (named space-by-time decomposition) that factorizes muscle activations into concurrent spatial and temporal modules. To infer these modules, we develop an algorithm, referred to as sample-based nonnegative matrix trifactorization (sNM3F). We test the space-by-time decomposition on a comprehensive electromyographic dataset recorded during execution of arm pointing movements and show that it provides a low-dimensional yet accurate, highly flexible and task-relevant representation of muscle patterns. The extracted modules have a well characterized functional meaning and implement an efficient trade-off between replication of the original muscle patterns and task discriminability. Furthermore, they are compatible with the modules extracted from existing models, such as synchronous synergies and temporal primitives, and generalize time-varying synergies. Our results indicate the effectiveness of a simultaneous but separate condensation of spatial and temporal dimensions of muscle patterns. The space-by-time decomposition accommodates a unified view of the hierarchical mapping from task parameters to coordinated muscle activations, which could be employed as a reference framework for studying compositional motor control.

  12. Pattern identification in time-course gene expression data with the CoGAPS matrix factorization.

    PubMed

    Fertig, Elana J; Stein-O'Brien, Genevieve; Jaffe, Andrew; Colantuoni, Carlo

    2014-01-01

    Patterns in time-course gene expression data can represent the biological processes that are active over the measured time period. However, the orthogonality constraint in standard pattern-finding algorithms, including notably principal components analysis (PCA), confounds expression changes resulting from simultaneous, non-orthogonal biological processes. Previously, we have shown that Markov chain Monte Carlo nonnegative matrix factorization algorithms are particularly adept at distinguishing such concurrent patterns. One such matrix factorization is implemented in the software package CoGAPS. We describe the application of this software and several technical considerations for identification of age-related patterns in a public, prefrontal cortex gene expression dataset.

  13. ACMES: fast multiple-genome searches for short repeat sequences with concurrent cross-species information retrieval

    PubMed Central

    Reneker, Jeff; Shyu, Chi-Ren; Zeng, Peiyu; Polacco, Joseph C.; Gassmann, Walter

    2004-01-01

    We have developed a web server for the life sciences community to use to search for short repeats of DNA sequence of length between 3 and 10 000 bases within multiple species. This search employs a unique and fast hash function approach. Our system also applies information retrieval algorithms to discover knowledge of cross-species conservation of repeat sequences. Furthermore, we have incorporated a part of the Gene Ontology database into our information retrieval algorithms to broaden the coverage of the search. Our web server and tutorial can be found at http://acmes.rnet.missouri.edu. PMID:15215469
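
    The hash-based kernel of such a repeat search is straightforward to sketch: slide a window over each sequence, use the substring as the hash key, and report keys seen in more than one species, which is the cross-species conservation idea in miniature. The sequences below are toy data, and ACMES's actual hash function and Gene Ontology integration are not reproduced.

```python
from collections import defaultdict

# Hash-based search for short repeats shared across species: index every
# length-k window of each sequence and keep k-mers seen in 2+ species.

def shared_repeats(genomes, k):
    """genomes: dict species -> sequence. Returns k-mer -> set of species."""
    where = defaultdict(set)
    for species, seq in genomes.items():
        for i in range(len(seq) - k + 1):
            where[seq[i:i + k]].add(species)    # dict lookup plays the hash role
    return {kmer: sp for kmer, sp in where.items() if len(sp) > 1}

genomes = {"athaliana": "ACGACGACGTT", "soybean": "TTACGACGCC"}   # toy data
print(shared_repeats(genomes, 6))   # {'ACGACG': {'athaliana', 'soybean'}}
```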

  14. A One-Versus-All Class Binarization Strategy for Bearing Diagnostics of Concurrent Defects

    PubMed Central

    Ng, Selina S. Y.; Tse, Peter W.; Tsui, Kwok L.

    2014-01-01

    In bearing diagnostics using a data-driven modeling approach, a concern is the need for data from all possible scenarios to build a practical model for all operating conditions. This paper is a study on bearing diagnostics with the concurrent occurrence of multiple defect types. The authors are not aware of any work in the literature that studies this practical problem. A strategy based on one-versus-all (OVA) class binarization is proposed to improve fault diagnostics accuracy while reducing the number of scenarios for data collection, by predicting concurrent defects from training data of normal and single defects. The proposed OVA diagnostic approach is evaluated with empirical analysis using support vector machine (SVM) and C4.5 decision tree, two popular classification algorithms frequently applied to system health diagnostics and prognostics. Statistical features are extracted from the time domain and the frequency domain. Prediction performance of the proposed strategy is compared with that of a simple multi-class classification, as well as that of random guess and worst-case classification. We have verified the potential of the proposed OVA diagnostic strategy in performance improvements for single-defect diagnosis and predictions of BPFO plus BPFI concurrent defects using two laboratory-collected vibration data sets. PMID:24419162

  15. A one-versus-all class binarization strategy for bearing diagnostics of concurrent defects.

    PubMed

    Ng, Selina S Y; Tse, Peter W; Tsui, Kwok L

    2014-01-13

    In bearing diagnostics using a data-driven modeling approach, a concern is the need for data from all possible scenarios to build a practical model for all operating conditions. This paper is a study on bearing diagnostics with the concurrent occurrence of multiple defect types. The authors are not aware of any work in the literature that studies this practical problem. A strategy based on one-versus-all (OVA) class binarization is proposed to improve fault diagnostics accuracy while reducing the number of scenarios for data collection, by predicting concurrent defects from training data of normal and single defects. The proposed OVA diagnostic approach is evaluated with empirical analysis using support vector machine (SVM) and C4.5 decision tree, two popular classification algorithms frequently applied to system health diagnostics and prognostics. Statistical features are extracted from the time domain and the frequency domain. Prediction performance of the proposed strategy is compared with that of a simple multi-class classification, as well as that of random guess and worst-case classification. We have verified the potential of the proposed OVA diagnostic strategy in performance improvements for single-defect diagnosis and predictions of BPFO plus BPFI concurrent defects using two laboratory-collected vibration data sets.
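
    A sketch of the OVA idea on synthetic data is shown below: one binary SVM per defect type is trained only on normal and single-defect samples, and a sample on which several classifiers fire simultaneously is diagnosed with concurrent defects. The two features are made-up stand-ins for the paper's time- and frequency-domain statistics, and scikit-learn's SVC is used for convenience.

```python
import numpy as np
from sklearn.svm import SVC

# One-versus-all sketch: one binary SVM per defect type, trained only on
# normal and single-defect samples; several classifiers firing at once is
# read as a concurrent defect. Features are synthetic stand-ins.

rng = np.random.default_rng(0)
n = 200
normal   = rng.normal(0.0, 0.3, size=(n, 2))                        # baseline
defect_a = rng.normal(0.0, 0.3, size=(n, 2)) + np.array([2.0, 0.0])  # e.g. BPFO
defect_b = rng.normal(0.0, 0.3, size=(n, 2)) + np.array([0.0, 2.0])  # e.g. BPFI

X = np.vstack([normal, defect_a, defect_b])
labels = {"A": np.r_[np.zeros(n), np.ones(n), np.zeros(n)],
          "B": np.r_[np.zeros(n), np.zeros(n), np.ones(n)]}
clfs = {d: SVC(kernel="linear").fit(X, y) for d, y in labels.items()}

# A concurrent A+B defect shifts both features, a case never seen in training:
sample = np.array([[2.0, 2.0]])
diagnosis = [d for d, clf in clfs.items() if clf.predict(sample)[0] == 1]
print(diagnosis or ["normal"])   # expected: ['A', 'B']
```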

  16. Concurrent alcohol and tobacco use among a middle-aged and elderly population in Mumbai.

    PubMed

    Gupta, Prakash C; Maulik, Pallab K; Pednekar, Mangesh S; Saxena, Shekhar

    2005-01-01

    The concurrent use of alcohol and tobacco and its deleterious effects have been reported in the western literature. However, studies on the relationship between concurrent alcohol and tobacco use in India are limited. This study outlines the association between concurrent alcohol and tobacco use among a middle-aged and elderly population in a western Indian cohort after controlling for various sociodemographic factors. A total of 35 102 men, 45 years of age and above were interviewed for concurrent alcohol and tobacco use. The sample was part of an earlier cohort drawn from the general population. The data were analysed after controlling for age, education, religion and mother-tongue. Among alcohol users, 51.1% smoked tobacco and 35.6% used smokeless tobacco. The relative risk of alcohol use was highest among those smoking cigarettes or beedis and among those using mishri with betel quid and tobacco. The risk of alcohol use increased with the frequency of tobacco use. The risk also increased with higher amounts of alcohol consumption, but peaked at around 100-150 ml of absolute alcohol use. The study highlights the association between concurrent alcohol and tobacco use among the Indian population. This has important public health implications since concurrent use of these is synergistic for increased risk of oropharyngeal cancers.

  17. Automatic Verification of Serializers.

    DTIC Science & Technology

    1980-03-01

    Using semaphores to implement serializers... A comparison of... of concurrency control, while Hewitt has concentrated on more primitive control of concurrency in a context where programs communicate by passing... translation of serializers into clusters and semaphores is given as a possible implementation strategy. Chapter 3 presents a simple semantic model that sup...

  18. The Developmental Effect of Concurrent Cognitive and Locomotor Skills: Time-Sharing from a Dynamical Perspective.

    ERIC Educational Resources Information Center

    Whitall, Jill

    1991-01-01

    Presents research on the effects of concurrent verbal cognition on locomotor skills. Results revealed no interference with coordination variables across age, but some interference with control variables, particularly in younger subjects. Coordination of gait required less attention than setting of control parameters. This coordination was in place…

  19. Aging and Concurrent Task Performance: Cognitive Demand and Motor Control

    ERIC Educational Resources Information Center

    Albinet, Cedric; Tomporowski, Phillip D.; Beasman, Kathryn

    2006-01-01

    A motor task that requires fine control of upper limb movements and a cognitive task that requires executive processing--first performing them separately and then concurrently--was performed by 18 young and 18 older adults. The motor task required participants to tap alternately on two targets, the sizes of which varied systematically. The…

  20. A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding

    NASA Astrophysics Data System (ADS)

    Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae

    2017-12-01

    High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively finds a good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement in hardware. This paper proposes a new integer motion estimation algorithm which is designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimation of all prediction unit (PU) partitions. The algorithm consists of three phases: zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). All redundant search points are then removed prior to the estimation of the motion costs, and the best search points are selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm significantly decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84%, and it also reduces the computational complexity by 54.54%.
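
    The redundant-point elimination at the heart of the proposed algorithm can be sketched as follows; gen_points, motion_cost, and the pu.id attribute are hypothetical stand-ins, and the sketch assumes a motion cost that can be shared across PU partitions (in hardware, this sharing is what removes the duplicated work):

        # Conceptual sketch (not the HEVC reference code): one phase of the
        # modified TZ search.  Search points requested by all PU partitions
        # are pooled, duplicates are removed so each motion cost is computed
        # once, and every PU then picks its best point from the shared results.

        def search_phase(pus, gen_points, motion_cost):
            # gen_points(pu) -> candidate motion vectors for this phase
            requested = {pu.id: set(gen_points(pu)) for pu in pus}

            # Union of all candidates; duplicates across PUs collapse here.
            unique_points = set().union(*requested.values())

            # Evaluate each unique search point exactly once.
            cost = {mv: motion_cost(mv) for mv in unique_points}

            # Each PU selects its best point among the ones it asked for.
            return {pid: min(pts, key=cost.get)
                    for pid, pts in requested.items()}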

  1. Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Buluc, Aydn; Pothen, Alex

    It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single-source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.
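
    For contrast with the multi-source approach above, a compact single-source baseline (Kuhn's augmenting-path algorithm) is sketched below; the paper's contribution is precisely to replace such one-path-at-a-time searches with multi-source BFS plus tree grafting:

        # Baseline for contrast: Kuhn's single-source augmenting-path
        # algorithm for maximum bipartite matching.  This compact version
        # only illustrates the augmenting-path primitive both approaches share.

        def max_bipartite_matching(adj, n_left, n_right):
            # adj[u] lists right-side neighbours of left vertex u
            match_right = [-1] * n_right   # right vertex -> matched left vertex

            def try_augment(u, visited):
                for v in adj[u]:
                    if v not in visited:
                        visited.add(v)
                        # v is free, or its partner can be re-matched elsewhere
                        if match_right[v] == -1 or try_augment(match_right[v], visited):
                            match_right[v] = u
                            return True
                return False

            matched = sum(try_augment(u, set()) for u in range(n_left))
            return matched, match_right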

  2. Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting

    DOE PAGES

    Azad, Ariful; Buluc, Aydn; Pothen, Alex

    2016-03-24

    It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single-source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.

  3. Designing of a technological line in the context of controlling with the use of integration of the virtual controller with the mechatronics concept designer module of the PLM Siemens NX software

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2017-08-01

    This work examines the sequential control system of a technological line, in the form of the final part of an internal transport system. The technological line was designed using a computer-aided approach that ran concurrently in two different program environments. In the Mechatronics Concept Designer module of the PLM Siemens NX software, a 3D model of the technological line was developed and prepared for verifying the logical interrelations implemented in the control system. For this purpose, the sub-system of actuators and sensors was singled out from the whole technological line, because its correct operation determines the correct operation of the whole system. The algorithms governing the operation of the planned line were implemented in the virtual controller. Both program environments were then integrated using an OPC server, which enables the exchange of data between the considered systems. Data on the state of the object, and data defining the manner and sequence of operation of the technological line, are exchanged between the virtual controller and the 3D model of the technological line in real time.

  4. A Regularizer Approach for RBF Networks Under the Concurrent Weight Failure Situation.

    PubMed

    Leung, Chi-Sing; Wan, Wai Yan; Feng, Ruibin

    2017-06-01

    Many existing results on fault-tolerant algorithms focus on the single fault source situation, where a trained network is affected by one kind of weight failure. In fact, a trained network may be affected by multiple kinds of weight failure. This paper first studies how the open weight fault and the multiplicative weight noise degrade the performance of radial basis function (RBF) networks. Afterward, we define the objective function for training fault-tolerant RBF networks. Based on the objective function, we then develop two learning algorithms, one batch mode and one online mode. In addition, the convergence conditions of our online algorithm are investigated. Finally, we develop a formula to estimate the test set error of faulty networks trained from our approach. This formula helps us to optimize some tuning parameters, such as the RBF width.
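
    The paper's exact objective is not reproduced here, but fault-tolerant RBF training under multiplicative weight noise typically reduces to a weighted ridge problem; a sketch under that assumption:

        import numpy as np

        # Illustrative batch-mode training of an RBF network with a
        # regularizer against multiplicative weight noise.  This is the
        # generic weighted-ridge form such fault-tolerance analyses tend to
        # lead to (an assumed form, not the paper's exact objective).

        def train_rbf_fault_tolerant(X, y, centers, width, noise_var):
            # Design matrix of RBF activations
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            Phi = np.exp(-d2 / (2.0 * width ** 2))

            # Multiplicative weight noise of variance noise_var inflates the
            # expected test error by noise_var * sum_i w_i^2 * mean(Phi_i^2),
            # which acts as a data-dependent ridge penalty on the weights.
            R = noise_var * np.diag((Phi ** 2).mean(axis=0))
            w = np.linalg.solve(Phi.T @ Phi / len(X) + R, Phi.T @ y / len(X))
            return w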

  5. Coarse-grained component concurrency in Earth system modeling: parallelizing atmospheric radiative transfer in the GFDL AM3 model using the Flexible Modeling System coupling framework

    NASA Astrophysics Data System (ADS)

    Balaji, V.; Benson, Rusty; Wyman, Bruce; Held, Isaac

    2016-10-01

    Climate models represent a large variety of processes on a variety of timescales and space scales, a canonical example of multi-physics multi-scale modeling. Current hardware trends, such as Graphical Processing Units (GPUs) and Many Integrated Core (MIC) chips, are based on, at best, marginal increases in clock speed, coupled with vast increases in concurrency, particularly at the fine grain. Multi-physics codes face particular challenges in achieving fine-grained concurrency, as different physics and dynamics components have different computational profiles, and universal solutions are hard to come by. We propose here one approach for multi-physics codes. These codes are typically structured as components interacting via software frameworks. The component structure of a typical Earth system model consists of a hierarchical and recursive tree of components, each representing a different climate process or dynamical system. This recursive structure generally encompasses a modest level of concurrency at the highest level (e.g., atmosphere and ocean on different processor sets) with serial organization underneath. We propose to extend concurrency much further by running more and more lower- and higher-level components in parallel with each other. Each component can further be parallelized on the fine grain, potentially offering a major increase in the scalability of Earth system models. We present here first results from this approach, called coarse-grained component concurrency, or CCC. Within the Geophysical Fluid Dynamics Laboratory (GFDL) Flexible Modeling System (FMS), the atmospheric radiative transfer component has been configured to run in parallel with a composite component consisting of every other atmospheric component, including the atmospheric dynamics and all other atmospheric physics components. We will explore the algorithmic challenges involved in such an approach, and present results from such simulations. Plans to achieve even greater levels of coarse-grained concurrency by extending this approach within other components, such as the ocean, will be discussed.

  6. Efficacy and safety of concurrent chemoradiation with weekly cisplatin ± low-dose celecoxib in locally advanced undifferentiated nasopharyngeal carcinoma: a phase II-III clinical trial.

    PubMed

    Mohammadianpanah, Mohammad; Razmjou-Ghalaei, Sasan; Shafizad, Amin; Ashouri-Taziani, Yaghoub; Khademi, Bijan; Ahmadloo, Niloofar; Ansari, Mansour; Omidvari, Shapour; Mosalaei, Ahmad; Mosleh-Shirazi, Mohammad Amin

    2011-01-01

    This is the first study that aimed to determine the efficacy and safety of concurrent chemoradiation with weekly cisplatin ± celecoxib 100 mg twice daily in locally advanced undifferentiated nasopharyngeal carcinoma. Eligible patients had newly diagnosed locally advanced (T3-T4, and/or N2-N3, M0) undifferentiated nasopharyngeal carcinoma, no prior therapy, Karnofsky performance status ≥ 70, and normal organ function. The patients were assigned to receive 7 weeks of concurrent chemoradiation (70 Gy) with weekly cisplatin 30 mg/m² plus either celecoxib 100 mg twice daily (study group, n = 26) or placebo (control group, n = 27), followed by adjuvant combined chemotherapy with cisplatin 70 mg/m² on day 1 plus 5-fluorouracil 750 mg/m²/day as an 8-h infusion on days 1-3, every 3 weeks for 3 cycles. Overall clinical response rate was 100% in both groups. Complete and partial clinical response rates were 64% and 36% in the study group and 44% and 56% in the control group, respectively (P > 0.25). The addition of celecoxib to concurrent chemoradiation was associated with an improved 2-year locoregional control rate, from 84% to 100% (P = 0.039). The addition of celecoxib 100 mg twice daily to concurrent chemoradiation improved the 2-year locoregional control rate.

  7. Multi-jagged: A scalable parallel spatial partitioning algorithm

    DOE PAGES

    Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; ...

    2015-03-18

    Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.
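
    A toy two-dimensional version of the multi-section idea, computing all cut lines in a dimension at once from weighted quantiles (part counts and the serial quantile computation are illustrative only; the paper's contribution is the parallel, migration-aware implementation):

        import numpy as np

        # Toy multi-section partitioning in 2-D: instead of recursive
        # bisection, cut the domain into p1 slabs along x (all cut lines
        # computed at once from weighted quantiles), then cut each slab
        # into p2 parts along y.

        def multi_jagged(points, weights, p1, p2):
            order = np.argsort(points[:, 0])
            cum = np.cumsum(weights[order])
            # p1 - 1 cut lines along x, chosen so slabs carry equal weight
            targets = cum[-1] * np.arange(1, p1) / p1
            slabs = np.split(order, np.searchsorted(cum, targets))

            parts = []
            for slab in slabs:
                sub = slab[np.argsort(points[slab, 1])]
                cumy = np.cumsum(weights[sub])
                ty = cumy[-1] * np.arange(1, p2) / p2
                parts.extend(np.split(sub, np.searchsorted(cumy, ty)))
            return parts  # list of p1 * p2 index arrays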

  8. Urethral lymphogranuloma venereum infections in men with anorectal lymphogranuloma venereum and their partners: the missing link in the current epidemic?

    PubMed

    de Vrieze, Nynke Hesselina Neeltje; van Rooijen, Martijn; Speksnijder, Arjen Gerard Cornelis Lambertus; de Vries, Henry John C

    2013-08-01

    Urethral lymphogranuloma venereum (LGV) is not screened for routinely. We found that among 341 men who have sex with men with anorectal LGV, 7 (2.1%) had concurrent urethral LGV. Among 59 partners, 4 (6.8%) had urethral LGV infections. Urethral LGV is common, probably key in transmission, and missed by current routine LGV screening algorithms.

  9. A Comparison of the Concurrent and Predictive Validity of Three Measures of Readiness to Change Alcohol Use in a Clinical Sample of Adolescents

    ERIC Educational Resources Information Center

    Maisto, Stephen A.; Krenek, Marketa; Chung, Tammy; Martin, Christopher S.; Clark, Duncan; Cornelius, Jack

    2011-01-01

    The authors compared 3 measures of readiness to change alcohol use commonly used in clinical research and practice with adolescents: the Readiness Ruler, the SOCRATES (subscales of Recognition and Taking Steps), and a Staging Algorithm. The analysis sample consisted of 161 male and female adolescents presenting for intensive outpatient…

  10. Software Development Technologies for Reactive, Real-Time, and Hybrid Systems: Summary of Research

    NASA Technical Reports Server (NTRS)

    Manna, Zohar

    1998-01-01

    This research is directed towards the implementation of a comprehensive deductive-algorithmic environment (toolkit) for the development and verification of high assurance reactive systems, especially concurrent, real-time, and hybrid systems. For this, we have designed and implemented the STeP (Stanford Temporal Prover) verification system. Reactive systems have an ongoing interaction with their environment, and their computations are infinite sequences of states. A large number of systems can be seen as reactive systems, including hardware, concurrent programs, network protocols, and embedded systems. Temporal logic provides a convenient language for expressing properties of reactive systems. A temporal verification methodology provides procedures for proving that a given system satisfies a given temporal property. The research covered necessary theoretical foundations as well as implementation and application issues.

  11. Effortful Control and Impulsivity as Concurrent and Longitudinal Predictors of Academic Achievement

    ERIC Educational Resources Information Center

    Valiente, Carlos; Eisenberg, Nancy; Spinrad, Tracy L.; Haugen, Rg; Thompson, Marilyn S.; Kupfer, Anne

    2013-01-01

    The goal of this study was to test if both effortful control (EC) and impulsivity, a reactive index of temperament, uniquely predict adolescents' academic achievement, concurrently and longitudinally (Time 1: N = 168, mean age = 12 years). At Time 1, parents and teachers reported on students' EC and impulsivity.…

  12. The Effects of a Concurrent Task on Human Optimization and Self Control

    ERIC Educational Resources Information Center

    Reed, Phil; Thompson, Caitlin; Osborne, Lisa A.; McHugh, Louise

    2011-01-01

    Memory deficits have been shown to hamper decision making in a number of populations. In two experiments, participants were required to select one of three alternatives that varied in reinforcer amount and delay, and the effect of a concurrent task on a behavioral choice task that involved making either an impulsive, self-controlled, or optimal…

  13. Multitasking the Davidson algorithm for the large, sparse eigenvalue problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Umar, V.M.; Fischer, C.F.

    1989-01-01

    The authors report how the Davidson algorithm, developed for handling the eigenvalue problem for large and sparse matrices arising in quantum chemistry, was modified for use in atomic structure calculations. To date these calculations have used traditional eigenvalue methods, which limit the range of feasible calculations because of their excessive memory requirements and unsatisfactory performance attributed to time-consuming and costly processing of zero-valued elements. The replacement of a traditional matrix eigenvalue method by the Davidson algorithm reduced these limitations. Significant speedup was found, which varied with the size of the underlying problem and its sparsity. Furthermore, the range of matrix sizes that can be manipulated efficiently was expanded by more than one order of magnitude. On the CRAY X-MP the code was vectorized and the importance of gather/scatter analyzed. A parallelized version of the algorithm obtained an additional 35% reduction in execution time. Speedup due to vectorization and concurrency was also measured on the Alliant FX/8.
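
    A compact serial Davidson iteration for the lowest eigenpair, with the diagonal preconditioner that makes the method attractive for sparse matrices; a dense matrix stands in for the sparse operator, and none of the vectorization or multitasking discussed above is shown:

        import numpy as np

        def davidson_lowest(A, tol=1e-8, max_iter=100):
            n = A.shape[0]
            diag = np.diag(A)
            V = np.eye(n, 1)                   # initial search space
            for _ in range(max_iter):
                # Rayleigh-Ritz in the current subspace
                H = V.T @ A @ V
                theta, s = np.linalg.eigh(H)
                theta, s = theta[0], s[:, 0]
                u = V @ s
                r = A @ u - theta * u          # residual
                if np.linalg.norm(r) < tol:
                    return theta, u
                # Diagonal (Jacobi-like) preconditioner -- the cheap step
                # that exploits sparsity in production codes
                t = r / (theta - diag + 1e-12)
                # Orthogonalize against the subspace and expand it
                t -= V @ (V.T @ t)
                t /= np.linalg.norm(t)
                V = np.hstack([V, t[:, None]])
            return theta, u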

  14. Concurrent extensions to the FORTRAN language for parallel programming of computational fluid dynamics algorithms

    NASA Technical Reports Server (NTRS)

    Weeks, Cindy Lou

    1986-01-01

    Experiments were conducted at NASA Ames Research Center to define multi-tasking software requirements for multiple-instruction, multiple-data stream (MIMD) computer architectures. The focus was on specifying solutions for algorithms in the field of computational fluid dynamics (CFD). The program objectives were to allow researchers to produce usable parallel application software as soon as possible after acquiring MIMD computer equipment, to provide researchers with an easy-to-learn and easy-to-use parallel software language which could be implemented on several different MIMD machines, and to enable researchers to list preferred design specifications for future MIMD computer architectures. Analysis of CFD algorithms indicated that extensions of an existing programming language, adaptable to new computer architectures, provided the best solution to meeting program objectives. The CoFORTRAN Language was written in response to these objectives and to provide researchers a means to experiment with parallel software solutions to CFD algorithms on machines with parallel architectures.

  15. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4

    2015-01-15

    Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations for a subset of the projection angles in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, a reconstruction corrected with a constant scatter estimate, and a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a suitable GOF metric with strong correlation with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35-93 s and 114-122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%-50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
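
    A sketch of the concurrent-fitting loop under simplifying assumptions: run_mc_batch is a hypothetical source of noisy per-batch scatter images, the fit is a low-pass filtered 2-D FFT, and Pearson's r serves as the GOF stopping test, mirroring the structure (though not the physics) of the method:

        import numpy as np

        def cmcf(run_mc_batch, shape, keep_freqs=8, r_min=0.99, max_batches=1000):
            S_mc = np.zeros(shape)                 # aggregated scatter response
            for _ in range(max_batches):
                S_mc += run_mc_batch()             # one more noisy MC batch

                # Low-pass fit: keep only the lowest Fourier frequencies
                F = np.fft.rfft2(S_mc)
                F[keep_freqs:-keep_freqs or None, :] = 0
                F[:, keep_freqs:] = 0
                S_f = np.fft.irfft2(F, s=shape)

                # Goodness of fit: Pearson's r between fit and aggregate
                r = np.corrcoef(S_f.ravel(), S_mc.ravel())[0, 1]
                if r >= r_min:                     # GOF reached, stop early
                    return S_f
            return S_f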

  16. NSTX-U Advances in Real-Time C++11 on Linux

    NASA Astrophysics Data System (ADS)

    Erickson, Keith G.

    2015-08-01

    Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing one periodic deadline is a failure) of 200 microseconds.

  17. A Multifactorial, Criteria-based Progressive Algorithm for Hamstring Injury Treatment.

    PubMed

    Mendiguchia, Jurdan; Martinez-Ruiz, Enrique; Edouard, Pascal; Morin, Jean-Benoît; Martinez-Martinez, Francisco; Idoate, Fernando; Mendez-Villanueva, Alberto

    2017-07-01

    Given the prevalence of hamstring injuries in football, a rehabilitation program that effectively promotes muscle tissue repair and functional recovery is paramount to minimize reinjury risk and optimize player performance and availability. This study aimed to assess the effectiveness of administering an individualized and multifactorial criteria-based algorithm (rehabilitation algorithm [RA]) for hamstring injury rehabilitation in comparison with using a general rehabilitation protocol (RP). In a double-blind randomized controlled trial, two equal groups of 24 football players (48 total) followed either the RA or a validated RP, beginning 5 d after an acute hamstring injury. Within 6 months after return to sport, six hamstring reinjuries occurred in RP versus one injury in RA (relative risk = 6, 90% confidence interval = 1-35; clinical inference: very likely beneficial effect). The average duration of return to sport was possibly quicker (effect size = 0.34 ± 0.42) in RP (23.2 ± 11.7 d) compared with RA (25.5 ± 7.8 d) (-13.8%, 90% confidence interval = -34.0% to 3.4%; clinical inference: possibly small effect). At the time of return to sport, RA players showed substantially better 10-m time, maximal sprinting speed, and greater mechanical variables related to speed (i.e., maximum theoretical speed and maximal horizontal power) than the RP players. Although return to sport was slower, male football players who underwent an individualized, multifactorial, criteria-based algorithm with a performance- and primary-risk-factor-oriented training program from the early stages of the process markedly decreased the risk of reinjury compared with a general protocol in which long-length strength training exercises were prioritized.

  18. A nonrecursive order N preconditioned conjugate gradient: Range space formulation of MDOF dynamics

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.

    1990-01-01

    While excellent progress has been made in deriving algorithms that are efficient for certain combinations of system topologies and concurrent multiprocessing hardware, several issues must be resolved to incorporate transient simulation in the control design process for large space structures. Specifically, strategies must be developed that are applicable to systems with numerous degrees of freedom. In addition, the algorithms must have a growth potential in that they must also be amenable to implementation on forthcoming parallel system architectures. For mechanical system simulation, this fact implies that algorithms are required that induce parallelism on a fine scale, suitable for the emerging class of highly parallel processors; and transient simulation methods must be automatically load balancing for a wider collection of system topologies and hardware configurations. These problems are addressed by employing a combination range space/preconditioned conjugate gradient formulation of multi-degree-of-freedom dynamics. The method described has several advantages. In a sequential computing environment, the method has the features that: by employing regular ordering of the system connectivity graph, an extremely efficient preconditioner can be derived from the 'range space metric', as opposed to the system coefficient matrix; because of the effectiveness of the preconditioner, preliminary studies indicate that the method can achieve performance rates that depend linearly upon the number of substructures, hence the title 'Order N'; and the method is non-assembling. Furthermore, the approach is promising as a potential parallel processing algorithm in that the method exhibits a fine parallel granularity suitable for a wide collection of combinations of physical system topologies/computer architectures; and the method is easily load balanced among processors, and does not rely upon system topology to induce parallelism.
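
    The report's range-space-metric preconditioner is not reproduced here; the sketch below only shows the generic preconditioned conjugate gradient skeleton, with a diagonal (Jacobi) preconditioner standing in for the one any such method plugs into:

        import numpy as np

        def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv_diag * r              # apply preconditioner M^{-1}
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv_diag * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p   # update search direction
                rz = rz_new
            return x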

  19. Is It Really Self-Control? Examining the Predictive Power of the Delay of Gratification Task

    PubMed Central

    Duckworth, Angela L.; Tsukayama, Eli; Kirby, Teri A.

    2013-01-01

    This investigation tests whether the predictive power of the delay of gratification task (colloquially known as the “marshmallow test”) derives from its assessment of self-control or of theoretically unrelated traits. Among 56 school-age children in Study 1, delay time was associated with concurrent teacher ratings of self-control and Big Five conscientiousness—but not with other personality traits, intelligence, or reward-related impulses. Likewise, among 966 preschool children in Study 2, delay time was consistently associated with concurrent parent and caregiver ratings of self-control but not with reward-related impulses. While delay time in Study 2 was also related to concurrently measured intelligence, predictive relations with academic, health, and social outcomes in adolescence were more consistently explained by ratings of effortful control. Collectively, these findings suggest that delay task performance may be influenced by extraneous traits, but its predictive power derives primarily from its assessment of self-control. PMID:23813422

  20. Postural stability and the influence of concurrent muscle activation--Beneficial effects of jaw and fist clenching.

    PubMed

    Ringhof, Steffen; Leibold, Timo; Hellmann, Daniel; Stein, Thorsten

    2015-10-01

    Recent studies reported on the potential benefits of submaximum clenching of the jaw on human postural control in upright unperturbed stance. However, it remained unclear whether these effects might also be observed among active controls. The purpose of the present study, therefore, was to comparatively examine the influence of concurrent muscle activation in terms of submaximum clenching of the jaw and submaximum clenching of the fists on postural stability. Posturographic analyses were conducted with 17 healthy young adults on firm and foam surfaces while either clenching the jaw (JAW) or clenching the fists (FIST), whereas habitual standing served as the control condition (CON). Both submaximum tasks were performed at 25% maximum voluntary contraction, assessed, and visualized in real time by means of electromyography. Statistical analyses revealed that center of pressure (COP) displacements were significantly reduced during JAW and FIST, but with no differences between both concurrent clenching activities. Further, a significant increase in COP displacements was observed for the foam as compared to the firm condition. The results showed that concurrent muscle activation significantly improved postural stability compared with habitual standing, and thus emphasize the beneficial effects of jaw and fist clenching for static postural control. It is suggested that concurrent activities contribute to the facilitation of human motor excitability, finally increasing the neural drive to the distal muscles. Future studies should evaluate whether elderly or patients with compromised postural control might benefit from these physiological responses, e.g., in the form of a reduced risk of falling.

  1. Inverse consistent non-rigid image registration based on robust point set matching

    PubMed Central

    2014-01-01

    Background: Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate the consistent correspondence between two images because RPM is a unidirectional image matching approach. Therefore, it is an important issue to improve image registration based on RPM. Methods: In our work, a consistent image registration approach based on point set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of estimating only the forward transformation between the source point set and the target point set, as in state-of-the-art RPM algorithms, the forward and backward transformations between two point sets are estimated concurrently in our algorithm. The inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between two point sets are estimated based on both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into the point matching in order to improve image matching. Results: Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is well preserved by our algorithm for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM, and they maintain a downward trend as a whole, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated; again, our algorithm achieves lower registration errors for the same number of iterations. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations. The forward and backward transformations estimated by our algorithm are smooth for small deformations. For registration of lung slices and individual brain slices, both large and small determinants of the Jacobian matrix of the deformation fields are observed. Conclusions: Results indicate the improvement of the proposed algorithm in bi-directional image registration and the decrease of the inverse consistent errors of the forward and the reverse transformations between two images. PMID:25559889

  2. The stopping rules for winsorized tree

    NASA Astrophysics Data System (ADS)

    Ch'ng, Chee Keong; Mahat, Nor Idayu

    2017-11-01

    Winsorized tree is a modified tree-based classifier that is able to investigate and handle outliers in all nodes along the process of constructing the tree. It avoids the tedious two-stage process of constructing a classical tree, in which splitting and pruning are carried out separately: here, splitting and pruning go concurrently, so the constructed tree does not grow bushy. This mechanism is controlled by the proposed algorithm. In a winsorized tree, data are screened for outliers. If an outlier is detected, the value is neutralized using the winsorize approach. Both outlier identification and value neutralization are executed recursively in every node until a predetermined stopping criterion is met. The aim of this paper is to search for a significant stopping criterion to stop the tree from further splitting before overfitting. Results of the experiment conducted on the Pima Indians dataset show that a node can produce its final successor nodes (leaves) when it has achieved about 70% information gain.
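
    A sketch of the per-node winsorizing and stopping logic, under the assumption that winsorizing means clamping at fixed percentiles and that the 70% figure is a normalized information-gain threshold; the percentile choices and single-split scoring are illustrative:

        import numpy as np

        def winsorize(x, lo_pct=5, hi_pct=95):
            # Neutralize outliers by clamping, keeping every row in play
            lo, hi = np.percentile(x, [lo_pct, hi_pct])
            return np.clip(x, lo, hi)

        def entropy(y):
            if len(y) == 0:
                return 0.0
            p = np.bincount(y) / len(y)
            p = p[p > 0]
            return -(p * np.log2(p)).sum()

        def stop_splitting(x, y, split, threshold=0.70):
            # Winsorize the feature before scoring the candidate split,
            # then make the node a leaf once normalized gain hits threshold
            x = winsorize(x)
            left, right = y[x <= split], y[x > split]
            gain = entropy(y) - (len(left) * entropy(left)
                                 + len(right) * entropy(right)) / len(y)
            return gain / max(entropy(y), 1e-12) >= threshold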

  3. Simultaneous retrieval of sea ice thickness and snow depth using concurrent active altimetry and passive L-band remote sensing data

    NASA Astrophysics Data System (ADS)

    Zhou, L.; Xu, S.; Liu, J.

    2017-12-01

    The retrieval of sea ice thickness mainly relies on satellite altimetry, in which freeboard measurements are converted to sea ice thickness (hi) under certain assumptions about snow loading. The uncertainty in snow depth (hs) is a major source of uncertainty in the retrieved sea ice thickness and total volume for both radar and laser altimetry. In this study, novel algorithms for the simultaneous retrieval of hi and hs are proposed for the data synergy of L-band (1.4 GHz) passive remote sensing and both types of active altimetry: (1) L-band brightness temperature (TB) from the Soil Moisture Ocean Salinity (SMOS) satellite and sea ice freeboard (FBice) from radar altimetry, and (2) L-band TB data and snow freeboard (FBsnow) from laser altimetry. Two physical models serve as the forward models for the retrieval: an L-band radiation model and the hydrostatic equilibrium model. Verification with SMOS and Operation IceBridge (OIB) data is carried out, showing overall good retrieval accuracy for both sea ice parameters. Specifically, we show that the covariability between hs and FBsnow is crucial for the synergy between TB and FBsnow. Comparison with existing algorithms shows lower uncertainty in both sea ice parameters, and that the uncertainty in the retrieved sea ice thickness caused by that of snow depth is spatially uncorrelated, with the potential reduction of the volume uncertainty through spatial sampling. The proposed algorithms can be applied to the retrieval of sea ice parameters at basin scale, using concurrent active and passive remote sensing data from satellites.
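
    The hydrostatic equilibrium forward model used in such freeboard-to-thickness conversions can be written down directly; the densities below are typical literature values, not necessarily those used in the study:

        # Hydrostatic-equilibrium forward model for freeboard-to-thickness
        # conversion.  Densities are typical values (kg/m^3), stand-ins for
        # whatever the study actually assumed.
        RHO_W, RHO_I, RHO_S = 1024.0, 917.0, 320.0   # water, ice, snow

        def thickness_from_radar_freeboard(fb_ice, h_s):
            # Radar altimetry: freeboard of the ice surface
            return (RHO_W * fb_ice + RHO_S * h_s) / (RHO_W - RHO_I)

        def thickness_from_laser_freeboard(fb_snow, h_s):
            # Laser altimetry: freeboard of the snow surface
            return (RHO_W * fb_snow - (RHO_W - RHO_S) * h_s) / (RHO_W - RHO_I)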

  4. Impact of high-intensity concurrent training on cardiovascular risk factors in persons with multiple sclerosis - pilot study.

    PubMed

    Keytsman, Charly; Hansen, Dominique; Wens, Inez; O Eijnde, Bert

    2017-10-27

    High-intensity concurrent training positively affects cardiovascular risk factors. Because this has never been investigated in multiple sclerosis, the present pilot study explored the impact of this training on cardiovascular risk factors in this population. Before and after 12 weeks of high-intensity concurrent training (interval and strength training, 5 sessions per 2 weeks, n = 16), body composition, resting blood pressure and heart rate, 2-h oral glucose tolerance (insulin sensitivity, glycosylated hemoglobin, blood glucose and insulin concentrations), blood lipids (high- and low-density lipoprotein, total cholesterol, triglyceride levels) and C-reactive protein were analyzed. Twelve weeks of high-intensity concurrent training significantly improved resting heart rate (-6%), 2-h blood glucose concentrations (-13%) and insulin sensitivity (-24%). Blood pressure, body composition, blood lipids and C-reactive protein did not seem to be affected. Under the conditions of this pilot study, 12 weeks of concurrent high-intensity interval and strength training improved resting heart rate, 2-h glucose and insulin sensitivity in multiple sclerosis but did not affect blood C-reactive protein levels, blood pressure, body composition or blood lipid profiles. Further, larger and controlled research investigating the effects of high-intensity concurrent training on cardiovascular risk factors in multiple sclerosis is warranted. Implications for rehabilitation: High-intensity concurrent training improves cardiovascular fitness. This pilot study explores the impact of this training on cardiovascular risk factors in multiple sclerosis. Despite the lack of a control group, high-intensity concurrent training does not seem to improve cardiovascular risk factors in multiple sclerosis.

  5. An efficient parallel termination detection algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traversals as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  6. An algebra of discrete event processes

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Meyer, George

    1991-01-01

    This report deals with an algebraic framework for modeling and control of discrete event processes. The report consists of two parts. The first part is introductory, and consists of a tutorial survey of the theory of concurrency in the spirit of Hoare's CSP, and an examination of the suitability of such an algebraic framework for dealing with various aspects of discrete event control. To this end a new concurrency operator is introduced and it is shown how the resulting framework can be applied. It is further shown that a suitable theory that deals with the new concurrency operator must be developed. In the second part of the report the formal algebra of discrete event control is developed. At the present time the second part of the report is still an incomplete and occasionally tentative working paper.

  7. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted

    1990-01-01

    Techniques are discussed for the implementation and improvement of vectorization and concurrency in nonlinear explicit structural finite element codes. In explicit integration methods, the computation of the element internal force vector consumes the bulk of the computer time. The program can be efficiently vectorized by subdividing the elements into blocks and executing all computations in vector mode. The structuring of elements into blocks also provides a convenient way to implement concurrency by creating tasks which can be assigned to available processors for evaluation. The techniques were implemented in a 3-D nonlinear program with one-point quadrature shell elements. Concurrency and vectorization were first implemented in a single time step version of the program. Techniques were developed to minimize processor idle time and to select the optimal vector length. A comparison of run times between the program executed in scalar, serial mode and the fully vectorized code executed concurrently using eight processors shows speed-ups of over 25. Conjugate gradient methods for solving nonlinear algebraic equations are also readily adapted to a parallel environment. A new technique for improving convergence properties of conjugate gradients in nonlinear problems is developed in conjunction with other techniques such as diagonal scaling. A significant reduction in the number of iterations required for convergence is shown for a statically loaded rigid bar suspended by three equally spaced springs.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chrisochoides, N.; Sukup, F.

    In this paper we present a parallel implementation of the Bowyer-Watson (BW) algorithm using the task-parallel programming model. The BW algorithm constitutes an ideal mesh refinement strategy for implementing a large class of unstructured mesh generation techniques on both sequential and parallel computers, because it avoids the need for global mesh refinement. Its implementation on distributed-memory multicomputers using the traditional data-parallel model has proven very inefficient due to the excessive synchronization needed among processors. In this paper we demonstrate that with the task-parallel model we can tolerate the synchronization costs inherent to data-parallel methods by exploiting concurrency at the processor level. Our preliminary performance data indicate that the task-parallel approach: (i) is almost four times faster than the existing data-parallel methods, (ii) scales linearly, and (iii) introduces minimal overhead compared to the "best" sequential implementation of the BW algorithm.
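
    For reference, the sequential Bowyer-Watson insertion kernel that the task-parallel scheduling wraps around; the caller is assumed to seed triangles with a super-triangle enclosing all points and to strip its vertices afterwards:

        import itertools

        # Minimal sequential Bowyer-Watson insertion in 2-D, for reference
        # only.  Triangles are vertex-index triples into `pts`.

        def circumcircle_contains(pts, tri, p):
            a, b, c = (pts[i] for i in tri)
            ax, ay = a[0] - p[0], a[1] - p[1]
            bx, by = b[0] - p[0], b[1] - p[1]
            cx, cy = c[0] - p[0], c[1] - p[1]
            det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
                   - (bx * bx + by * by) * (ax * cy - cx * ay)
                   + (cx * cx + cy * cy) * (ax * by - bx * ay))
            orient = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
            return det * orient > 0   # orientation-independent in-circle test

        def insert_point(pts, triangles, p_idx):
            p = pts[p_idx]
            bad = [t for t in triangles if circumcircle_contains(pts, t, p)]
            # Cavity boundary: edges belonging to exactly one bad triangle
            edges = {}
            for t in bad:
                for e in itertools.combinations(sorted(t), 2):
                    edges[e] = edges.get(e, 0) + 1
            boundary = [e for e, n in edges.items() if n == 1]
            triangles = [t for t in triangles if t not in bad]
            # Re-triangulate the cavity by connecting p to each boundary edge
            triangles += [(a, b, p_idx) for a, b in boundary]
            return triangles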

  9. Exploiting Multiple Levels of Parallelism in Sparse Matrix-Matrix Multiplication

    DOE PAGES

    Azad, Ariful; Ballard, Grey; Buluc, Aydin; ...

    2016-11-08

    Sparse matrix-matrix multiplication (or SpGEMM) is a key primitive for many high-performance graph algorithms as well as for some linear solvers, such as algebraic multigrid. The scaling of existing parallel implementations of SpGEMM is heavily bound by communication. Even though 3D (or 2.5D) algorithms have been proposed and theoretically analyzed in the flat MPI model on Erdös-Rényi matrices, those algorithms had not been implemented in practice and their complexities had not been analyzed for the general case. In this work, we present the first implementation of the 3D SpGEMM formulation that exploits multiple (intranode and internode) levels of parallelism, achieving significant speedups over the state-of-the-art publicly available codes at all levels of concurrency. We extensively evaluate our implementation and identify bottlenecks that should be subject to further research.

  10. A hybrid multi-objective evolutionary algorithm for wind-turbine blade optimization

    NASA Astrophysics Data System (ADS)

    Sessarego, M.; Dixon, K. R.; Rival, D. E.; Wood, D. H.

    2015-08-01

    A concurrent-hybrid non-dominated sorting genetic algorithm (hybrid NSGA-II) has been developed and applied to the simultaneous optimization of the annual energy production, flapwise root-bending moment and mass of the NREL 5 MW wind-turbine blade. By hybridizing a multi-objective evolutionary algorithm (MOEA) with gradient-based local search, it is believed that the optimal set of blade designs could be achieved at a lower computational cost than with a conventional MOEA. To measure the convergence between the hybrid and non-hybrid NSGA-II on a wind-turbine blade optimization problem, a computationally intensive case was performed using the non-hybrid NSGA-II. From this particular case, a three-dimensional surface representing the optimal trade-off between the annual energy production, flapwise root-bending moment and blade mass was obtained. The inclusion of local gradients in the blade optimization, however, shows no improvement in the convergence for this three-objective problem.

  11. Scanning wind-vector scatterometers with two pencil beams

    NASA Technical Reports Server (NTRS)

    Kirimoto, T.; Moore, R. K.

    1984-01-01

    A scanning pencil-beam scatterometer for ocean windvector determination has potential advantages over the fan-beam systems used and proposed heretofore. The pencil beam permits use of lower transmitter power, and at the same time allows concurrent use of the reflector by a radiometer to correct for atmospheric attenuation and by other radiometers for other purposes. The use of dual beams based on the same scanning reflector permits four looks at each cell on the surface, thereby improving accuracy and allowing alias removal. Simulation results for a spaceborne dual-beam scanning scatterometer with 1 W of radiated power at an orbital altitude of 900 km are described. Two novel algorithms for removing the aliases in the windvector are described, in addition to an adaptation of the conventional maximum likelihood algorithm. The new algorithms are more effective at alias removal than the conventional one. Measurement errors for the wind speed, assuming perfect alias removal, were found to be less than 10%.

  12. Semiannual Report for Contract NAS1-19480 (Institute for Computer Applications in Science and Engineering)

    DTIC Science & Technology

    1994-06-01

    algorithms for large, irreducibly coupled systems iteratively solve concurrent problems within different subspaces of a Hilbert space, or within different … effective on problems amenable to SIMD solution. Together with researchers at AT&T Bell Labs (Boris Lubachevsky, Albert Greenberg) we have developed … reasonable measurement. In the study of different speedups, various causes of superlinear speedup are also presented. Greenberg, Albert G., Boris D…

  13. A Bayesian additive model for understanding public transport usage in special events.

    PubMed

    Rodrigues, Filipe; Borysov, Stanislav; Ribeiro, Bernardete; Pereira, Francisco

    2016-12-02

    Public special events, like sports games, concerts and festivals, are well known to create disruptions in transportation systems, often catching the operators by surprise. Although these are usually planned well in advance, their impact is difficult to predict, even when organisers and transportation operators coordinate. The problem is compounded when several events happen concurrently. To solve these problems, costly processes, heavily reliant on manual search and personal experience, are usual practice in large cities like Singapore, London or Tokyo. This paper presents a Bayesian additive model with Gaussian process components that combines smart card records from public transport with context information about events that is continuously mined from the Web. We develop an efficient approximate inference algorithm using expectation propagation, which allows us to predict the total number of public transportation trips to the special event areas, thereby contributing to a more adaptive transportation system. Furthermore, for multiple concurrent event scenarios, the proposed algorithm is able to disaggregate gross trip counts into their most likely components related to specific events and routine behavior. Using real data from Singapore, we show that the presented model outperforms the best baseline model by up to 26% in R² and also has explanatory power for its individual components.
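
    The additive disaggregation can be made concrete with a deliberately simplified surrogate: plain non-negative least squares in place of the paper's expectation-propagation inference, with hypothetical routine and per-event profiles as regressors:

        import numpy as np
        from scipy.optimize import nnls

        # Simplified stand-in for the disaggregation step: observed trip
        # counts are explained as a non-negative combination of a routine
        # profile and per-event profiles.  The paper uses Bayesian inference
        # (expectation propagation) over GP components; NNLS is used here
        # only to make the additive structure concrete.

        def disaggregate(observed, routine_profile, event_profiles):
            # Columns: routine component plus one column per concurrent event
            A = np.column_stack([routine_profile] + list(event_profiles))
            coeffs, _ = nnls(A, observed)
            components = A * coeffs       # per-time-step share of each source
            return coeffs, components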

  14. General Multimechanism Reversible-Irreversible Time-Dependent Constitutive Deformation Model Being Developed

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Arnold, Steven M.

    2001-01-01

    Since most advanced material systems (for example metallic-, polymer-, and ceramic-based systems) being currently researched and evaluated are for high-temperature airframe and propulsion system applications, the required constitutive models must account for both reversible and irreversible time-dependent deformations. Furthermore, since an integral part of continuum-based computational methodologies (be they microscale- or macroscale-based) is an accurate and computationally efficient constitutive model to describe the deformation behavior of the materials of interest, extensive research efforts have been made over the years on the phenomenological representations of constitutive material behavior in the inelastic analysis of structures. From a more recent and comprehensive perspective, the NASA Glenn Research Center in conjunction with the University of Akron has emphasized concurrently addressing three important and related areas: that is, 1) Mathematical formulation; 2) Algorithmic developments for updating (integrating) the external (e.g., stress) and internal state variables; 3) Parameter estimation for characterizing the model. This concurrent perspective to constitutive modeling has enabled the overcoming of the two major obstacles to fully utilizing these sophisticated time-dependent (hereditary) constitutive models in practical engineering analysis. These obstacles are: 1) Lack of efficient and robust integration algorithms; 2) Difficulties associated with characterizing the large number of required material parameters, particularly when many of these parameters lack obvious or direct physical interpretations.

  15. a Novel Approach of Indexing and Retrieving Spatial Polygons for Efficient Spatial Region Queries

    NASA Astrophysics Data System (ADS)

    Zhao, J. H.; Wang, X. Z.; Wang, F. Y.; Shen, Z. H.; Zhou, Y. C.; Wang, Y. L.

    2017-10-01

    Spatial region queries are more and more widely used in web-based applications. Mechanisms to provide efficient query processing over geospatial data are essential. However, due to the massive geospatial data volume, heavy geometric computation, and high access concurrency, it is difficult to respond in real time. Spatial indexes are usually used in this situation. In this paper, based on the k-d tree, we introduce a distributed KD-Tree (DKD-Tree) suitable for polygon data, and a two-step query algorithm. The spatial index construction is recursive and iterative, and the query is an in-memory process. Both the index and query methods can be processed in parallel, and are implemented on top of HDFS, Spark and Redis. Experiments on a large volume of remote sensing image metadata have been carried out, and the advantages of our method are investigated by comparison with spatial region queries executed on PostgreSQL and PostGIS. Results show that our approach not only greatly improves the efficiency of spatial region queries, but also has good scalability. Moreover, the two-step spatial range query algorithm can also save cluster resources to support a large number of concurrent queries. Therefore, this method is very useful when building large geographic information systems.
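
    A sketch of the two-step query on a single node, assuming Shapely for the exact geometry test; in the paper the coarse pruning is done by the distributed k-d tree rather than the linear bounding-box scan used here:

        from shapely.geometry import box

        # Two-step region query sketch: step 1 prunes candidates with a
        # cheap bounding-box test (the role played by the DKD-Tree in the
        # paper); step 2 runs the exact geometric predicate only on the
        # survivors.  `polygons` maps id -> shapely Polygon.

        def region_query(polygons, query_rect):
            qminx, qminy, qmaxx, qmaxy = query_rect
            q = box(qminx, qminy, qmaxx, qmaxy)

            # Step 1: coarse filter on precomputed bounding boxes
            candidates = []
            for pid, poly in polygons.items():
                minx, miny, maxx, maxy = poly.bounds
                if not (maxx < qminx or minx > qmaxx or
                        maxy < qminy or miny > qmaxy):
                    candidates.append(pid)

            # Step 2: exact (and expensive) intersection test
            return [pid for pid in candidates if polygons[pid].intersects(q)]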

  16. Enhacements to the TTS-502 time transfer system

    NASA Astrophysics Data System (ADS)

    Vandierendonck, A. J.; Hua, Q. D.

    1985-04-01

    Two years ago STI introduced an affordable, relatively compact time transfer system to the market, the TTS-502, and described that system at the 1981 PTTI conference. Over the past few months, that system has been improved, and new features have been added. In addition, new options have been made available to further enhance the capabilities of the system. These enhancements include the addition of a positioning algorithm and new options providing a corrected 5 MHz output that is phase coherent with the 1 pps output, and providing an internal rubidium oscillator. The positioning algorithm was developed because not all time transfer users had the luxury of the Defense Mapping Agency's (DMA) services for determining their position in WGS-72 coordinates. The enhanced TTS-502 determines its GPS position anywhere in the world, independent of how many GPS satellites are concurrently visible. However, convergence time to a solution is inversely proportional to the number of satellites concurrently visible and to the quality of the frequency standard used in conjunction with the TTS-502. Real-world solution results are presented for a variety of cases and satellite scheduling scenarios. Positioning accuracies better than 5 to 10 meters r.s.s. were typically achieved using only the C/A code at Sunnyvale, California.

  17. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

    Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules, but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with a high-dimensional space of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the “Concurrent Adaptive Sampling (CAS) algorithm,” has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and a triazine polymer.
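
    The bin-and-resample loop at the heart of such macrostate-based methods can be illustrated in a few lines. The toy below is a sketch in the spirit of the CAS/weighted-ensemble family, not the authors' code: weighted walkers diffuse on a 1-D double-well potential (standing in for a high-dimensional collective-variable space), are binned into macrostates, and each occupied bin is resampled to a fixed walker count while conserving total statistical weight, so rare barrier crossings get sampled without long brute-force trajectories.

        import numpy as np

        rng = np.random.default_rng(0)

        def step(x, dt=1e-3, beta=4.0):
            """Overdamped Langevin step in the double well U(x) = (x^2 - 1)^2."""
            force = -4.0 * x * (x ** 2 - 1.0)
            return x + force * dt + np.sqrt(2.0 * dt / beta) * rng.normal(size=x.size)

        n_bins, per_bin = 20, 8
        edges = np.linspace(-2.0, 2.0, n_bins + 1)
        xs = np.full(per_bin, -1.0)            # all walkers start in the left well
        ws = np.full(per_bin, 1.0 / per_bin)   # statistical weights sum to 1

        for _ in range(2000):
            for _ in range(10):                # a batch of short simulations
                xs = step(xs)
            xs = np.clip(xs, -1.99, 1.99)      # keep walkers inside the binned region
            new_xs, new_ws = [], []
            for b in range(n_bins):            # resample each occupied macrostate
                idx = np.where((xs >= edges[b]) & (xs < edges[b + 1]))[0]
                if idx.size == 0:
                    continue
                w_tot = ws[idx].sum()
                pick = rng.choice(idx, size=per_bin, p=ws[idx] / w_tot)
                new_xs.extend(xs[pick])
                new_ws.extend([w_tot / per_bin] * per_bin)  # weight is conserved
            xs, ws = np.array(new_xs), np.array(new_ws)

        print("probability mass in the right well:", ws[xs > 0].sum())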

  18. One too many diabetes: the combination of hyperglycaemic hyperosmolar state and central diabetes insipidus.

    PubMed

    Burmazovic, Snezana; Henzen, Christoph; Brander, Lukas; Cioccari, Luca

    2018-01-01

    The combination of hyperosmolar hyperglycaemic state and central diabetes insipidus is unusual and poses unique diagnostic and therapeutic challenges for clinicians. In a patient with diabetes mellitus presenting with polyuria and polydipsia, poor glycaemic control is usually the first aetiology that is considered, and achieving glycaemic control remains the first course of action. However, severe hypernatraemia, hyperglycaemia and discordance between urine-specific gravity and urine osmolality suggest concurrent symptomatic diabetes insipidus. We report a rare case of concurrent manifestation of hyperosmolar hyperglycaemic state and central diabetes insipidus in a patient with a history of craniopharyngioma. In patients with diabetes mellitus presenting with polyuria and polydipsia, poor glycaemic control is usually the first aetiology to be considered. However, a history of craniopharyngioma, severe hypernatraemia, hyperglycaemia and discordance between urine-specific gravity and osmolality provide evidence of concurrent diabetes insipidus. Therefore, if a patient with diabetes mellitus presents with severe hypernatraemia, hyperglycaemia, a low or low-normal urine-specific gravity and worsening polyuria despite correction of hyperglycaemia, concurrent diabetes insipidus should be sought.

  19. One too many diabetes: the combination of hyperglycaemic hyperosmolar state and central diabetes insipidus

    PubMed Central

    Burmazovic, Snezana; Henzen, Christoph; Brander, Lukas; Cioccari, Luca

    2018-01-01

    The combination of hyperosmolar hyperglycaemic state and central diabetes insipidus is unusual and poses unique diagnostic and therapeutic challenges for clinicians. In a patient with diabetes mellitus presenting with polyuria and polydipsia, poor glycaemic control is usually the first aetiology that is considered, and achieving glycaemic control remains the first course of action. However, severe hypernatraemia, hyperglycaemia and discordance between urine-specific gravity and urine osmolality suggest concurrent symptomatic diabetes insipidus. We report a rare case of concurrent manifestation of hyperosmolar hyperglycaemic state and central diabetes insipidus in a patient with a history of craniopharyngioma. Learning points: In patients with diabetes mellitus presenting with polyuria and polydipsia, poor glycaemic control is usually the first aetiology to be considered. However, a history of craniopharyngioma, severe hypernatraemia, hyperglycaemia and discordance between urine-specific gravity and osmolality provide evidence of concurrent diabetes insipidus. Therefore, if a patient with diabetes mellitus presents with severe hypernatraemia, hyperglycaemia, a low or low-normal urine-specific gravity and worsening polyuria despite correction of hyperglycaemia, concurrent diabetes insipidus should be sought. PMID:29675260

  20. Concurrent diphtheria and infectious mononucleosis: difficulties for management, investigation and control of diphtheria in developing countries.

    PubMed

    Mattos-Guaraldi, A L; Damasco, P V; Gomes, D L R; Melendez, M G; Santos, L S; Marinelli, R S; Napoleão, F; Sabbadini, P S; Santos, C S; Moreira, L O; Hirata, R

    2011-11-01

    We report a case of concurrent diphtheria and infectious mononucleosis in an 11-year-old Brazilian child. Two days after specific treatment for diphtheria was started, the patient was discharged following clinical recovery. This case highlights the difficulties in the clinical diagnosis of diphtheria in partially immunized individuals, and in the management and control of diphtheria in developing countries.

  1. A parallel expert system for the control of a robotic air vehicle

    NASA Technical Reports Server (NTRS)

    Shakley, Donald; Lamont, Gary B.

    1988-01-01

    Expert systems can be used to govern the intelligent control of vehicles, for example the Robotic Air Vehicle (RAV). Due to the nature of the RAV system, the associated expert system needs to perform in a demanding real-time environment. The use of a parallel processing capability to support the associated expert system's computational requirements is critical in this application. Thus, algorithms for parallel real-time expert systems must be designed, analyzed, and synthesized. The design process incorporates a consideration of the rule-set/fact-set size along with representation issues. These issues are examined with reference to information movement and various inference mechanisms. Also examined is the process of transporting the RAV expert system functions from the TI Explorer, where they are implemented in the Automated Reasoning Tool (ART), to the iPSC Hypercube, where the system is synthesized using Concurrent Common LISP (CCLISP). The transformation process for the ART to CCLISP conversion is described. The performance characteristics of the parallel implementation of these expert systems on the iPSC Hypercube are compared to the TI Explorer implementation.

  2. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Though these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges exist in extending them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  3. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Though these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges exist in extending them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  4. Enhanced Quality Control in Pharmaceutical Applications by Combining Raman Spectroscopy and Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Martinez, J. C.; Guzmán-Sepúlveda, J. R.; Bolañoz Evia, G. R.; Córdova, T.; Guzmán-Cabrera, R.

    2018-06-01

    In this work, we applied machine learning techniques to Raman spectra for the characterization and classification of manufactured pharmaceutical products. Our measurements were taken with commercial equipment, allowing accurate assessment of variations with respect to one calibrated control sample. Unlike the typical use of Raman spectroscopy in pharmaceutical applications, in our approach the principal components of the Raman spectrum are used concurrently as attributes in machine learning algorithms. This permits an efficient comparison and classification of the spectra measured from the samples under study. It also allows for accurate quality control, as all relevant spectral components are considered simultaneously. We demonstrate our approach on the specific case of acetaminophen, which is one of the most widely used analgesics on the market. In the experiments, commercial samples from thirteen different laboratories were analyzed and compared against a control sample. The raw data were analyzed based on the arithmetic difference between the nominal active substance and the measured values in each commercial sample. Principal component analysis was applied to the data for quantitative verification (i.e., without considering the actual concentration of the active substance) of the difference from the calibrated sample. Our results show that, by following this approach, adulterations in pharmaceutical compositions can be clearly identified and accurately quantified.
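
    The pipeline described above (principal components of spectra used directly as classifier attributes) can be sketched with standard tools. The snippet below fabricates toy "spectra" in place of real Raman measurements; only the PCA-then-classify structure is meant to mirror the approach.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(1)
        wavenumbers = np.linspace(200, 1800, 400)

        def spectrum(shift):
            """Toy spectrum: one Lorentzian peak whose position encodes the source."""
            peak = 1.0 / (1.0 + ((wavenumbers - 1000.0 - shift) / 15.0) ** 2)
            return peak + 0.02 * rng.normal(size=wavenumbers.size)

        shifts = rng.choice([0.0, 25.0], 300)       # two hypothetical formulations
        X = np.array([spectrum(s) for s in shifts])
        y = (shifts > 0).astype(int)

        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
        model = make_pipeline(PCA(n_components=5), LogisticRegression())
        model.fit(Xtr, ytr)
        print("held-out accuracy:", model.score(Xte, yte))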

  5. Executive control of stimulus-driven and goal-directed attention in visual working memory.

    PubMed

    Hu, Yanmei; Allen, Richard J; Baddeley, Alan D; Hitch, Graham J

    2016-10-01

    We examined the role of executive control in stimulus-driven and goal-directed attention in visual working memory using probed recall of a series of objects, a task that allows study of the dynamics of storage through analysis of serial position data. Experiment 1 examined whether executive control underlies goal-directed prioritization of certain items within the sequence. Instructing participants to prioritize either the first or final item resulted in improved recall for these items, and an increase in concurrent task difficulty reduced or abolished these gains, consistent with their dependence on executive control. Experiment 2 examined whether executive control is also involved in the disruption caused by a post-series visual distractor (suffix). A demanding concurrent task disrupted memory for all items except the most recent, whereas a suffix disrupted only the most recent items. There was no interaction when concurrent load and suffix were combined, suggesting that deploying selective attention to ignore the distractor did not draw upon executive resources. A final experiment replicated the independent interfering effects of suffix and concurrent load while ruling out possible artifacts. We discuss the results in terms of a domain-general episodic buffer in which information is retained in a transient, limited capacity privileged state, influenced by both stimulus-driven and goal-directed processes. The privileged state contains the most recent environmental input together with goal-relevant representations being actively maintained using executive resources.

  6. Stroop proactive control and task conflict are modulated by concurrent working memory load.

    PubMed

    Kalanthroff, Eyal; Avnit, Amir; Henik, Avishai; Davelaar, Eddy J; Usher, Marius

    2015-06-01

    Performance on the Stroop task reflects two types of conflict: informational (between the incongruent word and font color) and task (between the contextually relevant color-naming task and the irrelevant, but automatic, word-reading task). According to the dual mechanisms of control theory (DMC; Braver, 2012), variability in Stroop performance can result from variability in the deployment of a proactive task-demand control mechanism. Previous research has shown that when proactive control (PC) is diminished, both increased Stroop interference and a reversed Stroop facilitation (RF) are observed. Although the current DMC model accounts for the former effect, it does not predict the observed RF, which is considered to be behavioral evidence for task conflict in the Stroop task. Here we expanded the DMC model to account for Stroop RF. Assuming that a concurrent working memory (WM) task reduces PC, we predicted both increased interference and an RF. Nineteen participants performed a standard Stroop task combined with a concurrent n-back task, which was aimed at reducing available WM resources and thus overloading PC. Although the results indicated common Stroop interference and facilitation in the low-load condition (zero-back), in the high-load condition (two-back) both increased Stroop interference and RF were observed, consistent with the model's prediction. These findings indicate that PC is modulated by concurrent WM load and serves as a common control mechanism for both informational and task Stroop conflicts.

  7. Requiem for a Data Base System.

    DTIC Science & Technology

    1979-01-18

    ... were defined; 2) the final syntax and semantics of QUEL were defined; 3) protection was figured out; 4) EQUEL was designed; 5) concurrency control and...features which were not thought about in the initial design (such as concurrency control and recovery) and began worrying about distributed data...made in progress rather than on eventual corrections. Some attention is also given to the role of structured design in a data base system implementation

  8. ROSA: Distributed Joint Routing and Dynamic Spectrum Allocation in Cognitive Radio Ad Hoc Networks

    DTIC Science & Technology

    2010-03-01

    Aug. 1999. [20] I. N. Psaromiligkos and S. N. Batalama. Rapid Combined Synchronization/Demodulation Structures for DS-CDMA Systems - Part II: Finite...Medley. Rapid Combined Synchronization/Demodulation Structures for DS-CDMA Systems - Part I: Algorithmic developments. IEEE Transactions on...multiple access (CDMA) [21][20] allow concurrent co-located communications so that a message from node i to node j can be correctly received even if

  9. Concurrent topology optimization for minimization of total mass considering load-carrying capabilities and thermal insulation simultaneously

    NASA Astrophysics Data System (ADS)

    Long, Kai; Wang, Xuan; Gu, Xianguang

    2017-09-01

    The present work introduces a novel concurrent optimization formulation to meet the requirements of lightweight design and various constraints simultaneously. Nodal displacement of the macrostructure and effective thermal conductivity of the microstructure are regarded as the constraint functions, which means taking into account both load-carrying capability and thermal insulation properties. The effective properties of the porous material derived from numerical homogenization are used for macrostructural analysis. Meanwhile, displacement vectors of the macrostructure from the original and adjoint load cases are used for sensitivity analysis of the microstructure. Design variables in the form of reciprocal functions of relative densities are introduced and used for linearization of the constraint function. The objective function of total mass is approximately expressed by a second-order Taylor series expansion. The proposed concurrent optimization problem is then solved using a sequential quadratic programming algorithm, by splitting it into a series of sub-problems in the form of quadratic programs. Finally, several numerical examples are presented to validate the effectiveness of the proposed optimization method. The effects of initial designs, prescribed limits on nodal displacement, and effective thermal conductivity on the optimized designs are also investigated. A number of optimized macrostructures and their corresponding microstructures are obtained.

  10. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to the implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at the implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for the implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after the raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  11. Oceanographic applications of laser technology

    NASA Technical Reports Server (NTRS)

    Hoge, F. E.

    1988-01-01

    Oceanographic activities with the Airborne Oceanographic Lidar (AOL) for the past several years have primarily been focused on using active (laser-induced pigment fluorescence) and concurrent passive ocean color spectra to improve existing ocean color algorithms for estimating primary production in the world's oceans. The most significant results were the development of a technique for selecting optimal passive wavelengths for recovering phytoplankton photopigment concentration, and the application of this technique, termed active-passive correlation spectroscopy (APCS), to various forms of passive ocean color algorithms. Included in this activity is the use of airborne laser and passive ocean color data for the development of advanced satellite ocean color sensors. Promising on-wavelength subsurface scattering layer measurements were recently obtained. A partial summary of these results is presented.

  12. An Approach for Peptide Identification by De Novo Sequencing of Mixture Spectra.

    PubMed

    Liu, Yi; Ma, Bin; Zhang, Kaizhong; Lajoie, Gilles

    2017-01-01

    Mixture spectra occur quite frequently in a typical wet-lab mass spectrometry experiment; they result from the concurrent fragmentation of multiple precursors. The ability to efficiently and confidently identify mixture spectra is essential to alleviate the existing bottleneck of low mass spectra identification rates. However, most traditional computational methods are not suitable for interpreting mixture spectra, because they still assume that each acquired spectrum comes from the fragmentation of a single precursor. In this manuscript, we formulate the mixture spectra de novo sequencing problem mathematically and propose a dynamic programming algorithm for it. Additionally, we use both simulated and real mixture spectra data sets to verify the merits of the proposed algorithm.

  13. 3D Reconstruction with a Collaborative Approach Based on Smartphones and a Cloud-Based Server

    NASA Astrophysics Data System (ADS)

    Nocerino, E.; Poiesi, F.; Locher, A.; Tefera, Y. T.; Remondino, F.; Chippendale, P.; Van Gool, L.

    2017-11-01

    The paper presents a collaborative image-based 3D reconstruction pipeline that performs image acquisition with a smartphone and geometric 3D reconstruction on a server during concurrent or disjoint acquisition sessions. Images are selected from the video feed of the smartphone's camera based on their quality and novelty. The smartphone's app provides on-the-fly reconstruction feedback to users co-involved in the acquisitions. The server runs an incremental SfM algorithm that processes the received images by seamlessly merging them into a single sparse point cloud using bundle adjustment. A dense image matching algorithm can be launched to derive denser point clouds. The reconstruction details, experiments and performance evaluation are presented and discussed.

  14. Brief report on the United States Food and Drug Administration Blood Products Advisory Committee recommendations for management of donors and units testing positive for hepatitis B virus DNA.

    PubMed

    Lucey, C

    2006-11-01

    This article briefly recounts the 21st July 2005, Blood Products Advisory Committee (BPAC) meeting concerning recommendations for management of donors and units testing positive for hepatitis B virus (HBV) DNA. The author attended the meeting. The United States Food and Drug Administration (FDA) web site was used for meeting materials, and handouts were collected at the meeting to provide narrative information. Two European experts assisted with HBV subject matter. The proceedings of the advisory committee, the issue briefing materials, and testing algorithms are presented. BPAC voted concurrence with the FDA algorithm for Management of Donors and Units Testing Positive for Hepatitis B Virus DNA.

  15. Fixing convergence of Gaussian belief propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jason K; Bickson, Danny; Dolev, Danny

    Gaussian belief propagation (GaBP) is an iterative message-passing algorithm for inference in Gaussian graphical models. It is known that when GaBP converges, it converges to the correct MAP estimate of the Gaussian random vector, and simple sufficient conditions for its convergence have been established. In this paper we develop a double-loop algorithm for forcing convergence of GaBP. Our method computes the correct MAP estimate even in cases where standard GaBP would not have converged. We further extend this construction to compute least-squares solutions of over-constrained linear systems. We believe that our construction has numerous applications, since the GaBP algorithm is linked to the solution of linear systems of equations, which is a fundamental problem in computer science and engineering. As a case study, we discuss the linear detection problem. We show that using our new construction, we are able to force convergence of Montanari's linear detection algorithm in cases where it would originally fail. As a consequence, we are able to significantly increase the number of users that can transmit concurrently.
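
    The double-loop construction can be illustrated with any inner iterative solver whose convergence is guaranteed on a diagonally loaded system. In the hedged sketch below, Jacobi iteration stands in for GaBP (strict diagonal dominance plays the role that walk-summability plays for GaBP): the loaded inner problem always converges, and the outer loop removes the bias introduced by the loading, so the fixed point solves the original system. All names and parameters are illustrative, not the authors' implementation.

        import numpy as np

        def jacobi(M, rhs, x0, iters=300):
            """Inner solver; converges whenever M is strictly diagonally dominant."""
            D = np.diag(M)
            R = M - np.diag(D)
            x = x0.copy()
            for _ in range(iters):
                x = (rhs - R @ x) / D
            return x

        def double_loop_solve(A, b, outer=400):
            # Load the diagonal until the inner iteration is guaranteed to
            # converge (dominance here; walk-summability in the GaBP setting).
            gamma = max(0.0, (np.abs(A).sum(axis=1) - 2 * np.diag(A)).max()) + 0.1
            M = A + gamma * np.eye(len(b))
            x = np.zeros_like(b)
            for _ in range(outer):
                x = jacobi(M, b + gamma * x, x)   # outer fixed point solves A x = b
            return x

        A = np.array([[1.0, 0.9, 0.8],
                      [0.9, 1.0, 0.9],
                      [0.8, 0.9, 1.0]])           # positive definite, not dominant
        b = np.array([1.0, 2.0, 3.0])
        print(np.allclose(double_loop_solve(A, b), np.linalg.solve(A, b), atol=1e-6))

    Plain Jacobi (like plain GaBP) diverges on this matrix, while the double loop recovers the exact solution; the loading buys guaranteed inner convergence at the cost of extra outer iterations.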

  16. Estimation of suspended particulate matter in turbid coastal waters: application to hyperspectral satellite imagery.

    PubMed

    Zhao, Jun; Cao, Wenxi; Xu, Zhantang; Ye, Haibin; Yang, Yuezhong; Wang, Guifen; Zhou, Wen; Sun, Zhaohua

    2018-04-16

    An empirical algorithm is proposed to estimate suspended particulate matter (SPM) ranging from 0.675 to 25.7 mg L-1 in the turbid Pearl River estuary (PRE). Comparisons between model-predicted and in situ measured SPM resulted in R² values of 0.97 and 0.88 and mean absolute percentage errors (MAPEs) of 23.96% and 29.69% for the calibration and validation data sets, respectively. The developed algorithm demonstrated the highest accuracy when compared with existing ones for turbid coastal waters. The diurnal dynamics of SPM were revealed by applying the proposed algorithm to reflectance data collected by a moored buoy in the PRE. The established algorithm was applied to Hyperspectral Imager for the Coastal Ocean (HICO) data, and the distribution pattern of SPM in the PRE was elucidated. Validation of HICO-derived reflectance data using concurrent MODIS/Aqua data as a benchmark indicated their reliability. Factors influencing the variability of SPM in the PRE were analyzed, implicating the combined effects of wind, tide, rainfall, and circulation.

  17. Computational Discovery of Materials Using the Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Avendaño-Franco, Guillermo; Romero, Aldo

    Our current ability to model physical phenomena accurately, increased computational power, and better algorithms are the driving forces behind the computational discovery and design of novel materials, allowing for virtual characterization before their realization in the laboratory. We present the implementation of a novel firefly algorithm, a population-based algorithm for global optimization, for searching the structure/composition space. This computation-intensive approach naturally takes advantage of concurrency and targeted exploration while still keeping enough diversity. We apply the new method to both periodic and non-periodic structures, and we present the implementation challenges and solutions used to improve efficiency. The implementation makes use of computational materials databases and network analysis to optimize the search and to get insights about the geometric structure of local minima on the energy landscape. The method has been implemented in our software PyChemia, an open-source package for materials discovery. We acknowledge the support of DMREF-NSF 1434897 and the Donors of the American Chemical Society Petroleum Research Fund for partial support of this research under Contract 54075-ND10.
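
    As a reference point for readers unfamiliar with the method, a generic firefly algorithm looks like the sketch below. This is not the PyChemia implementation; a simple test function stands in for a structure/composition energy landscape, and all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)

        def f(x):  # objective: shifted sphere with minimum at (1, -2)
            return ((x - np.array([1.0, -2.0])) ** 2).sum(axis=-1)

        n, dim, iters = 25, 2, 200
        beta0, gamma, alpha = 1.0, 1.0, 0.1
        X = rng.uniform(-5, 5, (n, dim))

        for _ in range(iters):
            cost = f(X)                     # brightness snapshot for this sweep
            for i in range(n):
                for j in range(n):
                    if cost[j] < cost[i]:   # i is attracted toward brighter j
                        r2 = ((X[i] - X[j]) ** 2).sum()
                        beta = beta0 * np.exp(-gamma * r2)   # attraction decays with distance
                        X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)
            alpha *= 0.98                   # anneal the random-walk term
        print("best candidate:", X[f(X).argmin()])

    The pairwise attraction moves are independent of one another, which is what makes the method naturally concurrent, as the abstract notes.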

  18. Two-voice fundamental frequency estimation

    NASA Astrophysics Data System (ADS)

    de Cheveigné, Alain

    2002-05-01

    An algorithm is presented that estimates the fundamental frequencies of two concurrent voices or instruments. The algorithm models each voice as a periodic function of time and jointly estimates both periods by cancellation, according to a previously proposed method [de Cheveigné and Kawahara, Speech Commun. 27, 175-185 (1999)]. The new algorithm improves on the old in several respects: it allows an unrestricted search range, effectively avoids harmonic and subharmonic errors, is more accurate (it uses two-dimensional parabolic interpolation), and is computationally less costly. It remains subject to unavoidable errors when the periods are in certain simple ratios and the task is inherently ambiguous. The algorithm is evaluated on a small database including speech, singing voice, and instrumental sounds. It can be extended in several ways: to decide the number of voices, to handle amplitude variations, and to estimate more than two voices (at the expense of increased processing cost and decreased reliability). It makes no use of instrument models, learned or otherwise, although it could usefully be combined with such models. [Work supported by the Cognitique programme of the French Ministry of Research and Technology.]
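
    The joint cancellation idea can be sketched directly: cascade two comb (difference) filters and search for the pair of lags that minimizes the residual power. The toy below brute-forces the two-dimensional search over integer lags on a synthetic two-voice mixture; the published algorithm searches far more efficiently and adds two-dimensional parabolic interpolation for non-integer periods, both omitted here.

        import numpy as np

        fs = 8000
        t = np.arange(4096) / fs
        x = np.sin(2 * np.pi * 220 * t) + 0.8 * np.sin(2 * np.pi * 311 * t)

        def cancel(sig, lag):
            """Comb cancellation filter: y[n] = x[n] - x[n - lag]."""
            return sig[lag:] - sig[:-lag]

        lags = range(16, 80)   # roughly 100-500 Hz at fs = 8 kHz
        best = min(((np.mean(cancel(cancel(x, l1), l2) ** 2), l1, l2)
                    for l1 in lags for l2 in lags), key=lambda r: r[0])
        print("estimated F0s:", round(fs / best[1], 1), "Hz and",
              round(fs / best[2], 1), "Hz")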

  19. GOClonto: an ontological clustering approach for conceptualizing PubMed abstracts.

    PubMed

    Zheng, Hai-Tao; Borchert, Charles; Kim, Hong-Gee

    2010-02-01

    Concurrent with progress in the biomedical sciences, an overwhelming amount of textual knowledge is accumulating in the biomedical literature. PubMed is the most comprehensive database collecting and managing biomedical literature. To help researchers easily understand collections of PubMed abstracts, numerous clustering methods have been proposed to group similar abstracts based on their shared features. However, most of these methods do not explore the semantic relationships among groupings of documents, which could help better illuminate the groupings of PubMed abstracts. To address this issue, we propose an ontological clustering method called GOClonto for conceptualizing PubMed abstracts. GOClonto uses latent semantic analysis (LSA) and the gene ontology (GO) to identify key gene-related concepts and their relationships, as well as to allocate PubMed abstracts based on these key gene-related concepts. Based on two PubMed abstract collections, the experimental results show that GOClonto is able to identify key gene-related concepts and outperforms the STC (suffix tree clustering) algorithm, the Lingo algorithm, the Fuzzy Ants algorithm, and the clustering-based TRS (tolerance rough set) algorithm. Moreover, the two ontologies generated by GOClonto show significant informative conceptual structures.
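
    The LSA step that GOClonto builds on is compactly expressed with standard tools: TF-IDF vectors reduced by truncated SVD, then clustered. The snippet below uses four invented one-line "abstracts"; the GO-driven concept labeling that distinguishes GOClonto is not shown.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.cluster import KMeans
        from sklearn.pipeline import make_pipeline

        docs = ["BRCA1 mutation increases breast cancer risk",       # invented
                "BRCA2 variants and hereditary breast cancer",
                "p53 pathway controls apoptosis",
                "apoptosis regulation by p53 signaling"]

        lsa = make_pipeline(TfidfVectorizer(),
                            TruncatedSVD(n_components=2),            # the LSA step
                            KMeans(n_clusters=2, n_init=10, random_state=0))
        print(lsa.fit_predict(docs))   # e.g. [0 0 1 1]: two latent gene-related concepts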

  20. Bidirectional reflectance function in coastal waters: modeling and validation

    NASA Astrophysics Data System (ADS)

    Gilerson, Alex; Hlaing, Soe; Harmel, Tristan; Tonizzo, Alberto; Arnone, Robert; Weidemann, Alan; Ahmed, Samir

    2011-11-01

    The current operational algorithm for the correction of bidirectional effects in satellite ocean color data is optimized for typical oceanic waters. However, versions of bidirectional reflectance correction algorithms specifically tuned for typical coastal waters and other case 2 conditions are particularly needed to improve the overall quality of those data. In order to analyze the bidirectional reflectance distribution function (BRDF) of case 2 waters, a dataset of typical remote sensing reflectances was generated through radiative transfer simulations for a large range of viewing and illumination geometries. Based on this simulated dataset, a remote sensing reflectance model focused on case 2 waters is proposed to correct above-water and satellite water-leaving radiance data for bidirectional effects. The proposed model is first validated with a one-year time series of in situ above-water measurements acquired by collocated multispectral and hyperspectral radiometers with different viewing geometries, installed at the Long Island Sound Coastal Observatory (LISCO). Match-ups and intercomparisons performed on these concurrent measurements show that the proposed algorithm outperforms the algorithm currently in use at all wavelengths.

  1. Application of an optimization algorithm to satellite ocean color imagery: A case study in Southwest Florida coastal waters

    NASA Astrophysics Data System (ADS)

    Hu, Chuanmin; Lee, Zhongping; Muller-Karger, Frank E.; Carder, Kendall L.

    2003-05-01

    A spectra-matching optimization algorithm, designed for hyperspectral sensors, has been implemented to process SeaWiFS-derived multi-spectral water-leaving radiance data. The algorithm has been tested over Southwest Florida coastal waters. The total spectral absorption and backscattering coefficients can be well partitioned with the inversion algorithm, resulting in RMS errors generally less than 5% in the modeled spectra. For extremely turbid waters that come from either river runoff or sediment resuspension, the RMS error is in the range of 5-15%. The bio-optical parameters derived in this optically complex environment agree well with those obtained in situ. Further, the ability to separate backscattering (a proxy for turbidity) from the satellite signal makes it possible to trace water movement patterns, as indicated by the total absorption imagery. The derived patterns agree with those from concurrent surface drifters. For waters where CDOM overwhelmingly dominates the optical signal, however, the procedure tends to regard CDOM as the sole source of absorption, implying the need for better atmospheric correction and for adjustment of some model coefficients for this particular region.
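
    Spectra-matching inversion of this kind can be sketched as a bounded nonlinear least-squares fit of a semi-analytical forward model to a measured reflectance spectrum. The model below follows a common Gordon-style quadratic form with assumed water absorption/backscattering coefficients and invented parameter values, not the authors' exact parameterization; the point is how the fit partitions total absorption and backscattering, as described above.

        import numpy as np
        from scipy.optimize import least_squares

        wl = np.array([412.0, 443.0, 490.0, 510.0, 555.0, 670.0])   # SeaWiFS-like bands
        aw = np.array([0.005, 0.007, 0.015, 0.033, 0.06, 0.43])     # assumed water absorption
        bbw = np.array([0.0034, 0.0024, 0.0016, 0.0013, 0.0009, 0.0004])  # assumed water bb

        def forward(p):
            """Semi-analytical rrs model: Gordon-style quadratic in bb/(a+bb)."""
            ag, s, bbp, eta = p
            a = aw + ag * np.exp(-s * (wl - 443.0))     # CDOM/detritus absorption
            bb = bbw + bbp * (443.0 / wl) ** eta        # particle backscattering
            u = bb / (a + bb)
            return (0.0949 + 0.0794 * u) * u

        true = np.array([0.1, 0.015, 0.01, 1.0])        # invented "in situ" state
        rng = np.random.default_rng(7)
        rrs_meas = forward(true) * (1 + 0.01 * rng.normal(size=wl.size))

        fit = least_squares(lambda p: forward(p) - rrs_meas,
                            x0=[0.05, 0.02, 0.005, 1.5],
                            bounds=([0, 0.005, 0, 0], [5, 0.03, 0.1, 3]))
        print("retrieved [ag(443), S, bbp(443), eta]:", np.round(fit.x, 4))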

  2. Real-time processing of radar return on a parallel computer

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1992-01-01

    NASA is working with the FAA to demonstrate the feasibility of pulse Doppler radar as a candidate airborne sensor to detect low-altitude windshears. The need to provide the pilot with timely information about possible hazards has motivated a demand for real-time processing of the radar return. Investigated here is parallel processing as a means of accommodating the high data rates required. A PC-based parallel computer, called the transputer, is used to investigate issues in real-time concurrent processing of radar signals. A transputer network is made up of an array of single-instruction-stream processors that can be networked in a variety of ways. They are easily reconfigured, and software development is largely independent of the particular network topology. The performance of the transputer is evaluated in light of the computational requirements. A number of algorithms have been implemented on the transputers in OCCAM, a language specially designed for parallel processing. These include signal processing algorithms such as the fast Fourier transform (FFT), pulse-pair, and autoregressive modeling, as well as routing software to support concurrency. The most computationally intensive task is estimating the spectrum. Two approaches have been taken to this problem, the first and most conventional of which is to use the FFT. By using table look-ups for the basis functions and other optimizations, an algorithm has been developed that is fast enough for real time. The other approach is to model the signal as an autoregressive process and estimate the spectrum based on the model coefficients. This technique is attractive because it does not suffer from the spectral leakage problem inherent in the FFT. Benchmark tests indicate that autoregressive modeling is feasible in real time.
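
    Of the estimators mentioned, the pulse-pair algorithm is the simplest to show: the mean Doppler velocity follows from the phase of the lag-1 autocorrelation of the complex pulse returns. The sketch below uses assumed radar parameters and a synthetic return; it illustrates the estimator itself, not the transputer implementation.

        import numpy as np

        rng = np.random.default_rng(3)
        prf, wavelength = 3000.0, 0.1     # pulse rate (Hz) and wavelength (m), assumed
        v_true = 12.0                     # radial velocity (m/s)
        f_d = 2 * v_true / wavelength     # Doppler shift (Hz)
        n = np.arange(256)
        pulses = np.exp(2j * np.pi * f_d * n / prf) \
            + 0.3 * (rng.normal(size=n.size) + 1j * rng.normal(size=n.size))

        R1 = np.mean(pulses[1:] * np.conj(pulses[:-1]))          # lag-1 autocorrelation
        v_hat = np.angle(R1) * wavelength * prf / (4 * np.pi)    # pulse-pair estimate
        print(f"estimated velocity: {v_hat:.2f} m/s (true {v_true} m/s)")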

  3. Accelerating Wright–Fisher Forward Simulations on the Graphics Processing Unit

    PubMed Central

    Lawrie, David S.

    2017-01-01

    Forward Wright–Fisher simulations are powerful in their ability to model complex demography and selection scenarios, but suffer from slow execution on the Central Processing Unit (CPU), thus limiting their usefulness. However, the single-locus Wright–Fisher forward algorithm is exceedingly parallelizable, with many steps that are so-called “embarrassingly parallel,” consisting of a vast number of individual computations that are all independent of each other and thus capable of being performed concurrently. The rise of modern Graphics Processing Units (GPUs) and programming languages designed to leverage the inherent parallel nature of these processors have allowed researchers to dramatically speed up many programs that have such high arithmetic intensity and intrinsic concurrency. The presented GPU Optimized Wright–Fisher simulation, or “GO Fish” for short, can be used to simulate arbitrary selection and demographic scenarios while running over 250-fold faster than its serial counterpart on the CPU. Even modest GPU hardware can achieve an impressive speedup of over two orders of magnitude. With simulations so accelerated, one can not only do quick parametric bootstrapping of previously estimated parameters, but also use simulated results to calculate the likelihoods and summary statistics of demographic and selection models against real polymorphism data, all without restricting the demographic and selection scenarios that can be modeled or requiring approximations to the single-locus forward algorithm for efficiency. Further, as many of the parallel programming techniques used in this simulation can be applied to other computationally intensive algorithms important in population genetics, GO Fish serves as an exciting template for future research into accelerating computation in evolution. GO Fish is part of the Parallel PopGen Package available at: http://dl42.github.io/ParallelPopGen/. PMID:28768689
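
    The single-locus step being parallelized is tiny: selection reweights the allele frequency and drift is one binomial draw per population per generation. The vectorized CPU sketch below (numpy standing in for CUDA, with invented parameters) runs many independent simulations at once, which is exactly the structure that maps well onto a GPU.

        import numpy as np

        rng = np.random.default_rng(4)
        n_sims, N, s, gens = 10_000, 1000, 0.01, 2000   # invented parameters
        freq = np.full(n_sims, 0.05)                    # starting allele frequency

        for _ in range(gens):
            w = freq * (1 + s) / (freq * (1 + s) + (1 - freq))     # selection step
            freq = rng.binomial(2 * N, w, size=n_sims) / (2 * N)   # drift step

        # the classic diffusion approximation predicts roughly 0.86 here
        print("estimated fixation probability:", (freq == 1.0).mean())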

  4. Concurrent image-based visual servoing with adaptive zooming for non-cooperative rendezvous maneuvers

    NASA Astrophysics Data System (ADS)

    Pomares, Jorge; Felicetti, Leonard; Pérez, Javier; Emami, M. Reza

    2018-02-01

    An image-based servo controller for the guidance of a spacecraft during non-cooperative rendezvous is presented in this paper. The controller directly utilizes the visual features from image frames of a target spacecraft for computing both attitude and orbital maneuvers concurrently. The utilization of adaptive optics, such as zooming cameras, is also addressed through developing an invariant-image servo controller. The controller allows for performing rendezvous maneuvers independently from the adjustments of the camera focal length, improving the performance and versatility of maneuvers. The stability of the proposed control scheme is proven analytically in the invariant space, and its viability is explored through numerical simulations.

  5. Nutation control during precession of a spin-stabilized spacecraft

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Precession maneuver control laws for single-spin spacecraft are investigated so that nutation is concurrently controlled. Analysis has led to the development of two types of control laws employing precession modulation for concurrent nutation control. Results were verified through digital simulation of a Synchronous Meteorological Satellite (SMS) configuration. An additional research effort was undertaken to investigate the cause and elimination of nutation anomalies in dual-spin spacecraft. A literature search was conducted and a dual-spin configuration was simulated to verify that nutational anomalies are not predicted by the existing nonlinear model. No conclusions were drawn as to the cause of the observed nutational anomalies in dual-spin spacecraft.

  6. Concurrent planning and execution for a walking robot

    NASA Astrophysics Data System (ADS)

    Simmons, Reid

    1990-07-01

    The Planetary Rover project is developing the Ambler, a novel legged robot, and an autonomous software system for walking the Ambler over rough terrain. As part of the project, we have developed a system that integrates perception, planning, and real-time control to navigate a single leg of the robot through complex obstacle courses. The system is integrated using the Task Control Architecture (TCA), a general-purpose set of utilities for building and controlling distributed mobile robot systems. The walking system, as originally implemented, utilized a sequential sense-plan-act control cycle. This report describes efforts to improve the performance of the system by concurrently planning and executing steps. Concurrency was achieved by modifying the existing sequential system to utilize TCA features such as resource management, monitors, temporal constraints, and hierarchical task trees. Performance improved by more than 30 percent with only a relatively modest effort to convert and test the system. The results lend support to the utility of using TCA to develop complex mobile robot systems.

  7. NSTX-U Advances in Real-Time C++11 on Linux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Keith G.

    Programming languages like C and Ada, combined with proprietary embedded operating systems, have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline of 200 microseconds (that is, missing one periodic deadline is a failure).

  8. NSTX-U Advances in Real-Time C++11 on Linux

    DOE PAGES

    Erickson, Keith G.

    2015-08-14

    Programming languages like C and Ada, combined with proprietary embedded operating systems, have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline of 200 microseconds (that is, missing one periodic deadline is a failure).

  9. The associations of social self-control, personality disorders, and demographics with drug use among high-risk youth.

    PubMed

    Sussman, Steve; McCuller, William J; Dent, Clyde W

    2003-08-01

    A 10-item self-report measure of social self-control was examined for its association with substance use, controlling for its associations with 12 personality disorder indices and 4 demographic variables among a sample of 1050 high-risk youth. Social self-control was found to be associated with 30-day cigarette smoking, alcohol use, marijuana use, and hard drug use, controlling for these other variables. The most consistent concurrent predictors of substance use were male gender, antisocial personality disorder, and social self-control. These results highlight the importance of social self-control as a unique concurrent predictor of substance use and suggest that social self-control skill training is relevant in substance abuse prevention programming.

  10. Knowing Minds, Controlling Actions: The Developmental Relations between Theory of Mind and Executive Function from 2 to 4 Years of Age

    ERIC Educational Resources Information Center

    Muller, Ulrich; Liebermann-Finestone, Dana P.; Carpendale, Jeremy I. M.; Hammond, Stuart I.; Bibok, Maximilian B.

    2012-01-01

    This longitudinal study examined the concurrent and predictive relations between executive function (EF) and theory of mind (ToM) in 82 preschoolers who were assessed when they were 2, 3, and 4 years old. The results showed that the concurrent relation between EF and ToM, after controlling for age, verbal ability, and sex, was significant at 3 and…

  11. Mitigating fluorescence spectral overlap in wide-field endoscopic imaging

    PubMed Central

    Hou, Vivian; Nelson, Leonard Y.; Seibel, Eric J.

    2013-01-01

    The number of molecular species suitable for multispectral fluorescence imaging is limited due to the overlap of the emission spectra of indicator fluorophores, e.g., dyes and nanoparticles. To remove fluorophore emission cross-talk in wide-field multispectral fluorescence molecular imaging, we evaluate three different solutions: (1) image stitching, (2) concurrent imaging with a cross-talk ratio subtraction algorithm, and (3) frame-sequential imaging. A phantom with fluorophore emission cross-talk is fabricated, and a 1.2-mm ultrathin scanning fiber endoscope (SFE) is used to test and compare these approaches. Results show that fluorophore emission cross-talk could be successfully avoided or significantly reduced. Near term, the concurrent imaging method of wide-field multispectral fluorescence SFE is viable for early-stage cancer detection and localization in vivo. Furthermore, a means to enhance the exogenous fluorescence target-to-background ratio by reducing the tissue autofluorescence background is demonstrated. PMID:23966226

  12. Multiprocessor smalltalk: Implementation, performance, and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pallas, J.I.

    1990-01-01

    Multiprocessor Smalltalk demonstrates the value of object-oriented programming on a multiprocessor. Its implementation and analysis shed light on three areas: concurrent programming in an object-oriented language without special extensions, implementation techniques for adapting to multiprocessors, and performance factors in the resulting system. Adding parallelism to Smalltalk code is easy, because programs already use control abstractions like iterators. Smalltalk's basic control and concurrency primitives (lambda expressions, processes and semaphores) can be used to build parallel control abstractions, including parallel iterators, parallel objects, atomic objects, and futures. Language extensions for concurrency are not required. This implementation demonstrates that it is possible to build an efficient parallel object-oriented programming system and illustrates techniques for doing so. Three modification tools (serialization, replication, and reorganization) adapted the Berkeley Smalltalk interpreter to the Firefly multiprocessor. Multiprocessor Smalltalk's performance shows that the combination of multiprocessing and object-oriented programming can be effective: speedups (relative to the original serial version) exceed 2.0 for five processors on all the benchmarks; the median efficiency is 48%. Analysis shows both where performance is lost and how to improve and generalize the experimental results. Changes in the interpreter to support concurrency add at most 12% overhead; better access to per-process variables could eliminate much of that. Changes in the user code to express concurrency add as much as 70% overhead; this overhead could be reduced to 54% if blocks (lambda expressions) were reentrant. Performance is also lost when the program cannot keep all five processors busy.

  13. Effects of concurrent and aerobic exercises on postexercise hypotension in elderly hypertensive men.

    PubMed

    Ferrari, Rodrigo; Umpierre, Daniel; Vogel, Guilherme; Vieira, Paulo J C; Santos, Lucas P; de Mello, Renato Bandeira; Tanaka, Hirofumi; Fuchs, Sandra C

    2017-11-01

    Despite the fact that the simultaneous performance of resistance and aerobic exercises (i.e., concurrent exercise) has become a standard exercise prescription for the elderly, no information is available on its effects on post-exercise hypotension (PEH) in elderly men with hypertension. The aim was to compare the effects of different types of exercise on PEH in elderly men with hypertension. Twenty elderly men with essential hypertension participated in three crossover interventions, in random order and on separate days: a non-exercise control session of seated rest, aerobic exercise performed for 45 min, and 45 min of concurrent resistance and aerobic exercise consisting of 4 sets of 8 repetitions at 70% 1RM of resistance exercise followed by aerobic exercise on a treadmill. After each session, blood pressure (BP) was measured continuously for 1 h in the laboratory and for 24 h under ambulatory conditions. During the first hour in the laboratory, diastolic BP was lower after aerobic (-5 mmHg) and concurrent exercise (-6 mmHg) in comparison with control. Day-time diastolic BP was significantly lower after aerobic exercise (-7 mmHg) when compared to control. No significant differences were found among the three experimental sessions for night-time and 24-hour diastolic BP, as well as day-time, night-time and 24-hour systolic BP. Concurrent exercise produced acute PEH similar to aerobic exercise, but the effect did not last as long as with aerobic exercise in elderly patients with essential hypertension.

  14. Phase fluctuation spectra: New radio science information to become available in the DSN tracking system Mark III-77

    NASA Technical Reports Server (NTRS)

    Berman, A. L.

    1977-01-01

    An algorithm was developed for the continuous and automatic computation of Doppler noise concurrently at four sample rate intervals, evenly spanning three orders of magnitude. Average temporal Doppler phase fluctuation spectra will be routinely available in the DSN tracking system Mark III-77 and require little additional processing. The basic (noise) data will be extracted from the archival tracking data file (ATDF) of the tracking data management system.

  15. Weekly Cisplatin-Based Concurrent Chemoradiotherapy for Treatment of Locally Advanced Head and Neck Cancer: a Single Institution Study.

    PubMed

    Ghosh, Saptarshi; Rao, Pamidimukkala Brahmananda; Kumar, P Ravindra; Manam, Surendra

    2015-01-01

    The organ preservation approach of choice for the treatment of locally advanced head and neck cancers is concurrent chemoradiation with three-weekly high doses of cisplatin. Although this is an efficacious treatment policy, it has high acute systemic and mucosal toxicities, which lead to frequent treatment breaks and increased overall treatment time. Hence, the current study was undertaken to evaluate the efficacy of concurrent chemoradiation using 40 mg/m2 weekly cisplatin. This is a single-institution retrospective study including the data of 266 locally advanced head and neck cancer patients who were treated with concurrent chemoradiation using 40 mg/m2 weekly cisplatin from January 2012 to January 2014. A p-value of < 0.05 was taken to be statistically significant for all purposes in the study. The mean age of the study patients was 48.8 years. Some 36.1% of the patients had oral cavity primary tumors. The mean overall treatment time was 57.2 days. With a mean follow-up of 15.2 months for all study patients and 17.5 months for survivors, 3-year local control, locoregional control and disease-free survival were seen in 62.8%, 42.8% and 42.1% of the study patients, respectively. Primary tumor site, nodal stage, AJCC stage and the number of cycles of weekly cisplatin demonstrated statistically significant correlations with 3-year local control, locoregional control and disease-free survival. Concurrent chemoradiotherapy with moderate-dose weekly cisplatin is an efficacious treatment regime for locally advanced head and neck cancers, with tolerable toxicity, which can be used in developing countries with limited resources.

  16. Cardiac Vagal Tone and Quality of Parenting Show Concurrent and Time-Ordered Associations That Diverge in Abusive, Neglectful, and Non-Maltreating Mothers

    PubMed Central

    Skowron, Elizabeth A.; Cipriano-Essel, Elizabeth; Benjamin, Lorna Smith; Pincus, Aaron L.; Van Ryzin, Mark J.

    2014-01-01

    Concurrent and lagged maternal respiratory sinus arrhythmia (RSA) was monitored in the context of parenting. One hundred and forty-one preschooler-mother dyads—involved with child welfare as documented perpetrators of child abuse or neglect, or non-maltreating (non-CM)—were observed completing a resting baseline and joint challenge task. Parenting behaviors were coded using SASB (Benjamin, 1996) and maternal RSA was simultaneously monitored, longitudinally-nested within-person (WP), and subjected to MLM. Abusive and neglectful mothers displayed less positive parenting and more strict/hostile control, relative to non-CM mothers. Non-CM mothers displayed greater WP heterogeneity in variance over time in their RSA scores, and greater consistency over time in their parenting behaviors, relative to abusive or neglectful mothers. CM group also moderated concurrent and lagged WP associations in RSA and positive parenting. When abusive mothers displayed lower RSA in a given epoch, relative to their task average, they showed concurrent increases in positive parenting, and higher subsequent levels of hostile control in the following epoch, suggesting that it is physiologically taxing for abusive mothers to parent in positive ways. In contrast, lagged effects for non-CM mothers were observed in which RSA decreases led to subsequent WP increases in positive parenting and decreases in control. Reversed models were significant only for neglectful mothers: Increases in positive parenting led to subsequent increases in RSA levels, and increases in strict, hostile control led to subsequent RSA decreases. These results provide new evidence that concurrent and time-ordered coupling in maternal physiology and behavior during parenting vary in theoretically meaningful ways across CM and non-CM mothers. Implications for intervention and study limitations are discussed. PMID:24729945

  17. Concurrent partnerships and HIV: an inconvenient truth

    PubMed Central

    2011-01-01

    The strength of the evidence linking concurrency to HIV epidemic severity in southern and eastern Africa led the Joint United Nations Programme on HIV/AIDS and the Southern African Development Community in 2006 to conclude that high rates of concurrent sexual partnerships, combined with low rates of male circumcision and infrequent condom use, are major drivers of the AIDS epidemic in southern Africa. In a recent article in the Journal of the International AIDS Society, Larry Sawers and Eileen Stillwaggon attempt to challenge the evidence for the importance of concurrency and call for an end to research on the topic. However, their "systematic review of the evidence" is not an accurate summary of the research on concurrent partnerships and HIV, and it contains factual errors concerning the measurement and mathematical modelling of concurrency. Practical prevention-oriented research on concurrency is only just beginning. Most interventions to raise awareness about the risks of concurrency are less than two years old; few evaluations and no randomized-controlled trials of these programmes have been conducted. Determining whether these interventions can help people better assess their own risks and take steps to reduce them remains an important task for research. This kind of research is indeed the only way to obtain conclusive evidence on the role of concurrency, the programmes needed for effective prevention, the willingness of people to change behaviour, and the obstacles to change. PMID:21406080

  18. Control algorithms and applications of the wavefront sensorless adaptive optics

    NASA Astrophysics Data System (ADS)

    Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen

    2017-10-01

    Compared with the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system need not measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods for WFSless AO are divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on model-free control algorithms typically treats the performance metric as a function of the control parameters and then uses an optimization algorithm to improve the performance metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms, and control algorithms based on geometrical optics. After a brief description of the above typical control algorithms, hybrid methods combining model-free with model-based control algorithms are summarized. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free-space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques, and imaging of extended objects.
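
    As a concrete illustration of the model-free branch, stochastic parallel gradient descent (SPGD) is one widely used metric-optimization loop for WFSless AO: perturb all actuator commands at once, measure the metric change, and step along the perturbation. The sketch below is a generic toy version, not taken from this paper; the quadratic metric J and the gain and perturbation settings are assumptions.

        import numpy as np

        def J(u):
            # Toy performance metric (e.g., focal-spot sharpness); peaks at u_opt.
            u_opt = np.array([0.3, -0.5, 0.1])
            return -np.sum((u - u_opt) ** 2)

        def spgd(u, gain=0.5, sigma=0.05, iters=500):
            rng = np.random.default_rng(0)
            for _ in range(iters):
                delta = sigma * rng.choice([-1.0, 1.0], size=u.shape)  # Bernoulli perturbation
                dJ = J(u + delta) - J(u - delta)   # two-sided metric difference
                u = u + gain * dJ * delta          # climb the metric
            return u

        print(spgd(np.zeros(3)))  # approaches [0.3, -0.5, 0.1]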

  19. Stochastic Dual Algorithm for Voltage Regulation in Distribution Networks with Discrete Loads: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhou, Xinyang; Liu, Zhiyuan

    This paper considers distribution networks with distributed energy resources and discrete-rate loads, and designs an incentive-based algorithm that allows the network operator and the customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Four major challenges are addressed: (1) the non-convexity from discrete decision variables, (2) the non-convexity due to a Stackelberg game structure, (3) unavailable private information from customers, and (4) different update frequencies for the two types of devices. In this paper, we first apply a convex relaxation to the discrete variables, then reformulate the non-convex structure into a convex optimization problem together with a pricing/reward signal design, and propose a distributed stochastic dual algorithm for solving the reformulated problem while restoring feasible power rates for discrete devices. By doing so, we are able to statistically achieve the solution of the reformulated problem without exposing any private information from customers. Stability of the proposed schemes is analytically established and numerically corroborated.
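
    As a rough sketch of the dual machinery only (not the authors' full incentive scheme), a projected dual ascent raises a price on each voltage constraint whenever it is violated and lets customers respond to the price. The linearized voltage model, quadratic customer utilities, and step size below are all assumptions for illustration.

        import numpy as np

        # Toy linearized feeder: v = v0 + R @ p, injections p in [0, p_max].
        R = np.array([[0.02, 0.01], [0.01, 0.03]])
        v0, v_max, p_max = np.array([1.02, 1.03]), 1.05, 2.0
        a = np.array([1.0, 1.0])            # customers' marginal utility (assumed)
        lam = np.zeros(2)                   # dual prices on v <= v_max
        alpha = 50.0                        # dual step size (assumed)

        for _ in range(500):
            p = np.clip(a - R.T @ lam, 0.0, p_max)   # price-based customer response
            lam = np.maximum(0.0, lam + alpha * (v0 + R @ p - v_max))  # projected ascent

        print(p, v0 + R @ p)                # voltages settle at or below 1.05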

  20. High-Throughput Characterization of Porous Materials Using Graphics Processing Units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jihan; Martin, Richard L.; Rübel, Oliver

    We have developed a high-throughput graphics processing units (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH4 and CO2) and the material's framework atoms. Using a parallel flood fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than ones considered in earlier studies. For structures selected from such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
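
    At infinite dilution, the Widom step amounts to averaging Boltzmann factors of the tabulated grid energies over the accessible voxels. Below is a minimal CPU sketch under simplified conventions (a synthetic energy grid, a rigid point-like guest, and a dimensionless Henry constant); it is not the paper's exact formulation or units.

        import numpy as np

        kB, T = 1.380649e-23, 298.0            # J/K and K
        beta = 1.0 / (kB * T)

        # Tabulated guest-framework energies on a grid (J); voxels blocked by
        # the flood-fill step are marked +inf and excluded from the average.
        rng = np.random.default_rng(1)
        grid = rng.normal(-2e-21, 1e-21, size=(32, 32, 32))
        grid[:4, :4, :4] = np.inf              # a pretend inaccessible pocket

        U = grid[np.isfinite(grid)]
        w = np.exp(-beta * U)                  # Boltzmann factors (Widom insertions)
        henry_dimensionless = w.mean()         # proportional to the Henry coefficient
        q_st = kB * T - (U * w).sum() / w.sum()  # isosteric heat at infinite dilution (J)
        print(henry_dimensionless, q_st)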

  1. A new bio-optical algorithm for the remote sensing of algal blooms in complex ocean waters

    NASA Astrophysics Data System (ADS)

    Shanmugam, Palanisamy

    2011-04-01

    A new bio-optical algorithm has been developed to provide accurate assessments of chlorophyll a (Chl a) concentration for detection and mapping of algal blooms from satellite data in optically complex waters, where the presence of suspended sediments and dissolved substances can interfere with the phytoplankton signal and thus confound conventional band-ratio algorithms. A global data set of concurrent measurements of pigment concentration and radiometric reflectance was compiled and used to develop this algorithm, which uses normalized water-leaving radiance ratios along with an algal bloom index (ABI) between three visible bands to determine Chl a concentrations. The algorithm is derived using Sea-viewing Wide Field-of-view Sensor bands, and it is subsequently tuned to be applicable to Moderate Resolution Imaging Spectroradiometer (MODIS)/Aqua data. When compared with large in situ data sets and satellite matchups in a variety of coastal and ocean waters, the present algorithm makes good retrievals of the Chl a concentration and shows statistically significant improvement over current global algorithms (e.g., OC3 and OC4v4). An examination of the performance of these algorithms on several MODIS/Aqua images in complex waters of the Arabian Sea and the west Florida shelf shows that the new algorithm provides a better means for detecting and differentiating algal blooms from other turbid features, whereas the OC3 algorithm has significant errors, although it yields relatively consistent results in clear waters. These findings imply that, provided an accurate atmospheric correction scheme is available to deal with complex waters, current MODIS/Aqua, MERIS and OCM data could be extensively used for quantitative and operational monitoring of algal blooms in various regional and global waters.
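
    The paper's ABI term is not reproduced here, but the maximum-band-ratio backbone that such algorithms share has the familiar OC-style polynomial form. The sketch below uses OC3M-like coefficients purely as placeholders; they are not the tuned coefficients of this algorithm.

        import numpy as np

        def oc3_style_chl(rrs443, rrs488, rrs547,
                          a=(0.2424, -2.7423, 1.8017, 0.0015, -1.2280)):
            """Maximum-band-ratio Chl a (mg m^-3); coefficients are illustrative."""
            r = np.log10(np.maximum(rrs443, rrs488) / rrs547)
            return 10.0 ** np.polyval(a[::-1], r)   # a0 + a1*r + ... + a4*r^4

        print(oc3_style_chl(0.004, 0.005, 0.003))   # clear-ish water example, ~0.5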

  2. Resting and reactive frontal brain electrical activity (EEG) among a non-clinical sample of socially anxious adults: Does concurrent depressive mood matter?

    PubMed Central

    Beaton, Elliott A; Schmidt, Louis A; Ashbaugh, Andrea R; Santesso, Diane L; Antony, Martin M; McCabe, Randi E

    2008-01-01

    A number of studies have noted that the pattern of resting frontal brain electrical activity (EEG) is related to individual differences in affective style in healthy infants, children, and adults and some clinical populations when symptoms are reduced or in remission. We measured self-reported trait shyness and sociability, concurrent depressive mood, and frontal brain electrical activity (EEG) at rest and in anticipation of a speech task in a non-clinical sample of healthy young adults selected for high and low social anxiety. Although the patterns of resting and reactive frontal EEG asymmetry did not distinguish among individual differences in social anxiety, the pattern of resting frontal EEG asymmetry was related to trait shyness after controlling for concurrent depressive mood. Individuals who reported a higher degree of shyness were likely to exhibit greater relative right frontal EEG activity at rest. However, trait shyness was not related to frontal EEG asymmetry measured during the speech-preparation task, even after controlling for concurrent depressive mood. These findings replicate and extend prior work on resting frontal EEG asymmetry and individual differences in affective style in adults. Findings also highlight the importance of considering concurrent emotional states of participants when examining psychophysiological correlates of personality. PMID:18728822
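
    For reference, the asymmetry score used in this literature is conventionally the difference of log alpha power at homologous frontal sites (e.g., F4 and F3); because alpha power varies inversely with activation, relatively greater right-hemisphere activity shows up as a lower score. A generic sketch, assuming the standard 8-13 Hz band and channel naming rather than this study's exact montage:

        import numpy as np
        from scipy.signal import welch

        def alpha_power(x, fs, band=(8.0, 13.0)):
            f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
            sel = (f >= band[0]) & (f <= band[1])
            return np.trapz(pxx[sel], f[sel])

        def frontal_asymmetry(f3, f4, fs=256.0):
            # ln(right alpha) - ln(left alpha); LOWER scores indicate greater
            # relative right frontal activity (as in shy individuals here).
            return np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))

        rng = np.random.default_rng(0)
        f3, f4 = rng.normal(size=(2, 4096))    # stand-in EEG epochs
        print(frontal_asymmetry(f3, f4))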

  3. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms on parallel processing architectures such as systolic arrays, with efficient fault-tolerant schemes, are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. Fault diagnosis, order-degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order-degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.
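
    The core of a QRD RLS array is the rotation-based update that folds each new data row into the upper-triangular factor R; in the systolic version, every cell computes or applies one Givens rotation per beat. Below is a scalar reference version of that standard update, a sketch rather than the dissertation's array mapping.

        import numpy as np

        def qrd_rls_update(R, x, lam=0.99):
            """Fold one input row x into upper-triangular R via Givens rotations."""
            R = np.sqrt(lam) * R.copy()           # exponential forgetting
            x = np.array(x, dtype=float)
            for i in range(len(x)):
                rho = np.hypot(R[i, i], x[i])
                if rho == 0.0:
                    continue
                c, s = R[i, i] / rho, x[i] / rho  # rotation that zeroes x[i]
                Ri, Xi = R[i, i:].copy(), x[i:].copy()
                R[i, i:] = c * Ri + s * Xi
                x[i:] = -s * Ri + c * Xi
            return R

        R = np.zeros((3, 3))
        rng = np.random.default_rng(0)
        for _ in range(100):
            R = qrd_rls_update(R, rng.normal(size=3))
        print(np.allclose(R, np.triu(R)))         # R stays upper triangular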

  4. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploited the algorithm's data-level concurrency and optimized the computing flow. We first defined a three-dimensional grid in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup over the CPU implementation, especially for highly data-parallel and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications, spectral unmixing and classification, for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirement.
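
    Conceptually, an MNF-type transform solves a generalized eigenproblem between the data covariance and a noise covariance that is commonly estimated from differences of neighboring pixels. The compact CPU sketch below uses that standard shift-difference estimator; it illustrates the linear algebra only and is not the paper's G-OMNF variant.

        import numpy as np
        from scipy.linalg import eigh

        def mnf(cube):
            """cube: (rows, cols, bands) hyperspectral image."""
            rows, cols, bands = cube.shape
            X = cube.reshape(-1, bands)
            Xc = X - X.mean(axis=0)
            cov_data = Xc.T @ Xc / (len(Xc) - 1)

            # Noise covariance from horizontal shift differences of neighbors.
            d = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands)
            cov_noise = d.T @ d / (2 * (len(d) - 1))

            # Generalized eigenproblem: order components by signal-to-noise.
            evals, evecs = eigh(cov_data, cov_noise)
            order = np.argsort(evals)[::-1]
            return Xc @ evecs[:, order], evals[order]

        comps, snr = mnf(np.random.default_rng(0).normal(size=(64, 64, 20)))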

  5. Horizontal vectorization of electron repulsion integrals.

    PubMed

    Pritchard, Benjamin P; Chow, Edmond

    2016-10-30

    We present an efficient implementation of the Obara-Saika algorithm for the computation of electron repulsion integrals that utilizes vector intrinsics to calculate several primitive integrals concurrently in a SIMD vector. Initial benchmarks display a 2-4 times speedup with AVX instructions over comparable scalar code, depending on the basis set. Speedup over scalar code is found to be sensitive to the level of contraction of the basis set, and is best for (lAlB|lClD) quartets when lD = 0 or lB = lD = 0, which makes such a vectorization scheme particularly suitable for density fitting. The basic Obara-Saika algorithm, how it is vectorized, and the performance bottlenecks are analyzed and discussed.
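
    The flavor of the approach can be mimicked with array programming, where one vector operation evaluates a whole batch of primitive integrals at once much as the SIMD lanes do. The sketch below uses the closed-form unnormalized (ss|ss) primitive formula from standard texts (e.g., Szabo and Ostlund), not the paper's general Obara-Saika recurrences.

        import numpy as np
        from scipy.special import erf

        def boys0(t):
            t = np.asarray(t, dtype=float)
            out = np.ones_like(t)                   # F0(0) = 1
            nz = t > 1e-12
            out[nz] = 0.5 * np.sqrt(np.pi / t[nz]) * erf(np.sqrt(t[nz]))
            return out

        def eri_ssss(a, b, c, d, A, B, C, D):
            """Batched unnormalized primitive (ss|ss) integrals; a..d are 1-D arrays."""
            p, q = a + b, c + d
            P = (a[:, None] * A + b[:, None] * B) / p[:, None]
            Q = (c[:, None] * C + d[:, None] * D) / q[:, None]
            ab2 = np.sum((A - B) ** 2)
            cd2 = np.sum((C - D) ** 2)
            pq2 = np.sum((P - Q) ** 2, axis=1)
            pre = 2 * np.pi ** 2.5 / (p * q * np.sqrt(p + q))
            return pre * np.exp(-a * b / p * ab2 - c * d / q * cd2) \
                       * boys0(p * q / (p + q) * pq2)

        a = np.array([0.5, 1.0, 2.0]); A = np.zeros(3)
        print(eri_ssss(a, a, a, a, A, A, A, A))     # one SIMD-like batch of three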

  6. Development of a real-time aeroperformance analysis technique for the X-29A advanced technology demonstrator

    NASA Technical Reports Server (NTRS)

    Ray, R. J.; Hicks, J. W.; Alexander, R. I.

    1988-01-01

    The X-29A advanced technology demonstrator has shown the practicality and advantages of the capability to compute and display, in real time, aeroperformance flight results. This capability includes the calculation of the in-flight measured drag polar, lift curve, and aircraft specific excess power. From these elements many other types of aeroperformance measurements can be computed and analyzed. The technique can be used to give an immediate postmaneuver assessment of data quality and maneuver technique, thus increasing the productivity of a flight program. A key element of this new method was the concurrent development of a real-time in-flight net thrust algorithm, based on the simplified gross thrust method. This net thrust algorithm allows for the direct calculation of total aircraft drag.
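
    Among the quantities listed, specific excess power shows most directly how the real-time thrust and drag numbers combine: P_s = (T - D)V/W. A one-line sketch, with variable names and units chosen here for illustration:

        def specific_excess_power(thrust_lbf, drag_lbf, velocity_fps, weight_lbf):
            """P_s = (T - D) * V / W, in ft/s (climb/acceleration capability)."""
            return (thrust_lbf - drag_lbf) * velocity_fps / weight_lbf

        print(specific_excess_power(12000.0, 9000.0, 800.0, 17000.0))  # ~141 ft/s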

  7. Dynamically Reconfigurable Systolic Array Accelerator

    NASA Technical Reports Server (NTRS)

    Dasu, Aravind; Barnes, Robert

    2012-01-01

    A polymorphic systolic array framework has been developed that works in conjunction with an embedded microprocessor on a field-programmable gate array (FPGA), and allows for dynamic and complementary scaling of the acceleration levels of two algorithms active concurrently on the FPGA. Use is made of systolic arrays and hardware-software co-design to obtain an efficient multi-application acceleration system. The flexible and simple framework allows hosting of a broader range of algorithms, and is extendable to more complex applications in the area of aerospace embedded systems. FPGA chips can be responsive to real-time demands for changing application needs, but only if the electronic fabric can respond fast enough. This systolic array framework allows for rapid partial and dynamic reconfiguration of the chip in response to real-time needs for scalability and adaptability of executables.

  8. Development and comparisons of wind retrieval algorithms for small unmanned aerial systems

    NASA Astrophysics Data System (ADS)

    Bonin, T. A.; Chilson, P. B.; Zielke, B. S.; Klein, P. M.; Leeman, J. R.

    2012-12-01

    Recently, there has been an increase in use of Unmanned Aerial Systems (UASs) as platforms for conducting fundamental and applied research in the lower atmosphere due to their relatively low cost and ability to collect samples with high spatial and temporal resolution. Concurrent with this development comes the need for accurate instrumentation and measurement methods suitable for small meteorological UASs. Moreover, the instrumentation to be integrated into such platforms must be small and lightweight. Whereas thermodynamic variables can be easily measured using well aspirated sensors onboard, it is much more challenging to accurately measure the wind with a UAS. Several algorithms have been developed that incorporate GPS observations as a means of estimating the horizontal wind vector, with each algorithm exhibiting its own particular strengths and weaknesses. In the present study, the performance of three such GPS-based wind-retrieval algorithms has been investigated and compared with wind estimates from rawinsonde and sodar observations. Each of the algorithms considered agreed well with the wind measurements from sounding and sodar data. Through the integration of UAS-retrieved profiles of thermodynamic and kinematic parameters, one can investigate the static and dynamic stability of the atmosphere and relate them to the state of the boundary layer across a variety of times and locations, which might be difficult to access using conventional instrumentation.
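
    One common member of this family of GPS-based methods fits the horizontal wind from the modulation of GPS ground speed as a constant-airspeed aircraft changes heading, e.g. around an orbit. A least-squares sketch of that idea on synthetic data follows; it is illustrative and not any one of the three published algorithms compared in the study.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(2)
        theta = np.linspace(0.0, 2 * np.pi, 60)          # headings around an orbit
        Va_true, w_true = 18.0, np.array([4.0, -2.5])    # airspeed and wind (m/s)
        heading = np.c_[np.cos(theta), np.sin(theta)]
        vg = np.linalg.norm(Va_true * heading + w_true, axis=1)
        vg += rng.normal(0.0, 0.2, theta.size)           # GPS ground-speed noise

        def resid(params):
            Va, wx, wy = params
            return np.linalg.norm(Va * heading + np.array([wx, wy]), axis=1) - vg

        sol = least_squares(resid, x0=[15.0, 0.0, 0.0])
        print(sol.x)                                     # recovers ~[18.0, 4.0, -2.5]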

  9. Comparison and application of wind retrieval algorithms for small unmanned aerial systems

    NASA Astrophysics Data System (ADS)

    Bonin, T. A.; Chilson, P. B.; Zielke, B. S.; Klein, P. M.; Leeman, J. R.

    2013-07-01

    Recently, there has been an increase in use of Unmanned Aerial Systems (UASs) as platforms for conducting fundamental and applied research in the lower atmosphere due to their relatively low cost and ability to collect samples with high spatial and temporal resolution. Concurrent with this development comes the need for accurate instrumentation and measurement methods suitable for small meteorological UASs. Moreover, the instrumentation to be integrated into such platforms must be small and lightweight. Whereas thermodynamic variables can be easily measured using well-aspirated sensors onboard, it is much more challenging to accurately measure the wind with a UAS. Several algorithms have been developed that incorporate GPS observations as a means of estimating the horizontal wind vector, with each algorithm exhibiting its own particular strengths and weaknesses. In the present study, the performance of three such GPS-based wind-retrieval algorithms has been investigated and compared with wind estimates from rawinsonde and sodar observations. Each of the algorithms considered agreed well with the wind measurements from sounding and sodar data. Through the integration of UAS-retrieved profiles of thermodynamic and kinematic parameters, one can investigate the static and dynamic stability of the atmosphere and relate them to the state of the boundary layer across a variety of times and locations, which might be difficult to access using conventional instrumentation.

  10. Artificial Neural Network Approach in Laboratory Test Reporting:  Learning Algorithms.

    PubMed

    Demirci, Ferhat; Akan, Pinar; Kume, Tuncay; Sisman, Ali Riza; Erbayraktar, Zubeyde; Sevinc, Suleyman

    2016-08-01

    In the field of laboratory medicine, minimizing errors and establishing standardization is only possible by predefined processes. The aim of this study was to build an experimental decision algorithm model, open to improvement, that would efficiently and rapidly evaluate the results of biochemical tests with critical values by evaluating multiple factors concurrently. The experimental model was built with Weka software (Weka, Waikato, New Zealand) based on the artificial neural network method. Data were received from Dokuz Eylül University Central Laboratory. "Training sets" were developed for our experimental model to teach the evaluation criteria. After training the system, "test sets" developed for different conditions were used to statistically assess the validity of the model. After developing the decision algorithm with three iterations of training, no result was verified that was refused by the laboratory specialist. The sensitivity of the model was 91% and specificity was 100%. The estimated κ score was 0.950. This is the first study based on an artificial neural network to build an experimental assessment and decision algorithm model. By integrating our trained algorithm model into a laboratory information system, it may be possible to reduce employees' workload without compromising patient safety.

  11. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    PubMed

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
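
    The two-area feature is simple enough to state directly: split the detected waveform at its peak sample and take the area on each side. A floating-point reference sketch follows (the ASIC operates on fixed-point segments; the synthetic spike shape is illustrative):

        import numpy as np

        def peak_split_areas(spike):
            """Split a spike at its peak; the two side areas are the features."""
            k = int(np.argmax(np.abs(spike)))       # peak sample
            left = np.sum(np.abs(spike[:k + 1]))    # area up to and incl. the peak
            right = np.sum(np.abs(spike[k:]))       # area from the peak onward
            return left, right

        t = np.linspace(-1.0, 2.0, 48)
        spike = np.exp(-(t - 0.2) ** 2 / 0.02) - 0.3 * np.exp(-(t - 0.8) ** 2 / 0.1)
        print(peak_split_areas(spike))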

  12. Integrating Natural Language Processing and Machine Learning Algorithms to Categorize Oncologic Response in Radiology Reports.

    PubMed

    Chen, Po-Hao; Zafar, Hanna; Galperin-Aizenberg, Maya; Cook, Tessa

    2018-04-01

    A significant volume of medical data remains unstructured. Natural language processing (NLP) and machine learning (ML) techniques have been shown to successfully extract insights from radiology reports. However, the codependent effects of NLP and ML in this context have not been well-studied. Between April 1, 2015 and November 1, 2016, 9418 cross-sectional abdomen/pelvis CT and MR examinations containing our internal structured reporting element for cancer were separated into four categories: Progression, Stable Disease, Improvement, or No Cancer. We combined each of three NLP techniques with five ML algorithms to predict the assigned label using the unstructured report text and compared the performance of each combination. The three NLP techniques included term frequency-inverse document frequency (TF-IDF), term frequency weighting (TF), and 16-bit feature hashing. The ML algorithms included logistic regression (LR), random decision forest (RDF), one-vs-all support vector machine (SVM), one-vs-all Bayes point machine (BPM), and a fully connected neural network (NN). The best-performing NLP model consisted of tokenized unigrams and bigrams with TF-IDF. Increasing N-gram length yielded little to no added benefit for most ML algorithms. With all parameters optimized, SVM had the best performance on the test dataset, with 90.6% average accuracy and an F score of 0.813. The interplay between ML and NLP algorithms and their effect on interpretation accuracy is complex. The best accuracy is achieved when both algorithms are optimized concurrently.
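
    The winning combination, tokenized unigrams and bigrams with TF-IDF feeding a linear one-vs-all SVM, maps to a few lines in a modern toolkit. An illustrative scikit-learn approximation with toy reports follows; it is not the study's actual pipeline or data.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        reports = ["interval growth of hepatic metastases",
                   "stable postsurgical changes, no new lesions",
                   "decrease in size of the pulmonary nodules",
                   "no evidence of malignancy"]
        labels = ["Progression", "Stable Disease", "Improvement", "No Cancer"]

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),  # unigrams + bigrams
                            LinearSVC())                          # one-vs-rest linear SVM
        clf.fit(reports, labels)
        print(clf.predict(["slight decrease in size of the target lesion"]))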

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorne, N; Kassaee, A

    Purpose: To develop an algorithm which can calculate the Full Width at Half Maximum (FWHM) of a proton pencil beam from a 2D ion chamber array (IBA Matrixx) with limited spatial resolution (7.6 mm inter-chamber distance). The algorithm would allow beam FWHM measurements to be taken during daily QA without an appreciable time increase. Methods: Combinations of 147 MeV single-spot beams were delivered onto an IBA Matrixx and concurrently onto EBT3 films as a standard. Data were collected around the Bragg peak region and evaluated by a custom MATLAB script based on our algorithm using a least-squares analysis. A set of artificial data, modified with random noise, was also processed to test for robustness. Results: The MATLAB-processed Matrixx data show acceptable agreement (within 5%) with film measurements, with no single measurement differing by more than 1.8 mm. In cases where the spots show some degree of asymmetry, the algorithm is able to resolve the differences. The algorithm was able to process artificial data with noise up to 15% of the maximum value. Each measurement took less than 3 minutes to perform, indicating that such measurements may be efficiently added to the daily QA program. Conclusion: The developed algorithm can be implemented in a daily QA program for proton pencil beam scanning (PBS) with the Matrixx to extract spot size and position information. The developed algorithm may be extended to small field sizes in the photon clinic.
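
    The essence of such a fit is to model each spot as a 2D Gaussian sampled at the sparse chamber positions and convert the fitted width via FWHM = 2*sqrt(2 ln 2)*sigma. A generic least-squares sketch follows; the grid pitch and spot size are illustrative values, not the paper's data.

        import numpy as np
        from scipy.optimize import curve_fit

        pitch = 7.62                               # mm chamber spacing (illustrative)
        xs = np.arange(-5, 6) * pitch
        X, Y = np.meshgrid(xs, xs)

        def gauss2d(xy, amp, x0, y0, sigma):
            x, y = xy
            return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

        Z = gauss2d((X, Y), 1.0, 1.3, -0.7, 5.5)   # true FWHM = 2.355 * 5.5 mm
        Z += np.random.default_rng(0).normal(0, 0.01, Z.shape)

        popt, _ = curve_fit(gauss2d, (X.ravel(), Y.ravel()), Z.ravel(),
                            p0=(1.0, 0.0, 0.0, 8.0))
        print(2 * np.sqrt(2 * np.log(2)) * popt[3])  # recovered FWHM (mm)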

  14. Evaluation of ERTS multispectral signatures in relation to ground control signatures using a nested-sampling approach

    NASA Technical Reports Server (NTRS)

    Lyon, R. J. P. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. Ground-measured spectral signatures in wavelength bands matching the ERTS MSS were collected using a radiometer at several Californian and Nevadan sites, and directly compared with similar data from ERTS CCTs. The comparison was tested at the highest possible spatial resolution for ERTS, using deconvoluted MSS data, and contrasted with that of ground-measured spectra, originally from 1 meter squares. In the mobile traverses of the grassland sites, these one-meter fields of view were integrated into eighty-meter transects along the five km track across four major rock/soil types. Suitable software was developed to read the MSS CCT tapes and to shadeprint individual bands with user-determined greyscale stretching. Four new algorithms for unsupervised and supervised, normalized and unnormalized clustering were developed into a program termed STANSORT. Parallel software allowed the field data to be calibrated; by using concurrently and continuously collected data from upward- and downward-viewing four-band radiometers, bidirectional reflectances could be calculated.

  15. Outcomes of Concurrent Operations: Results From the American College of Surgeons' National Surgical Quality Improvement Program.

    PubMed

    Liu, Jason B; Berian, Julia R; Ban, Kristen A; Liu, Yaoming; Cohen, Mark E; Angelos, Peter; Matthews, Jeffrey B; Hoyt, David B; Hall, Bruce L; Ko, Clifford Y

    2017-09-01

    To determine whether concurrently performed operations are associated with an increased risk for adverse events. Concurrent operations occur when a surgeon is simultaneously responsible for critical portions of 2 or more operations. How this practice affects patient outcomes is unknown. Using American College of Surgeons' National Surgical Quality Improvement Program data from 2014 to 2015, operations were considered concurrent if they overlapped by ≥60 minutes or in their entirety. Propensity-score-matched cohorts were constructed to compare death or serious morbidity (DSM), unplanned reoperation, and unplanned readmission in concurrent versus non-concurrent operations. Multilevel hierarchical regression was used to account for the clustered nature of the data while controlling for procedure and case mix. There were 1430 (32.3%) surgeons from 390 (77.7%) hospitals who performed 12,010 (2.3%) concurrent operations. Plastic surgery (n = 393 [13.7%]), otolaryngology (n = 470 [11.2%]), and neurosurgery (n = 2067 [8.4%]) were specialties with the highest proportion of concurrent operations. Spine procedures were the most frequent concurrent procedures overall (n = 2059/12,010 [17.1%]). Unadjusted rates of DSM (9.0% vs 7.1%; P < 0.001), reoperation (3.6% vs 2.7%; P < 0.001), and readmission (6.9% vs 5.1%; P < 0.001) were greater in the concurrent operation cohort versus the non-concurrent. After propensity score matching and risk-adjustment, there was no significant association of concurrence with DSM (odds ratio [OR] 1.08; 95% confidence interval [CI] 0.96-1.21), reoperation (OR 1.16; 95% CI 0.96-1.40), or readmission (OR 1.14; 95% CI 0.99-1.29). In these analyses, concurrent operations were not detected to increase the risk for adverse outcomes. These results do not lessen the need for further studies, continuous self-regulation and proactive disclosure to patients.

  16. A semianalytical MERIS green-red band algorithm for identifying phytoplankton bloom types in the East China Sea

    NASA Astrophysics Data System (ADS)

    Tao, Bangyi; Mao, Zhihua; Lei, Hui; Pan, Delu; Bai, Yan; Zhu, Qiankun; Zhang, Zhenglong

    2017-03-01

    A new bio-optical algorithm based on the green and red bands of the Medium Resolution Imaging Spectrometer (MERIS) is developed to differentiate the harmful algal blooms of Prorocentrum donghaiense Lu (P. donghaiense) from diatom blooms in the East China Sea (ECS). Specifically, a novel green-red index (GRI), actually an indicator for a(510) of bloom waters, is retrieved from a semianalytical bio-optical model based on the green and red bands of phytoplankton-absorption and backscattering spectra. In addition, a MERIS-based diatom index (DIMERIS) is derived by adjusting a Moderate Resolution Imaging Spectroradiometer (MODIS) diatom index algorithm to the MERIS bands. Finally, bloom types are effectively differentiated in the feature spaces of the green-red index and DIMERIS. Compared with three previous MERIS-based quasi-analytical algorithm (QAA) algorithms and three existing classification methods, the proposed GRI and classification method have the best discrimination performance when using the MERIS data. Further validations of the algorithm by using several MERIS image series and near-concurrent in situ observations indicate that our algorithm yields the best classification accuracy and thus can be used to reliably detect and classify P. donghaiense and diatom blooms in the ECS. This is the first time that the MERIS data have been used to identify bloom types in the ECS. Our algorithm can also be used for the successor of the MERIS, the Ocean and Land Color Instrument, which will aid the long-term observation of species succession in the ECS.

  17. UAVSAR: Airborne L-band Radar for Repeat Pass Interferometry

    NASA Technical Reports Server (NTRS)

    Moes, Timothy R.

    2009-01-01

    The primary objectives of the UAVSAR Project were to: a) develop a miniaturized polarimetric L-band synthetic aperture radar (SAR) for use on an unmanned aerial vehicle (UAV) or piloted vehicle. b) develop the associated processing algorithms for repeat-pass differential interferometric measurements using a single antenna. c) conduct measurements of geophysical interest, particularly changes of rapidly deforming surfaces such as volcanoes or earthquakes. Two complete systems were developed. Operational Science Missions began on February 18, 2009 ... concurrent development and testing of the radar system continues.

  18. AFL-1: A Programming Language for Massively Concurrent Computers.

    DTIC Science & Technology

    1986-11-01

    Bibliography (excerpt): Ackley, D.H., Hinton, G.E., Sejnowski, T.J., "A Learning Algorithm for Boltzmann Machines", Cognitive Science, 1985, 9, 147-169. Agre, P.E., "Routines", Memo 828, MIT AI Laboratory, May 1985. Ballard, D.H., Hayes, P.J., "Parallel Logical Inference", Conference of the Cognitive Science Society. Collins, "Experiments on Semantic Memory and Language Comprehension", in L.W. Gregg (Ed.), Cognition in Learning and Memory, New York, Wiley, 1972.

  19. Performance Evaluation of Parallel Algorithms and Architectures in Concurrent Multiprocessor Systems

    DTIC Science & Technology

    1988-09-01

    HEP and Other Parallel Processors, Report no. ANL-83-97, Argonne National Laboratory, Argonne, Ill., 1983. [19] Davidson, G.S., A Practical Paradigm for... IEEE Comp. Soc., 1986. [24] Peir, Jih-kwon, and D. Gajski, "CAMP: A Programming Aide For Multiprocessors," Proc. 1986 ICPP, IEEE Comp. Soc., pp. 475-482. [25] Pfister, G.F., and V.A. Norton, "Hot Spot Contention and Combining in Multistage Interconnection Networks," IEEE Trans. Comp., C-34, Oct.

  20. Concurrent malaria and typhoid fever in the tropics: the diagnostic challenges and public health implications.

    PubMed

    Uneke, C J

    2008-06-01

    Malaria and typhoid fever still remain diseases of major public health importance in the tropics. Individuals in areas endemic for both diseases are at substantial risk of contracting both, either concurrently or as an acute infection superimposed on a chronic one. The objective of this report was to systematically review scientific data from studies conducted in the tropics on concurrent malaria and typhoid fever within the last two decades (1987-2007), to highlight the diagnostic challenges and the public health implications. Using the MedLine Entrez-PubMed search, relevant publications were identified for the review via the key words malaria and typhoid fever, which yielded 287 entries as of January 2008. Most of the studies reviewed expressed concern that poor diagnosis continues to hinder effective control of concurrent malaria and typhoid fever in the tropics due to: non-specific clinical presentation of the diseases; high prevalence of asymptomatic infections; lack of resources and insufficient access to trained health care providers and facilities; and widespread practice of self-treatment for clinically suspected malaria or typhoid fever. There were considerably higher rates of concurrent malaria and typhoid fever by Widal test compared to the bacteriological culture technique. Although the culture technique remains the gold standard in typhoid fever diagnosis, the Widal test is still of significant diagnostic value provided judicious interpretation of the test is made against a background of pertinent information. Malaria could be controlled through interventions to minimize human-vector contact, while improved personal hygiene, targeted vaccination campaigns and intensive community health education could help to control typhoid fever in the tropics.

  1. Preliminary evidence of improved cognitive performance following vestibular rehabilitation in children with combined ADHD (cADHD) and concurrent vestibular impairment.

    PubMed

    Lotfi, Younes; Rezazadeh, Nima; Moossavi, Abdollah; Haghgoo, Hojjat Allah; Rostami, Reza; Bakhshi, Enayatollah; Badfar, Faride; Moghadam, Sedigheh Farokhi; Sadeghi-Firoozabadi, Vahid; Khodabandelou, Yousef

    2017-12-01

    Balance function has been reported to be worse in children with ADHD than in their normal peers. The present study hypothesized that an improvement in balance could result in better cognitive performance in children with ADHD and concurrent vestibular impairment. This study was designed to evaluate the effects of comprehensive vestibular rehabilitation therapy on the cognitive performance of children with combined ADHD and concurrent vestibular impairment. Subjects were 54 children with combined ADHD. Those with severe vestibular impairment (n=33) were randomly assigned to two groups that were matched for age. A rehabilitation program comprising overall balance and gait, postural stability, and eye movement exercises was assigned to the intervention group. Subjects in the control group received no intervention for the same time period. Intervention was administered twice weekly for 12 weeks. The choice reaction time (CRT) and spatial working memory (SWM) subtests of the Cambridge Neuropsychological Test Automated Battery (CANTAB) were completed pre- and post-intervention to determine the effects of vestibular rehabilitation on the cognitive performance of the subjects with ADHD and concurrent vestibular impairment. ANCOVA was used to compare the post-test results of the intervention and control groups. The percentage of correct trial scores for the CRT achieved by the intervention group post-test increased significantly compared to those of the control group (p=0.029). The CRT mean latency scores were significantly prolonged in the intervention group following intervention (p=0.007) compared to the control group. No significant change was found in the spatial functioning of the subjects with ADHD following 12 weeks of intervention (p>0.05). The study highlights the effect of vestibular rehabilitation on the cognitive performance of children with combined ADHD and concurrent vestibular disorder. The findings indicate that attention can be affected by early vestibular rehabilitation, which is a basic program for improving memory function in such children. Appropriate vestibular rehabilitation programs, based on the type of vestibular impairment, can improve cognitive ability to some extent in children with ADHD and concurrent vestibular impairment.

  2. Energy drinks and alcohol-related risk among young adults.

    PubMed

    Caviness, Celeste M; Anderson, Bradley J; Stein, Michael D

    2017-01-01

    Energy drink consumption, with or without concurrent alcohol use, is common among young adults. This study sought to clarify the risk for negative alcohol outcomes related to the timing of energy drink use. The authors interviewed a community sample of 481 young adults, aged 18-25, who drank alcohol in the last month. Past-30-day energy drink use was operationalized as no use, use without concurrent alcohol, and concurrent use of energy drinks with alcohol ("within a couple of hours"). Negative alcohol outcomes included past-30-day binge drinking, past-30-day alcohol use disorder, and drinking-related consequences. Just over half (50.5%) reported no use of energy drinks, 18.3% reported using energy drinks without concurrent alcohol use, and 31.2% reported concurrent use of energy drinks and alcohol. Relative to those who reported concurrent use of energy drinks with alcohol, and controlling for background characteristics and frequency of alcohol consumption, those who did not use energy drinks and those who used without concurrent alcohol had significantly lower binge drinking, fewer negative consequences, and lower rates of alcohol use disorder (P < .05 for all outcomes). There were no significant differences between the no-use and energy-drink-without-concurrent-alcohol groups on any alcohol-related measure (P > .10 for all outcomes). Concurrent energy drink and alcohol use is associated with increased risk for negative alcohol consequences in young adults. Clinicians providing care to young adults could consider asking patients about concurrent energy drink and alcohol use as a way to begin a conversation about risky alcohol consumption while addressing 2 substances commonly used by this population.

  3. Ventilation duct with concurrent acoustic feed-forward and decentralised structural feedback active control

    NASA Astrophysics Data System (ADS)

    Rohlfing, J.; Gardonio, P.

    2014-02-01

    This paper presents theoretical and experimental work on concurrent active noise and vibration control for a ventilation duct. The active noise control system is used to reduce the air-borne noise radiated via the duct outlet whereas the active vibration control system is used to both reduce the structure-borne noise radiated by the duct wall and to minimise the structural feed-through effect that reduces the effectiveness of the active noise control system. An elemental model based on structural mobility functions and acoustic impedance functions has been developed to investigate the principal effects and limitations of feed-forward active noise control and decentralised velocity feedback vibration control. The principal simulation results have been contrasted and validated with measurements taken on a laboratory duct set-up, equipped with an active noise control system and a decentralised vibration control system. Both simulations and experimental results show that the air-borne noise radiated from the duct outlet can be significantly attenuated using the feed-forward active noise control. In the presence of structure-borne noise the performance of the active noise control system is impaired by a structure-borne feed-through effect. Also the sound radiation from the duct wall is increased. In this case, if the active noise control is combined with a concurrent active vibration control system, the sound radiation by the duct outlet is further reduced and the sound radiation from the duct wall at low frequencies reduces noticeably.
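
    The feed-forward path in systems like this is typically an adaptive filter of the filtered-x LMS (FxLMS) family, which passes the reference signal through a model of the secondary path before the weight update. Below is a minimal single-channel sketch with toy acoustic paths; the paths, filter length, and step size are assumptions, not the paper's identified plant or controller.

        import numpy as np

        rng = np.random.default_rng(0)
        N, L, mu = 20000, 16, 0.005
        x = rng.normal(size=N)                    # reference (noise at duct inlet)
        P = np.array([0.0, 0.0, 0.8, 0.4, 0.2])   # primary path (assumed)
        S = np.array([0.5, 0.25])                 # secondary path (assumed known)

        w = np.zeros(L)                           # adaptive FIR controller
        xbuf = np.zeros(L)                        # reference history
        ybuf = np.zeros(len(S))                   # controller-output history
        fxbuf = np.zeros(L)                       # filtered-reference history
        err = np.zeros(N)

        for n in range(N):
            xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
            ybuf = np.roll(ybuf, 1); ybuf[0] = w @ xbuf        # anti-noise sample
            e = P @ xbuf[:len(P)] + S @ ybuf                   # residual at error mic
            fxbuf = np.roll(fxbuf, 1); fxbuf[0] = S @ xbuf[:len(S)]
            w -= mu * e * fxbuf                                # FxLMS weight update
            err[n] = e

        print(np.mean(err[:2000] ** 2), np.mean(err[-2000:] ** 2))  # error power drops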

  4. Source sparsity control of sound field reproduction using the elastic-net and the lasso minimizers.

    PubMed

    Gauthier, P-A; Lecomte, P; Berry, A

    2017-04-01

    Sound field reproduction is aimed at the reconstruction of a sound pressure field in an extended area using dense loudspeaker arrays. In some circumstances, sound field reproduction is targeted at the reproduction of a sound field captured using microphone arrays. Although methods and algorithms already exist to convert microphone array recordings to loudspeaker array signals, one remaining research question is how to control the spatial sparsity in the resulting loudspeaker array signals and what would be the resulting practical advantages. Sparsity is an interesting feature for spatial audio since it can drastically reduce the number of concurrently active reproduction sources and, therefore, increase the spatial contrast of the solution at the expense of a difference between the target and reproduced sound fields. In this paper, the application of the elastic-net cost function to sound field reproduction is compared to the lasso cost function. It is shown that the elastic-net can induce solution sparsity and overcomes limitations of the lasso: The elastic-net solves the non-uniqueness of the lasso solution, induces source clustering in the sparse solution, and provides a smoother solution within the activated source clusters.
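
    In discrete form, the reproduction problem is to choose loudspeaker gains q minimizing ||p_target - Gq||^2 plus an l1 penalty (lasso) or a mixed l1/l2 penalty (elastic-net). A toy real-valued comparison with scikit-learn follows, where the random matrix G merely stands in for measured transfer functions and the penalty weights are arbitrary.

        import numpy as np
        from sklearn.linear_model import ElasticNet, Lasso

        rng = np.random.default_rng(0)
        m, n = 40, 100                      # field points, loudspeakers
        G = rng.normal(size=(m, n))         # stand-in transfer matrix
        q_true = np.zeros(n)
        q_true[[5, 42, 77]] = [1.0, -0.8, 0.5]
        p = G @ q_true                      # target sound field samples

        lasso = Lasso(alpha=0.05).fit(G, p)
        enet = ElasticNet(alpha=0.05, l1_ratio=0.7).fit(G, p)
        for name, q in (("lasso", lasso.coef_), ("elastic-net", enet.coef_)):
            print(name, "active sources:", np.count_nonzero(np.abs(q) > 1e-6))

    The l2 term in the elastic-net makes the solution unique and tends to activate clusters of neighboring sources rather than isolated ones, which is the behavior the paper exploits.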

  5. Factors influencing warfarin control in Australia and Singapore.

    PubMed

    Bernaitis, Nijole; Ching, Chi Keong; Teo, Siew Chong; Chen, Liping; Badrick, Tony; Davey, Andrew K; Crilly, Julia; Anoopkumar-Dukie, Shailendra

    2017-09-01

    Warfarin is widely used for patients with non-valvular atrial fibrillation (NVAF). Variations in warfarin control, as measured by time in therapeutic range (TTR), have been reported across different regions and ethnicities, particularly between Western and Asian countries. However, there are limited data on comparative factors influencing warfarin control in Caucasian and Asian patients. Therefore, the aim of this study was to determine warfarin control, and the potential factors influencing it, in patients with NVAF in Australia and Singapore. Retrospective data were collected for patients receiving warfarin from January to June 2014 in Australia and Singapore. TTR was calculated for individuals, with the mean patient TTR used for analysis. Possible influences on TTR were analysed, including age, gender, concurrent co-morbidities, and concurrent medication. The mean TTR was significantly higher in Australia (82%) than in Singapore (58%). At both sites, chronic kidney disease significantly lowered TTR. Further factors influencing control were anaemia and age < 60 years in Australia, and vascular disease, a CHA2DS2-VASc score of 6, and concurrent platelet inhibitor therapy in Singapore. Warfarin control was significantly higher in Australia than in Singapore; however, chronic kidney disease reduced control at both sites. The different levels of control in these two countries, together with patient factors further reducing control, may impact the choice of anticoagulant in these countries, with better outcomes from warfarin in Australia compared to Singapore.

  6. Gender asymmetry in concurrent partnerships and HIV prevalence.

    PubMed

    Leung, Ka Yin; Powers, Kimberly A; Kretzschmar, Mirjam

    2017-06-01

    The structure of the sexual network of a population plays an essential role in the transmission of HIV. Concurrent partnerships, i.e. partnerships that overlap in time, are important in determining this network structure. Men and women may differ in their concurrent behavior, e.g. in the case of polygyny, where women are monogamous while men may have concurrent partnerships. Polygyny has been shown empirically to be negatively associated with HIV prevalence, but the epidemiological impacts of other forms of gender-asymmetric concurrency have not been formally explored. Here we investigate how gender asymmetry in concurrency, including polygyny, can affect the disease dynamics. We use a model for a dynamic network where individuals may have concurrent partners. The maximum possible number of simultaneous partnerships can differ for men and women, e.g. in the case of polygyny. We control for mean partnership duration, mean lifetime number of partners, mean degree, and sexually active lifespan. We assess the effects of gender asymmetry in concurrency on two epidemic phase quantities (R0 and the contribution of the acute HIV stage to R0) and on the endemic HIV prevalence. We find that gender asymmetry in concurrent partnerships is associated with lower levels of all three epidemiological quantities, especially in the polygynous case. This effect on disease transmission can be attributed to changes in network structure, where increasing asymmetry leads to decreasing network connectivity.

  7. Flame Spread and Extinction Over a Thick Solid Fuel in Low-Velocity Opposed and Concurrent Flows

    NASA Astrophysics Data System (ADS)

    Zhu, Feng; Lu, Zhanbin; Wang, Shuangfeng

    2016-05-01

    Flame spread and extinction phenomena over a thick PMMA in purely opposed and concurrent flows are investigated by conducting systematic experiments in a narrow channel apparatus. The present tests focus on the low-velocity flow regime and hence complement experimental data previously reported for the high- and moderate-velocity regimes. In the flow velocity range tested, the opposed flame is found to spread much faster than the concurrent flame at a given flow velocity. The measured spread rates for opposed and concurrent flames can be correlated by corresponding theoretical models of flame spread, indicating that existing models capture the main mechanisms controlling flame spread. In low-velocity gas flows, however, the experimental results are observed to deviate from theoretical predictions. This may be attributed to the neglect of radiative heat loss in the theoretical models, whereas radiation becomes important for low-intensity flame spread. Flammability limits, using oxygen concentration and flow velocity as coordinates, are presented for both opposed and concurrent flame spread configurations. It is found that concurrent spread has a wider flammable range than the opposed case. Beyond the flammability boundary of opposed spread, there is an additional flammable area for concurrent spread, where the spreading flame is sustainable in the concurrent mode only. The lowest oxygen concentration allowing concurrent flame spread in forced flow is estimated to be approximately 14% O2, substantially below that for opposed spread (18.5% O2).

  8. INS/GPS/LiDAR Integrated Navigation System for Urban and Indoor Environments Using Hybrid Scan Matching Algorithm

    PubMed Central

    Gao, Yanbin; Liu, Shifei; Atia, Mohamed M.; Noureldin, Aboelmagd

    2015-01-01

    This paper takes advantage of the complementary characteristics of Global Positioning System (GPS) and Light Detection and Ranging (LiDAR) to provide periodic corrections to Inertial Navigation System (INS) alternatively in different environmental conditions. In open sky, where GPS signals are available and LiDAR measurements are sparse, GPS is integrated with INS. Meanwhile, in confined outdoor environments and indoors, where GPS is unreliable or unavailable and LiDAR measurements are rich, LiDAR replaces GPS to integrate with INS. This paper also proposes an innovative hybrid scan matching algorithm that combines the feature-based scan matching method and Iterative Closest Point (ICP) based scan matching method. The algorithm can work and transit between two modes depending on the number of matched line features over two scans, thus achieving efficiency and robustness concurrently. Two integration schemes of INS and LiDAR with hybrid scan matching algorithm are implemented and compared. Real experiments are performed on an Unmanned Ground Vehicle (UGV) for both outdoor and indoor environments. Experimental results show that the multi-sensor integrated system can remain sub-meter navigation accuracy during the whole trajectory. PMID:26389906
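
    The switching logic described here is simple to express: prefer GPS when a healthy fix is available, otherwise use feature-based scan matching when enough line features match across scans, and fall back to ICP-based matching otherwise. A schematic sketch follows; the function name and threshold are assumptions, not the paper's implementation.

        def select_aiding(gps_fix_ok: bool, num_matched_lines: int,
                          min_lines: int = 4) -> str:
            """Pick which measurement corrects the INS at this epoch."""
            if gps_fix_ok:
                return "GPS"                 # open sky: GPS/INS integration
            if num_matched_lines >= min_lines:
                return "LIDAR_FEATURE"       # enough line features: feature matching
            return "LIDAR_ICP"               # otherwise: ICP-based scan matching

        print(select_aiding(False, 6))       # -> LIDAR_FEATURE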

  9. INS/GPS/LiDAR Integrated Navigation System for Urban and Indoor Environments Using Hybrid Scan Matching Algorithm.

    PubMed

    Gao, Yanbin; Liu, Shifei; Atia, Mohamed M; Noureldin, Aboelmagd

    2015-09-15

    This paper takes advantage of the complementary characteristics of Global Positioning System (GPS) and Light Detection and Ranging (LiDAR) to provide periodic corrections to Inertial Navigation System (INS) alternatively in different environmental conditions. In open sky, where GPS signals are available and LiDAR measurements are sparse, GPS is integrated with INS. Meanwhile, in confined outdoor environments and indoors, where GPS is unreliable or unavailable and LiDAR measurements are rich, LiDAR replaces GPS to integrate with INS. This paper also proposes an innovative hybrid scan matching algorithm that combines the feature-based scan matching method and Iterative Closest Point (ICP) based scan matching method. The algorithm can work and transit between two modes depending on the number of matched line features over two scans, thus achieving efficiency and robustness concurrently. Two integration schemes of INS and LiDAR with hybrid scan matching algorithm are implemented and compared. Real experiments are performed on an Unmanned Ground Vehicle (UGV) for both outdoor and indoor environments. Experimental results show that the multi-sensor integrated system can remain sub-meter navigation accuracy during the whole trajectory.

  10. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    NASA Astrophysics Data System (ADS)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from multiple-purpose modules. In the last part of the thesis a well known optimization method (the Broyden-Fletcher-Goldfarb-Shanno memoryless quasi-Newton method) is applied to simple classification problems and shown to be superior to the "error back-propagation" algorithm for numerical stability, automatic selection of parameters, and convergence properties.

  11. Tile-Image Merging and Delivering for Virtual Camera Services on Tiled-Display for Real-Time Remote Collaboration

    NASA Astrophysics Data System (ADS)

    Choe, Giseok; Nang, Jongho

    The tiled-display system has been used as a Computer Supported Cooperative Work (CSCW) environment, in which multiple local (and/or remote) participants cooperate using shared applications whose outputs are displayed on a large-scale, high-resolution tiled display driven by a cluster of PCs, one PC per display. In order to make the collaboration effective, each remote participant should be aware of all CSCW activities on the tiled-display system in real time. This paper presents a mechanism for capturing all activities on the tiled-display system and delivering them to remote participants in real time. In the proposed mechanism, the screen images of all PCs are periodically captured and delivered to the Merging Server, which maintains separate buffers to store the captured images from the PCs. The mechanism selects one tile image from each buffer, merges the images to make a screen shot of the whole tiled display, clips a Region of Interest (ROI), compresses it, and streams it to remote participants in real time. A technical challenge in the proposed mechanism is how to select a set of tile images, one from each buffer, for merging so that the tile images displayed at the same time on the tiled display are properly merged together. This paper presents three selection algorithms: a sequential selection algorithm, a capture-time-based algorithm, and an algorithm based on both capture time and visual consistency. It also proposes a mechanism for providing several virtual cameras on the tiled-display system to remote participants by concurrently clipping several different ROIs from the same merged tiled-display images and delivering them after compression with the video encoders requested by the remote participants. By interactively changing and resizing his/her own ROI, a remote participant can monitor the activities on the tiled display effectively. Experiments on a 3 × 2 tiled-display system show that the proposed merging algorithm can build a tiled-display image stream synchronously, and the ROI-based clipping and delivering mechanism can provide individual views of the tiled-display system to multiple remote participants in real time.
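
    The capture-time-based selection reduces to picking, from each PC's buffer, the frame whose timestamp lies closest to a common reference time before merging. A sketch with buffers of (timestamp, image) pairs follows; the staleness tolerance is an assumption for illustration.

        def select_tiles(buffers, t_ref, tol=0.040):
            """Pick one frame per tile with capture time closest to t_ref (seconds)."""
            selected = []
            for frames in buffers:                   # one buffer per tile PC
                ts, img = min(frames, key=lambda f: abs(f[0] - t_ref))
                if abs(ts - t_ref) > tol:            # too stale: skip this round
                    return None
                selected.append(img)
            return selected                          # merge in tile order

        bufs = [[(0.01, "A0"), (0.05, "A1")], [(0.02, "B0"), (0.06, "B1")]]
        print(select_tiles(bufs, t_ref=0.05))        # -> ['A1', 'B1']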

  12. Association between diabetes mellitus, hypothyroidism or hyperadrenocorticism, and atherosclerosis in dogs.

    PubMed

    Hess, Rebecka S; Kass, Philip H; Van Winkle, Thomas J

    2003-01-01

    The objective of this study was to determine whether dogs with atherosclerosis are more likely to have concurrent diabetes mellitus, hypothyroidism, or hyperadrenocorticism than dogs that do not have atherosclerosis. A retrospective mortality prevalence case-control study was performed. The study group included 30 dogs with histopathological evidence of atherosclerosis. The control group included 142 dogs with results of a complete postmortem examination, a final postmortem examination diagnosis of neoplasia, and no histopathological evidence of atherosclerosis. Control dogs were frequency matched for age and year in which the postmortem examination was performed. Proportionate changes in the prevalence of diabetes mellitus, hypothyroidism, and hyperadrenocorticism were calculated by exact prevalence odds ratios (POR), 95% confidence intervals (95% CI), and P values. Multiple logistic regression analysis was used to examine the combined effects of prevalence determinants while controlling for age and year of postmortem examination. Dogs with atherosclerosis were over 53 times more likely to have concurrent diabetes mellitus than dogs without atherosclerosis (POR = 53.6; 95% CI, 4.6-627.5; P = .002) and over 51 times more likely to have concurrent hypothyroidism than dogs without atherosclerosis (POR = 51.1; 95% CI, 14.5-180.1; P < .001). Dogs with atherosclerosis were not found to be more likely to have concurrent hyperadrenocorticism than dogs that did not have atherosclerosis (POR = 1.8; 95% CI, 0.2-17.6; P = .59). Diabetes mellitus and hypothyroidism, but not hyperadrenocorticism, are more prevalent in dogs with atherosclerosis compared to dogs without atherosclerosis on postmortem examination.

  13. [Impact of glutamine, eicosapentaenoic acid, branched-chain amino acid supplements on nutritional status and treatment compliance of esophageal cancer patients on concurrent chemoradiotherapy and gastric cancer patients on chemotherapy].

    PubMed

    Cong, Minghua; Song, Chenxin; Zou, Baohua; Deng, Yingbing; Li, Shuluan; Liu, Xuehui; Liu, Weiwei; Liu, Jinying; Yu, Lei; Xu, Binghe

    2015-03-17

    To explore the effects of glutamine, eicosapentaenoic acid (EPA) and branched-chain amino acid supplements in esophageal cancer patients on concurrent chemoradiotherapy and gastric cancer patients on chemotherapy. From April 2013 to April 2014, a total of 104 esophageal and gastric carcinoma patients on chemotherapy or concurrent chemoradiotherapy were recruited and randomly divided into experimental and control groups. Both groups received dietary counseling and routine nutritional support, while only the experimental group received supplements of glutamine (20 g/d), EPA (3.3 g/d) and branched-chain amino acids (8 g/d). Body composition, blood indicators, incidence of complications and completion rates of therapy were compared between the two groups. After treatment, fat-free mass and muscle weight increased significantly in the experimental group while they decreased in the control group (P < 0.05). Albumin, red blood cell count, white blood cell count and blood platelet count remained stable in the experimental group while they declined significantly in the control group. During treatment, compared to the control group, the incidence of infection-associated complications was lower (6% vs 19%, P < 0.05) and the completion rate of therapy was significantly higher in the experimental group (96% vs 83%, P < 0.05). Supplements of glutamine, EPA and branched-chain amino acids can help maintain nutritional status, decrease complications and improve compliance for esophageal cancer patients on concurrent chemoradiotherapy and gastric cancer patients on postoperative adjuvant chemotherapy.

  14. Radiation therapy in the management of head-and-neck cancer of unknown primary origin: how does the addition of concurrent chemotherapy affect the therapeutic ratio?

    PubMed

    Chen, Allen M; Farwell, D Gregory; Lau, Derick H; Li, Bao-Qing; Luu, Quang; Donald, Paul J

    2011-10-01

    To determine how the addition of cisplatin-based concurrent chemotherapy to radiation therapy influences outcomes among a cohort of patients treated for head-and-neck cancer of unknown primary origin. The medical records of 60 consecutive patients treated by radiation therapy for squamous cell carcinoma of the head and neck presenting as cervical lymph node metastasis of occult primary origin were reviewed. Thirty-two patients (53%) were treated by concurrent chemoradiation, and 28 patients (47%) were treated by radiation therapy alone. Forty-five patients (75%) received radiation therapy after surgical resection, and 15 patients (25%) received primary radiation therapy. Thirty-five patients (58%) were treated by intensity-modulated radiotherapy. The 2-year estimates of overall survival, local-regional control, and progression-free survival were 89%, 89%, and 79%, respectively, among patients treated by chemoradiation, compared to 90%, 92%, and 83%, respectively, among patients treated by radiation therapy alone (p > 0.05, for all). Exploratory analysis failed to identify any subset of patients who benefited from the addition of concurrent chemotherapy to radiation therapy. The use of concurrent chemotherapy was associated with a significantly increased incidence of Grade 3+ acute and late toxicity (p < 0.001, for both). Concurrent chemoradiation is associated with significant toxicity without a clear advantage to overall survival, local-regional control, and progression-free survival in the treatment of head-and-neck cancer of unknown primary origin. Although selection bias cannot be ignored, prospective data are needed to further address this question. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. Adaptive adjustment of the randomization ratio using historical control data

    PubMed Central

    Hobbs, Brian P.; Carlin, Bradley P.; Sargent, Daniel J.

    2013-01-01

    Background: Prospective trial design often occurs in the presence of “acceptable” [1] historical control data. Typically these data are only utilized for treatment comparison in a posteriori retrospective analysis to estimate population-averaged effects in a random-effects meta-analysis. Purpose: We propose and investigate an adaptive trial design in the context of an actual randomized controlled colorectal cancer trial. This trial, originally reported by Goldberg et al. [2], succeeded a similar trial reported by Saltz et al. [3], and used a control therapy identical to that tested (and found beneficial) in the Saltz trial. Methods: The proposed trial implements an adaptive randomization procedure for allocating patients aimed at balancing total information (concurrent and historical) among the study arms. This is accomplished by assigning more patients to receive the novel therapy in the absence of strong evidence for heterogeneity among the concurrent and historical controls. Allocation probabilities adapt as a function of the effective historical sample size (EHSS) characterizing relative informativeness, defined in the context of a piecewise exponential model for evaluating time to disease progression. Commensurate priors [4] are utilized to assess historical and concurrent heterogeneity at interim analyses and to borrow strength from the historical data in the final analysis. The adaptive trial’s frequentist properties are simulated using the actual patient-level historical control data from the Saltz trial and the actual enrollment dates for patients enrolled into the Goldberg trial. Results: Assessing concurrent and historical heterogeneity at interim analyses and balancing total information with the adaptive randomization procedure leads to trials that on average assign more new patients to the novel treatment when the historical controls are unbiased or slightly biased compared to the concurrent controls. Large magnitudes of bias lead to approximately equal allocation of patients among the treatment arms. Using the proposed commensurate prior model to borrow strength from the historical data, after balancing total information with the adaptive randomization procedure, provides admissible estimators of the novel treatment effect with desirable bias-variance trade-offs. Limitations: Adaptive randomization methods in general are sensitive to population drift and more suitable for trials that initiate with gradual enrollment. Balancing information among study arms in time-to-event analyses is difficult in the presence of informative right-censoring. Conclusions: The proposed design could prove important in trials that follow recent evaluations of a control therapy. Efficient use of the historical controls is especially important in contexts where reliance on pre-existing information is unavoidable because the control therapy is exceptionally hazardous, expensive, or the disease is rare. PMID:23690095

  16. Adaptive adjustment of the randomization ratio using historical control data.

    PubMed

    Hobbs, Brian P; Carlin, Bradley P; Sargent, Daniel J

    2013-01-01

    Prospective trial design often occurs in the presence of 'acceptable' historical control data. Typically, these data are only utilized for treatment comparison in a posteriori retrospective analysis to estimate population-averaged effects in a random-effects meta-analysis. We propose and investigate an adaptive trial design in the context of an actual randomized controlled colorectal cancer trial. This trial, originally reported by Goldberg et al., succeeded a similar trial reported by Saltz et al., and used a control therapy identical to that tested (and found beneficial) in the Saltz trial. The proposed trial implements an adaptive randomization procedure for allocating patients aimed at balancing total information (concurrent and historical) among the study arms. This is accomplished by assigning more patients to receive the novel therapy in the absence of strong evidence for heterogeneity among the concurrent and historical controls. Allocation probabilities adapt as a function of the effective historical sample size (EHSS), characterizing relative informativeness defined in the context of a piecewise exponential model for evaluating time to disease progression. Commensurate priors are utilized to assess historical and concurrent heterogeneity at interim analyses and to borrow strength from the historical data in the final analysis. The adaptive trial's frequentist properties are simulated using the actual patient-level historical control data from the Saltz trial and the actual enrollment dates for patients enrolled into the Goldberg trial. Assessing concurrent and historical heterogeneity at interim analyses and balancing total information with the adaptive randomization procedure lead to trials that on average assign more new patients to the novel treatment when the historical controls are unbiased or slightly biased compared to the concurrent controls. Large magnitudes of bias lead to approximately equal allocation of patients among the treatment arms. Using the proposed commensurate prior model to borrow strength from the historical data, after balancing total information with the adaptive randomization procedure, provides admissible estimators of the novel treatment effect with desirable bias-variance trade-offs. Adaptive randomization methods in general are sensitive to population drift and more suitable for trials that initiate with gradual enrollment. Balancing information among study arms in time-to-event analyses is difficult in the presence of informative right-censoring. The proposed design could prove important in trials that follow recent evaluations of a control therapy. Efficient use of the historical controls is especially important in contexts where reliance on preexisting information is unavoidable because the control therapy is exceptionally hazardous, expensive, or the disease is rare.
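
    Stripped of the Bayesian machinery, the balancing rule the two records above describe can be sketched as follows; the clipping bounds and the treatment of EHSS as a simple additive count are assumptions for illustration, not the authors' procedure:

        def allocation_prob_novel(n_novel, n_control, ehss, p_min=0.1, p_max=0.9):
            """Probability that the next patient is randomized to the novel arm.

            n_novel   : concurrent patients already on the novel therapy
            n_control : concurrent patients already on the control therapy
            ehss      : effective historical sample size borrowed for the control arm
            """
            info_novel = n_novel
            info_control = n_control + ehss  # historical data count toward control
            total = info_novel + info_control
            if total == 0:
                return 0.5
            # Send the next patient preferentially to the under-informed arm:
            # the better-informed the control arm, the likelier a novel assignment.
            p = info_control / total
            return min(max(p, p_min), p_max)  # keep some randomness on both arms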

  17. Choice Behavior of Nonpathological Women Playing Concurrently Available Slot Machines: Effect of Changes in Payback Percentages

    ERIC Educational Resources Information Center

    Weatherly, Jeffrey N.; Thompson, Bradley J.; Hodny, Marisa; Meier, Ellen

    2009-01-01

    In a simulated casino environment, 6 nonpathological women played concurrently available commercial slot machines programmed to pay out at different rates. Participants did not always demonstrate preferences for the higher paying machine. The data suggest that factors other than programmed or obtained rate of reinforcement may control gambling…

  18. Multiple Homicide as a Function of Prisonization and Concurrent Instrumental Violence: Testing an Interactive Model--A Research Note

    ERIC Educational Resources Information Center

    DeLisi, Matt; Walters, Glenn D.

    2011-01-01

    Prisonization (as measured by number of prior incarcerations) and concurrent instrumental offending (as measured by contemporaneous kidnapping, rape, robbery, and burglary offenses) were found to interact in 160 multiple-homicide offenders and 494 single-homicide offenders. Controlling for age, gender, race, criminal history, prior incarcerations,…

  19. Concurrent treatment with a macrocyclic lactone and benzimidazole provides season long performance advantages in grazing cattle harboring macrocyclic lactone resistant nematodes.

    PubMed

    Edmonds, M D; Vatta, A F; Marchiondo, A A; Vanimisetti, H B; Edmonds, J D

    2018-03-15

    In 2013, a 118-day study was initiated to investigate the efficacy of concurrent treatment at pasture turnout with an injectable macrocyclic lactone with activity up to 28 days and an oral benzimidazole, referred to as "conventional" anthelmintics, when compared to treatment with a conventional macrocyclic lactone alone or an injectable macrocyclic lactone with extended activity of 100 days or longer. A group of 210 steers were obtained from a ranch in California and transported to Idaho, USA. A total of 176 steers with the highest fecal egg counts were blocked by pre-treatment body weights and pasture location. A total of 44 pasture paddocks were assigned with 4 steers per paddock, with 12 paddocks per therapeutic treatment group and 8 paddocks for controls. The four treatments were injectable doramectin (Dectomax®, Zoetis Inc., 0.2 mg/kg BW, SC), injectable doramectin concurrently with oral albendazole (Valbazen®, Zoetis Inc., 10 mg/kg BW, PO), extended-release injectable eprinomectin (LongRange™, Merial Limited, 1 mg/kg BW, SC), or saline. Cattle were individually weighed and sampled for fecal egg count on Days 0, 31/32, 61, 88, and 117/118, with an additional fecal sample on Day 14. At conclusion, one steer per paddock was euthanized for nematode recovery. The results from the first 32 days found evidence of macrocyclic lactone resistance against injectable doramectin and extended-release eprinomectin. During this period the concurrent therapy provided nearly 100% efficacy based on fecal egg count reduction and a 19.98% improvement in total weight gain compared to controls (P = 0.039). At the conclusion of the 118-day study, and past the approved efficacy period for the conventional anthelmintics, the concurrent therapy with conventional anthelmintics provided a 22.98% improvement in total weight gain compared to controls (P = 0.004). The 118-day improvement in weight gain for the extended-release eprinomectin group (29.06% compared to control) was not statistically different from the concurrent therapy with conventional anthelmintics. The results indicate that concurrent treatment with a conventional macrocyclic lactone and benzimidazole may provide production benefits early in the grazing period that continue throughout the entire period for cattle harboring macrocyclic lactone resistant nematodes. By using two different anthelmintic classes together, macrocyclic lactone resistant parasites were effectively controlled early in the period. Furthermore, the use of an effective conventional anthelmintic treatment regimen without an extended period of drug release may help to promote refugia and decrease further selection for anthelmintic resistant parasites. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Multidisciplinary Concurrent Design Optimization via the Internet

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Kelkar, Atul G.; Koganti, Gopichand

    2001-01-01

    A methodology is presented which uses commercial design and analysis software and the Internet to perform concurrent multidisciplinary optimization. The methodology provides a means to develop multidisciplinary designs without requiring that all software be accessible from the same local network. The procedures are amenable to design and development teams whose members, expertise and respective software are not geographically located together. This methodology facilitates multidisciplinary teams working concurrently on a design problem of common interest. Partition of design software to different machines allows each constituent software to be used on the machine that provides the most economy and efficiency. The methodology is demonstrated on the concurrent design of a spacecraft structure and attitude control system. Results are compared to those derived from performing the design with an autonomous FORTRAN program.

  1. Exascale computing and what it means for shock physics

    NASA Astrophysics Data System (ADS)

    Germann, Timothy

    2015-06-01

    The U.S. Department of Energy is preparing to launch an Exascale Computing Initiative, to address the myriad challenges required to deploy and effectively utilize an exascale-class supercomputer (i.e., one capable of performing 10¹⁸ operations per second) in the 2023 timeframe. Since physical (power dissipation) requirements limit clock rates to at most a few GHz, this will necessitate the coordination of on the order of a billion concurrent operations, requiring sophisticated system and application software, and underlying mathematical algorithms, that may differ radically from traditional approaches. Even at the smaller workstation or cluster level of computation, the massive concurrency and heterogeneity within each processor will impact computational scientists. Through the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx), we have initiated an early and deep collaboration between domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects, in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. In my talk, I will discuss these challenges, and what it will mean for exascale-era electronic structure, molecular dynamics, and engineering-scale simulations of shock-compressed condensed matter. In particular, we anticipate that the emerging hierarchical, heterogeneous architectures can be exploited to achieve higher physical fidelity simulations using adaptive physics refinement. This work is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research.
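
    The billion-way concurrency figure is just the ratio of the exascale target to the per-unit clock limit; a worked version of the abstract's own arithmetic (the per-unit rate is an assumed round number):

        \frac{10^{18}\ \text{ops/s (exascale target)}}{\sim 10^{9}\ \text{ops/s per execution unit (few-GHz clock)}} \approx 10^{9}\ \text{concurrent operations}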

  2. An Incentive-based Online Optimization Framework for Distribution Grids

    DOE PAGES

    Zhou, Xinyang; Dall'Anese, Emiliano; Chen, Lijun; ...

    2017-10-09

    This article formulates a time-varying social-welfare maximization problem for distribution grids with distributed energy resources (DERs) and develops online distributed algorithms to identify (and track) its solutions. In the considered setting, network operator and DER-owners pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. The proposed algorithm affords an online implementation to enable tracking of the solutions in the presence of time-varying operational conditions and changing optimization objectives. It involves a strategy where the network operator collects voltage measurements throughout the feeder to build incentive signals for the DER-owners in real time; DERs then adjust the generated/consumed powers in order to avoid the violation of the voltage constraints while maximizing given objectives. Stability of the proposed schemes is analytically established and numerically corroborated.

  3. An Incentive-based Online Optimization Framework for Distribution Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Xinyang; Dall'Anese, Emiliano; Chen, Lijun

    This article formulates a time-varying social-welfare maximization problem for distribution grids with distributed energy resources (DERs) and develops online distributed algorithms to identify (and track) its solutions. In the considered setting, network operator and DER-owners pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. The proposed algorithm affords an online implementation to enable tracking of the solutions in the presence of time-varying operational conditions and changing optimization objectives. It involves a strategy where the network operator collects voltage measurements throughout the feeder to build incentive signals for the DER-owners in real time; DERs then adjust the generated/consumed powers in order to avoid the violation of the voltage constraints while maximizing given objectives. Stability of the proposed schemes is analytically established and numerically corroborated.
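
    One common way to realize such a measurement-driven incentive loop, sketched here purely for illustration, is a projected dual-ascent update in which out-of-bounds voltages raise incentive signals and each DER re-optimizes against them. The quadratic DER cost, linear best response, and step size below are assumptions, not the paper's algorithm:

        import numpy as np

        def online_incentive_step(v_meas, lam_lo, lam_up, v_min, v_max, alpha=0.1):
            """One dual-ascent update of the incentive signals from measured voltages."""
            lam_lo = np.maximum(0.0, lam_lo + alpha * (v_min - v_meas))  # undervoltage price
            lam_up = np.maximum(0.0, lam_up + alpha * (v_meas - v_max))  # overvoltage price
            return lam_lo, lam_up

        def der_response(incentive, cost_coeff, p_cap):
            """Each DER trades its own discomfort cost against the incentive payment.

            With a quadratic cost 0.5 * c * p**2, the unconstrained best response
            is linear in the incentive; clip it to the device range [-p_cap, p_cap].
            """
            p = -incentive / cost_coeff
            return np.clip(p, -p_cap, p_cap)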

  4. A new parallelization scheme for adaptive mesh refinement

    DOE PAGES

    Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.; ...

    2016-05-06

    Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e., wall time × processor count) of subcycling in time, but with the runtime performance (i.e., smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has less parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high performance clusters. For the class of problem considered in this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.

  5. A new parallelization scheme for adaptive mesh refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.

    Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e., wall time × processor count) of subcycling in time, but with the runtime performance (i.e., smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has less parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high performance clusters. For the class of problem considered in this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.

  6. An Integrated Approach to Locality-Conscious Processor Allocation and Scheduling of Mixed-Parallel Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.

    2009-08-01

    Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application-tasks with dependences. These applications exhibit both task- and data-parallelism, and combining these two (also called mixed parallelism) has been shown to be an effective model for their execution. In this paper, we present an algorithm to compute the appropriate mix of task- and data-parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality-conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications as well as synthetic graphs shows that our algorithm consistently generates schedules with lower makespan as compared to CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules that have lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.
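
    As a toy version of the allocation half of the problem, suppose each task's runtime follows the simple scalability model t(p) = s + w/p (a serial part plus perfectly divisible work); processors can then be granted greedily to whichever task currently dominates the makespan. This heuristic is illustrative only; the paper's integrated algorithm also accounts for dependences, communication volumes, and locality:

        def allocate_processors(tasks, total_procs):
            """Greedy processor allocation for independent tasks under t(p) = s + w/p.

            tasks: list of (serial_time, parallel_work) pairs.
            Returns the processor count granted to each task.
            """
            alloc = [1] * len(tasks)  # every task needs at least one processor
            runtime = lambda i: tasks[i][0] + tasks[i][1] / alloc[i]
            for _ in range(total_procs - len(tasks)):
                # Give the next processor to the task on the critical path.
                worst = max(range(len(tasks)), key=runtime)
                alloc[worst] += 1
            return alloc

        # Example: a scalable task and a mostly serial one competing for 8 processors.
        print(allocate_processors([(1.0, 40.0), (5.0, 4.0)], 8))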

  7. ARTIST: A fully automated artifact rejection algorithm for single-pulse TMS-EEG data.

    PubMed

    Wu, Wei; Keller, Corey J; Rogasch, Nigel C; Longwell, Parker; Shpigel, Emmanuel; Rolle, Camarin E; Etkin, Amit

    2018-04-01

    Concurrent single-pulse TMS-EEG (spTMS-EEG) is an emerging noninvasive tool for probing causal brain dynamics in humans. However, in addition to the common artifacts in standard EEG data, spTMS-EEG data suffer from enormous stimulation-induced artifacts, posing significant challenges to the extraction of neural information. Typically, neural signals are analyzed after a manual time-intensive and often subjective process of artifact rejection. Here we describe a fully automated algorithm for spTMS-EEG artifact rejection. A key step of this algorithm is to decompose the spTMS-EEG data into statistically independent components (ICs), and then train a pattern classifier to automatically identify artifact components based on knowledge of the spatio-temporal profile of both neural and artefactual activities. The autocleaned and hand-cleaned data yield qualitatively similar group evoked potential waveforms. The algorithm achieves a 95% IC classification accuracy referenced to expert artifact rejection performance, and does so across a large number of spTMS-EEG data sets (n = 90 stimulation sites), retains high accuracy across stimulation sites/subjects/populations/montages, and outperforms current automated algorithms. Moreover, the algorithm was superior to the artifact rejection performance of relatively novice individuals, who would be the likely users of spTMS-EEG as the technique becomes more broadly disseminated. In summary, our algorithm provides an automated, fast, objective, and accurate method for cleaning spTMS-EEG data, which can increase the utility of TMS-EEG in both clinical and basic neuroscience settings. © 2018 Wiley Periodicals, Inc.
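
    The decompose-classify-reconstruct pattern described above can be sketched with stock tooling; here FastICA stands in for the paper's ICA stage, and two toy features (kurtosis and peak amplitude) stand in for its richer spatio-temporal profile features. A pre-trained sklearn-style classifier clf is assumed:

        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.decomposition import FastICA

        def clean_epochs(data, clf):
            """Remove artifact ICs from spTMS-EEG-like data (channels x samples).

            clf is a classifier pre-trained on labeled components (1 = artifact).
            """
            ica = FastICA(n_components=data.shape[0], random_state=0)
            sources = ica.fit_transform(data.T).T          # components x samples
            feats = np.column_stack([kurtosis(sources, axis=1),
                                     np.abs(sources).max(axis=1)])
            is_artifact = clf.predict(feats).astype(bool)
            sources[is_artifact] = 0.0                     # zero out artifact ICs
            return ica.inverse_transform(sources.T).T      # back to channel space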

  8. Algorithmic formulation of control problems in manipulation

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.

    1975-01-01

    The basic characteristics of manipulator control algorithms are discussed. The state of the art in the development of manipulator control algorithms is briefly reviewed. Different end-point control techniques are described together with control algorithms which operate on external sensor (imaging, proximity, tactile, and torque/force) signals in realtime. Manipulator control development at JPL is briefly described and illustrated with several figures. The JPL work pays special attention to the front or operator input end of the control algorithms.

  9. ENGAGE: Guided Activity-Based Gaming in Neurorehabilitation after Stroke: A Pilot Study

    PubMed Central

    Reinthal, Ann; Szirony, Kathy; Clark, Cindy; Swiers, Jeffrey; Kellicker, Michelle; Linder, Susan

    2012-01-01

    Introduction. Stroke is a leading cause of disability in healthy adults. The purpose of this pilot study was to assess the feasibility and outcomes of a novel video gaming repetitive practice paradigm, (ENGAGE) enhanced neurorehabilitation: guided activity-based gaming exercise. Methods. Sixteen individuals at least three months after stroke served as participants. All participants received concurrent outpatient therapy or took part in a stroke exercise class and completed at least 500 minutes of gaming. Primary baseline and posttest outcome measures included the Wolf motor function test (WMFT) and the Fugl-Meyer assessment (FMA). ENGAGE uses a game selection algorithm providing focused, graded activity-based repetitive practice that is highly individualized and directed. The Wilcoxon signed ranks test was used to determine statistical significance. Results. There were improvements in the WMFT (P = 0.003) and the FMA (P = 0.002) that exceeded established values of minimal clinically important difference. Conclusions. ENGAGE was feasible and an effective adjunct to concurrent therapy after stroke. PMID:22593835

  10. Generalized Symbolic Execution for Model Checking and Testing

    NASA Technical Reports Server (NTRS)

    Khurshid, Sarfraz; Pasareanu, Corina; Visser, Willem; Kofmeyer, David (Technical Monitor)

    2003-01-01

    Modern software systems, which often are concurrent and manipulate complex data structures must be extremely reliable. We present a novel framework based on symbolic execution, for automated checking of such systems. We provide a two-fold generalization of traditional symbolic execution based approaches: one, we define a program instrumentation, which enables standard model checkers to perform symbolic execution; two, we give a novel symbolic execution algorithm that handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity of lists), data (e.g., integers and strings) and concurrency. The program instrumentation enables a model checker to automatically explore program heap configurations (using a systematic treatment of aliasing) and manipulate logical formulae on program data values (using a decision procedure). We illustrate two applications of our framework: checking correctness of multi-threaded programs that take inputs from unbounded domains with complex structure and generation of non-isomorphic test inputs that satisfy a testing criterion. Our implementation for Java uses the Java PathFinder model checker.

  11. Toward ubiquitous healthcare services with a novel efficient cloud platform.

    PubMed

    He, Chenguang; Fan, Xiaomao; Li, Ye

    2013-01-01

    Ubiquitous healthcare services are becoming more and more popular, especially under the urgent demand of the global aging issue. Cloud computing is pervasive and on-demand service-oriented by nature, which fits the characteristics of healthcare services very well. However, the ability to deal with multimodal, heterogeneous, and nonstationary physiological signals to provide persistent personalized services, while sustaining highly concurrent online analysis for the public, is a challenge for a general-purpose cloud. In this paper, we propose a private cloud platform architecture which comprises six layers according to these specific requirements. The platform utilizes a message queue as a cloud engine, and each layer thereby achieves relative independence through this loosely coupled means of communication with a publish/subscribe mechanism. Furthermore, a plug-in algorithm framework is also presented, and massive semistructured or unstructured medical data are accessed adaptively by this cloud architecture. As the testing results show, the proposed cloud platform, with its robust, stable, and efficient features, can satisfy highly concurrent requests from ubiquitous healthcare services.
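
    A minimal in-process caricature of the message-queue engine with a publish/subscribe mechanism might look as follows; a production deployment would use a real broker, and the topic name here is invented:

        import queue
        import threading

        class MessageBroker:
            """Minimal publish/subscribe engine decoupling platform layers."""

            def __init__(self):
                self._topics = {}          # topic -> list of subscriber queues
                self._lock = threading.Lock()

            def subscribe(self, topic):
                q = queue.Queue()
                with self._lock:
                    self._topics.setdefault(topic, []).append(q)
                return q                   # the subscribing layer consumes from q

            def publish(self, topic, message):
                with self._lock:
                    subscribers = list(self._topics.get(topic, []))
                for q in subscribers:
                    q.put(message)         # each subscriber gets its own copy

        # Example: an acquisition layer publishes ECG frames; an analysis layer
        # consumes them without either layer knowing about the other.
        broker = MessageBroker()
        inbox = broker.subscribe("ecg/raw")
        broker.publish("ecg/raw", {"patient": "anon-1", "samples": [0.1, 0.4]})
        print(inbox.get())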

  12. A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems

    DOE PAGES

    Molzahn, Daniel K.; Dorfler, Florian K.; Sandberg, Henrik; ...

    2017-07-25

    Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. Here, this paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.

  13. A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molzahn, Daniel K.; Dorfler, Florian K.; Sandberg, Henrik

    Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. Here, this paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.

  14. Effectiveness of concurrent procedures during high tibial osteotomy for medial compartment osteoarthritis: a systematic review and meta-analysis.

    PubMed

    Lee, O-Sung; Ahn, Soyeon; Ahn, Jin Hwan; Teo, Seow Hui; Lee, Yong Seuk

    2018-02-01

    The purpose of this systematic review and meta-analysis was to evaluate the efficacy of concurrent cartilage procedures during high tibial osteotomy (HTO) for medial compartment osteoarthritis (OA) by comparing the outcomes of studies that directly compared the use of HTO plus concurrent cartilage procedures versus HTO alone. Results that could be compared across more than two articles are presented as forest plots. A 95% confidence interval was calculated for each effect size, and we calculated the I² statistic, which represents the percentage of total variation attributable to heterogeneity among studies. The random-effects model was used to calculate the effect size. Seven articles were included in the final analysis. Case groups were composed of HTO without concurrent procedures and control groups were composed of HTO with concurrent procedures such as marrow stimulation, mesenchymal stem cell transplantation, and injection. The case group showed a higher Hospital for Special Surgery score, with a mean difference of 4.10 (I² = 80.8%; 95% confidence interval [CI], −9.02 to 4.82). The mean difference of the mechanical femorotibial angle in five studies was 0.08° (I² = 0%; 95% CI, −0.26 to 0.43). However, improved arthroscopic, histologic, and MRI results were reported in the control group. Our analysis supports that concurrent procedures during HTO for medial compartment OA have little beneficial effect regarding clinical and radiological outcomes. However, they might have some beneficial effects in terms of arthroscopic, histologic, and MRI findings, even though the quality of healed cartilage is not as good as that of original cartilage. Therefore, until now, concurrent procedures for medial compartment OA have been considered optional. Nevertheless, no conclusions can be drawn for younger patients with focal cartilage defects and concomitant varus deformity. This question needs to be addressed separately.

  15. Capnography and chest wall impedance algorithms for ventilation detection during cardiopulmonary resuscitation

    PubMed Central

    Edelson, Dana P.; Eilevstjønn, Joar; Weidman, Elizabeth K.; Retzer, Elizabeth; Vanden Hoek, Terry L.; Abella, Benjamin S.

    2009-01-01

    Objective Hyperventilation is both common and detrimental during cardiopulmonary resuscitation (CPR). Chest wall impedance algorithms have been developed to detect ventilations during CPR. However, impedance signals are challenged by noise artifact from multiple sources, including chest compressions. Capnography has been proposed as an alternate method to measure ventilations. We sought to assess and compare the adequacy of these two approaches. Methods Continuous chest wall impedance and capnography were recorded during consecutive in-hospital cardiac arrests. Algorithms utilizing each of these data sources were compared to a manually determined “gold standard” reference ventilation rate. In addition, a combination algorithm, which utilized the highest of the impedance or capnography values in any given minute, was similarly evaluated. Results Data were collected from 37 cardiac arrests, yielding 438 min of data with continuous chest compressions and concurrent recording of impedance and capnography. The manually calculated mean ventilation rate was 13.3±4.3/min. In comparison, the defibrillator’s impedance-based algorithm yielded an average rate of 11.3±4.4/min (p=0.0001) while the capnography rate was 11.7±3.7/min (p=0.0009). There was no significant difference in sensitivity and positive predictive value between the two methods. The combination algorithm rate was 12.4±3.5/min (p=0.02), which yielded the highest fraction of minutes with respiratory rates within 2/min of the reference. The impedance signal was uninterpretable 19.5% of the time, compared with 9.7% for capnography. However, the signals were only simultaneously non-interpretable 0.8% of the time. Conclusions Both the impedance and capnography-based algorithms underestimated the ventilation rate. Reliable ventilation rate determination may require a novel combination of multiple algorithms during resuscitation. PMID:20036047
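
    The combination algorithm evaluated above amounts to taking, minute by minute, the higher of the two channel estimates, on the theory that each channel tends to fail low (missed breaths) rather than high. A direct transcription with hypothetical per-minute rates:

        def combined_rate(impedance_rates, capnography_rates):
            """Per-minute ventilation rate: take the higher of the two channels.

            Either value may be None when that signal was uninterpretable
            for the minute in question.
            """
            combined = []
            for imp, cap in zip(impedance_rates, capnography_rates):
                candidates = [r for r in (imp, cap) if r is not None]
                combined.append(max(candidates) if candidates else None)
            return combined

        # Example: minute 3 has no usable impedance signal, minute 4 no capnography.
        print(combined_rate([12, 10, None, 9], [11, 13, 12, None]))
        # -> [12, 13, 12, 9]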

  16. Load Balancing Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearce, Olga Tkachyshyn

    2014-12-01

    The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
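
    The when-to-balance decision can be framed as a cost model: rebalance only when the projected synchronization waste over the remaining steps exceeds the cost of rebalancing. The sketch below is one such model under the stated SPMD assumption, not the dissertation's specific method:

        def should_rebalance(step_times, steps_left, rebalance_cost):
            """Decide whether redistributing work now pays for itself.

            step_times     : per-processor times for the last step (seconds)
            steps_left     : remaining steps expected at roughly this load
            rebalance_cost : measured or estimated cost of one rebalance (seconds)
            """
            t_max = max(step_times)
            t_avg = sum(step_times) / len(step_times)
            # In SPMD codes every processor waits for the slowest one, so the
            # per-step penalty of the current imbalance is t_max - t_avg.
            projected_waste = (t_max - t_avg) * steps_left
            return projected_waste > rebalance_cost

        # Example: 0.2 s/step of waiting over 500 steps dwarfs a 10 s rebalance.
        print(should_rebalance([1.0, 1.1, 1.4], 500, 10.0))  # -> True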

  17. Rapid Acquisition of Preference in Concurrent Chains: Effects of "d"-Amphetamine on Sensitivity to Reinforcement Delay

    ERIC Educational Resources Information Center

    Ta, Wei-Min; Pitts, Raymond C.; Hughes, Christine E.; McLean, Anthony P.; Grace, Randolph C.

    2008-01-01

    The purpose of this study was to examine effects of "d"-amphetamine on choice controlled by reinforcement delay. Eight pigeons responded under a concurrent-chains procedure in which one terminal-link schedule was always fixed- interval 8 s, and the other terminal-link schedule changed from session to session between fixed-interval 4 s and…

  18. Preference as a Function of Active Interresponse Times: A Test of the Active Time Model

    ERIC Educational Resources Information Center

    Misak, Paul; Cleaveland, J. Mark

    2011-01-01

    In this article, we describe a test of the active time model for concurrent variable interval (VI) choice. The active time model (ATM) suggests that the time since the most recent response is one of the variables controlling choice in concurrent VI VI schedules of reinforcement. In our experiment, pigeons were trained in a multiple concurrent…

  19. Multicenter evaluation of signalment and comorbid conditions associated with aortic thrombotic disease in dogs.

    PubMed

    Winter, Randolph L; Budke, Christine M

    2017-08-15

    OBJECTIVE To assess signalment and concurrent disease processes in dogs with aortic thrombotic disease (ATD). DESIGN Retrospective case-control study. ANIMALS Dogs examined at North American veterinary teaching hospitals from 1985 through 2011 with medical records submitted to the Veterinary Medical Database. PROCEDURES Medical records were reviewed to identify dogs with a diagnosis of ATD (case dogs). Five control dogs without a diagnosis of ATD were then identified for every case dog. Data were collected regarding dog age, sex, breed, body weight, and concurrent disease processes. RESULTS ATD was diagnosed in 291 of the 984,973 (0.03%) dogs included in the database. The odds of a dog having ATD did not differ significantly by sex, age, or body weight. Compared with mixed-breed dogs, Shetland Sheepdogs had a significantly higher odds of ATD (OR, 2.59). Protein-losing nephropathy (64/291 [22%]) was the most commonly recorded concurrent disease in dogs with ATD. CONCLUSIONS AND CLINICAL RELEVANCE Dogs with ATD did not differ significantly from dogs without ATD in most signalment variables. Contrary to previous reports, cardiac disease was not a common concurrent diagnosis in dogs with ATD.

  20. Interaction of attentional and motor control processes in handwriting.

    PubMed

    Brown, T L; Donnenwirth, E E

    1990-01-01

    The interaction between attentional capacity, motor control processes, and strategic adaptations to changing task demands was investigated in handwriting, a continuous (rather than discrete) skilled performance. Twenty-four subjects completed 12 two-minute handwriting samples under instructions stressing speeded handwriting, normal handwriting, or highly legible handwriting. For half of the writing samples, a concurrent auditory monitoring task was imposed. Subjects copied either familiar (English) or unfamiliar (Latin) passages. Writing speed, legibility ratings, errors in writing and in the secondary auditory task, and a derived measure of the average number of characters held in short-term memory during each sample ("planning unit size") were the dependent variables. The results indicated that the ability to adapt to instructions stressing speed or legibility was substantially constrained by the concurrent listening task and by text familiarity. Interactions between instructions, task concurrence, and text familiarity in the legibility ratings, combined with further analyses of planning unit size, indicated that information throughput from temporary storage mechanisms to motor processes mediated the loss of flexibility effect. Overall, the results suggest that strategic adaptations of a skilled performance to changing task circumstances are sensitive to concurrent attentional demands and that departures from "normal" or "modal" performance require attention.

  1. Concurrent and discriminant validity of the Star Excursion Balance Test for military personnel with lateral ankle sprain.

    PubMed

    Bastien, Maude; Moffet, Hélène; Bouyer, Laurent; Perron, Marc; Hébert, Luc J; Leblond, Jean

    2014-02-01

    The Star Excursion Balance Test (SEBT) has frequently been used to measure motor control and residual functional deficits at different stages of recovery from lateral ankle sprain (LAS) in various populations. However, the validity of the measure used to characterize performance--the maximal reach distance (MRD) measured by visual estimation--is still unknown. To evaluate the concurrent validity of the MRD in the SEBT estimated visually vs the MRD measured with a 3D motion-capture system and evaluate and compare the discriminant validity of 2 MRD-normalization methods (by height or by lower-limb length) in participants with or without LAS (n = 10 per group). There is a high concurrent validity and a good degree of accuracy between the visual estimation measurement and the MRD gold-standard measurement for both groups and under all conditions. The Cohen d ratios between groups and MANOVA products were higher when computed from MRD data normalized by height. The results support the concurrent validity of visual estimation of the MRD and the use of the SEBT to evaluate motor control. Moreover, normalization of MRD data by height appears to increase the discriminant validity of this test.

  2. Concurrent Schedules of Positive and Negative Reinforcement: Differential-Impact and Differential-Outcomes Hypotheses

    PubMed Central

    Magoon, Michael A; Critchfield, Thomas S

    2008-01-01

    Considerable evidence from outside of operant psychology suggests that aversive events exert greater influence over behavior than equal-sized positive-reinforcement events. Operant theory is largely moot on this point, and most operant research is uninformative because of a scaling problem that prevents aversive events and those based on positive reinforcement from being directly compared. In the present investigation, humans' mouse-click responses were maintained on similarly structured, concurrent schedules of positive (money gain) and negative (avoidance of money loss) reinforcement. Because gains and losses were of equal magnitude, according to the analytical conventions of the generalized matching law, bias (log b ≠ 0) would indicate differential impact by one type of consequence; however, no systematic bias was observed. Further research is needed to reconcile this outcome with apparently robust findings in other literatures of superior behavior control by aversive events. In an incidental finding, the linear function relating log behavior ratio and log reinforcement ratio was steeper for concurrent negative and positive reinforcement than for control conditions involving concurrent positive reinforcement. This may represent the first empirical confirmation of a free-operant differential-outcomes effect predicted by contingency-discriminability theories of choice. PMID:18683609
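
    For reference, the analytical convention invoked above is the generalized matching law, in which bias appears as the intercept log b and sensitivity as the slope a (standard notation, not reproduced from the paper):

        \log \frac{B_1}{B_2} = a \, \log \frac{R_1}{R_2} + \log b

    Here B_1/B_2 is the ratio of responses allocated to the two alternatives and R_1/R_2 the ratio of obtained reinforcement. A finding of log b = 0 means no bias toward either consequence type, which is what the study observed, while the steeper function under mixed gain/loss schedules corresponds to a larger a.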

  3. A Robustly Stabilizing Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Ackmece, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  4. Radiation Therapy in the Management of Head-and-Neck Cancer of Unknown Primary Origin: How Does the Addition of Concurrent Chemotherapy Affect the Therapeutic Ratio?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Allen M., E-mail: allen.chen@ucdmc.ucdavis.edu; Farwell, D. Gregory; Lau, Derick H.

    2011-10-01

    Purpose: To determine how the addition of cisplatin-based concurrent chemotherapy to radiation therapy influences outcomes among a cohort of patients treated for head-and-neck cancer of unknown primary origin. Methods and Materials: The medical records of 60 consecutive patients treated by radiation therapy for squamous cell carcinoma of the head and neck presenting as cervical lymph node metastasis of occult primary origin were reviewed. Thirty-two patients (53%) were treated by concurrent chemoradiation, and 28 patients (47%) were treated by radiation therapy alone. Forty-five patients (75%) received radiation therapy after surgical resection, and 15 patients (25%) received primary radiation therapy. Thirty-five patients (58%) were treated by intensity-modulated radiotherapy. Results: The 2-year estimates of overall survival, local-regional control, and progression-free survival were 89%, 89%, and 79%, respectively, among patients treated by chemoradiation, compared to 90%, 92%, and 83%, respectively, among patients treated by radiation therapy alone (p > 0.05, for all). Exploratory analysis failed to identify any subset of patients who benefited from the addition of concurrent chemotherapy to radiation therapy. The use of concurrent chemotherapy was associated with a significantly increased incidence of Grade 3+ acute and late toxicity (p < 0.001, for both). Conclusions: Concurrent chemoradiation is associated with significant toxicity without a clear advantage to overall survival, local-regional control, and progression-free survival in the treatment of head-and-neck cancer of unknown primary origin. Although selection bias cannot be ignored, prospective data are needed to further address this question.

  5. Cognitive foundations for model-based sensor fusion

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.; Weijers, Bertus; Mutz, Chris W.

    2003-08-01

    Target detection, tracking, and sensor fusion are complicated problems, which usually are performed sequentially. First detecting targets, then tracking, then fusing multiple sensors reduces computations. This procedure however is inapplicable to difficult targets which cannot be reliably detected using individual sensors, on individual scans or frames. In such more complicated cases one has to perform functions of fusing, tracking, and detecting concurrently. This often has led to prohibitive combinatorial complexity and, as a consequence, to sub-optimal performance as compared to the information-theoretic content of all the available data. It is well appreciated that in this task the human mind is by far superior qualitatively to existing mathematical methods of sensor fusion, however, the human mind is limited in the amount of information and speed of computation it can cope with. Therefore, research efforts have been devoted toward incorporating "biological lessons" into smart algorithms, yet success has been limited. Why is this so, and how to overcome existing limitations? The fundamental reasons for current limitations are analyzed and a potentially breakthrough research and development effort is outlined. We utilize the way our mind combines emotions and concepts in the thinking process and present the mathematical approach to accomplishing this in the current technology computers. The presentation will summarize the difficulties encountered by intelligent systems over the last 50 years related to combinatorial complexity, analyze the fundamental limitations of existing algorithms and neural networks, and relate it to the type of logic underlying the computational structure: formal, multivalued, and fuzzy logic. A new concept of dynamic logic will be introduced along with algorithms capable of pulling together all the available information from multiple sources. This new mathematical technique, like our brain, combines conceptual understanding with emotional evaluation and overcomes the combinatorial complexity of concurrent fusion, tracking, and detection. The presentation will discuss examples of performance, where computational speedups of many orders of magnitude were attained leading to performance improvements of up to 10 dB (and better).

  6. Machine Learning to Improve Energy Expenditure Estimation in Children With Disabilities: A Pilot Study in Duchenne Muscular Dystrophy.

    PubMed

    Pande, Amit; Mohapatra, Prasant; Nicorici, Alina; Han, Jay J

    2016-07-19

    Children with physical impairments are at a greater risk for obesity and decreased physical activity. A better understanding of physical activity pattern and energy expenditure (EE) would lead to a more targeted approach to intervention. This study focuses on studying the use of machine-learning algorithms for EE estimation in children with disabilities. A pilot study was conducted on children with Duchenne muscular dystrophy (DMD) to identify important factors for determining EE and develop a novel algorithm to accurately estimate EE from wearable sensor-collected data. There were 7 boys with DMD, 6 healthy control boys, and 22 control adults recruited. Data were collected using smartphone accelerometer and chest-worn heart rate sensors. The gold standard EE values were obtained from the COSMED K4b2 portable cardiopulmonary metabolic unit worn by boys (aged 6-10 years) with DMD and controls. Data from this sensor setup were collected simultaneously during a series of concurrent activities. Linear regression and nonlinear machine-learning-based approaches were used to analyze the relationship between accelerometer and heart rate readings and COSMED values. Existing calorimetry equations using linear regression and nonlinear machine-learning-based models, developed for healthy adults and young children, give low correlation to actual EE values in children with disabilities (14%-40%). The proposed model for boys with DMD uses ensemble machine learning techniques and gives a 91% correlation with actual measured EE values (root mean square error of 0.017). Our results confirm that the methods developed to determine EE using accelerometer and heart rate sensor values in normal adults are not appropriate for children with disabilities and should not be used. A much more accurate model is obtained using machine-learning-based nonlinear regression specifically developed for this target population. ©Amit Pande, Prasant Mohapatra, Alina Nicorici, Jay J Han. Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 19.07.2016.
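
    The modeling step (nonlinear ensemble regression from accelerometer and heart-rate features to measured EE) can be sketched with a stock ensemble regressor; the feature layout, placeholder data, and hyperparameters below are assumptions, not the study's pipeline:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        # Hypothetical features per time window: mean/std of 3-axis acceleration
        # plus mean heart rate; the target is COSMED-measured EE for that window.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 7))                  # placeholder feature matrix
        y = X @ rng.normal(size=7) + rng.normal(scale=0.1, size=500)  # placeholder EE

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("correlation:", np.corrcoef(model.predict(X_te), y_te)[0, 1])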

  7. Ensembles of radial basis function networks for spectroscopic detection of cervical precancer

    NASA Technical Reports Server (NTRS)

    Tumer, K.; Ramanujam, N.; Ghosh, J.; Richards-Kortum, R.

    1998-01-01

    The mortality related to cervical cancer can be substantially reduced through early detection and treatment. However, current detection techniques, such as Pap smear and colposcopy, fail to achieve a concurrently high sensitivity and specificity. In vivo fluorescence spectroscopy is a technique which quickly, noninvasively and quantitatively probes the biochemical and morphological changes that occur in precancerous tissue. A multivariate statistical algorithm was used to extract clinically useful information from tissue spectra acquired from 361 cervical sites from 95 patients at 337-, 380-, and 460-nm excitation wavelengths. The multivariate statistical analysis was also employed to reduce the number of fluorescence excitation-emission wavelength pairs required to discriminate healthy tissue samples from precancerous tissue samples. The use of connectionist methods such as multilayered perceptrons, radial basis function (RBF) networks, and ensembles of such networks was investigated. RBF ensemble algorithms based on fluorescence spectra potentially provide automated and near real-time implementation of precancer detection in the hands of nonexperts. The results are more reliable, direct, and accurate than those achieved by either human experts or multivariate statistical algorithms.

  8. Simulating compressible-incompressible two-phase flows

    NASA Astrophysics Data System (ADS)

    Denner, Fabian; van Wachem, Berend

    2017-11-01

    Simulating compressible gas-liquid flows, e.g. air-water flows, presents considerable numerical issues and requires substantial computational resources, particularly because of the stiff equation of state for the liquid and the different Mach number regimes. Treating the liquid phase (low Mach number) as incompressible, yet concurrently considering the gas phase (high Mach number) as compressible, can improve the computational performance of such simulations significantly without sacrificing important physical mechanisms. A pressure-based algorithm for the simulation of two-phase flows is presented, in which a compressible and an incompressible fluid are separated by a sharp interface. The algorithm is based on a coupled finite-volume framework, discretised in conservative form, with a compressive VOF method to represent the interface. The bulk phases are coupled via a novel acoustically-conservative interface discretisation method that retains the acoustic properties of the compressible phase and does not require a Riemann solver. Representative test cases are presented to scrutinize the proposed algorithm, including the reflection of acoustic waves at the compressible-incompressible interface, shock-drop interaction and gas-liquid flows with surface tension. Financial support from the EPSRC (Grant EP/M021556/1) is gratefully acknowledged.

  9. Reversal of the toxic effects of cachectin by concurrent insulin administration.

    PubMed

    Fraker, D L; Merino, M J; Norton, J A

    1989-06-01

    Rats treated with recombinant human tumor necrosis factor-cachectin, 100 micrograms/kg ip twice daily for 5 consecutive days, had a 56% decrease in food intake, a 54% decrease in nitrogen balance, and a 23-g decrease in body weight gain vs. saline-treated controls. Concurrent neutral protamine Hagedorn insulin administration of 2 U/100 g sc twice daily reversed all of these changes to control levels without causing any treatment deaths. The improvement seen with insulin was dose independent. Five days of cachectin treatment caused a severe interstitial pneumonitis, periportal inflammation in the liver, and an increase in wet organ weight in the heart, lungs, kidney, and spleen. Concurrent insulin treatment led to near total reversal of these histopathologic changes. Cachectin treatment did not significantly change blood glucose levels from control values of 130-140 mg/dl, but insulin plus cachectin caused a significant decrease in blood glucose from 1 through 12 h after injection. Administration of high-dose insulin can near totally reverse the nutritional and histopathologic toxicity of sublethal doses of cachectin in rats.

  10. An Evaluation of Concurrent Priority Queue Algorithms

    DTIC Science & Technology

    1991-02-01

    Each heap node is stored at the corresponding index in an array: node i occupies location i, its left child LCHILD(i) occupies location 2i, its right child RCHILD(i) occupies location 2i + 1, and the parent of node i is at location i/2. Associated with the heap are the data fields lastelem and fulllevel. [The excerpt also contains the garbled listing of Figure 2.2: Insert operation on binary heap.]
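
    A minimal sketch of the sift-up insert the excerpt describes, using the 1-indexed array layout above and shown here for a min-heap; the report's concurrent version additionally locks nodes along the insertion path:

      def heap_insert(heap, nkey):
          # heap[0] is unused so that node i sits at index i (1-indexed layout)
          heap.append(nkey)
          i = len(heap) - 1
          while i > 1 and heap[i // 2] > heap[i]:   # parent of node i is at i // 2
              heap[i // 2], heap[i] = heap[i], heap[i // 2]
              i //= 2                               # sift the new key up

      heap = [None, 3, 5, 9, 6, 8]
      heap_insert(heap, 4)
      print(heap)   # [None, 3, 5, 4, 6, 8, 9] -- min-heap property restored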

  11. Automated Verification of Specifications with Typestates and Access Permissions

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.; Catano, Nestor

    2011-01-01

    We propose an approach to formally verify Plural specifications based on access permissions and typestates, by model-checking automatically generated abstract state-machines. Our exhaustive approach captures all the possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, provides model checking support for the Plural tool, which currently supports only program verification via data flow analysis (DFA).

  12. 40 CFR 798.5200 - Mouse visible specific locus test.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... control groups. (4) Control groups—(i) Concurrent controls. The use of positive or spontaneous controls is... control groups. (ii) Test chemical vehicle, doses used and rationale for dose selection, toxicity data... SUBSTANCES CONTROL ACT (CONTINUED) HEALTH EFFECTS TESTING GUIDELINES Genetic Toxicity § 798.5200 Mouse...

  13. 40 CFR 798.5200 - Mouse visible specific locus test.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... control groups. (4) Control groups—(i) Concurrent controls. The use of positive or spontaneous controls is... control groups. (ii) Test chemical vehicle, doses used and rationale for dose selection, toxicity data... SUBSTANCES CONTROL ACT (CONTINUED) HEALTH EFFECTS TESTING GUIDELINES Genetic Toxicity § 798.5200 Mouse...

  14. 40 CFR 798.5200 - Mouse visible specific locus test.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... control groups. (4) Control groups—(i) Concurrent controls. The use of positive or spontaneous controls is... control groups. (ii) Test chemical vehicle, doses used and rationale for dose selection, toxicity data... SUBSTANCES CONTROL ACT (CONTINUED) HEALTH EFFECTS TESTING GUIDELINES Genetic Toxicity § 798.5200 Mouse...

  15. 40 CFR 798.5200 - Mouse visible specific locus test.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... control groups. (4) Control groups—(i) Concurrent controls. The use of positive or spontaneous controls is... control groups. (ii) Test chemical vehicle, doses used and rationale for dose selection, toxicity data... SUBSTANCES CONTROL ACT (CONTINUED) HEALTH EFFECTS TESTING GUIDELINES Genetic Toxicity § 798.5200 Mouse...

  16. 40 CFR 798.5200 - Mouse visible specific locus test.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... control groups. (4) Control groups—(i) Concurrent controls. The use of positive or spontaneous controls is... control groups. (ii) Test chemical vehicle, doses used and rationale for dose selection, toxicity data... SUBSTANCES CONTROL ACT (CONTINUED) HEALTH EFFECTS TESTING GUIDELINES Genetic Toxicity § 798.5200 Mouse...

  17. The efficacy of protein supplementation during recovery from muscle-damaging concurrent exercise.

    PubMed

    Eddens, Lee; Browne, Sarah; Stevenson, Emma J; Sanderson, Brad; van Someren, Ken; Howatson, Glyn

    2017-07-01

    This study investigated the effect of protein supplementation on recovery following muscle-damaging exercise, which was induced with a concurrent exercise design. Twenty-four well-trained male cyclists were randomised to 3 independent groups receiving 20 g protein hydrolysate, iso-caloric carbohydrate, or low-calorific placebo supplementation, per serve. Supplement serves were provided twice daily, from the onset of the muscle-damaging exercise, for a total of 4 days and in addition to a controlled diet (6 g·kg⁻¹·day⁻¹ carbohydrate, 1.2 g·kg⁻¹·day⁻¹ protein, remainder from fat). Following the concurrent exercise session at time-point 0 h, comprising a simulated high-intensity road cycling trial and 100 drop-jumps, recovery of outcome measures was assessed at 24, 48, and 72 h. The concurrent exercise protocol was deemed to have caused exercise-induced muscle damage (EIMD), owing to time effects (p < 0.001) confirming decrements in maximal voluntary contraction (peaking at 15% ± 10%) and countermovement jump performance (peaking at 8% ± 7%), along with increased muscle soreness, creatine kinase, and C-reactive protein concentrations. No group or interaction effects (p > 0.05) were observed for any of the outcome measures. The present results indicate that protein supplementation does not attenuate any of the indirect indices of EIMD imposed by concurrent exercise when great rigour is employed around the provision of a quality habitual diet and appropriate supplemental controls.

  18. Introducing concurrency in the Gaudi data processing framework

    NASA Astrophysics Data System (ADS)

    Clemencic, Marco; Hegner, Benedikt; Mato, Pere; Piparo, Danilo

    2014-06-01

    In the past, the increasing demands for HEP processing resources could be fulfilled by the ever increasing clock-frequencies and by distributing the work to more and more physical machines. Limitations in power consumption of both CPUs and entire data centres are bringing an end to this era of easy scalability. To get the most CPU performance per watt, future hardware will be characterised by less and less memory per processor, as well as thinner, more specialized and more numerous cores per die, and rather heterogeneous resources. To fully exploit the potential of the many cores, HEP data processing frameworks need to allow for parallel execution of reconstruction or simulation algorithms on several events simultaneously. We describe our experience in introducing concurrency related capabilities into Gaudi, a generic data processing software framework, which is currently being used by several HEP experiments, including the ATLAS and LHCb experiments at the LHC. After a description of the concurrent framework and the most relevant design choices driving its development, we describe the behaviour of the framework in a more realistic environment, using a subset of the real LHCb reconstruction workflow, and present our strategy and the used tools to validate the physics outcome of the parallel framework against the results of the present, purely sequential LHCb software. We then summarize the measurement of the code performance of the multithreaded application in terms of memory and CPU usage.

  19. Concurrent validity of the Microsoft Kinect for Windows v2 for measuring spatiotemporal gait parameters.

    PubMed

    Dolatabadi, Elham; Taati, Babak; Mihailidis, Alex

    2016-09-01

    This paper presents a study to evaluate the concurrent validity of the Microsoft Kinect for Windows v2 for measuring the spatiotemporal parameters of gait. Twenty healthy adults performed several sequences of walks across a GAITRite mat under three different conditions: usual pace, fast pace, and dual task. Each walking sequence was simultaneously captured with two Kinect for Windows v2 sensors and the GAITRite system. An automated algorithm was employed to extract various spatiotemporal features, including stance time, step length, step time, and gait velocity, from the recorded Kinect v2 sequences. Accuracy in terms of reliability, concurrent validity, and limits of agreement was examined for each gait feature under different walking conditions. The 95% Bland-Altman limits of agreement were narrow enough for the Kinect v2 to be a valid tool for measuring all reported spatiotemporal parameters of gait in all three conditions. An excellent intraclass correlation coefficient (ICC2,1) ranging from 0.9 to 0.98 was observed for all gait measures across different walking conditions. The inter-trial reliability of all gait parameters was shown to be strong for all walking types (ICC3,1 > 0.73). The results of this study suggest that the Kinect for Windows v2 has the capacity to measure selected spatiotemporal gait parameters for healthy adults. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
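
    For reference, a minimal sketch of the 95% Bland-Altman limits-of-agreement computation used in such validity studies; the measurement values below are synthetic, not the study's data:

      import numpy as np

      kinect   = np.array([1.12, 1.30, 0.98, 1.25, 1.05])   # gait velocity, m/s (synthetic)
      gaitrite = np.array([1.10, 1.28, 1.01, 1.24, 1.08])   # reference system (synthetic)

      diff = kinect - gaitrite
      bias = diff.mean()
      sd = diff.std(ddof=1)
      print(f"bias = {bias:.3f} m/s, 95% LoA = ({bias - 1.96 * sd:.3f}, {bias + 1.96 * sd:.3f})")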

  20. Prediction of concurrent endometrial carcinoma in women with endometrial hyperplasia.

    PubMed

    Matsuo, Koji; Ramzan, Amin A; Gualtieri, Marc R; Mhawech-Fauceglia, Paulette; Machida, Hiroko; Moeini, Aida; Dancz, Christina E; Ueda, Yutaka; Roman, Lynda D

    2015-11-01

    Although a fraction of endometrial hyperplasia cases have concurrent endometrial carcinoma, the patient characteristics associated with concurrent malignancy are not well described. The aim of our study was to identify predictive clinico-pathologic factors for concurrent endometrial carcinoma among patients with endometrial hyperplasia. A case-control study was conducted to compare endometrial hyperplasia in both preoperative endometrial biopsy and hysterectomy specimens (n=168) and endometrial carcinoma in the hysterectomy specimen but endometrial hyperplasia in the preoperative endometrial biopsy (n=43). Clinico-pathologic factors were examined to identify independent risk factors of concurrent endometrial carcinoma in a multivariate logistic regression model. The most common histologic subtype in preoperative endometrial biopsy was complex hyperplasia with atypia [CAH] (n=129), followed by complex hyperplasia without atypia (n=58) and simple hyperplasia with or without atypia (n=24). The majority of endometrial carcinomas were grade 1 (86.0%) and stage I (83.7%). In multivariate analysis, age 40-59 (odds ratio [OR] 3.07, p=0.021), age ≥60 (OR 6.65, p=0.005), BMI ≥35 kg/m² (OR 2.32, p=0.029), diabetes mellitus (OR 2.51, p=0.019), and CAH (OR 9.01, p=0.042) were independent predictors of concurrent endometrial carcinoma. The risk of concurrent endometrial carcinoma rose dramatically with an increasing number of risk factors identified in the multivariate model (none 0%, 1 risk factor 7.0%, 2 risk factors 17.6%, 3 risk factors 35.8%, and 4 risk factors 45.5%, p<0.001). Hormonal treatment was associated with a decreased risk of concurrent endometrial cancer in those with ≥3 risk factors. Older age, obesity, diabetes mellitus, and CAH are predictive of concurrent endometrial carcinoma in endometrial hyperplasia patients. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Genetics-based control of a mimo boiler-turbine plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dimeo, R.M.; Lee, K.Y.

    1994-12-31

    A genetic algorithm is used to develop an optimal controller for a non-linear, multi-input/multi-output boiler-turbine plant. The algorithm is used to train a control system for the plant over a wide operating range in an effort to obtain better performance. The results of the genetic algorithm's controller are compared with those of a controller designed from the linearized plant model at a nominal operating point. Because the genetic algorithm is well-suited to solving traditionally difficult optimization problems, it is found that the algorithm is capable of developing the controller based on input/output information only. This controller achieves a performance comparable to the standard linear quadratic regulator.
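
    A minimal sketch of a real-coded genetic algorithm of the kind described (truncation selection, uniform crossover, Gaussian mutation); the cost function is a placeholder standing in for the closed-loop performance index, which the record does not specify:

      import random

      def ga(fitness, n_params, pop=30, gens=100, sigma=0.1):
          # truncation selection, uniform crossover, Gaussian mutation
          P = [[random.uniform(-1, 1) for _ in range(n_params)] for _ in range(pop)]
          for _ in range(gens):
              elite = sorted(P, key=fitness)[: pop // 2]    # keep the fitter half
              P = [e[:] for e in elite]
              while len(P) < pop:
                  a, b = random.sample(elite, 2)
                  child = [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]
                  P.append([c + random.gauss(0.0, sigma) for c in child])
          return min(P, key=fitness)

      # placeholder quadratic cost standing in for the closed-loop performance index
      best = ga(lambda k: sum((ki - 0.5) ** 2 for ki in k), n_params=3)
      print([round(x, 2) for x in best])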

  2. Parallel eigenanalysis of finite element models in a completely connected architecture

    NASA Technical Reports Server (NTRS)

    Akl, F. A.; Morel, M. R.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, KΦ = MΦΩ, where K and M are of order N, and Ω is of order q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, Cray X-MP. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in parallel environment is analyzed.
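
    For illustration, a minimal dense-matrix sketch of the generalized eigenproblem KΦ = MΦΩ being solved, using SciPy's symmetric solver on synthetic K and M; the paper's multifrontal/subspace method instead distributes this computation across domains:

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(1)
      G = rng.normal(size=(50, 50))
      K = G @ G.T + 50.0 * np.eye(50)                # stiffness: symmetric positive definite
      M = np.diag(rng.uniform(1.0, 2.0, size=50))    # lumped (diagonal) mass matrix

      q = 5                                          # subspace dimension
      w, phi = eigh(K, M, subset_by_index=[0, q - 1])  # lowest q pairs of K phi = M phi omega
      print(w)                                       # generalized eigenvalues (omega)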

  3. A Review of the Quantification and Classification of Pigmented Skin Lesions: From Dedicated to Hand-Held Devices.

    PubMed

    Filho, Mercedes; Ma, Zhen; Tavares, João Manuel R S

    2015-11-01

    In recent years, the incidence of skin cancer cases has risen, worldwide, mainly due to the prolonged exposure to harmful ultraviolet radiation. Concurrently, the computer-assisted medical diagnosis of skin cancer has undergone major advances, through an improvement in the instrument and detection technology, and the development of algorithms to process the information. Moreover, because there has been an increased need to store medical data, for monitoring, comparative and assisted-learning purposes, algorithms for data processing and storage have also become more efficient in handling the increase of data. In addition, the potential use of common mobile devices to register high-resolution images of skin lesions has also fueled the need to create real-time processing algorithms that may provide a likelihood for the development of malignancy. This last possibility allows even non-specialists to monitor and follow-up suspected skin cancer cases. In this review, we present the major steps in the pre-processing, processing and post-processing of skin lesion images, with a particular emphasis on the quantification and classification of pigmented skin lesions. We further review and outline the future challenges for the creation of minimum-feature, automated and real-time algorithms for the detection of skin cancer from images acquired via common mobile devices.

  4. Mean Length of Utterance in Children with Specific Language Impairment and in Younger Control Children Shows Concurrent Validity and Stable and Parallel Growth Trajectories

    ERIC Educational Resources Information Center

    Rice, Mabel L.; Redmond, Sean M.; Hoffman, Lesa

    2006-01-01

    Purpose: Although mean length of utterance (MLU) is a useful benchmark in studies of children with specific language impairment (SLI), some empirical and interpretive issues are unresolved. The authors report on 2 studies examining, respectively, the concurrent validity and temporal stability of MLU equivalency between children with SLI and…

  5. Automated assembly of fast-axis collimation (FAC) lenses for diode laser bar modules

    NASA Astrophysics Data System (ADS)

    Miesner, Jörn; Timmermann, Andre; Meinschien, Jens; Neumann, Bernhard; Wright, Steve; Tekin, Tolga; Schröder, Henning; Westphalen, Thomas; Frischkorn, Felix

    2009-02-01

    Laser diodes and diode laser bars are key components in high power semiconductor lasers and solid state laser systems. During manufacture, the assembly of the fast axis collimation (FAC) lens is a crucial step. The goal of our activities is to design an automated assembly system for high volume production. In this paper the results of an intermediate milestone will be reported: a demonstration system was designed, realized and tested to prove the feasibility of all of the system components and process features. The demonstration system consists of a high precision handling system, metrology for process feedback, a powerful digital image processing system and tooling for glue dispensing, UV curing and laser operation. The system components as well as their interaction with each other were tested in an experimental system in order to glean design knowledge for the fully automated assembly system. The adjustment of the FAC lens is performed by a series of predefined steps monitored by two cameras concurrently imaging the far-field and near-field intensity distributions. Feedback from these cameras, processed by a powerful and efficient image processing algorithm, controls a five-axis precision motion system to optimize the fast axis collimation of the laser beam. Automated cementing of the FAC to the diode bar completes the process. The presentation will show the system concept, the algorithm of the adjustment as well as experimental results. A critical discussion of the results will close the talk.

  6. A study of the parallel algorithm for large-scale DC simulation of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel

    Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time consuming process even if sparse matrix techniques and bypassing of nonlinear models calculation are used. A slight decrease in the time required for this task may be enabled on multi-core, multithread computers if the calculation of the mathematical models for the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled through concurrent processes. This numerical complexity can be further reduced via circuit decomposition and parallel solution of blocks taking as a departure point the BBD matrix structure. This block-parallel approach may yield a considerable gain, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents the easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.
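
    A minimal single-node sketch of the Newton-Raphson iteration at the core of DC analysis, for a diode fed through a resistor from a source; the component values are illustrative, and a full simulator assembles the same linearized equations into a sparse matrix:

      import math

      Is, Vt = 1e-14, 0.025      # diode saturation current (A), thermal voltage (V)
      R, Vs = 1e3, 5.0           # series resistance (ohm), supply voltage (V)

      v = 0.7                    # initial guess for the diode voltage
      for _ in range(50):
          f = (Vs - v) / R - Is * (math.exp(v / Vt) - 1.0)   # KCL residual at the node
          df = -1.0 / R - (Is / Vt) * math.exp(v / Vt)       # its derivative (1x1 Jacobian)
          dv = -f / df                                       # Newton-Raphson step
          v += dv
          if abs(dv) < 1e-12:
              break
      print(v)                   # converged diode operating point, ~0.67 V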

  7. Effect of the influence function of deformable mirrors on laser beam shaping.

    PubMed

    González-Núñez, Héctor; Béchet, Clémentine; Ayancán, Boris; Neichel, Benoit; Guesalaga, Andrés

    2017-02-20

    The continuous membrane stiffness of a deformable mirror propagates the deformation of the actuators beyond their neighbors. When phase-retrieval algorithms are used to determine the desired shape of these mirrors, this cross-coupling, also known as the influence function (IF), is generally disregarded. We study this problem via simulations and bench tests for different target shapes to gain further insight into the phenomenon. Sound modeling of the IF effect is achieved, as highlighted by the concurrence between the modeled and experimental results. In addition, we observe that the actuators' IF is a key parameter that determines the accuracy of the output light pattern. Finally, it is shown that in some cases it is possible to achieve better shaping by modifying the input irradiance of the phase-retrieval algorithm. The results obtained from this analysis open the door to further improvements in this type of beam-shaping system.

  8. Intra-abdominal solid organ injuries: an enhanced management algorithm.

    PubMed

    Kokabi, Nima; Shuaib, Waqas; Xing, Minzhi; Harmouche, Elie; Wilson, Kenneth; Johnson, Jamlik-Omari; Khosa, Faisal

    2014-11-01

    The organ injury scale grading system proposed by the American Association for the Surgery of Trauma provides guidelines for operative versus nonoperative management in solid organ injuries; however, major shortcomings of the American Association for the Surgery of Trauma injury scale may become apparent with low-grade injuries, in which conservative management may fail. Nonoperative management of common intra-abdominal solid organ injuries relies increasingly on computed tomographic findings and other clinical factors, including patient age, presence of concurrent injuries, and serial clinical assessments. Familiarity with characteristic imaging features is essential for the prompt diagnosis and appropriate treatment of blunt abdominal trauma. In this pictorial essay, the spectrum of the American Association for the Surgery of Trauma organ injury scale grading system is illustrated, and a multidisciplinary management algorithm for common intra-abdominal solid organ injuries is proposed. Copyright © 2014 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.

  9. A FODO racetrack ring for nuSTORM: design and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, A.; Bross, A.; Neuffer, D.

    2017-07-01

    The goal of nuSTORM is to provide well-defined neutrino beams for precise measurements of neutrino cross-sections and oscillations. The nuSTORM decay ring is a compact racetrack storage ring with a circumference of ~ 480 m that incorporates large aperture (60 cm diameter) magnets. There are many challenges in the design. In order to incorporate the Orbit Combination Section (OCS), used for injecting the pion beam into the ring, a dispersion suppressor is needed adjacent to the OCS. Concurrently, in order to maximize the number of useful muon decays, strong bending dipoles are needed in the arcs to minimize the arc length. These dipoles create strong chromatic effects, which need to be corrected by nonlinear sextupole elements in the ring. In this paper, a FODO racetrack ring design and its optimization using sextupolar fields via both a Genetic Algorithm (GA) and a Simulated Annealing (SA) algorithm will be discussed.
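
    A minimal sketch of a simulated-annealing loop of the kind used to tune such sextupole strengths; the objective below is a placeholder, not the paper's chromaticity or dynamic-aperture figure of merit:

      import math, random

      def anneal(cost, x0, step=0.1, T0=1.0, alpha=0.95, iters=2000):
          # accept downhill moves always, uphill moves with Boltzmann probability
          x, c = list(x0), cost(x0)
          T = T0
          for _ in range(iters):
              cand = [xi + random.uniform(-step, step) for xi in x]
              cc = cost(cand)
              if cc < c or random.random() < math.exp(-(cc - c) / T):
                  x, c = cand, cc
              T *= alpha                    # geometric cooling schedule
          return x, c

      # placeholder objective standing in for a lattice merit function
      best, cbest = anneal(lambda k: (k[0] - 1.2) ** 2 + (k[1] + 0.4) ** 2, [0.0, 0.0])
      print(best, cbest)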

  10. Testing Linear Temporal Logic Formulae on Finite Execution Traces

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Norvig, Peter (Technical Monitor)

    2001-01-01

    We present an algorithm for efficiently testing Linear Temporal Logic (LTL) formulae on finite execution traces. The standard models of LTL are infinite traces, reflecting the behavior of reactive and concurrent systems which conceptually may be continuously alive. In most past applications of LTL, theorem provers and model checkers have been used to formally prove that down-scaled models satisfy such LTL specifications. Our goal is instead to use LTL for up-scaled testing of real software applications. Such tests correspond to analyzing the conformance of finite traces against LTL formulae. We first describe what it means for a finite trace to satisfy an LTL property. We then suggest an optimized algorithm based on transforming LTL formulae. The work is done using the Maude rewriting system, which turns out to provide a perfect notation and an efficient rewriting engine for performing these experiments.
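
    A minimal sketch of the recursive finite-trace LTL semantics the paper starts from, over a small tuple-based formula representation (an assumption of this sketch); the paper's optimized algorithm instead rewrites formulae as the trace is consumed:

      def holds(f, trace, i=0):
          # f is a tuple AST: ('ap', name) | ('not', f) | ('and', f, g)
          #                 | ('next', f) | ('until', f, g)
          op = f[0]
          if op == 'ap':
              return i < len(trace) and f[1] in trace[i]
          if op == 'not':
              return not holds(f[1], trace, i)
          if op == 'and':
              return holds(f[1], trace, i) and holds(f[2], trace, i)
          if op == 'next':
              return i + 1 < len(trace) and holds(f[1], trace, i + 1)
          if op == 'until':   # f1 U f2: f2 holds at some k, f1 holds from i up to k
              return any(holds(f[2], trace, k) and
                         all(holds(f[1], trace, j) for j in range(i, k))
                         for k in range(i, len(trace)))
          raise ValueError(op)

      trace = [{'req'}, {'req'}, {'ack'}]                            # sets of true propositions
      print(holds(('until', ('ap', 'req'), ('ap', 'ack')), trace))   # True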

  11. Exploiting Concurrent Wake-Up Transmissions Using Beat Frequencies.

    PubMed

    Kumberg, Timo; Schindelhauer, Christian; Reindl, Leonhard

    2017-07-26

    Wake-up receivers are the natural choice for wireless sensor networks because of their ultra-low power consumption and their ability to provide communications on demand. A downside of ultra-low power wake-up receivers is their low sensitivity caused by the passive demodulation of the carrier signal. In this article, we present a novel communication scheme by exploiting purposefully-interfering out-of-tune signals of two or more wireless sensor nodes, which produce the wake-up signal as the beat frequency of superposed carriers. Additionally, we introduce a communication algorithm and a flooding protocol based on this approach. Our experiments show that our approach increases the received signal strength up to 3 dB, improving communication robustness and reliability. Furthermore, we demonstrate the feasibility of our newly-developed protocols by means of an outdoor experiment and an indoor setup consisting of several nodes. The flooding algorithm achieves almost a 100% wake-up rate in less than 20 ms.
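
    The beat exploited here is elementary: two carriers detuned by a small offset superpose into a signal whose envelope oscillates at |f1 - f2|. A minimal numeric illustration with placeholder audio-range frequencies:

      import numpy as np

      f1, f2, fs = 1000.0, 1125.0, 48_000.0    # two deliberately detuned tones (illustrative)
      t = np.arange(0.0, 0.1, 1.0 / fs)
      s = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
      # cos a + cos b = 2 cos((a-b)/2) cos((a+b)/2): the envelope of the
      # superposition oscillates at the beat frequency |f1 - f2|
      print(abs(f1 - f2), round(s.max(), 2))   # 125.0 Hz beat; peak ~2.0 at constructive overlap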

  12. Conflict Detection Algorithm to Minimize Locking for MPI-IO Atomicity

    NASA Astrophysics Data System (ADS)

    Sehrish, Saba; Wang, Jun; Thakur, Rajeev

    Many scientific applications require high-performance concurrent I/O accesses to a file by multiple processes. Those applications rely indirectly on atomic I/O capabilities in order to perform updates to structured datasets, such as those stored in HDF5 format files. Current support for atomicity in MPI-IO is provided by locking around the operations, imposing lock overhead in all situations, even though in many cases these operations are non-overlapping in the file. We propose to isolate non-overlapping accesses from overlapping ones in independent I/O cases, allowing the non-overlapping ones to proceed without imposing lock overhead. To enable this, we have implemented an efficient conflict detection algorithm in MPI-IO using MPI file views and datatypes. We show that our conflict detection scheme incurs minimal overhead on I/O operations, making it an effective mechanism for avoiding locks when they are not needed.
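
    A minimal sketch of the byte-range overlap test at the heart of such conflict detection, on flattened (offset, length) access lists; the actual algorithm operates on MPI file views and datatypes rather than explicit lists:

      def conflicts(a, b):
          # a, b: lists of (offset, length) byte ranges, one list per process
          a, b = sorted(a), sorted(b)
          i = j = 0
          while i < len(a) and j < len(b):          # single sweep over both sorted lists
              s1, l1 = a[i]
              s2, l2 = b[j]
              if s1 < s2 + l2 and s2 < s1 + l1:
                  return True                       # overlapping ranges: locking needed
              if s1 + l1 <= s2 + l2:
                  i += 1
              else:
                  j += 1
          return False

      print(conflicts([(0, 100), (200, 50)], [(100, 100)]))   # False: disjoint, lock-free path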

  13. A fully redundant double difference algorithm for obtaining minimum variance estimates from GPS observations

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.

    1986-01-01

    In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
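
    For illustration, a minimal sketch of the weighted least-squares step referred to above: with a non-diagonal covariance C for the differenced measurement errors, the weight matrix is the inverse of C (all matrices below are synthetic placeholders):

      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.normal(size=(8, 3))              # differenced regression (design) matrix
      x_true = np.array([1.0, -2.0, 0.5])
      C = np.full((8, 8), 0.2)                 # differencing correlates the errors...
      np.fill_diagonal(C, 1.0)                 # ...giving a non-diagonal covariance
      noise = np.linalg.cholesky(C) @ rng.normal(size=8)
      y = A @ x_true + 0.01 * noise

      W = np.linalg.inv(C)                     # weight matrix = inverse covariance
      x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
      print(x_hat)                             # close to x_true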

  14. Fuzzy decoupling controller based on multimode control algorithm of PI-single neuron and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Xianxia; Wang, Jian; Qin, Tinggao

    2003-09-01

    Intelligent control algorithms are introduced into the control system of temperature and humidity. A multi-mode control algorithm of PI-Single Neuron is proposed for single loop control of temperature and humidity. In order to remove the coupling between temperature and humidity, a new decoupling method is presented, which is called fuzzy decoupling. The decoupling is achieved by using a fuzzy controller that dynamically modifies the static decoupling coefficient. Taking the control algorithm of PI-Single Neuron as the single loop control of temperature and humidity, the paper provides the simulated output response curves with no decoupling control, static decoupling control and fuzzy decoupling control. Those control algorithms are easily implemented in singlechip-based hardware systems.

  15. Effects of endurance, resistance, and concurrent exercise on learning and memory after morphine withdrawal in rats.

    PubMed

    Zarrinkalam, Ebrahim; Heidarianpour, Ali; Salehi, Iraj; Ranjbar, Kamal; Komaki, Alireza

    2016-07-15

    Continuous morphine consumption contributes to the development of cognitive disorders. This work investigates the impacts of different types of exercise on learning and memory in morphine-dependent rats. Forty morphine-dependent rats were randomly divided into four groups: sedentary-dependent (Sed-D), endurance exercise-dependent (En-D), strength exercise-dependent (St-D), and combined (concurrent) exercise-dependent (Co-D). Healthy rats were used as controls (Con). After 10 weeks of regular exercise (endurance, strength, and concurrent; each five days per week), spatial and aversive learning and memory were assessed using the Morris water maze and shuttle box tests. The results showed that morphine addiction contributes to deficits in spatial learning and memory. Furthermore, each form of exercise training restored spatial learning and memory performance in morphine-dependent rats to levels similar to those of healthy controls. Aversive learning and memory during the acquisition phase were not affected by morphine addiction or exercise, but retention was significantly decreased by morphine dependence. Only concurrent training returned the time spent in the dark compartment in the shuttle box test to control levels. These findings show that different types of exercise exert similar effects on spatial learning and memory, but show distinct effects on aversive learning and memory. Further, morphine dependence-induced deficits in cognitive function were blocked by exercise. Therefore, different exercise regimens may represent practical treatment methods for cognitive and behavioral impairments associated with morphine-related disease. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Control of equipment isolation system using wavelet-based hybrid sliding mode control

    NASA Astrophysics Data System (ADS)

    Huang, Shieh-Kung; Loh, Chin-Hsiung

    2017-04-01

    Critical non-structural equipment, including life-saving equipment in hospitals, circuit breakers, computers, high-technology instrumentation, etc., is vulnerable to strong earthquakes, and the failure of such vibration-sensitive equipment causes severe economic loss. In order to protect vibration-sensitive equipment or machinery against strong earthquakes, various innovative control algorithms have been developed to compute the internal forces to be applied. These new or improved control strategies, such as algorithms based on optimal control theory and sliding mode control (SMC), have also been developed for structural engineering as a key element in smart structure technology. Optimal control theory, one of the most common methodologies in feedback control, finds control forces that achieve an optimality criterion by minimizing a cost function. For example, the linear-quadratic regulator (LQR) has been the most popular control algorithm over the past three decades, and a number of modifications have been proposed to increase the efficiency of the classical LQR algorithm. However, apart from its simplicity and ease of implementation, LQR is susceptible to parameter uncertainty and modeling error due to the complex nature of civil structures. Distinct from LQR, SMC, a robust and easily implemented control algorithm, has also been studied. SMC is a nonlinear control methodology that forces the structural system to slide along surfaces or boundaries; hence this control algorithm is naturally robust with respect to the parametric uncertainties of a structure. Early attempts at protecting vibration-sensitive equipment were based on the use of the existing control algorithms described above. In recent years, however, researchers have tried to renew existing control algorithms or to develop new ones adapted to the complex nature of civil structures, which includes the control of both structures and non-structural components. The aim of this paper is to develop a hybrid control algorithm for the simultaneous control of structures and equipment that overcomes the limitations of classical feedback control by combining the advantages of classic LQR and SMC. To suppress vibrations when the frequency content of strong earthquakes differs from the natural frequencies of civil structures, the hybrid control algorithm is integrated with a wavelet-based vibration control algorithm. The performance of the classical, hybrid, and wavelet-based hybrid control algorithms, as well as the responses of the structure and non-structural components, are evaluated and discussed through numerical simulation in this study.
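
    For reference, a minimal sketch of the classical continuous-time LQR baseline discussed above, on an illustrative double-integrator model with placeholder Q and R; the paper's wavelet-based hybrid algorithm itself is not reproduced here:

      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (mass-normalized)
      B = np.array([[0.0], [1.0]])
      Q = np.diag([10.0, 1.0])                  # state penalty (placeholder)
      R = np.array([[0.1]])                     # control penalty (placeholder)

      P = solve_continuous_are(A, B, Q, R)      # algebraic Riccati equation
      K = np.linalg.solve(R, B.T @ P)           # optimal feedback u = -K x
      print(K)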

  17. Compatibility of AS03-adjuvanted H1N1pdm09 and seasonal trivalent influenza vaccines in adults: results of a randomized, controlled trial.

    PubMed

    Scheifele, David W; Ward, Brian J; Dionne, Marc; Vanderkooi, Otto G; Loeb, Mark; Coleman, Brenda L; Li, Yan

    2012-07-06

    When Canada chose a novel adjuvanted vaccine to combat the 2009 influenza pandemic, seasonal trivalent inactivated vaccine (TIV) was also available, but the compatibility of the two had not been assessed. To compare responses after concurrent or sequential administration of these vaccines, adults 20-59 years old were randomly assigned (1:1) to receive AS03-adjuvanted H1N1pdm09 vaccine (Arepanrix, GSK, Quebec City, Quebec), with TIV (Vaxigrip, Sanofi Pasteur, Toronto) given concurrently or 21 days later. Blood was obtained at baseline and 21 days after each vaccination to measure hemagglutination inhibition (HAI) titers. Adverse effects were assessed using symptom diaries and personal interviews. 282 participants completed the study (concurrent vaccines 145, sequential vaccines 137). HAI titers to H1N1pdm09 were ≥ 40 at baseline in 15-18% of participants and following vaccination in 91-92%. Initially seropositive subjects (titer ≥ 10) had lower H1N1pdm09 geometric mean HAI titers (GMT) after concurrent than separate vaccinations (320.0 vs 476.5, p=0.039), but both exceeded the GM responses of initially naïve participants, which were unaffected by concurrent TIV. Responses to TIV were not lower after concurrent than separate vaccination. Adverse event rates were not increased by concurrent vaccinations above those with H1N1pdm09 vaccine alone. This adjuvanted H1N1pdm09 vaccine was immunogenic and compatible with concurrently administered TIV. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. 40 CFR 798.2450 - Inhalation toxicity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... study. (2) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group. Except for treatment with the test substance, animals in the control group... generate an appropriate concentration of the substance in the atmosphere, a vehicle control group shall be...

  19. 40 CFR 798.2450 - Inhalation toxicity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... study. (2) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group. Except for treatment with the test substance, animals in the control group... generate an appropriate concentration of the substance in the atmosphere, a vehicle control group shall be...

  20. 40 CFR 798.2450 - Inhalation toxicity.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... study. (2) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group. Except for treatment with the test substance, animals in the control group... generate an appropriate concentration of the substance in the atmosphere, a vehicle control group shall be...

  1. 40 CFR 798.2450 - Inhalation toxicity.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... study. (2) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group. Except for treatment with the test substance, animals in the control group... generate an appropriate concentration of the substance in the atmosphere, a vehicle control group shall be...

  2. 40 CFR 798.2450 - Inhalation toxicity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... study. (2) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group. Except for treatment with the test substance, animals in the control group... generate an appropriate concentration of the substance in the atmosphere, a vehicle control group shall be...

  3. Bipedal gait model for precise gait recognition and optimal triggering in foot drop stimulator: a proof of concept.

    PubMed

    Shaikh, Muhammad Faraz; Salcic, Zoran; Wang, Kevin I-Kai; Hu, Aiguo Patrick

    2018-03-10

    Electrical stimulators are often prescribed to correct foot drop walking. However, commercial foot drop stimulators trigger inappropriately under certain non-gait scenarios. Past research addressed this limitation by defining stimulation control based on an automaton of the gait cycle executed by the foot-drop-affected limb/foot only. Since gait is a collaborative activity of both feet, this research highlights the role of the normal foot for robust gait detection and stimulation triggering. A novel bipedal gait model is proposed in which the gait cycle is realized as an automaton based on concurrent gait sub-phases (states) from each foot. The input for state transition is fused information from feet-worn pressure and inertial sensors. Thereafter, a bipedal gait model-based stimulation control algorithm is developed. As a feasibility study, the bipedal gait model and stimulation control are evaluated in a real-time simulation on normal and simulated foot drop gait measurements from 16 able-bodied participants with three speed variations, under inappropriate triggering scenarios, and with foot drop rehabilitation exercises. Also, the stimulation control employed in commercial foot drop stimulators and single-foot gait-based foot drop stimulators are compared alongside. Gait detection accuracy (98.9%) and precise triggering under all investigations prove the bipedal gait model's reliability. This suggests that gait detection leveraging bipedal periodicity is a promising strategy to rectify prevalent stimulation triggering deficiencies in commercial foot drop stimulators. Graphical abstract: Bipedal information-based gait recognition and stimulation triggering.

  4. New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks.

    PubMed

    Bouchard, M

    2001-01-01

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce a better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
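
    A minimal sketch of the standard exponentially-weighted recursive-least-squares update that such algorithms build on, here for a linear-in-parameters model rather than the paper's neural-network controller:

      import numpy as np

      def rls_update(w, P, x, d, lam=0.99):
          # one exponentially-weighted RLS step for a linear model y = w . x
          e = d - w @ x                        # a-priori error
          k = P @ x / (lam + x @ P @ x)        # gain vector
          w = w + k * e                        # parameter update
          P = (P - np.outer(k, x @ P)) / lam   # inverse-correlation matrix update
          return w, P

      rng = np.random.default_rng(3)
      n = 4
      w, P = np.zeros(n), 1e3 * np.eye(n)
      w_true = rng.normal(size=n)
      for _ in range(200):
          x = rng.normal(size=n)
          w, P = rls_update(w, P, x, d=w_true @ x)
      print(np.round(w - w_true, 6))           # ~0: parameters recovered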

  5. Development of model reference adaptive control theory for electric power plant control applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mabius, L.E.

    1982-09-15

    The scope of this effort includes the theoretical development of a multi-input, multi-output (MIMO) Model Reference Control (MRC) algorithm (i.e., model-following control law), a Model Reference Adaptive Control (MRAC) algorithm, and the formulation of a nonlinear model of a typical electric power plant. Previous single-input, single-output MRAC algorithm designs have been generalized to MIMO MRAC designs using the MIMO MRC algorithm. This MRC algorithm, which has been developed using Command Generator Tracker methodologies, represents the steady-state behavior (in the adaptive sense) of the MRAC algorithm. The MRC algorithm is a fundamental component in the MRAC design and stability analysis. An enhanced MRC algorithm, which has been developed for systems with more controls than regulated outputs, alleviates the MRC stability constraint of stable plant transmission zeroes. The nonlinear power plant model is based on the Cromby model with the addition of a governor valve management algorithm, turbine dynamics, and turbine interactions with extraction flows. An application of the MRC algorithm to a linearization of this model demonstrates its applicability to power plant systems. In particular, the generated power changes at 7% per minute while throttle pressure and temperature, reheat temperature, and drum level are held constant with a reasonable level of control. The enhanced algorithm significantly reduces control fluctuations without modifying the output response.

  6. [Concurrent chemoradiation in lung cancer].

    PubMed

    Girard, Nicolas; Mornex, Françoise

    2005-12-01

    Concurrent chemoradiation has become, over the last 15 years, the standard treatment for locally advanced non-small cell lung cancer, either as definitive therapy in non-resectable tumors or in a neoadjuvant setting in potentially resectable tumors. Associating sequential and concurrent schedules, by administering chemotherapy before or after concurrent chemoradiation, has recently been investigated, but the best sequence remains a matter of controversy. Increasing local control and survival after definitive chemoradiation seems possible not only by using optimized radiation fractionation schedules and escalated total doses, but also by associating more convenient and less toxic chemotherapy agents at the right cytotoxic or radio-sensitizing dose. Moreover, recent data have suggested that surgery following induction chemoradiation is feasible and effective in selected patients without mediastinal node involvement, if a complete resection can be performed. In patients with localized small cell lung cancer, early concurrent chemoradiation with platinum and etoposide has been recognized as the state-of-the-art treatment. The increasing number of ongoing trials including modern radiation schedules combined with newer chemotherapy agents shows that chemoradiation is one of the most promising therapeutic strategies in thoracic oncology.

  7. The effects of a concurrent task on walking in persons with transfemoral amputation compared to persons without limb loss.

    PubMed

    Morgan, Sara J; Hafner, Brian J; Kelly, Valerie E

    2016-08-01

    Many people with lower limb loss report the need to concentrate on walking. This may indicate increased reliance on cognitive resources when walking compared to individuals without limb loss. This study quantified changes in walking associated with addition of a concurrent cognitive task in persons with transfemoral amputation using microprocessor knees compared to age- and sex-matched controls. Observational, cross-sectional study. Quantitative motion analysis was used to assess walking under both single-task (walking alone) and dual-task (walking while performing a cognitive task) conditions. Primary outcomes were walking speed, step width, step time asymmetry, and cognitive task response latency and accuracy. Repeated-measures analysis of variance was used to examine the effects of task (single-task and dual-task) and group (transfemoral amputation and control) for each outcome. No significant interactions between task and group were observed (all p > 0.11) indicating that a cognitive task did not differentially affect walking between groups. However, walking was slower with wider steps and more asymmetry in people with transfemoral amputation compared to controls under both conditions. Although there were significant differences in walking between people with transfemoral amputation and matched controls, the effects of a concurrent cognitive task on walking were similar between groups. The addition of a concurrent task did not differentially affect walking outcomes in people with and without transfemoral amputation. However, compared to people without limb loss, people with transfemoral amputation adopted a conservative walking strategy. This strategy may reduce the need to concentrate on walking but also contributed to notable gait deviations. © The International Society for Prosthetics and Orthotics 2015.

  8. High-dose versus standard-dose radiotherapy with concurrent chemotherapy in stages II-III esophageal cancer.

    PubMed

    Suh, Yang-Gun; Lee, Ik Jae; Koom, Wong Sub; Cha, Jihye; Lee, Jong Young; Kim, Soo Kon; Lee, Chang Geol

    2014-06-01

    In this study, we investigated the effects of radiotherapy ≥60 Gy in the setting of concurrent chemo-radiotherapy for treating patients with Stages II-III esophageal cancer. A total of 126 patients treated with 5-fluorouracil-based concurrent chemo-radiotherapy between January 1998 and February 2008 were retrospectively reviewed. Among these patients, 49 received a total radiation dose of <60 Gy (standard-dose group), while 77 received a total radiation dose of ≥60 Gy (high-dose group). The median doses in the standard- and high-dose groups were 54 Gy (range, 45-59.4 Gy) and 63 Gy (range, 60-81 Gy), respectively. The high-dose group showed significantly improved locoregional control (2-year locoregional control rate, 69 versus 32%, P < 0.01) and progression-free survival (2-year progression-free survival, 47 versus 20%, P = 0.01) compared with the standard-dose group. Median overall survival in the high- and the standard-dose groups was 28 and 18 months, respectively (P = 0.26). In multivariate analysis, radiotherapy of 60 Gy or higher was a significant prognostic factor for improved locoregional control, progression-free survival and overall survival. No significant differences were found in frequencies of late radiation pneumonitis, post-treatment esophageal stricture or treatment-related mortality between the two groups. High-dose radiotherapy of 60 Gy or higher with concurrent chemotherapy improved locoregional control and progression-free survival without a significant increase in treatment-related toxicity in patients with Stages II-III esophageal cancer. Our study could provide the basis for future randomized clinical trials. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Genetic Algorithm-Based Motion Estimation Method using Orientations and EMGs for Robot Controls

    PubMed Central

    Chae, Jeongsook; Jin, Yong; Sung, Yunsick

    2018-01-01

    Demand for interactive wearable devices is rapidly increasing with the development of smart devices. To accurately utilize wearable devices for remote robot controls, limited data should be analyzed and utilized efficiently. For example, the motions made with a wearable device, called the Myo device, can be estimated by measuring its orientation and calculating a Bayesian probability based on these orientation data. Given that the Myo device can measure various types of data, the accuracy of its motion estimation can be increased by utilizing these additional types of data. This paper proposes a motion estimation method based on weighted Bayesian probability and concurrently measured data: orientations and electromyograms (EMG). The most probable of the estimated motions is treated as the final estimated motion. Thus, recognition accuracy can be improved when compared to traditional methods that employ only a single type of data. In our experiments, seven subjects performed five predefined motions. When only orientation is used, as in the traditional methods, the sum of the motion estimation errors is 37.3%; likewise, when only EMG data are used, the error is also 37.3%. The proposed combined method has an error of 25%. Therefore, the proposed method reduces motion estimation errors by 12%. PMID:29324641
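
    A minimal sketch of the weighted fusion idea the record describes: per-modality class probabilities combined by a weighted product rule. The motion names, probabilities, and equal weights below are illustrative assumptions:

      import numpy as np

      motions = ['fist', 'wave_in', 'wave_out', 'spread', 'tap']       # hypothetical classes
      p_orient = np.array([0.30, 0.35, 0.15, 0.10, 0.10])   # P(motion | orientation), illustrative
      p_emg    = np.array([0.10, 0.55, 0.15, 0.10, 0.10])   # P(motion | EMG), illustrative
      w_o, w_e = 0.5, 0.5                                   # modality weights (assumed equal)

      fused = p_orient ** w_o * p_emg ** w_e                # weighted product rule
      fused /= fused.sum()
      print(motions[int(np.argmax(fused))])                 # most probable motion: wave_in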

  10. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth; Geveci, Berk

    2014-11-01

    The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends infer that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It can be inferred that efficient concurrency on exascale machines requires a massive amount of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based on what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today's distributed memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive amount of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.
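
    A minimal stand-in illustrating the worklet idea: a stateless per-element operation that a scheduler can map over data on millions of threads, in contrast to a stateful pipeline filter (Python used purely for illustration):

      import numpy as np

      def magnitude_worklet(v):
          # stateless per-element operation: no pipeline state, so a scheduler
          # may safely run one instance per thread across millions of threads
          return np.sqrt((v * v).sum())

      vectors = np.random.default_rng(4).normal(size=(10, 3))
      out = np.array([magnitude_worklet(v) for v in vectors])   # the map a runtime would parallelize
      print(out.shape)                                          # (10,)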

  11. 46 CFR 62.30-5 - Independence.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Reliability and Safety Criteria, All Automated Vital Systems § 62.30-5 Independence. (a) Single non-concurrent failures in control, alarm, or instrumentation systems, and their logical consequences, must not prevent...)(2) and (b)(3) of this section, primary control, alternate control, safety control, and alarm and...

  12. 46 CFR 62.30-5 - Independence.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Reliability and Safety Criteria, All Automated Vital Systems § 62.30-5 Independence. (a) Single non-concurrent failures in control, alarm, or instrumentation systems, and their logical consequences, must not prevent...)(2) and (b)(3) of this section, primary control, alternate control, safety control, and alarm and...

  13. 46 CFR 62.30-5 - Independence.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Reliability and Safety Criteria, All Automated Vital Systems § 62.30-5 Independence. (a) Single non-concurrent failures in control, alarm, or instrumentation systems, and their logical consequences, must not prevent...)(2) and (b)(3) of this section, primary control, alternate control, safety control, and alarm and...

  14. 46 CFR 62.30-5 - Independence.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Reliability and Safety Criteria, All Automated Vital Systems § 62.30-5 Independence. (a) Single non-concurrent failures in control, alarm, or instrumentation systems, and their logical consequences, must not prevent...)(2) and (b)(3) of this section, primary control, alternate control, safety control, and alarm and...

  15. 46 CFR 62.30-5 - Independence.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Reliability and Safety Criteria, All Automated Vital Systems § 62.30-5 Independence. (a) Single non-concurrent failures in control, alarm, or instrumentation systems, and their logical consequences, must not prevent...)(2) and (b)(3) of this section, primary control, alternate control, safety control, and alarm and...

  16. 40 CFR 798.2650 - Oral toxicity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group or, if a vehicle is used in administering the test substance, a vehicle control group. If... vehicle control groups are required. (3) Satellite group. (Rodent) A satellite group of 20 animals (10...

  17. 40 CFR 798.2650 - Oral toxicity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group or, if a vehicle is used in administering the test substance, a vehicle control group. If... vehicle control groups are required. (3) Satellite group. (Rodent) A satellite group of 20 animals (10...

  18. 40 CFR 798.2650 - Oral toxicity.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group or, if a vehicle is used in administering the test substance, a vehicle control group. If... vehicle control groups are required. (3) Satellite group. (Rodent) A satellite group of 20 animals (10...

  19. 40 CFR 798.2650 - Oral toxicity.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group or, if a vehicle is used in administering the test substance, a vehicle control group. If... vehicle control groups are required. (3) Satellite group. (Rodent) A satellite group of 20 animals (10...

  20. 40 CFR 798.2650 - Oral toxicity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group or, if a vehicle is used in administering the test substance, a vehicle control group. If... vehicle control groups are required. (3) Satellite group. (Rodent) A satellite group of 20 animals (10...

  1. Concurrent use of magnetic bearings for rotor support and force sensing for the nondestructive evaluation of manufacturing processes

    NASA Astrophysics Data System (ADS)

    Kasarda, Mary; Imlach, Joseph; Balaji, P. A.; Marshall, Jeremy T.

    2000-06-01

    Active magnetic bearings are a proven technology in turbomachinery applications and they offer considerable promise for improving the performance of manufacturing processes. The Active Magnetic Bearing (AMB) is a feedback mechanism that supports a spinning shaft by levitating it in a magnetic field. AMBs have significantly higher surface speed capability than rolling element bearings and they eliminate the potential for product contamination by eliminating the requirement for bearing lubrication. In addition, one of the most promising capabilities for manufacturing applications is the ability of the AMB to act concurrently as both a support bearing and non-invasive force sensor. The feedback nature of the AMB allows for its use as a load cell to continuously measure shaft forces necessary for levitation based on information about the magnetic flux density in the air gaps. This measurement capability may be exploited to improve the process control of such products as textile fibers and photographic films where changes in shaft loads may indicate changes in product quality. This paper discusses the operation of AMBs and their potential benefits in manufacturing equipment along with results from research addressing accurate AMB force sensing performance in field applications. Specifically, results from the development of enhanced AMB measurement algorithms to better account for magnetic fringing and leakage effects to improve the accuracy of this technique are presented. Results from the development of a new on-line calibration procedure for robust in-situ calibration of AMBs in a field application such as a manufacturing plant scenario are also presented including results of Magnetic Finite Element Analysis (MFEA) verification of the procedure.
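
    The load-cell use of an AMB rests on the standard pole-force relation F = B²A/(2μ₀) per pole face; a minimal numeric sketch with illustrative geometry, ignoring the fringing and leakage corrections the paper develops:

      import math

      mu0 = 4 * math.pi * 1e-7    # vacuum permeability (H/m)
      B = 0.6                     # measured air-gap flux density (T), illustrative
      A = 6e-4                    # pole face area (m^2), illustrative
      F = B ** 2 * A / (2 * mu0)  # ideal single-pole force from the flux measurement
      print(round(F, 1))          # ~85.9 N before fringing/leakage corrections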

  2. LG tools for asymmetric wargaming

    NASA Astrophysics Data System (ADS)

    Stilman, Boris; Yakhnis, Alex; Yakhnis, Vladimir

    2002-07-01

    Asymmetric operations represent conflict where one of the sides would apply military power to influence the political and civil environment, to facilitate diplomacy, and to interrupt specified illegal activities. This is a special type of conflict where the participants do not initiate full-scale war. Instead, the sides may be engaged in a limited open conflict, or one or several sides may covertly engage another side using unconventional or less conventional methods of engagement. These may include peace operations, combating terrorism, counterdrug operations, arms control, support of insurgencies or counterinsurgencies, and shows of force. An asymmetric conflict can be represented as several concurrent interlinked games of various kinds: military, transportation, economic, political, etc. Thus, various actions of peace violators, terrorists, drug traffickers, etc., can be expressed via moves in different interlinked games. LG tools allow us to fully capture the specificity of asymmetric conflicts employing the major LG concept of hypergame. A hypergame allows modeling concurrent interlinked processes taking place in geographically remote locations at different levels of resolution and time scale. For example, it allows us to model an antiterrorist operation taking place simultaneously in a number of countries around the globe and involving a wide range of entities, from individuals to combat units to governments. Additionally, LG allows us to model all sides of the conflict at their level of sophistication. Intelligent stakeholders are represented by means of LG-generated intelligent strategies. To generate those strategies, in addition to its own mathematical intelligence, the LG algorithm may incorporate the intelligence of top-level experts in the respective problem domains. LG models the individual differences between intelligent stakeholders. The LG tools make it possible to incorporate most of the known traits of a stakeholder, i.e., the real personalities involved in the conflict with their specific individual styles.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Gang; Hu Wei; Wang Jianhua

    Purpose: To investigate the feasibility and efficacy of concurrent chemoradiation in combination with erlotinib for locally advanced esophageal carcinoma. Methods and Materials: Twenty-four patients with locally advanced esophageal carcinoma were treated with concurrent chemoradiotherapy. A daily fraction of 2.0 Gy was prescribed to a total dose of 60 Gy over 6 weeks. Concurrent paclitaxel (135 mg/m², d1) and cisplatin (20 mg/m², d1-3) were administered on Day 1 and Day 29 of the radiotherapy. Erlotinib, an oral epidermal growth factor receptor-tyrosine kinase inhibitor, was taken by every patient at the dose of 150 mg daily during the chemoradiotherapy. Results: The median follow-up of the 24 patients was 18.6 months (range, 7.1-29.6 months). The 2-year overall survival, local-regional control, and relapse-free survival were 70.1% (95% CI, 50.4-90%), 87.5% (95% CI, 73.5-100%), and 57.4% (95% CI, 36.3-78.7%), respectively. During the chemoradiotherapy, the incidences of acute toxicities of Grade 3 or greater, such as leucopenia and thrombocytopenia, were 16.7% (4/24) and 8.3% (2/24). Conclusions: Application of concurrent chemoradiotherapy in combination with erlotinib for locally advanced esophageal carcinoma yielded satisfactory 2-year overall survival and local-regional control. The toxicities were well tolerated.

  4. Research on intelligent algorithm of electro - hydraulic servo control system

    NASA Astrophysics Data System (ADS)

    Wang, Yannian; Zhao, Yuhui; Liu, Chengtao

    2017-09-01

    To accommodate the nonlinear characteristics of the electro-hydraulic servo control system and the complex interference encountered in industrial settings, a fuzzy PID switching learning algorithm is proposed, and a corresponding controller is designed and applied to the electro-hydraulic servo system. The designed controller not only combines the advantages of fuzzy control and PID control, but also introduces a learning algorithm into the switching function; learning the three parameters of the switching function avoids system instability during switching between the fuzzy and PID control laws. It also makes the transition between the two control algorithms smoother than in a conventional fuzzy PID scheme.
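
    A minimal sketch of the switching idea follows, assuming a sigmoid switching function of the tracking error whose steepness and threshold would be the learned parameters; the fuzzy stage is reduced to a saturating stand-in, and all gains are invented.

    ```python
    import math

    class FuzzyPidSwitchingController:
        """Illustrative blend of a fuzzy controller and a PID controller.

        A sigmoid switching function of the tracking error selects between the
        two laws; its steepness and threshold are the parameters the paper's
        learning algorithm would tune (fixed, hypothetical values here).
        """
        def __init__(self, kp=2.0, ki=0.5, kd=0.1, steepness=8.0, threshold=0.2):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.steepness, self.threshold = steepness, threshold
            self.integral = 0.0
            self.prev_error = 0.0

        def fuzzy_output(self, error, d_error):
            # Stand-in for a real fuzzy inference step: a saturating law that
            # acts aggressively on large errors, as a fuzzy rule base would.
            return 3.0 * math.tanh(error + 0.5 * d_error)

        def pid_output(self, error, d_error, dt):
            self.integral += error * dt
            return self.kp * error + self.ki * self.integral + self.kd * d_error

        def update(self, error, dt):
            d_error = (error - self.prev_error) / dt
            self.prev_error = error
            # Smooth switch: weight -> 1 (fuzzy) for large |error|, -> 0 (PID)
            # near the setpoint, avoiding the hard-switch transient.
            w = 1.0 / (1.0 + math.exp(-self.steepness * (abs(error) - self.threshold)))
            return w * self.fuzzy_output(error, d_error) + \
                   (1 - w) * self.pid_output(error, d_error, dt)
    ```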

  5. State Tracking and Fault Diagnosis for Dynamic Systems Using Labeled Uncertainty Graph.

    PubMed

    Zhou, Gan; Feng, Wenquan; Zhao, Qi; Zhao, Hongbo

    2015-11-05

    Cyber-physical systems such as autonomous spacecraft, power plants and automotive systems become more vulnerable to unanticipated failures as their complexity increases. Accurate tracking of system dynamics and fault diagnosis are essential. This paper presents an efficient state estimation method for dynamic systems modeled as concurrent probabilistic automata. First, the Labeled Uncertainty Graph (LUG) method in the planning domain is introduced to describe the state tracking and fault diagnosis processes. Because the system model is probabilistic, the Monte Carlo technique is employed to sample the probability distribution of belief states. In addition, to address the sample impoverishment problem, an innovative look-ahead technique is proposed to recursively generate most likely belief states without exhaustively checking all possible successor modes. The overall algorithms incorporate two major steps: a roll-forward process that estimates system state and identifies faults, and a roll-backward process that analyzes possible system trajectories once the faults have been detected. We demonstrate the effectiveness of this approach by applying it to a real world domain: the power supply control unit of a spacecraft.
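
    The Monte Carlo sampling of belief states can be pictured with a particle-style sketch such as the one below; the mode set, transition probabilities, and observation model are invented stand-ins for the paper's concurrent probabilistic automata, and the look-ahead pruning is not reproduced.

    ```python
    import random

    random.seed(0)
    transitions = {            # mode -> [(next_mode, probability), ...]
        "nominal": [("nominal", 0.97), ("sensor_fault", 0.02), ("power_fault", 0.01)],
        "sensor_fault": [("sensor_fault", 1.0)],
        "power_fault": [("power_fault", 1.0)],
    }

    def obs_likelihood(mode, observation):
        # Placeholder observation model: P(observation | mode).
        table = {
            "ok":          {"nominal": 0.90, "sensor_fault": 0.20, "power_fault": 0.10},
            "fault_alarm": {"nominal": 0.05, "sensor_fault": 0.70, "power_fault": 0.80},
        }
        return table[observation][mode]

    def step_belief(particles, observation, n_particles=1000):
        """Propagate a sampled belief one step and reweight by the observation."""
        weighted = []
        for _ in range(n_particles):
            mode = random.choice(particles)            # sample from current belief
            nexts, probs = zip(*transitions[mode])
            new_mode = random.choices(nexts, weights=probs, k=1)[0]
            weighted.append((new_mode, obs_likelihood(new_mode, observation)))
        # Resample in proportion to weight: the likeliest modes survive.
        modes, weights = zip(*weighted)
        return random.choices(modes, weights=weights, k=n_particles)

    belief = ["nominal"] * 1000
    for obs in ["ok", "fault_alarm", "fault_alarm"]:
        belief = step_belief(belief, obs)
    print(max(set(belief), key=belief.count))  # most probable diagnosis
    ```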

  6. Modeling Bivariate Longitudinal Hormone Profiles by Hierarchical State Space Models

    PubMed Central

    Liu, Ziyue; Cappola, Anne R.; Crofford, Leslie J.; Guo, Wensheng

    2013-01-01

    The hypothalamic-pituitary-adrenal (HPA) axis is crucial in coping with stress and maintaining homeostasis. Hormones produced by the HPA axis exhibit both complex univariate longitudinal profiles and complex relationships among different hormones. Consequently, modeling these multivariate longitudinal hormone profiles is a challenging task. In this paper, we propose a bivariate hierarchical state space model, in which each hormone profile is modeled by a hierarchical state space model, with both population-average and subject-specific components. The bivariate model is constructed by concatenating the univariate models based on the hypothesized relationship. Because of the flexibility of the state space framework, the resultant models can not only handle complex individual profiles, but also incorporate complex relationships between two hormones, including both concurrent and feedback relationships. Estimation and inference are based on marginal likelihood and posterior means and variances. Computationally efficient Kalman filtering and smoothing algorithms are used for implementation. Application of the proposed method to a study of chronic fatigue syndrome and fibromyalgia reveals that the relationships between adrenocorticotropic hormone and cortisol in the patient group are weaker than in healthy controls. PMID:24729646
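
    For readers unfamiliar with the filtering step, here is a minimal Kalman filter recursion of the kind such state space models rely on; the matrices implement a toy local-level model, not the paper's bivariate hormone model.

    ```python
    import numpy as np

    def kalman_filter(y, F, H, Q, R, x0, P0):
        """Run the standard predict/update recursion over observations y."""
        x, P = x0, P0
        filtered = []
        for obs in y:
            # Predict: propagate state mean and covariance through dynamics F.
            x = F @ x
            P = F @ P @ F.T + Q
            # Update: correct with the new observation via the Kalman gain.
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (obs - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
            filtered.append(x.copy())
        return np.array(filtered)

    # Local-level model: the latent mean follows a random walk, observed noisily.
    F = np.array([[1.0]]); H = np.array([[1.0]])
    Q = np.array([[0.01]]); R = np.array([[0.5]])
    y = [np.array([v]) for v in (1.1, 0.9, 1.4, 1.2, 1.6)]
    print(kalman_filter(y, F, H, Q, R, x0=np.array([0.0]), P0=np.array([[1.0]])))
    ```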

  7. Potential controls of isoprene in the surface ocean

    NASA Astrophysics Data System (ADS)

    Hackenberg, S. C.; Andrews, S. J.; Airs, R.; Arnold, S. R.; Bouman, H. A.; Brewin, R. J. W.; Chance, R. J.; Cummings, D.; Dall'Olmo, G.; Lewis, A. C.; Minaeian, J. K.; Reifel, K. M.; Small, A.; Tarran, G. A.; Tilstone, G. H.; Carpenter, L. J.

    2017-04-01

    Isoprene surface ocean concentrations and vertical distribution, atmospheric mixing ratios, and calculated sea-to-air fluxes spanning approximately 125° of latitude (80°N-45°S) over the Arctic and Atlantic Oceans are reported. Oceanic isoprene concentrations were associated with a number of concurrently monitored biological variables including chlorophyll a (Chl a), photoprotective pigments, integrated primary production (intPP), and cyanobacterial cell counts, with higher isoprene concentrations relative to all respective variables found at sea surface temperatures greater than 20°C. The correlation between isoprene and the sum of photoprotective carotenoids, which is reported here for the first time, was the most consistent across all cruises. Parameterizations based on linear regression analyses of these relationships perform well for Arctic and Atlantic data, producing a better fit to observations than an existing Chl a-based parameterization. Global extrapolation of isoprene surface water concentrations using satellite-derived Chl a and intPP reproduced general trends in the in situ data and absolute values within a factor of 2 between 60% and 85%, depending on the data set and algorithm used.

  8. Modeling Bivariate Longitudinal Hormone Profiles by Hierarchical State Space Models.

    PubMed

    Liu, Ziyue; Cappola, Anne R; Crofford, Leslie J; Guo, Wensheng

    2014-01-01

    The hypothalamic-pituitary-adrenal (HPA) axis is crucial in coping with stress and maintaining homeostasis. Hormones produced by the HPA axis exhibit both complex univariate longitudinal profiles and complex relationships among different hormones. Consequently, modeling these multivariate longitudinal hormone profiles is a challenging task. In this paper, we propose a bivariate hierarchical state space model, in which each hormone profile is modeled by a hierarchical state space model, with both population-average and subject-specific components. The bivariate model is constructed by concatenating the univariate models based on the hypothesized relationship. Because of the flexibility of the state space framework, the resultant models can not only handle complex individual profiles, but also incorporate complex relationships between two hormones, including both concurrent and feedback relationships. Estimation and inference are based on marginal likelihood and posterior means and variances. Computationally efficient Kalman filtering and smoothing algorithms are used for implementation. Application of the proposed method to a study of chronic fatigue syndrome and fibromyalgia reveals that the relationships between adrenocorticotropic hormone and cortisol in the patient group are weaker than in healthy controls.

  9. An AES chip with DPA resistance using hardware-based random order execution

    NASA Astrophysics Data System (ADS)

    Bo, Yu; Xiangyu, Li; Cong, Chen; Yihe, Sun; Liji, Wu; Xiangmin, Zhang

    2012-06-01

    This paper presents an AES (advanced encryption standard) chip that combats differential power analysis (DPA) side-channel attacks through hardware-based random order execution. Both the decryption and encryption procedures of AES are implemented on the chip. A fine-grained dataflow architecture is proposed, which dynamically exploits intrinsic byte-level independence in the algorithm. A novel circuit called an HMF (Hold-Match-Fetch) unit is proposed for random control, which randomly sets execution orders for concurrent operations. The AES chip was manufactured in SMIC 0.18 μm technology. The average energy for encrypting one group of plaintexts (128-bit secret keys) is 19 nJ. The core area is 0.43 mm2. A sophisticated experimental setup was built to test the DPA resistance. Measurement-based experimental results show that not even one byte of a secret key could be disclosed from our chip in random mode after 64,000 power traces were used in the DPA attack. Compared with the corresponding fixed-order execution, hardware-based random-order execution improves DPA resistance by at least 21 times.
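
    The random-order principle can be shown in a few lines of software: independent byte-level steps (such as the 16 S-box lookups of SubBytes) are executed in a fresh random order on every run, which is the role the HMF unit plays in hardware. The S-box below is a stand-in permutation, not the Rijndael table.

    ```python
    import random

    # Stand-in permutation; real AES uses the fixed Rijndael S-box.
    SBOX = list(range(256))
    random.Random(42).shuffle(SBOX)

    def sub_bytes_random_order(state):
        """Apply the S-box to all 16 state bytes in a random order.

        Because the 16 lookups are mutually independent, the result is the
        same for every order, but the power trace no longer correlates with
        any fixed byte position.
        """
        order = list(range(16))
        random.shuffle(order)   # on-chip, the HMF unit randomizes this order
        for i in order:
            state[i] = SBOX[state[i]]
        return state

    state = list(range(16))
    print(sub_bytes_random_order(state))
    ```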

  10. Boiler-turbine control system design using a genetic algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dimeo, R.; Lee, K.Y.

    1995-12-01

    This paper discusses the application of a genetic algorithm to control system design for a boiler-turbine plant. In particular the authors study the ability of the genetic algorithm to develop a proportional-integral (PI) controller and a state feedback controller for a non-linear multi-input/multi-output (MIMO) plant model. The plant model is presented along with a discussion of the inherent difficulties in such controller development. A sketch of the genetic algorithm (GA) is presented and its strategy as a method of control system design is discussed. Results are presented for two different control systems that have been designed with the genetic algorithm.
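
    A toy version of GA-based gain tuning is sketched below: candidate (Kp, Ki) pairs are evolved against a step-response cost on a simple first-order plant. The plant, cost function, and GA settings are illustrative, not the boiler-turbine model of the paper.

    ```python
    import random

    def step_cost(kp, ki, dt=0.05, steps=200):
        """Integral absolute error of a PI loop around dy/dt = -y + u."""
        y, integ, cost = 0.0, 0.0, 0.0
        for _ in range(steps):
            e = 1.0 - y                      # unit step reference
            integ += e * dt
            u = kp * e + ki * integ          # PI control law
            y += dt * (-y + u)               # first-order plant update
            cost += abs(e) * dt
        return cost

    def evolve(pop_size=30, generations=40):
        pop = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda g: step_cost(*g))
            parents = pop[:pop_size // 2]    # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                # Uniform crossover plus Gaussian mutation, clipped to >= 0.
                kp = max(random.choice((a[0], b[0])) + random.gauss(0, 0.3), 0.0)
                ki = max(random.choice((a[1], b[1])) + random.gauss(0, 0.3), 0.0)
                children.append((kp, ki))
            pop = parents + children
        return min(pop, key=lambda g: step_cost(*g))

    print(evolve())  # gains near the optimum for this toy plant
    ```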

  11. Development of a measurement approach to assess time children participate in organized sport, active travel, outdoor active play, and curriculum-based physical activity.

    PubMed

    Borghese, Michael M; Janssen, Ian

    2018-03-22

    Children participate in four main types of physical activity: organized sport, active travel, outdoor active play, and curriculum-based physical activity. The objective of this study was to develop a valid approach that can be used to concurrently measure time spent in each of these types of physical activity. Two samples (sample 1: n = 50; sample 2: n = 83) of children aged 10-13 wore an accelerometer and a GPS watch continuously over 7 days. They also completed a log where they recorded the start and end times of organized sport sessions. Sample 1 also completed an outdoor time log where they recorded the times they went outdoors and a description of the outdoor activity. Sample 2 also completed a curriculum log where they recorded times they participated in physical activity (e.g., physical education) during class time. We describe the development of a measurement approach that can be used to concurrently assess the time children spend participating in specific types of physical activity. The approach uses a combination of data from accelerometers, GPS, and activity logs and relies on merging and then processing these data using several manual (e.g., data checks and cleaning) and automated (e.g., algorithms) procedures. In the new measurement approach time spent in organized sport is estimated using the activity log. Time spent in active travel is estimated using an existing algorithm that uses GPS data. Time spent in outdoor active play is estimated using an algorithm (with a sensitivity and specificity of 85%) that was developed using data collected in sample 1 and which uses all of the data sources. Time spent in curriculum-based physical activity is estimated using an algorithm (with a sensitivity of 78% and specificity of 92%) that was developed using data collected in sample 2 and which uses accelerometer data collected during class time. There was evidence of excellent intra- and inter-rater reliability of the estimates for all of these types of physical activity when the manual steps were duplicated. This novel measurement approach can be used to estimate the time that children participate in different types of physical activity.

  12. A comparison of force control algorithms for robots in contact with flexible environments

    NASA Technical Reports Server (NTRS)

    Wilfinger, Lee S.

    1992-01-01

    In order to perform useful tasks, the robot end-effector must come into contact with its environment. For such tasks, force feedback is frequently used to control the interaction forces. Control of these forces is complicated by the fact that the flexibility of the environment affects the stability of the force control algorithm. Because of the wide variety of different materials present in everyday environments, it is necessary to gain an understanding of how environmental flexibility affects the stability of force control algorithms. This report presents the theory and experimental results of two force control algorithms: Position Accommodation Control and Direct Force Servoing. The implementation of each of these algorithms on a two-arm robotic test bed located in the Center for Intelligent Robotic Systems for Space Exploration (CIRSSE) is discussed in detail. The behavior of each algorithm when contacting materials of different flexibility is experimentally determined. In addition, several robustness improvements to the Direct Force Servoing algorithm are suggested and experimentally verified. Finally, a qualitative comparison of the force control algorithms is provided, along with a description of a general tuning process for each control method.

  13. Racial-ethnic differences in all-cause and HIV mortality, Florida, 2000–2011

    PubMed Central

    Trepka, Mary Jo; Fennie, Kristopher P.; Sheehan, Diana M.; Niyonsenga, Theophile; Lieb, Spencer; Maddox, Lorene M.

    2016-01-01

    Purpose We compared all-cause and human immunodeficiency virus (HIV) mortality in a population-based, HIV-infected cohort. Methods Using records of people diagnosed with HIV during 2000–2009 from the Florida Enhanced HIV/Acquired Immunodeficiency Syndrome (AIDS) Reporting System, we conducted a proportional hazards analysis for all-cause mortality and a competing risk analysis for HIV mortality through 2011 controlling for individual level factors, neighborhood poverty, and rural/urban status and stratifying by concurrent AIDS status (AIDS within 3 months of HIV diagnosis). Results Of 59,880 HIV-infected people, 32.2% had concurrent AIDS, and 19.3% died. Adjusting for period of diagnosis, age group, sex, country of birth, HIV transmission mode, area level poverty and rural/urban status, non-Hispanic Black (NHB) and Hispanic people had an elevated adjusted hazards ratio (aHR) for HIV mortality relative to non-Hispanic whites (NHB concurrent AIDS: aHR 1.34, 95% CI 1.23–1.47; NHB without concurrent AIDS: aHR 1.41, 95% CI 1.26–1.57; Hispanic concurrent AIDS: aHR 1.18, 95% CI 1.05–1.32; Hispanic without concurrent AIDS: aHR 1.18, 95% CI 1.03–1.36). Conclusions Considering competing causes of death, NHB and Hispanic people had a higher risk of HIV mortality even among those without concurrent AIDS, indicating a need to identify and address barriers to HIV care in these populations. PMID:26948103

  14. A cognitive approach to classifying perceived behaviors

    NASA Astrophysics Data System (ADS)

    Benjamin, Dale Paul; Lyons, Damian

    2010-04-01

    This paper describes our work on integrating distributed, concurrent control in a cognitive architecture, and using it to classify perceived behaviors. We are implementing the Robot Schemas (RS) language in Soar. RS is a CSP-type programming language for robotics that controls a hierarchy of concurrently executing schemas. The behavior of every RS schema is defined using port automata. This provides precision to the semantics and also a constructive means of reasoning about the behavior and meaning of schemas. Our implementation uses Soar operators to build, instantiate and connect port automata as needed. Our approach is to use comprehension through generation (similar to NLSoar) to search for ways to construct port automata that model perceived behaviors. The generality of RS permits us to model dynamic, concurrent behaviors. A virtual world (Ogre) is used to test the accuracy of these automata. Soar's chunking mechanism is used to generalize and save these automata. In this way, the robot learns to recognize new behaviors.

  15. Expected Reachability-Time Games

    NASA Astrophysics Data System (ADS)

    Forejt, Vojtěch; Kwiatkowska, Marta; Norman, Gethin; Trivedi, Ashutosh

    In an expected reachability-time game (ERTG) two players, Min and Max, move a token along the transitions of a probabilistic timed automaton, so as to minimise and maximise, respectively, the expected time to reach a target. These games are concurrent since at each step of the game both players choose a timed move (a time delay and action under their control), and the transition of the game is determined by the timed move of the player who proposes the shorter delay. A game is turn-based if at any step of the game, all available actions are under the control of precisely one player. We show that while concurrent ERTGs are not always determined, turn-based ERTGs are positionally determined. Using the boundary region graph abstraction, and a generalisation of Asarin and Maler's simple function, we show that the decision problems related to computing the upper/lower values of concurrent ERTGs, and computing the value of turn-based ERTGs are decidable and their complexity is in NEXPTIME ∩ co-NEXPTIME.

  16. Training Recurrent Neural Networks With the Levenberg-Marquardt Algorithm for Optimal Control of a Grid-Connected Converter.

    PubMed

    Fu, Xingang; Li, Shuhui; Fairbank, Michael; Wunsch, Donald C; Alonso, Eduardo

    2015-09-01

    This paper investigates how to train a recurrent neural network (RNN) using the Levenberg-Marquardt (LM) algorithm as well as how to implement optimal control of a grid-connected converter (GCC) using an RNN. To successfully and efficiently train an RNN using the LM algorithm, a new forward accumulation through time (FATT) algorithm is proposed to calculate the Jacobian matrix required by the LM algorithm. This paper explores how to incorporate FATT into the LM algorithm. The results show that the combination of the LM and FATT algorithms trains RNNs better than the conventional backpropagation through time algorithm. This paper presents an analytical study on the optimal control of GCCs, including theoretically ideal optimal and suboptimal controllers. To overcome the inapplicability of the optimal GCC controller under practical conditions, a new RNN controller with an improved input structure is proposed to approximate the ideal optimal controller. The performance of an ideal optimal controller and a well-trained RNN controller was compared in close to real-life power converter switching environments, demonstrating that the proposed RNN controller can achieve close to ideal optimal control performance even under low sampling rate conditions. The excellent performance of the proposed RNN controller under challenging and distorted system conditions further indicates the feasibility of using an RNN to approximate optimal control in practical applications.
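
    The LM step itself is standard and can be sketched generically; in the paper the Jacobian would come from the proposed FATT recursion through the RNN, whereas the sketch below obtains it by finite differences on a toy curve-fitting problem.

    ```python
    import numpy as np

    def lm_fit(f, theta, y, iters=50, lam=1e-2, eps=1e-6):
        """Minimize ||y - f(theta)||^2 with damped Gauss-Newton (LM) updates."""
        for _ in range(iters):
            r = y - f(theta)                             # residuals
            # Finite-difference Jacobian; FATT would supply this for an RNN.
            J = np.stack([(f(theta + eps * e) - f(theta)) / eps
                          for e in np.eye(len(theta))], axis=1)
            A = J.T @ J + lam * np.eye(len(theta))       # damped normal equations
            step = np.linalg.solve(A, J.T @ r)
            if np.sum((y - f(theta + step)) ** 2) < np.sum(r ** 2):
                theta, lam = theta + step, lam * 0.7     # accept: trust more
            else:
                lam *= 2.0                               # reject: damp harder
        return theta

    # Toy model: y = a * exp(b * x) with noisy data.
    x = np.linspace(0, 1, 20)
    y = 2.0 * np.exp(-1.5 * x) + 0.01 * np.random.randn(20)
    f = lambda th: th[0] * np.exp(th[1] * x)
    print(lm_fit(f, np.array([1.0, 0.0]), y))   # converges near [2.0, -1.5]
    ```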

  17. A parallel algorithm for viewshed analysis in three-dimensional Digital Earth

    NASA Astrophysics Data System (ADS)

    Feng, Wang; Gang, Wang; Deji, Pan; Yuan, Liu; Liuzhong, Yang; Hongbo, Wang

    2015-02-01

    Viewshed analysis, often supported by geographic information systems, is widely used in three-dimensional (3D) Digital Earth systems. Many of the analyses involve the siting of features and real-time decision-making. Viewshed analysis is usually performed at a large scale, which poses substantial computational challenges as geographic datasets continue to grow. Previous research on viewshed analysis has generally been limited to a single data structure (i.e., the DEM), which cannot be used to analyze viewsheds in complicated scenes. In this paper, a real-time algorithm for viewshed analysis in Digital Earth is presented using the parallel computing capability of graphics processing units (GPUs). An occlusion volume for each geometric entity in the neighborhood of the viewpoint is generated according to line-of-sight. The region within the occlusion is marked in a stencil buffer within the programmable 3D visualization pipeline, and the marked region is concurrently drawn in red. In contrast to traditional algorithms based on line-of-sight, the new algorithm, in which the viewshed calculation is integrated with the rendering module, is more efficient and stable. The proposed method of viewshed generation is closer to the reality of the virtual geographic environment, and no DEM interpolation, normally a computational burden, is needed. The algorithm was implemented in a 3D Digital Earth system (GeoBeans3D) with the DirectX application programming interface (API) and has been widely used in a range of applications.
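
    For contrast with the proposed GPU method, the classic line-of-sight test it replaces can be sketched as below: walk the ray from viewpoint to target and check whether any terrain sample rises above the sight line. The grid and heights are invented.

    ```python
    # Classic DEM line-of-sight check (the baseline the paper improves on).
    def visible(dem, vx, vy, vz, tx, ty, samples=64):
        """True if cell (tx, ty) is visible from eye height vz at (vx, vy)."""
        tz = dem[ty][tx]
        for k in range(1, samples):
            t = k / samples
            x, y = vx + t * (tx - vx), vy + t * (ty - vy)
            ground = dem[round(y)][round(x)]   # nearest-neighbour terrain sample
            sight = vz + t * (tz - vz)         # height of the sight line here
            if ground > sight:                 # terrain blocks the ray
                return False
        return True

    dem = [[0, 0, 0, 0],
           [0, 5, 0, 0],                       # a ridge at cell (1, 1)
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
    print(visible(dem, 0, 0, 2, 2, 2))         # False: the ridge occludes
    print(visible(dem, 0, 0, 2, 3, 0))         # True: the top edge is clear
    ```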

  18. Concurrent chemo-radiotherapy with S-1 as an alternative therapy for elderly Chinese patients with non-metastatic esophageal squamous cancer: evidence based on a systematic review and meta-analysis.

    PubMed

    Song, Guo-Min; Tian, Xu; Liu, Xiao-Ling; Chen, Hui; Zhou, Jian-Guo; Bian, Wei; Chen, Wei-Qing

    2017-06-06

    This systematic review and meta-analysis aims to systematically assess the effects of concurrent chemo-radiotherapy (CRT) compared with radiotherapy (RT) alone for elderly Chinese patients with non-metastatic esophageal squamous cancer. We searched the PubMed, EMBASE, Cochrane Central Register of Controlled Trials (CENTRAL), China Biomedical Literature Database (CBM), and China National Knowledge Infrastructure (CNKI) databases. We retrieved randomized controlled trials, published through August 2016, on concurrent CRT with Gimeracil and Oteracil Potassium (S-1) compared with RT alone for elderly Chinese patients with non-metastatic esophageal squamous cancer. Eight eligible studies involving 536 patients were subjected to meta-analysis. For the response rate, a statistically significant relative risk (RR) of 1.37 [95% confidence intervals (CIs): 1.24, 1.53; P = 0.00] was estimated when concurrent CRT with S-1 was compared with RT alone. Sensitivity analysis of the response rate confirmed the robustness of the pooled result. The RR values of 1.44 (95% CIs: 1.22, 1.70; P = 0.00) and 1.77 (95% CIs: 1.26, 2.48; P = 0.00) estimated for the 1- and 2-year survival rates, respectively, were also statistically significant. The incidence of adverse events was similar in both groups. This review concluded that concurrent CRT with S-1 can improve the efficacy and prolong the survival of elderly Chinese patients with non-metastatic esophageal squamous cancer and does not significantly increase the acute adverse effects of RT alone.

  19. Concurrent and aerobic exercise training promote similar benefits in body composition and metabolic profiles in obese adolescents.

    PubMed

    Monteiro, Paula Alves; Chen, Kong Y; Lira, Fabio Santos; Saraiva, Bruna Thamyres Cicotti; Antunes, Barbara Moura Mello; Campos, Eduardo Zapaterra; Freitas, Ismael Forte

    2015-11-26

    The prevalence of obesity in the pediatric population is increasing at an accelerated rate in many countries and has become a major public health concern. Physical activity, particularly exercise training, remains a cornerstone of pediatric obesity interventions. The purpose of our randomized intervention trial was to compare the effects of two types of training matched for training volume, aerobic and concurrent, on body composition and metabolic profile in obese adolescents. Thirty-two obese adolescents participated in two randomized training groups, concurrent or aerobic, for 20 weeks (50 min x 3 per week, supervised) and were compared to a 16-subject control group. We measured the percentage of body fat (%BF, primary outcome), fat-free mass, percentage of android fat by dual-energy x-ray absorptiometry, and other metabolic profile variables at baseline and after the interventions, and compared them between groups using an intent-to-treat design. In 20 weeks, both exercise training groups significantly reduced %BF by 2.9-3.6%, as compared to no change in the control group (p = 0.042). There were also positive changes in lipid levels in the exercise groups. No noticeable differences were found between the aerobic and concurrent training groups. The benefits of exercise in reducing body fat and metabolic risk profiles can be achieved by performing either type of training in obese adolescents. RBR-4HN597.

  20. Outcome of dialectical behaviour therapy for concurrent eating and substance use disorders.

    PubMed

    Courbasson, Christine; Nishikawa, Yasunori; Dixon, Lauren

    2012-09-01

    The current study examined the preliminary efficacy of dialectical behaviour therapy (DBT) adapted for concurrent eating disorders (EDs) and substance use disorders (SUDs). A matched randomized controlled trial was carried out with 25 female outpatients diagnosed with concurrent ED and SUD. Participants randomized to the intervention condition received DBT, whereas those randomized to the control condition received treatment as usual (TAU), both for a period of 1 year. A series of measures related to disordered eating, substance use and depression were administered to the participants at the beginning of treatment and at 3, 6, 9 and 12 months into treatment, followed by 3-month and 6-month follow-up assessments. Participants randomized to the DBT condition evidenced a superior retention rate relative to their counterparts in the TAU condition at various study time points, including post-treatment (80% versus 20%) and follow-up (60% versus 20%). Due to the unexpected elevated dropout rates and the worsening of ED-SUD symptomatology in the TAU condition, recruitment efforts were terminated early. Results from the DBT condition revealed that the intervention had a significant positive effect on behavioural and attitudinal features of disordered eating, substance use severity and use, negative mood regulation and depressive symptoms. Finally, increases in participants' perceived ability to regulate and cope with negative emotional states were significantly associated with decreases in emotional eating and increases in levels of confidence in ability to resist urges for substance use. Results suggest that the adapted DBT might hold promise for treating individuals with concurrent ED and SUD. The current study is the first study to report positive effects of DBT on individuals with concurrent eating and substance use disorders. Although the results require replication and extension, they suggest that the DBT may be promising for this population. The results suggest that clinicians treating individuals with concurrent eating and substance use problems should be particularly cautious of poor treatment retention and treatment complications. The results bear upon the highly salient and important issue of whether individuals with concurrent substance use need to be excluded from research studies and treatment programmes. Copyright © 2011 John Wiley & Sons, Ltd.

  1. [Research on magnetic coupling centrifugal blood pump control based on a self-tuning fuzzy PI algorithm].

    PubMed

    Yang, Lei; Yang, Ming; Xu, Zihao; Zhuang, Xiaoqi; Wang, Wei; Zhang, Haibo; Han, Lu; Xu, Liang

    2014-10-01

    The purpose of this paper is to report the research and design of the control system of the magnetic coupling centrifugal blood pump in our laboratory, and to briefly describe the structure of the pump and the principles of the body circulation model. The performance of the blood pump is related not only to materials and structure but also to the control algorithm. We studied a double-loop current control algorithm for the brushless DC motor. To allow the algorithm to adjust its parameters under changing conditions, we used a self-tuning fuzzy PI control algorithm and detailed how to design the fuzzy rules. We mainly used Matlab Simulink to simulate the motor control system and test the performance of the algorithm, and briefly introduced how to implement these algorithms in the hardware system. Finally, by building the platform and conducting experiments, we demonstrated that the self-tuning fuzzy PI control algorithm could greatly improve both the dynamic and static performance of the blood pump and make the motor speed and pump flow stable and adjustable.

  2. Adaptive control in the presence of unmodeled dynamics. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rohrs, C. E.

    1982-01-01

    Stability and robustness properties of a wide class of adaptive control algorithms in the presence of unmodeled dynamics and output disturbances were investigated. The class of adaptive algorithms considered are those commonly referred to as model reference adaptive control algorithms, self-tuning controllers, and dead beat adaptive controllers, developed for both continuous-time systems and discrete-time systems. A unified analytical approach was developed to examine the class of existing adaptive algorithms. It was discovered that all existing algorithms contain an infinite gain operator in the dynamic system that defines command reference errors and parameter errors; it is argued that such an infinite gain operator appears to be generic to all adaptive algorithms, whether they exhibit explicit or implicit parameter identification. It is concluded that none of the adaptive algorithms considered can be used with confidence in a practical control system design, because instability will set in with a high probability.

  3. Delayed matching to sample and concurrent learning in nonamnesic humans with alcohol dependence.

    PubMed

    Bowden, S C; Benedikt, R; Ritter, A J

    1992-05-01

    Small samples of alcohol-dependent subjects who showed no clinical signs of Wernicke-Korsakoff syndrome were compared with nonalcohol-dependent controls on two animal memory tests which are performed poorly by human amnesics. Compared to the control subjects, the alcohol-dependent subjects' performance was impaired on a version of the delayed matching to sample task. On concurrent discrimination learning the overall group difference just failed to reach significance. The results are interpreted as suggesting that behavioural impairment may occur in alcohol-dependent subjects who are not clinically amnesic, and that the impairment is similar in type to that observed in cases of severe Wernicke-Korsakoff syndrome.

  4. Initial validation of two opiate craving questionnaires: the Obsessive Compulsive Drug Use Scale and the Desires for Drug Questionnaire.

    PubMed

    Franken, Ingmar H A; Hendriks, Vincent M; van den Brink, Wim

    2002-01-01

    In the present study, the factor structure, internal consistency, and concurrent validity of two heroin craving questionnaires are examined. The Desires for Drug Questionnaire (DDQ) measures three factors: desire and intention, negative reinforcement, and control. The Obsessive Compulsive Drug Use Scale (OCDUS) also measures three factors: thoughts about heroin and interference, desire and control, and resistance to thoughts and intention. Subjects were 102 Dutch patients who were in treatment for drug dependency at the time of the study. All proposed scales have good reliability and concurrent validity. Implementation of these instruments in both clinical and research settings is advocated.

  5. Modeling of dialogue regimes of distance robot control

    NASA Astrophysics Data System (ADS)

    Larkin, E. V.; Privalov, A. N.

    2017-02-01

    The process of distance control of mobile robots is investigated, and a Petri-Markov net for modeling the dialogue regime is worked out. It is shown that the sequence of operations of the subjects involved (a human operator, a dialogue computer, and an onboard computer) may be simulated using the theory of semi-Markov processes. From the semi-Markov process of the general form, a Markov process was obtained that includes only the states of transaction generation. It is shown that a real transaction flow is the result of "concurrency" among the states of the Markov process. An iteration procedure for evaluating the transaction flow parameters, which takes the effect of "concurrency" into account, is proposed.

  6. Psychosis and concurrent impulse control disorder in Parkinson's disease: A review based on a case report.

    PubMed

    Guedes, Bruno Fukelmann; Gonçalves, Marcia Rubia; Cury, Rubens Gisbert

    2016-01-01

    Psychosis, impulse control disorders (e.g., pathological gambling and hypersexuality) and repetitive behaviors such as punding are known psychiatric complications of Parkinson's disease (PD). Impulsive, compulsive and repetitive behaviors are strongly associated with dopamine-replacement therapy. We present the case of a 58-year-old man with PD and a myriad of psychiatric symptoms. Concurrent psychosis, punding and pathological gambling developed more than six years after the introduction of pramipexole and ceased shortly after the addition of quetiapine and discontinuation of pramipexole. This report emphasizes the importance of monitoring for a wide array of psychiatric symptoms in patients on dopamine replacement therapy.

  7. Effect of helminth-induced immunity on infections with microbial pathogens

    PubMed Central

    2016-01-01

    Helminth infections are ubiquitous worldwide and can trigger potent immune responses that differ from, and potentially antagonize, host protective responses to microbial pathogens. In this Review we focus on the three main killers in infectious disease (AIDS, tuberculosis, and malaria) and critically assess whether helminths adversely influence host control of these diseases. We also discuss emerging concepts for how M2 macrophages and helminth-modulated dendritic cells can potentially influence the protective immune response to concurrent infections. Finally, we present evidence advocating for more efforts to determine how, and to what extent, helminths interfere with the successful control of specific concurrent coinfections. PMID:24145791

  8. Novel bio-inspired smart control for hazard mitigation of civil structures

    NASA Astrophysics Data System (ADS)

    Kim, Yeesock; Kim, Changwon; Langari, Reza

    2010-11-01

    In this paper, a new bio-inspired controller is proposed for vibration mitigation of smart structures subjected to ground disturbances (i.e. earthquakes). The control system is developed through the integration of a brain emotional learning (BEL) algorithm with a proportional-integral-derivative (PID) controller and a semiactive inversion (Inv) algorithm. The BEL algorithm is based on the neurologically inspired computational model of the amygdala and the orbitofrontal cortex. To demonstrate the effectiveness of the proposed hybrid BEL-PID-Inv control algorithm, a seismically excited building structure equipped with a magnetorheological (MR) damper is investigated. The performance of the proposed hybrid BEL-PID-Inv control algorithm is compared with that of passive, PID, linear quadratic Gaussian (LQG), and BEL control systems. In the simulation, the robustness of the hybrid BEL-PID-Inv control algorithm in the presence of modeling uncertainties as well as external disturbances is investigated. It is shown that the proposed hybrid BEL-PID-Inv control algorithm is effective in improving the dynamic responses of seismically excited building structure-MR damper systems.

  9. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among the wavefront control algorithms used in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This algorithm obtains the actuator voltages directly from wavefront slopes by pre-measuring the relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time characteristics and stability. However, as the numbers of sub-apertures in the wavefront sensor and of deformable mirror actuators increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltage of each actuator is obtained through iteration, gaining a great advantage in computation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n^2) ~ O(n^3) for the direct gradient wavefront control algorithm, but only about O(n) ~ O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. The more sub-apertures and deformable mirror actuators there are, the more significant the advantage of the iterative wavefront control algorithm becomes. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
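
    The contrast between the two approaches can be seen on the regularized normal equations A v = b, with A = D^T D + c I built from an interaction matrix D and b = D^T s from measured slopes s. In the sketch below, conjugate gradient stands in for the paper's iterative scheme (which is not reproduced here); each sweep costs one matrix-vector product, which is where the savings arise when A is sparse or structured, as in large AO systems.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50                                    # number of actuators (toy size)
    D = rng.normal(size=(2 * n, n))           # slope/actuator interaction matrix
    s = rng.normal(size=2 * n)                # measured wavefront slopes
    A, b = D.T @ D + n * np.eye(n), D.T @ s   # regularized normal equations

    v_direct = np.linalg.solve(A, b)          # direct: dense O(n^3) solve

    def conjugate_gradient(A, b, iters=100):
        """Iterative SPD solve; each sweep costs one matrix-vector product."""
        x = np.zeros(len(b))
        r = b - A @ x
        p, rs = r.copy(), r @ r
        for _ in range(iters):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x, r = x + alpha * p, r - alpha * Ap
            rs_new = r @ r
            if rs_new < 1e-20:                # residual small enough: stop
                break
            p, rs = r + (rs_new / rs) * p, rs_new
        return x

    print(np.max(np.abs(conjugate_gradient(A, b) - v_direct)))  # ~0: converged
    ```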

  10. Automated Pressure Injury Risk Assessment System Incorporated Into an Electronic Health Record System.

    PubMed

    Jin, Yinji; Jin, Taixian; Lee, Sun-Mi

    Pressure injury risk assessment is the first step toward preventing pressure injuries, but traditional assessment tools are time-consuming, resulting in work overload and fatigue for nurses. The objectives of the study were to build an automated pressure injury risk assessment system (Auto-PIRAS) that can assess pressure injury risk using data, without requiring nurses to collect or input additional data, and to evaluate the validity of this assessment tool. A retrospective case-control study and a system development study were conducted in a 1,355-bed university hospital in Seoul, South Korea. A total of 1,305 pressure injury patients and 5,220 nonpressure injury patients participated for the development of a risk scoring algorithm: 687 and 2,748 for the validation of the algorithm and 237 and 994 for validation after clinical implementation, respectively. A total of 4,211 pressure injury-related clinical variables were extracted from the electronic health record (EHR) systems to develop a risk scoring algorithm, which was validated and incorporated into the EHR. That program was further evaluated for predictive and concurrent validity. Auto-PIRAS, incorporated into the EHR system, assigned a risk assessment score of high, moderate, or low and displayed this on the Kardex nursing record screen. Risk scores were updated nightly according to 10 predetermined risk factors. The predictive validity measures of the algorithm validation stage were as follows: sensitivity = .87, specificity = .90, positive predictive value = .68, negative predictive value = .97, Youden index = .77, and the area under the receiver operating characteristic curve = .95. The predictive validity measures of the Braden Scale were as follows: sensitivity = .77, specificity = .93, positive predictive value = .72, negative predictive value = .95, Youden index = .70, and the area under the receiver operating characteristic curve = .85. The kappa of the Auto-PIRAS and Braden Scale risk classification result was .73. The predictive performance of the Auto-PIRAS was similar to Braden Scale assessments conducted by nurses. Auto-PIRAS is expected to be used as a system that assesses pressure injury risk automatically without additional data collection by nurses.
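
    The reported validity measures all derive from a 2x2 confusion matrix, as in the sketch below; the counts are invented to roughly reproduce the algorithm-validation figures above and are illustrative only.

    ```python
    def validity_measures(tp, fp, fn, tn):
        """Standard screening-test metrics from a 2x2 confusion matrix."""
        sens = tp / (tp + fn)            # sensitivity (recall)
        spec = tn / (tn + fp)            # specificity
        ppv = tp / (tp + fp)             # positive predictive value
        npv = tn / (tn + fn)             # negative predictive value
        youden = sens + spec - 1         # Youden index J
        return {"sensitivity": sens, "specificity": spec,
                "PPV": ppv, "NPV": npv, "Youden": youden}

    # Hypothetical counts consistent with 687 cases / 2,748 controls and the
    # reported sensitivity .87 and specificity .90.
    print(validity_measures(tp=598, fp=275, fn=89, tn=2473))
    ```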

  11. Discrete-Time Local Value Iteration Adaptive Dynamic Programming: Admissibility and Termination Analysis.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao

    In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study the admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
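
    The local-update idea can be sketched on a tiny finite Markov decision process, where each sweep updates the value function and greedy control law only on a chosen subset of states; the dynamics, costs, and subset schedule below are toy stand-ins for the paper's nonlinear discrete-time setting.

    ```python
    # Local value iteration on a 4-state deterministic MDP: each sweep updates
    # V and the greedy policy only on the scheduled subset of states.
    STATES, ACTIONS, GAMMA = range(4), range(2), 0.9

    def next_state(s, a):
        return (s + 1) % 4 if a == 0 else (s - 1) % 4   # move around a ring

    def cost(s, a):
        return 0.0 if s == 0 else 1.0 + 0.1 * a          # state 0 is the goal

    V = [0.0] * 4
    policy = [0] * 4
    subsets = [[0, 1], [2, 3], [0, 1], [2, 3], [0, 1, 2, 3]]  # update schedule
    for subset in subsets * 20:
        for s in subset:                 # Bellman update on the subset only
            q = [cost(s, a) + GAMMA * V[next_state(s, a)] for a in ACTIONS]
            V[s], policy[s] = min(q), q.index(min(q))
    print(V, policy)
    ```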

  12. 40 CFR 798.3260 - Chronic toxicity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... dose groups and in the controls should be low to permit a meaningful evaluation of the results. For non... meaningful and valid statistical evaluation of chronic effects. (2) Control groups. (i) A concurrent control group is suggested. This group should be an untreated or sham treated control group or, if a vehicle is...

  13. 40 CFR 798.2250 - Dermal toxicity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... animals scheduled to be sacrificed before completion of the study. (2) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group or, if a vehicle is used in administering the test substance, a vehicle control group. If the toxic properties of the...

  14. 40 CFR 798.3260 - Chronic toxicity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... dose groups and in the controls should be low to permit a meaningful evaluation of the results. For non... meaningful and valid statistical evaluation of chronic effects. (2) Control groups. (i) A concurrent control group is suggested. This group should be an untreated or sham treated control group or, if a vehicle is...

  15. 40 CFR 798.2250 - Dermal toxicity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... animals scheduled to be sacrificed before completion of the study. (2) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group or, if a vehicle is used in administering the test substance, a vehicle control group. If the toxic properties of the...

  16. 40 CFR 798.2250 - Dermal toxicity.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... animals scheduled to be sacrificed before completion of the study. (2) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group or, if a vehicle is used in administering the test substance, a vehicle control group. If the toxic properties of the...

  17. 40 CFR 798.2250 - Dermal toxicity.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... animals scheduled to be sacrificed before completion of the study. (2) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group or, if a vehicle is used in administering the test substance, a vehicle control group. If the toxic properties of the...

  18. 40 CFR 798.2250 - Dermal toxicity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... animals scheduled to be sacrificed before completion of the study. (2) Control groups. A concurrent control group is required. This group shall be an untreated or sham-treated control group or, if a vehicle is used in administering the test substance, a vehicle control group. If the toxic properties of the...

  19. 40 CFR 798.3260 - Chronic toxicity.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... dose groups and in the controls should be low to permit a meaningful evaluation of the results. For non... meaningful and valid statistical evaluation of chronic effects. (2) Control groups. (i) A concurrent control group is suggested. This group should be an untreated or sham treated control group or, if a vehicle is...

  20. 40 CFR 798.3260 - Chronic toxicity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... dose groups and in the controls should be low to permit a meaningful evaluation of the results. For non... meaningful and valid statistical evaluation of chronic effects. (2) Control groups. (i) A concurrent control group is suggested. This group should be an untreated or sham treated control group or, if a vehicle is...

  1. 40 CFR 798.3260 - Chronic toxicity.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... dose groups and in the controls should be low to permit a meaningful evaluation of the results. For non... meaningful and valid statistical evaluation of chronic effects. (2) Control groups. (i) A concurrent control group is suggested. This group should be an untreated or sham treated control group or, if a vehicle is...

  2. 40 CFR 799.9630 - TSCA developmental neurotoxicity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (2) Control group. A concurrent control group is required. This group must be a sham-treated group or, if a vehicle is used in administering the test substance, a vehicle control group. The vehicle must neither be developmentally toxic nor have effects on reproduction. Animals in the control group must be...

  3. 40 CFR 799.9630 - TSCA developmental neurotoxicity.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (2) Control group. A concurrent control group is required. This group must be a sham-treated group or, if a vehicle is used in administering the test substance, a vehicle control group. The vehicle must neither be developmentally toxic nor have effects on reproduction. Animals in the control group must be...

  4. 40 CFR 799.9630 - TSCA developmental neurotoxicity.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (2) Control group. A concurrent control group is required. This group must be a sham-treated group or, if a vehicle is used in administering the test substance, a vehicle control group. The vehicle must neither be developmentally toxic nor have effects on reproduction. Animals in the control group must be...

  5. Adaptive track scheduling to optimize concurrency and vectorization in GeantV

    DOE PAGES

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...

    2015-05-22

    The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics requiring tuning of the thresholds to switch between the normal regime and special modes, i.e., prioritizing events to allow flushing memory, adding new events to the transport pipeline to boost locality, dynamically adjusting the particle vector size, or switching from vector to single-track mode when vectorization causes only overhead. Lastly, this work requires a comprehensive study to optimize these parameters and make the behaviour of the scheduler self-adapting; its initial results are presented here.
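
    The basketizing idea at the heart of the scheduler can be sketched as follows: tracks are gathered per locality key and dispatched to a vector kernel only when a full vector is available, with a priority flush to drain partial baskets. All names and the vector size are illustrative.

    ```python
    from collections import defaultdict

    VECTOR_SIZE = 4  # illustrative; real SIMD widths depend on the hardware

    class Basketizer:
        def __init__(self, dispatch):
            self.baskets = defaultdict(list)
            self.dispatch = dispatch          # vector-kernel callback

        def add(self, volume, track):
            basket = self.baskets[volume]
            basket.append(track)
            if len(basket) == VECTOR_SIZE:    # full vector: dispatch SIMD work
                self.dispatch(volume, basket)
                self.baskets[volume] = []

        def flush(self):
            # Priority mode: trade vector efficiency for latency/memory,
            # e.g. to let stale events complete and free their buffers.
            for volume, basket in self.baskets.items():
                if basket:
                    self.dispatch(volume, basket)
            self.baskets.clear()

    b = Basketizer(lambda vol, tracks: print(f"{vol}: vector of {len(tracks)}"))
    for i in range(10):
        b.add("calorimeter" if i % 2 else "tracker", f"track{i}")
    b.flush()
    ```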

  6. Appraisal and coping styles account for the effects of temperament on preadolescent adjustment

    PubMed Central

    Thompson, Stephanie F.; Zalewski, Maureen; Lengua, Liliana J.

    2014-01-01

    Temperament, appraisal, and coping are known to underlie emotion regulation, yet less is known about how these processes relate to each other across time. We examined temperamental fear, frustration, effortful control, and impulsivity, positive and threat appraisals, and active and avoidant coping as processes underpinning the emotion regulation of pre-adolescent children managing stressful events. Appraisal and coping styles were tested as mediators of the longitudinal effects of temperamental emotionality and self-regulation on adjustment using a community sample (N=316) of preadolescent children (8–12 years at T1) studied across one year. High threat appraisals were concurrently related to high fear and impulsivity, whereas effortful control predicted relative decreases in threat appraisal. High fear was concurrently related to high positive appraisal, and impulsivity predicted increases in positive appraisal. Fear was concurrently related to greater avoidant coping, and impulsivity predicted increases in avoidance. Frustration predicted decreases in active coping. These findings suggest temperament, or dispositional aspects of reactivity and regulation, relates to concurrent appraisal and coping processes and additionally predicts change in these processes. Significant indirect effects indicated that appraisal and coping mediated the effects of temperament on adjustment. Threat appraisal mediated the effects of fear and effortful control on internalizing and externalizing problems, and avoidant coping mediated the effect of impulsivity on internalizing problems. These mediated effects suggest that one pathway through which temperament influences adjustment is pre-adolescents’ appraisal and coping. Findings highlight temperament, appraisal and coping as emotion regulation processes relevant to children’s adjustment in response to stress. PMID:25821237

  7. Do procedures for verbal reporting of thinking have to be reactive? A meta-analysis and recommendations for best reporting methods.

    PubMed

    Fox, Mark C; Ericsson, K Anders; Best, Ryan

    2011-03-01

    Since its establishment, psychology has struggled to find valid methods for studying thoughts and subjective experiences. Thirty years ago, Ericsson and Simon (1980) proposed that participants can give concurrent verbal expression to their thoughts (think aloud) while completing tasks without changing objectively measurable performance (accuracy). In contrast, directed requests for concurrent verbal reports, such as explanations or directions to describe particular kinds of information, were predicted to change thought processes as a consequence of the need to generate this information, thus altering performance. By comparing performance of concurrent verbal reporting conditions with their matching silent control condition, Ericsson and Simon found several studies demonstrating that directed verbalization was associated with changes in performance. In contrast, the lack of effects of thinking aloud was merely suggested by a handful of experimental studies. In this article, Ericsson and Simon's model is tested by a meta-analysis of 94 studies comparing performance while giving concurrent verbalizations to a matching condition without verbalization. Findings based on nearly 3,500 participants show that the "think-aloud" effect size is indistinguishable from zero (r = -.03) and that this procedure remains nonreactive even after statistically controlling additional factors such as task type (primarily visual or nonvisual). In contrast, procedures that entail describing or explaining thoughts and actions are significantly reactive, leading to higher performance than silent control conditions. All verbal reporting procedures tend to increase times to complete tasks. These results suggest that think-aloud should be distinguished from other methods in future studies. Theoretical and practical implications are discussed. (c) 2011 APA, all rights reserved.

  8. Automatic control algorithm effects on energy production

    NASA Technical Reports Server (NTRS)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
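
    As a rough illustration of the kind of starting algorithm the model evaluates, the sketch below simulates a hysteresis start/stop rule over a wind time series; the cut-in/cut-out thresholds, start delay, and power curve are illustrative assumptions, not the Sandia 17-m VAWT parameters.

```python
import random

# Hypothetical hysteresis start/stop controller: the turbine starts only
# after the wind stays above `cut_in` for `start_delay` consecutive steps,
# and stops when it drops below `cut_out`. All numbers are illustrative.
def simulate_energy(wind_series, power_curve, cut_in=5.0, cut_out=4.0,
                    start_delay=3):
    energy, running, above = 0.0, False, 0
    for v in wind_series:
        if running:
            if v < cut_out:
                running, above = False, 0   # stop the turbine
            else:
                energy += power_curve(v)    # one time step of production
        else:
            above = above + 1 if v >= cut_in else 0
            if above >= start_delay:        # wind persistently high: start
                running = True
    return energy

power = lambda v: max(0.0, min(100.0, 8.0 * (v - 4.0)))  # toy power curve
wind = [random.gauss(6.0, 2.0) for _ in range(10_000)]   # synthetic record
for delay in (1, 3, 10):
    print(delay, round(simulate_energy(wind, power, start_delay=delay)))
```

    Sweeping the start delay as above is the flavor of threshold-parameter selection the abstract describes: too eager a start wastes cycling losses, too conservative a start forfeits production.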

  9. A Linguistic Truth-Valued Temporal Reasoning Formalism and Its Implementation

    NASA Astrophysics Data System (ADS)

    Lu, Zhirui; Liu, Jun; Augusto, Juan C.; Wang, Hui

    Temporality and uncertainty are important features of many real-world systems. Solving problems in such systems requires formal mechanisms such as logic systems, statistical methods, or other reasoning and decision-making methods. In this paper, we propose a linguistic truth-valued temporal reasoning formalism that manages both features concurrently by combining a linguistic truth-valued logic with a temporal logic. We also provide a backward reasoning algorithm that allows user queries to be answered. A simple but realistic scenario in a smart home application is used to illustrate our work.
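
    A minimal sketch of the backward-reasoning idea, assuming a toy five-valued linguistic truth lattice and min-combination of truth values; the rule base (`RULES`), the lattice, and the combination operator are illustrative, not the authors' formalism, and the temporal dimension is omitted.

```python
# Toy five-valued linguistic truth lattice (illustrative).
TRUTH = {"false": 0, "possibly_false": 1, "unknown": 2,
         "possibly_true": 3, "true": 4}

# head -> (body atoms, truth value attached to the rule); facts have
# empty bodies. All rules here are hypothetical.
RULES = {
    "alarm": (["smoke", "night"], "possibly_true"),
    "smoke": ([], "true"),
    "night": ([], "true"),
}

def prove(goal):
    """Backward-chain from `goal`, returning a linguistic truth value."""
    if goal not in RULES:
        return "unknown"                    # no rule: cannot conclude
    body, rule_value = RULES[goal]
    # A conclusion is no more certain than its weakest premise or the rule.
    values = [prove(atom) for atom in body] + [rule_value]
    return min(values, key=TRUTH.get)

print(prove("alarm"))   # -> possibly_true
```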

  10. Optimization of the computational load of a hypercube supercomputer onboard a mobile robot.

    PubMed

    Barhen, J; Toomarian, N; Protopopescu, V

    1987-12-01

    A combinatorial optimization methodology is developed, which enables the efficient use of hypercube multiprocessors onboard mobile intelligent robots dedicated to time-critical missions. The methodology is implemented in terms of large-scale concurrent algorithms based either on fast simulated annealing, or on nonlinear asynchronous neural networks. In particular, analytic expressions are given for the effect of single-neuron perturbations on the systems' configuration energy. Compact neuromorphic data structures are used to model effects such as precedence constraints, processor idling times, and task-schedule overlaps. Results for a typical robot-dynamics benchmark are presented.

  11. Optimization of the computational load of a hypercube supercomputer onboard a mobile robot

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob; Toomarian, N.; Protopopescu, V.

    1987-01-01

    A combinatorial optimization methodology is developed, which enables the efficient use of hypercube multiprocessors onboard mobile intelligent robots dedicated to time-critical missions. The methodology is implemented in terms of large-scale concurrent algorithms based either on fast simulated annealing, or on nonlinear asynchronous neural networks. In particular, analytic expressions are given for the effect of single-neuron perturbations on the systems' configuration energy. Compact neuromorphic data structures are used to model effects such as precedence constraints, processor idling times, and task-schedule overlaps. Results for a typical robot-dynamics benchmark are presented.
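
    The sketch below shows the simulated-annealing half of the methodology on a simplified version of the problem: assigning task costs to processors so as to minimize load imbalance. The energy function, cooling schedule, and problem sizes are illustrative stand-ins for the paper's configuration energy and constraint model.

```python
import math, random

# Simulated-annealing sketch: assign task costs to processors so the load
# spread (an idling-time proxy standing in for the paper's configuration
# energy) is minimized. Schedule and sizes are illustrative.
def anneal(task_costs, n_procs, steps=20_000, t0=10.0, alpha=0.9995):
    assign = [random.randrange(n_procs) for _ in task_costs]
    loads = [0.0] * n_procs
    for cost, p in zip(task_costs, assign):
        loads[p] += cost

    energy = lambda: max(loads) - min(loads)
    temp, e = t0, energy()
    for _ in range(steps):
        i = random.randrange(len(task_costs))
        old, new = assign[i], random.randrange(n_procs)
        loads[old] -= task_costs[i]; loads[new] += task_costs[i]
        e_new = energy()
        # Metropolis rule: always accept improvements, sometimes uphill moves.
        if e_new <= e or random.random() < math.exp((e - e_new) / temp):
            assign[i], e = new, e_new
        else:                                # revert the tentative move
            loads[new] -= task_costs[i]; loads[old] += task_costs[i]
        temp *= alpha
    return assign, max(loads)

tasks = [random.uniform(1, 10) for _ in range(64)]
_, makespan = anneal(tasks, n_procs=16)
print("makespan:", round(makespan, 2), " ideal:", round(sum(tasks) / 16, 2))
```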

  12. Adaptive Control Strategies for Flexible Robotic Arm

    NASA Technical Reports Server (NTRS)

    Bialasiewicz, Jan T.

    1996-01-01

    The control problem of a flexible robotic arm has been investigated. The control strategies that have been developed have wide application to the general control problem of flexible space structures. The following control strategies have been developed and evaluated: a neural self-tuning control algorithm, a neural-network-based fuzzy logic control algorithm, and an adaptive pole assignment algorithm. All of the above algorithms have been tested through computer simulation. In addition, a computer control system has been implemented in hardware to control the tip position of a flexible arm clamped on a rigid hub mounted directly on the vertical shaft of a dc motor. An adaptive pole assignment algorithm has been applied to suppress vibrations of this physical model of the flexible robotic arm and has been successfully tested on the testbed.
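
    For the pole assignment strategy, the non-adaptive core step can be sketched as follows; in an adaptive version the plant matrices would be re-estimated online before each gain update. The two-state model and desired poles here are hypothetical, not the flexible-arm model of the report.

```python
import numpy as np
from scipy.signal import place_poles

# Non-adaptive core of pole assignment for a hypothetical two-state model;
# an adaptive version would re-estimate A and B online before each update.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])    # illustrative arm dynamics
B = np.array([[0.0],
              [1.0]])
desired = [-3.0, -4.0]          # target closed-loop poles

K = place_poles(A, B, desired).gain_matrix
print("feedback gain K =", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```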

  13. Cloud cover and solar disk state estimation using all-sky images: deep neural networks approach compared to routine methods

    NASA Astrophysics Data System (ADS)

    Krinitskiy, Mikhail; Sinitsyn, Alexey

    2017-04-01

    Shortwave radiation is an important component of the surface heat budget over sea and land. Estimating it requires accurate observations of cloud conditions, including total cloud cover and the spatial and temporal structure of clouds. Although cloud cover is widely observed visually, building accurate SW radiation parameterizations also requires that cloud structure be quantified with precise instrumental measurements. Several state-of-the-art land-based cloud cameras already satisfy researchers' needs, but their major disadvantage is the inaccuracy of all-sky image processing algorithms, which typically yield uncertainties of 2-4 octa in cloud cover estimates, with a resulting true-scoring cloud cover accuracy of about 7%. Moreover, none of these algorithms determine cloud types. We developed an approach for estimating cloud cover and structure that provides much more accurate estimates and also allows additional characteristics to be measured. The method is based on a synthetic controlling index, the "grayness rate index", which we introduced in 2014. Since then this index has demonstrated high efficiency when used together with the "background sunburn effect suppression" technique to detect thin clouds. This extension of the routine algorithm type made it possible to significantly increase the accuracy of total cloud cover estimation across various sky image states: errors in cloud cover estimates decreased to a mean squared error of about 1.5 octa, and the resulting true-scoring accuracy exceeds 38%. The main source of uncertainty in this approach is error in determining the state of the solar disk. Although a deep-neural-network approach lets us estimate the solar disk state with 94% accuracy, the final total cloud estimate is still not satisfactory. To solve this problem completely, we applied a set of machine learning algorithms directly to the problem of total cloud cover estimation. The accuracy of this approach varies with the choice of algorithm; deep neural networks demonstrated the best accuracy, at more than 96%. We will demonstrate the approaches and the most influential statistical features of all-sky images that let the algorithm reach this accuracy. Using our new optical package, a set of over 480,000 samples was collected during several sea missions in 2014-2016, along with concurrent standard human observations and instrumentally recorded meteorological parameters. We will present the results of the field measurements and discuss some remaining problems and the potential for further development of the machine learning approach.
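
    The routine algorithm type that the grayness rate index extends is typically a color-ratio threshold on sky pixels. The abstract does not give the index's formula, so the sketch below shows only a generic red/blue-ratio baseline on a synthetic image; the threshold and the image are illustrative, not the authors' method.

```python
import numpy as np

# Generic red/blue-ratio cloud mask (a routine baseline, not the paper's
# grayness rate index). Threshold and synthetic image are illustrative.
def cloud_fraction(rgb, threshold=0.75):
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float) + 1e-6    # avoid division by zero
    return ((r / b) > threshold).mean()     # gray (cloudy) pixels have r/b ~ 1

img = np.zeros((100, 100, 3), dtype=np.uint8)
img[..., 2] = 200                           # clear blue sky
img[30:60, 30:60] = (180, 180, 190)         # gray "cloud" patch
print(f"cloud fraction: {cloud_fraction(img):.2f}")   # ~0.09
```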

  14. Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domino, Stefan P.; Ananthan, Shreyas; Knaus, Robert C.

    The former Nalu interior heterogeneous algorithm design, which was originally designed to manage matrix assembly operations over all elemental topology types, has been modified to operate over homogeneous collections of mesh entities. This newly templated kernel design allows for removal of workset variable resize operations that were formerly required at each loop over a Sierra ToolKit (STK) bucket (nominally, 512 entities in size). Extensive usage of the Standard Template Library (STL) std::vector has been removed in favor of intrinsic Kokkos memory views. In this milestone effort, the transition to Kokkos as the underlying infrastructure to support performance and portability on many-core architectures has been deployed for key matrix algorithmic kernels. A unit-test driven design effort has developed a homogeneous entity algorithm that employs a team-based thread parallelism construct. The STK Single Instruction Multiple Data (SIMD) infrastructure is used to interleave data for improved vectorization. The collective algorithm design, which allows for concurrent threading and SIMD management, has been deployed for the core low-Mach element-based algorithm. Several tests to ascertain SIMD performance on Intel KNL and Haswell architectures have been carried out. The performance test matrix includes evaluation of both low- and higher-order methods. The higher-order low-Mach methodology builds on polynomial promotion of the core low-order control volume finite element method (CVFEM). Performance testing of the Kokkos-view/SIMD design indicates low-order matrix assembly kernel speed-up ranging between two and four times depending on mesh loading and node count. Better speedups are observed for higher-order meshes (currently only P=2 has been tested), especially on KNL. The increased workload per element on higher-order meshes benefits from the wide SIMD width on KNL machines. Combining multiple threads with SIMD on KNL achieves a 4.6x speedup over the baseline, with assembly timings faster than those observed on the Haswell architecture. The computational workload of higher-order meshes therefore seems ideally suited for the many-core architecture and justifies further exploration of higher-order methods on NGP platforms. A Trilinos/Tpetra-based multi-threaded GMRES preconditioned by symmetric Gauss-Seidel (SGS) represents the core solver infrastructure for the low-Mach advection/diffusion implicit solves. The threaded solver stack has been tested on small problems on NREL's Peregrine system using the newly developed and deployed Kokkos-view/SIMD kernels. Efforts are underway to deploy the Tpetra-based solver stack on the NERSC Cori system to benchmark its performance at scale on KNL machines.
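
    As a language-neutral illustration of the homogeneous-bucket idea (not Nalu/Kokkos code), the NumPy sketch below assembles a quantity for a whole batch of same-topology elements in one vectorized pass, the same property that makes the templated kernels SIMD-friendly and removes per-entity resize logic.

```python
import numpy as np

# NumPy analogue of a homogeneous-bucket kernel: because every element in
# the batch shares one topology (triangles), the whole batch is gathered
# and processed in one vectorized pass. Illustrative, not Nalu/Kokkos code.
def assemble_batch(coords, conn):
    p = coords[conn]                        # (n_elems, 3, 2) gathered once
    v1 = p[:, 1] - p[:, 0]
    v2 = p[:, 2] - p[:, 0]
    # Signed triangle areas, a stand-in for elemental matrix contributions.
    return 0.5 * (v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0])

nodes = np.random.rand(1000, 2)
tris = np.random.randint(0, 1000, size=(5000, 3))
print(assemble_batch(nodes, tris).shape)    # one kernel call for 5000 elements
```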

  15. First-order logic theory for manipulating clinical practice guidelines applied to comorbid patients: a case study.

    PubMed

    Michalowski, Martin; Wilk, Szymon; Tan, Xing; Michalowski, Wojtek

    2014-01-01

    Clinical practice guidelines (CPGs) implement evidence-based medicine designed to help generate a therapy for a patient suffering from a single disease. When applied to a comorbid patient, the concurrent combination of treatment steps from multiple CPGs is susceptible to adverse interactions in the resulting combined therapy (i.e., a therapy established according to all considered CPGs). This inability to concurrently apply CPGs has been shown to be one of the key shortcomings of CPG uptake in a clinical setting [1]. Several research efforts are underway to address this issue, such as the K4CARE [2] and GuideLine INteraction Detection Assistant (GLINDA) [3] projects and our previous research on applying constraint logic programming to developing a consistent combined therapy for a comorbid patient [4]. However, there is no generalized framework for mitigation that effectively captures general characteristics of the problem while handling nuances such as time and ordering requirements imposed by specific CPGs. In this paper we propose a first-order logic-based (FOL) approach for developing a generalized framework of mitigation. This approach uses a meta-algorithm and entailment properties to mitigate (i.e., identify and address) adverse interactions introduced by concurrently applied CPGs. We use an illustrative case study of a patient suffering from type 2 diabetes being treated for an onset of severe rheumatoid arthritis to show the expressiveness and robustness of our proposed FOL-based approach, and we discuss its appropriateness as the basis for the generalized theory.
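
    A minimal sketch of the mitigation idea, assuming therapies are flat sets of treatment steps, adverse interactions come from a lookup table, and each culprit step has a stated alternative; all names and the replacement policy are hypothetical, and the FOL meta-algorithm itself is far richer than this.

```python
# Hypothetical interaction table: the pair is adverse, and the named
# culprit step should be replaced by its alternative.
ADVERSE = {frozenset({"nsaid", "anticoagulant"}): "nsaid"}
ALTERNATIVES = {"nsaid": "acetaminophen"}

def mitigate(*therapies):
    """Combine per-CPG therapies and repair adverse interactions."""
    combined = set().union(*therapies)
    for pair, culprit in ADVERSE.items():
        if pair <= combined:                    # both interacting steps present
            combined.discard(culprit)
            combined.add(ALTERNATIVES[culprit]) # substitute a safe step
    return combined

cpg_a = {"anticoagulant", "statin"}             # illustrative therapies
cpg_b = {"nsaid", "dmard"}
print(sorted(mitigate(cpg_a, cpg_b)))
# -> ['acetaminophen', 'anticoagulant', 'dmard', 'statin']
```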

  16. Discrete event performance prediction of speculatively parallel temperature-accelerated dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamora, Richard James; Voter, Arthur F.; Perez, Danny

    Due to its unrivaled ability to predict the dynamical evolution of interacting atoms, molecular dynamics (MD) is a widely used computational method in theoretical chemistry, physics, biology, and engineering. Despite its success, MD is only capable of modeling time scales within several orders of magnitude of thermal vibrations, leaving out many important phenomena that occur at slower rates. The Temperature Accelerated Dynamics (TAD) method overcomes this limitation by thermally accelerating the state-to-state evolution captured by MD. Due to the algorithmically complex nature of the serial TAD procedure, implementations have yet to improve performance by parallelizing the concurrent exploration of multiple states. Here we utilize a discrete event-based application simulator to introduce and explore a new Speculatively Parallel TAD (SpecTAD) method. We investigate the SpecTAD algorithm, without a full-scale implementation, by constructing an application simulator proxy (SpecTADSim). Finally, following this method, we discover that a nontrivial relationship exists between the optimal SpecTAD parameter set and the number of CPU cores available at run-time. Furthermore, we find that a majority of the available SpecTAD boost can be achieved within an existing TAD application using relatively simple algorithm modifications.

  17. Discrete event performance prediction of speculatively parallel temperature-accelerated dynamics

    DOE PAGES

    Zamora, Richard James; Voter, Arthur F.; Perez, Danny; ...

    2016-12-01

    Due to its unrivaled ability to predict the dynamical evolution of interacting atoms, molecular dynamics (MD) is a widely used computational method in theoretical chemistry, physics, biology, and engineering. Despite its success, MD is only capable of modeling time scales within several orders of magnitude of thermal vibrations, leaving out many important phenomena that occur at slower rates. The Temperature Accelerated Dynamics (TAD) method overcomes this limitation by thermally accelerating the state-to-state evolution captured by MD. Due to the algorithmically complex nature of the serial TAD procedure, implementations have yet to improve performance by parallelizing the concurrent exploration of multiple states. Here we utilize a discrete event-based application simulator to introduce and explore a new Speculatively Parallel TAD (SpecTAD) method. We investigate the SpecTAD algorithm, without a full-scale implementation, by constructing an application simulator proxy (SpecTADSim). Finally, following this method, we discover that a nontrivial relationship exists between the optimal SpecTAD parameter set and the number of CPU cores available at run-time. Furthermore, we find that a majority of the available SpecTAD boost can be achieved within an existing TAD application using relatively simple algorithm modifications.
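
    A minimal discrete-event sketch of the question such a simulator studies: workers speculatively explore next states, only a fraction of speculations turn out to lie on the true trajectory, and time advances to the earliest finishing worker. The exponential work times and fixed hit probability are illustrative assumptions, not the SpecTADSim cost model.

```python
import heapq, random

def simulate(n_states, cores, hit=0.6, seed=0):
    """Time to advance `n_states` transitions with speculative workers."""
    rng = random.Random(seed)
    finish = [rng.expovariate(1.0) for _ in range(cores)]  # per-worker jobs
    heapq.heapify(finish)
    now, done = 0.0, 0
    while done < n_states:
        now = heapq.heappop(finish)         # earliest finishing worker
        if rng.random() < hit:              # speculation was on the true path
            done += 1
        heapq.heappush(finish, now + rng.expovariate(1.0))
    return now

serial = simulate(200, cores=1, hit=1.0)    # serial: every step is useful
for c in (2, 4, 8, 16):
    print(c, "cores -> speedup", round(serial / simulate(200, c), 2))
```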

  18. Model reference adaptive control of robots

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo

    1991-01-01

    This project presents the results of controlling two types of robots using new Command Generator Tracker (CGT) based Direct Model Reference Adaptive Control (MRAC) algorithms. Two mathematical models were used to represent a single-link, flexible-joint arm and a Unimation PUMA 560 arm; these were then controlled in simulation using different MRAC algorithms. Special attention was given to the performance of the algorithms in the presence of sudden changes in the robot load. Previously used CGT-based MRAC algorithms had several problems. The original algorithm guaranteed asymptotic stability only for almost strictly positive real (ASPR) plants, a very restrictive condition that most systems do not satisfy. Further developments expanded the class of plants that could be controlled, but introduced a steady-state error in the response. These problems motivated modifications that allow the algorithms to control a wider class of plants while still asymptotically tracking the reference model. This project presents the development of two algorithms that achieve the desired results and simulates the control of the two robots mentioned above. The simulation results are satisfactory and show that the problems stated above have been corrected in the new algorithms. In addition, the responses obtained show that the adaptively controlled processes are resistant to sudden changes in the load.
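
    For orientation, the sketch below implements the classic simplified MIT-rule MRAC for a first-order plant tracking a first-order reference model, with a sudden load change mid-run; the CGT-based algorithms of the report are more general, and all gains and parameters here are illustrative.

```python
# Simplified MIT-rule MRAC: plant  dy/dt = -a*y + b*u  tracks the model
# dym/dt = -am*(ym - r).  Gains, the square-wave reference, and the load
# change at t = 10 s are all illustrative.
dt, T = 0.001, 20.0
a, b = 1.0, 0.5                 # "unknown" plant parameters
am, gamma = 2.0, 1.0            # reference-model pole, adaptation rate
y = ym = th1 = th2 = 0.0
for k in range(int(T / dt)):
    t = k * dt
    if t > 10.0:
        b = 0.25                            # sudden load change halves gain
    r = 1.0 if (int(t) // 2) % 2 == 0 else -1.0   # slow square wave
    u = th1 * r - th2 * y                   # adaptive feedforward + feedback
    e = y - ym                              # model-following error
    th1 -= gamma * e * r * dt               # gradient (MIT-rule) updates
    th2 += gamma * e * y * dt
    y += (-a * y + b * u) * dt              # Euler integration of plant
    ym += (-am * ym + am * r) * dt          # and of reference model
print(f"theta1={th1:.2f}  theta2={th2:.2f}  final error={y - ym:.4f}")
```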

  19. Safety effects of exclusive and concurrent signal phasing for pedestrian crossing.

    PubMed

    Zhang, Yaohua; Mamun, Sha A; Ivan, John N; Ravishanker, Nalini; Haque, Khademul

    2015-10-01

    This paper describes the estimation of pedestrian crash count and vehicle interaction severity prediction models for a sample of signalized intersections in Connecticut with either concurrent or exclusive pedestrian phasing. With concurrent phasing, pedestrians cross at the same time as motor vehicle traffic in the same direction receives a green phase, while with exclusive phasing, pedestrians cross during their own phase when all motor vehicle traffic on all approaches is stopped. Pedestrians crossing at each intersection were observed and classified according to the severity of interactions with motor vehicles. Observation intersections were selected to represent both types of signal phasing while controlling for other physical characteristics. In the nonlinear mixed models for interaction severity, pedestrians crossing on the walk signal at an exclusive signal experienced lower interaction severity compared to those crossing on the green light with concurrent phasing; however, pedestrians crossing on a green light where an exclusive phase was available experienced higher interaction severity. Intersections with concurrent phasing have fewer total pedestrian crashes than those with exclusive phasing but more crashes at higher severity levels. It is recommended that exclusive pedestrian phasing only be used at locations where pedestrians are more likely to comply. Copyright © 2015. Published by Elsevier Ltd.

  20. 40 CFR 798.5955 - Heritable translocation test in drosophila melanogaster.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Drosophila stocks may also be used. (4) Control groups. (i) Concurrent positive and negative (vehicle... size of the negative (vehicle) control group should be determined by the availability of appropriate... defined parameters. The spontaneous mutant frequency observed in the appropriate control group will...
