Science.gov

Sample records for algorithm execution time

  1. Execution Time Optimization Analysis on Multiple Algorithms Performance of Moving Object Edge Detection

    NASA Astrophysics Data System (ADS)

    Islam, Syed Zahurul; Islam, Syed Zahidul; Jidin, Razali; Ali, Mohd. Alauddin Mohd.

    2010-06-01

    Computer vision and digital image processing comprise a variety of applications, including convolution, edge detection, and contrast enhancement. This paper presents an execution time optimization analysis of the Sobel and Canny image processing algorithms for moving object edge detection. Both edge detection algorithms are described with pseudocode and detailed flow charts and implemented in C and MATLAB, respectively, on different platforms to evaluate performance and execution time for moving cars. It is shown that the Sobel algorithm is very effective for multiple moving cars and blurred images, with efficient execution time. Moreover, the convolution operation of Canny accounts for 94-95% of its total execution time, while producing thin and smooth but redundant edges. This makes Sobel the more robust choice for detecting the edges of moving cars.
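
    As an illustration of the Sobel stage discussed above, here is a minimal Python sketch (assuming NumPy and SciPy are available; this is not the authors' C/MATLAB implementation) that computes a gradient-magnitude edge map from a grayscale frame:

      import numpy as np
      from scipy.signal import convolve2d

      def sobel_edges(gray):
          """Gradient-magnitude edge map of a 2-D grayscale image."""
          kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient
          ky = kx.T                                            # vertical gradient
          gx = convolve2d(gray, kx, mode='same', boundary='symm')
          gy = convolve2d(gray, ky, mode='same', boundary='symm')
          return np.hypot(gx, gy)  # per-pixel edge strength

      # Thresholding the returned magnitude yields a binary edge map; the two
      # convolutions dominate the cost, which is why the paper's timing
      # analysis concentrates on the convolution operation.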

  2. Execution time supports for adaptive scientific algorithms on distributed memory machines

    NASA Technical Reports Server (NTRS)

    Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey

    1990-01-01

    Optimizations are considered that are required for efficient execution of code segments that consist of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives the appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communication patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.
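
    The gather primitive described here follows the inspector/executor pattern: a communication schedule is derived once at runtime and then reused. A minimal single-process Python sketch of the idea (hypothetical names and a block distribution; the real PARTI primitives are library routines for distributed memory machines):

      # Inspector: given the global indices a loop will read, derive a
      # communication schedule mapping each owning processor to the indices
      # requested from it (ownership is block-distributed here).
      def inspect(global_indices, block_size):
          schedule = {}
          for g in global_indices:
              schedule.setdefault(g // block_size, []).append(g)
          return schedule

      # Executor: perform the gather with the precomputed schedule;
      # local_blocks[p] holds processor p's slice of the distributed array.
      def execute_gather(schedule, local_blocks, block_size):
          fetched = {}
          for owner, idxs in schedule.items():  # one message per owner
              for g in idxs:
                  fetched[g] = local_blocks[owner][g % block_size]
          return fetched

      # The schedule is computed once and reused across loop iterations,
      # which is where the runtime optimization pays off.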

  3. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the real-time algorithmic execution of an avionics guidance and control problem is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
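
    The critical path analysis mentioned above amounts to finding the longest path through the task graph, which lower-bounds any schedule. A short Python sketch (task names and durations are hypothetical):

      from graphlib import TopologicalSorter

      def critical_path_length(duration, preds):
          """Longest path through a task DAG.
          duration: {task: time}; preds: {task: [predecessor tasks]}."""
          finish = {}
          for t in TopologicalSorter(preds).static_order():
              start = max((finish[p] for p in preds.get(t, [])), default=0)
              finish[t] = start + duration[t]
          return max(finish.values())

      preds = {'sensorA': [], 'sensorB': [], 'integrate': ['sensorA', 'sensorB']}
      duration = {'sensorA': 2, 'sensorB': 3, 'integrate': 4}
      print(critical_path_length(duration, preds))  # -> 7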

  4. Executive Mind, Timely Action.

    ERIC Educational Resources Information Center

    Torbert, William R.

    1983-01-01

    The idea of "Executive Mind" carries with it the notion of purposeful and effective action. Part I of this paper characterizes three complements to "Executive Mind"--"Observing Mind,""Theorizing Mind," and "Passionate Mind"--and offers historical figures exemplifying all four types. The concluding…

  5. Resource Selection Using Execution and Queue Wait Time Predictions

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Wong, Parkson; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Computational grids provide users with many possible places to execute their applications. We wish to help users select where to run their applications by providing predictions of the execution times of applications on space-shared parallel computers and predictions of when scheduling systems for such parallel computers will start applications. Our predictions are based on instance-based learning techniques and simulations of scheduling algorithms. We find that our execution time prediction techniques have an average error of 37 percent of the execution times for trace data recorded from SGI Origins at NASA Ames Research Center, and that this error is 67 percent lower than the error of user estimates. We also find that the error when predicting how long applications will wait in scheduling queues is 95 percent of mean queue wait times when using our execution time predictions, and this is 57 percent lower than if we use user execution time estimates.
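
    Instance-based learning of the kind described here predicts a new job's runtime from the observed runtimes of historically similar jobs. A minimal k-nearest-neighbor sketch (the features and numbers are hypothetical, not the NASA trace format):

      import numpy as np

      def predict_runtime(hist_X, hist_t, query, k=2):
          """hist_X: (n, d) job features (e.g., CPUs requested, user estimate);
          hist_t: (n,) observed runtimes; query: (d,) features of the new job."""
          dists = np.linalg.norm(hist_X - query, axis=1)
          nearest = np.argsort(dists)[:k]
          return hist_t[nearest].mean()  # mean runtime of the k most similar jobs

      X = np.array([[16, 3600], [16, 3500], [64, 7200]], dtype=float)
      t = np.array([2400.0, 2300.0, 6900.0])
      print(predict_runtime(X, t, np.array([16.0, 3600.0])))  # -> 2350.0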

  6. Attitude-Control Algorithm for Minimizing Maneuver Execution Errors

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet

    2008-01-01

    A G-RAC attitude-control algorithm is used to minimize maneuver execution error in a spacecraft with a flexible appendage that must induce translational momentum by firing large thrusters in open loop along a desired direction for a given period of time. The controller is dynamic, with two integrators, and requires measurement of only the angular position and velocity of the spacecraft. The global stability of the closed-loop system is guaranteed without access to the states describing the dynamics of the appendage and with severe saturation in the available torque. Spacecraft apply open-loop thruster firings to induce a desired translational momentum with an extended appendage; this control algorithm assists the maneuver by stabilizing the attitude dynamics around a desired orientation, and consequently minimizes the maneuver execution errors.

  7. Execution time support for scientific programs on distributed memory machines

    NASA Technical Reports Server (NTRS)

    Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey

    1990-01-01

    Optimizations are considered that are required for efficient execution of code segments that consist of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives the appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communication patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.

  8. Motor and Executive Control in Repetitive Timing of Brief Intervals

    ERIC Educational Resources Information Center

    Holm, Linus; Ullen, Fredrik; Madison, Guy

    2013-01-01

    We investigated the causal role of executive control functions in the production of brief time intervals by means of a concurrent task paradigm. To isolate the influence of executive functions on timing from motor coordination effects, we dissociated executive load from the number of effectors used in the dual task situation. In 3 experiments,…

  9. Accelerating the Gillespie Exact Stochastic Simulation Algorithm using hybrid parallel execution on graphics processing units.

    PubMed

    Komarov, Ivan; D'Souza, Roshan M

    2012-01-01

    The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques for simulating reaction kinetics in situations where reactant concentrations are too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and, due to the stochastic nature of the simulation, the need for multiple runs in parameter sweep exercises. Even very efficient variants of the GSSA become prohibitively expensive when such parameter sweeps must be computed. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
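
    For reference, the serial direct-method GSSA that the paper accelerates can be sketched in a few lines of Python (a generic textbook version, not the authors' GPU variant):

      import numpy as np

      def gillespie_step(state, stoich, propensities, t, rng):
          """One direct-method step: sample the waiting time and the reaction."""
          a = propensities(state)
          a0 = a.sum()
          if a0 == 0.0:
              return state, np.inf            # no reaction can fire
          tau = rng.exponential(1.0 / a0)     # time to the next reaction
          j = rng.choice(len(a), p=a / a0)    # which reaction fires
          return state + stoich[j], t + tau

      # Example: A -> B with rate constant 0.5.
      rng = np.random.default_rng(0)
      state, t = np.array([100, 0]), 0.0
      stoich = np.array([[-1, 1]])
      prop = lambda s: np.array([0.5 * s[0]])
      while t < 1.0 and state[0] > 0:
          state, t = gillespie_step(state, stoich, prop, t, rng)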

  10. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity, as real-time and streaming data, in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. MRPack uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.
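
    The core idea, running several related algorithms over the same input split within one job, can be sketched as a multi-key map function in Python (a conceptual sketch, not the authors' Hadoop code):

      # Each mapper applies every registered algorithm to each record and
      # prefixes the emitted key with an algorithm id, so a single shuffle
      # phase carries the intermediate data of all algorithms at once.
      def multi_algorithm_map(record, algorithms):
          for alg_id, alg in algorithms.items():
              for key, value in alg(record):
                  yield (alg_id, key), value  # composite key avoids collisions

      # Two toy "related algorithms": word count and line-length count.
      algorithms = {
          'wordcount': lambda line: [(w, 1) for w in line.split()],
          'linelen':   lambda line: [(len(line), 1)],
      }
      for kv in multi_algorithm_map("the quick brown fox", algorithms):
          print(kv)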

  11. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    PubMed Central

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity, as real-time and streaming data, in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. MRPack uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223

  12. Sorting on STAR. [CDC computer algorithm timing comparison

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)^2 as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
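
    Unlike Quicksort, Batcher's method performs a fixed, data-independent sequence of compare-exchange operations, which is what maps so well onto vector instructions: all pairs at the same stride can be exchanged in one vector operation. A compact Python sketch for power-of-two sizes (illustrative only; the STAR version was of course written against its vector instruction set):

      def batcher_pairs(lo, hi):
          """Compare-exchange pairs of Batcher's odd-even mergesort for
          indices lo..hi inclusive (the size must be a power of two)."""
          def merge(lo, hi, r):
              step = r * 2
              if step < hi - lo:
                  yield from merge(lo, hi, step)
                  yield from merge(lo + r, hi, step)
                  yield from ((i, i + r) for i in range(lo + r, hi - r, step))
              else:
                  yield (lo, lo + r)
          if hi > lo:
              mid = lo + (hi - lo) // 2
              yield from batcher_pairs(lo, mid)
              yield from batcher_pairs(mid + 1, hi)
              yield from merge(lo, hi, 1)

      data = [5, 1, 7, 3, 2, 8, 6, 4]
      for i, j in batcher_pairs(0, len(data) - 1):  # fixed comparison network
          if data[i] > data[j]:
              data[i], data[j] = data[j], data[i]
      print(data)  # [1, 2, 3, 4, 5, 6, 7, 8]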

  13. Modelling Limit Order Execution Times from Market Data

    NASA Astrophysics Data System (ADS)

    Kim, Adlar; Farmer, Doyne; Lo, Andrew

    2007-03-01

    Although the term ``liquidity'' is widely used in the finance literature, its meaning is very loosely defined and there is no quantitative measure for it. Generally, ``liquidity'' means an ability to quickly trade stocks without causing a significant impact on the stock price. From this definition, we identified two facets of liquidity: (1) execution time of limit orders, and (2) price impact of market orders. A limit order is an order to transact a prespecified number of shares at a prespecified price, which will not cause an immediate execution. A market order, on the other hand, is an order to transact a prespecified number of shares at the market price, which will cause an immediate execution but is subject to price impact. Therefore, when a stock is liquid, market participants will experience quick limit order executions and small market order impacts. As a first step toward understanding market liquidity, we studied the facet of liquidity related to limit order executions -- execution times. In this talk, we propose a novel approach to modeling limit order execution times and show how they are affected by the size and price of orders. We used the q-Weibull distribution, a generalized form of the Weibull distribution that can control the fatness of the tail, to model limit order execution times.
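
    For reference, one common parameterization of the q-Weibull density used for such fat-tailed duration fits (an assumption on our part; the talk's exact parameterization is not given) is f(x) = (2-q)(beta/eta)(x/eta)^(beta-1) [1-(1-q)(x/eta)^beta]^(1/(1-q)), which recovers the ordinary Weibull as q -> 1. A NumPy sketch for the fat-tailed regime 1 < q < 2:

      import numpy as np

      def q_weibull_pdf(x, q, beta, eta):
          """q-Weibull density for 1 < q < 2; larger q means a fatter tail."""
          u = (x / eta) ** beta
          e_q = (1.0 - (1.0 - q) * u) ** (1.0 / (1.0 - q))  # q-exponential of -u
          return (2.0 - q) * (beta / eta) * (x / eta) ** (beta - 1) * e_q

      x = np.linspace(0.1, 10.0, 5)
      print(q_weibull_pdf(x, q=1.3, beta=0.9, eta=2.0))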

  14. The Productive Executive: Top Ten Time Tips.

    ERIC Educational Resources Information Center

    Starr, Linda

    1984-01-01

    Presents 10 time management strategies to help administrators save time. The strategies discussed include lists, deadlines, work delegation, skill development, teamwork, flexibility, avoiding interruptions and procrastination, converting waiting or traveling time into productive time, and use of forms. (MBR)

  15. Programming real-time executives in higher order language

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.

    1982-01-01

    Methods by which real-time executive programs can be implemented in a higher order language are discussed, using the HAL/S and Path Pascal languages as program examples. Techniques are presented by which noncyclic tasks can readily be incorporated into the executive system. Situations are shown where the executive system can fail to meet its task schedule and yet recover, either by rephasing the clock or by stacking the information for later processing. The concept of deadline processing is shown to enable more effective mixing of time- and information-synchronized systems.

  16. Timing formulas for dissection algorithms on vector computers

    NASA Technical Reports Server (NTRS)

    Poole, W. G., Jr.

    1977-01-01

    The use of the finite element and finite difference methods often leads to the problem of solving large, sparse, positive definite systems of linear equations. MACSYMA plays a major role in the generation of formulas representing the time required for execution of the dissection algorithms. The use of MACSYMA in the generation of those formulas is described.

  17. The minimal time detection algorithm

    NASA Technical Reports Server (NTRS)

    Kim, Sungwan

    1995-01-01

    An aerospace vehicle may operate throughout a wide range of flight environmental conditions that affect its dynamic characteristics. Even when the control design incorporates a degree of robustness, system parameters may drift enough to cause its performance to degrade below an acceptable level. The object of this paper is to develop a change detection algorithm so that we can build a highly adaptive control system applicable to aircraft systems. The idea is to detect system changes with minimal time delay. The algorithm developed is called the Minimal Time-Change Detection Algorithm (MT-CDA); it detects the instant of change as quickly as possible while keeping the false-alarm probability below a specified level. Simulation results for aircraft lateral motion with a known or unknown change in the control gain matrices, in the presence of a doublet input, indicate that the algorithm works fairly well, as theory indicates, though there is difficulty in deciding the exact amount of change in some situations. One of MT-CDA's distinguishing properties is that its detection delay is superior to that of the Whiteness Test.
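
    The MT-CDA itself is not reproduced here, but the general shape of minimal-delay change detection under a false-alarm constraint is captured by the classic CUSUM recursion, sketched below (a generic illustration only, not the paper's algorithm; the drift and threshold values are hypothetical):

      def cusum_detect(samples, nominal_mean, drift, threshold):
          """Return the index at which an upward mean shift is declared, or
          None. A larger threshold lowers false alarms but adds delay."""
          s = 0.0
          for k, x in enumerate(samples):
              s = max(0.0, s + (x - nominal_mean - drift))  # accumulate evidence
              if s > threshold:
                  return k
          return None

      data = [0.1, -0.2, 0.0, 0.1, 1.2, 1.0, 1.3, 0.9]  # mean shifts after index 3
      print(cusum_detect(data, nominal_mean=0.0, drift=0.25, threshold=2.0))  # -> 6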

  18. An algorithm to find critical execution paths of software based on complex network

    NASA Astrophysics Data System (ADS)

    Huang, Guoyan; Zhang, Bing; Ren, Rong; Ren, Jiadong

    2015-01-01

    Critical execution paths play an important role in software systems in terms of reducing the amount of test data, detecting vulnerabilities in software structure, and analyzing software reliability. However, no efficient methods to discover them have been available so far. Thus, in this paper, a complex network-based software algorithm is put forward to find critical execution paths (FCEP) in a software execution network. First, by analyzing the number of sources and sinks in FCEP, the software execution network is divided into AOE subgraphs, and a Software Execution Network Serialization (SENS) approach is designed to generate the execution path set in each AOE subgraph, which not only reduces the influence of ring structures on path generation, but also guarantees the integrity of nodes in the network. Second, according to a novel path similarity metric, a similarity matrix is created to calculate the similarity among sets of path sequences. Third, an efficient method is used to cluster paths through the similarity matrices, and the maximum-length path in each cluster is extracted as the critical execution path. Finally, a set of critical execution paths is derived. The experimental results show that the FCEP algorithm is efficient in mining critical execution paths in software complex networks.

  19. Predicting Operator Execution Times Using CogTool

    NASA Technical Reports Server (NTRS)

    Santiago-Espada, Yamira; Latorella, Kara A.

    2013-01-01

    Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes CogTool models for a two-pilot crew executing two different types of datalink clearance acceptance tasks on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in videotapes and recorded in simulation files. Results indicate no statistically significant difference between empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.

  20. Discrete Event Execution with One-Sided and Two-Sided GVT Algorithms on 216,000 Processor Cores

    SciTech Connect

    Perumalla, Kalyan S; Park, Alfred J; Tipparaju, Vinod

    2014-01-01

    Global virtual time (GVT) computation is a key determinant of the efficiency and runtime dynamics of parallel discrete event simulations (PDES), especially on large-scale parallel platforms. Here, three execution modes of a generalized GVT computation algorithm are studied on high-performance parallel computing systems: (1) a synchronous GVT algorithm that affords ease of implementation, (2) an asynchronous GVT algorithm that is more complex to implement but can relieve blocking latencies, and (3) a variant of the asynchronous GVT algorithm to exploit one-sided communication in extant supercomputing platforms. Performance results are presented of implementations of these algorithms on up to 216,000 cores of a Cray XT5 system, exercised on a range of parameters: optimistic and conservative synchronization, fine- to medium-grained event computation, synthetic and non-synthetic applications, and different lookahead values. Performance of up to 54 billion events executed per second is registered. Detailed PDES-specific runtime metrics are presented to further the understanding of tightly-coupled discrete event dynamics on massively parallel platforms.
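
    In its simplest synchronous mode, a GVT computation is a global minimum reduction over each rank's local virtual time and the timestamps of its unacknowledged sent messages. A minimal mpi4py sketch of that mode (variable names are hypothetical, and a real implementation must also account for messages still in flight, which this sketch assumes have been flushed):

      from mpi4py import MPI

      def synchronous_gvt(local_virtual_time, unacked_send_timestamps):
          """Blocking GVT: the global minimum over local virtual times and
          sent-message timestamps, computed with a collective reduction."""
          local_min = min([local_virtual_time] + unacked_send_timestamps)
          return MPI.COMM_WORLD.allreduce(local_min, op=MPI.MIN)

      # Events with timestamp < GVT are irrevocably committed; their memory
      # can be reclaimed (fossil collection) and their I/O safely released.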

  1. Real-Time Projection to Verify Plan Success During Execution

    NASA Technical Reports Server (NTRS)

    Wagner, David A.; Dvorak, Daniel L.; Rasmussen, Robert D.; Knight, Russell L.; Morris, John R.; Bennett, Matthew B.; Ingham, Michel D.

    2012-01-01

    The Mission Data System provides a framework for modeling complex systems in terms of system behaviors and goals that express intent. Complex activity plans can be represented as goal networks that express the coordination of goals on different state variables of the system. Real-time projection extends the ability of this system to verify plan achievability (all goals can be satisfied over the entire plan) into the execution domain so that the system is able to continuously re-verify a plan as it is executed, and as the states of the system change in response to goals and the environment. Previous versions were able to detect and respond to goal violations when they actually occur during execution. This new capability enables the prediction of future goal failures; specifically, goals that were previously found to be achievable but are no longer achievable due to unanticipated faults or environmental conditions. Early detection of such situations enables operators or an autonomous fault response capability to deal with the problem at a point that maximizes the available options. For example, this system has been applied to the problem of managing battery energy on a lunar rover as it is used to explore the Moon. Astronauts drive the rover to waypoints and conduct science observations according to a plan that is scheduled and verified to be achievable with the energy resources available. As the astronauts execute this plan, the system uses this new capability to continuously re-verify the plan as energy is consumed to ensure that the battery will never be depleted below safe levels across the entire plan.
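
    Reduced to its essentials, the battery example is a projection loop: starting from the current state, simulate the energy effect of every remaining activity and check the safety constraint at each step. A toy Python sketch (all names and numbers are hypothetical, not the Mission Data System API):

      def plan_still_achievable(charge_wh, remaining_activities, min_safe_wh):
          """Project battery state across the remaining plan; return False as
          soon as any projected state violates the safety floor."""
          for name, net_wh in remaining_activities:  # negative = consumption
              charge_wh += net_wh
              if charge_wh < min_safe_wh:
                  return False  # predicted future goal failure; react early
          return True

      plan = [("drive-to-waypoint", -300), ("science-obs", -120), ("recharge", +500)]
      print(plan_still_achievable(1000, plan, min_safe_wh=500))  # -> True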

  2. Overlap of movement planning and movement execution reduces reaction time.

    PubMed

    Orban de Xivry, Jean-Jacques; Legrain, Valéry; Lefèvre, Philippe

    2017-01-01

    Motor planning is the process of preparing the appropriate motor commands in order to achieve a goal. This process has largely been thought to occur before movement onset and traditionally has been associated with reaction time. However, in a virtual line bisection task we observed an overlap between movement planning and execution. In this task performed with a robotic manipulandum, we observed that participants (n = 30) made straight movements when the line was in front of them (near target) but often made curved movements when the same target was moved sideways (far target, which had the same orientation) in such a way that they crossed the line perpendicular to its orientation. Unexpectedly, movements to the far targets had shorter reaction times than movements to the near targets (mean difference: 32 ms, SE: 5 ms, max: 104 ms). In addition, the curvature of the movement modulated reaction time. A larger increase in movement curvature from the near to the far target was associated with a larger reduction in reaction time. These highly curved movements started with a transport phase during which accuracy demands were not taken into account. We conclude that an accuracy demand imposes a reaction time penalty if processed before movement onset. This penalty is reduced if the start of the movement consists of a transport phase and if the movement plan can be refined with respect to accuracy demands later in the movement, hence demonstrating an overlap between movement planning and execution.

  3. Execution environment for intelligent real-time control systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, Janos

    1987-01-01

    Modern telerobot control technology requires the integration of symbolic and non-symbolic programming techniques, different models of parallel computations, and various programming paradigms. The Multigraph Architecture, which has been developed for the implementation of intelligent real-time control systems is described. The layered architecture includes specific computational models, integrated execution environment and various high-level tools. A special feature of the architecture is the tight coupling between the symbolic and non-symbolic computations. It supports not only a data interface, but also the integration of the control structures in a parallel computing environment.

  4. Modeling and Executing Electronic Health Records Driven Phenotyping Algorithms using the NQF Quality Data Model and JBoss® Drools Engine

    PubMed Central

    Li, Dingcheng; Endle, Cory M; Murthy, Sahana; Stancl, Craig; Suesse, Dale; Sottara, Davide; Huff, Stanley M.; Chute, Christopher G.; Pathak, Jyotishman

    2012-01-01

    With increasing adoption of electronic health records (EHRs), the need for formal representations of EHR-driven phenotyping algorithms has been recognized for some time. The recently proposed Quality Data Model (QDM) from the National Quality Forum (NQF) provides an information model and a grammar intended to represent data collected during routine clinical care in EHRs, as well as the basic logic required to represent the algorithmic criteria for phenotype definitions. The QDM is further aligned with Meaningful Use standards to ensure that the clinical data and algorithmic criteria are represented in a consistent, unambiguous and reproducible manner. However, phenotype definitions represented in QDM, while structured, cannot be executed readily on existing EHRs; rather, human interpretation and subsequent implementation are required. To address this need, the current study investigates the open-source JBoss® Drools rules engine for automatic translation of QDM criteria into rules for execution over EHR data. In particular, using the Apache Foundation’s Unstructured Information Management Architecture (UIMA) platform, we developed a translator tool for converting QDM-defined phenotyping algorithm criteria into executable Drools rules scripts, and demonstrated their execution on real patient data from Mayo Clinic to identify cases of Coronary Artery Disease and Diabetes. To the best of our knowledge, this is the first study illustrating a framework and an approach for executing phenotyping criteria modeled in QDM using the Drools business rules management system. PMID:23304325

  5. Modeling and executing electronic health records driven phenotyping algorithms using the NQF Quality Data Model and JBoss® Drools Engine.

    PubMed

    Li, Dingcheng; Endle, Cory M; Murthy, Sahana; Stancl, Craig; Suesse, Dale; Sottara, Davide; Huff, Stanley M; Chute, Christopher G; Pathak, Jyotishman

    2012-01-01

    With increasing adoption of electronic health records (EHRs), the need for formal representations of EHR-driven phenotyping algorithms has been recognized for some time. The recently proposed Quality Data Model (QDM) from the National Quality Forum (NQF) provides an information model and a grammar intended to represent data collected during routine clinical care in EHRs, as well as the basic logic required to represent the algorithmic criteria for phenotype definitions. The QDM is further aligned with Meaningful Use standards to ensure that the clinical data and algorithmic criteria are represented in a consistent, unambiguous and reproducible manner. However, phenotype definitions represented in QDM, while structured, cannot be executed readily on existing EHRs; rather, human interpretation and subsequent implementation are required. To address this need, the current study investigates the open-source JBoss® Drools rules engine for automatic translation of QDM criteria into rules for execution over EHR data. In particular, using the Apache Foundation's Unstructured Information Management Architecture (UIMA) platform, we developed a translator tool for converting QDM-defined phenotyping algorithm criteria into executable Drools rules scripts, and demonstrated their execution on real patient data from Mayo Clinic to identify cases of Coronary Artery Disease and Diabetes. To the best of our knowledge, this is the first study illustrating a framework and an approach for executing phenotyping criteria modeled in QDM using the Drools business rules management system.
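
    The end product of such a translation is a set of executable rules evaluated against patient records. The flavor of one phenotype criterion, rendered as a plain Python predicate rather than actual Drools DRL (the code set and threshold below are invented for illustration):

      # Toy criterion: a "diabetes case" has a qualifying diagnosis code AND
      # at least two elevated HbA1c laboratory results.
      QUALIFYING_CODES = {"250.00", "E11.9"}  # hypothetical diagnosis code set

      def is_diabetes_case(patient):
          has_dx = bool(QUALIFYING_CODES & set(patient["diagnoses"]))
          high_a1c = sum(1 for v in patient["hba1c"] if v >= 6.5)
          return has_dx and high_a1c >= 2

      patient = {"diagnoses": ["E11.9"], "hba1c": [7.1, 6.8, 5.9]}
      print(is_diabetes_case(patient))  # -> True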

  6. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
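
    The compress/decompress cycle can be demonstrated with NumPy's Chebyshev routines: fit a low-order Chebyshev series over each fitting interval and keep only the coefficients (a minimal sketch of the idea; the block length and degree here are arbitrary, not the flight parameters):

      import numpy as np
      from numpy.polynomial import chebyshev as C

      def compress_block(block, degree):
          """Fit a Chebyshev series over the fitting interval mapped to
          [-1, 1]; the coefficients are the compressed representation."""
          x = np.linspace(-1.0, 1.0, len(block))
          return C.chebfit(x, block, degree)

      def decompress_block(coeffs, n):
          return C.chebval(np.linspace(-1.0, 1.0, n), coeffs)

      block = np.sin(np.linspace(0.0, 3.0, 64)) + 0.7       # 64 samples
      coeffs = compress_block(block, degree=7)              # kept: 8 numbers
      print(np.max(np.abs(decompress_block(coeffs, 64) - block)))  # small error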

  7. Algorithms for international atomic time.

    PubMed

    Panfilo, Gianna; Arias, E Felicitas

    2010-01-01

    This article reviews the creation and technical evolution of atomic time scales. In particular, we focus our attention on the method of calculation and the characteristics of International Atomic Time (TAI), and show how it is disseminated at the ultimate level of precision.

  8. Executive Control of Actions Across Time and Space

    PubMed Central

    Verbruggen, Frederick

    2016-01-01

    Many popular psychological accounts attribute adaptive human behavior to an “executive-control” system that regulates a lower-level “impulsive” or “associative” system. However, recent findings argue against this strictly hierarchical view. Instead, executive control of impulsive and inappropriate actions depends on an interplay between multiple basic cognitive processes. The outcome of these processes can be biased in advance. Executive-action control is also strongly influenced by personal experiences in the recent and distant past. Thus, executive control emerges from an interactive and competitive network. The main challenges for future research are to describe and understand these interactions and to put executive-action control in a wider sociocultural and evolutionary context. PMID:28018053

  9. Conversion-Integration of MSFC Nonlinear Signal Diagnostic Analysis Algorithms for Realtime Execution of MSFC's MPP Prototype System

    NASA Technical Reports Server (NTRS)

    Jong, Jen-Yi

    1996-01-01

    NASA's advanced propulsion system, the Space Shuttle Main Engine/Advanced Technology Development (SSME/ATD) engine, has been undergoing extensive flight certification and developmental testing, which involves large numbers of health monitoring measurements. To enhance engine safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess its dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce the risk of catastrophic system failures and expedite the evaluation of both flight and ground test data, thereby reducing launch turn-around time. During the development of the SSME, ASRI participated in the research and development of several advanced nonlinear signal diagnostic methods for health monitoring and failure prediction in turbomachinery components. However, due to the intensive computational requirements associated with such advanced analysis tasks, current SSME dynamic data analysis and diagnostic evaluation is performed off-line following flight or ground test, with a typical diagnostic turnaround time of one to two days. The objective of MSFC's MPP Prototype System is to eliminate such 'diagnostic lag time' by achieving signal processing and analysis in real time. Such an on-line diagnostic system can provide sufficient lead time to initiate corrective action and also enable efficient scheduling of inspection, maintenance and repair activities. The major objective of this project was to convert and implement a number of advanced nonlinear diagnostic DSP algorithms in a format consistent with that required for integration into the Vanderbilt Multigraph Architecture (MGA) Model Based Programming environment. This effort will allow the real-time execution of these algorithms using the MSFC MPP Prototype System. ASRI has completed the software conversion and integration of a sequence of nonlinear signal analysis techniques specified in the SOW for real-time

  10. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    PubMed Central

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-01-01

    Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collecting and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task are analyzed, and a detailed analysis report is made. According to the results, the prediction accuracy of concurrent tasks’ execution time can be improved, in particular for some regular jobs. PMID:27589753
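
    A two-phase (segmented) regression of the general kind named here fits two linear pieces joined at a breakpoint chosen to minimize residual error; extrapolating the second phase then yields a finishing-time estimate. A generic NumPy sketch (not necessarily the paper's exact TPR formulation):

      import numpy as np

      def two_phase_fit(t, progress):
          """Try every interior breakpoint; fit a line to each side and keep
          the split with the smallest total squared residual."""
          best = None
          for b in range(2, len(t) - 2):
              sse = 0.0
              for seg_t, seg_p in ((t[:b], progress[:b]), (t[b:], progress[b:])):
                  coef = np.polyfit(seg_t, seg_p, 1)
                  sse += float(np.sum((np.polyval(coef, seg_t) - seg_p) ** 2))
              if best is None or sse < best[0]:
                  best = (sse, b, np.polyfit(t[b:], progress[b:], 1))
          return best  # (error, breakpoint, phase-2 slope and intercept)

      t = np.arange(10.0)
      p = np.concatenate([0.02 * t[:5], 0.1 + 0.18 * (t[5:] - 5.0)])  # slow, then fast
      sse, b, (slope, intercept) = two_phase_fit(t, p)
      print((1.0 - intercept) / slope)  # time at which progress reaches 1.0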

  11. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.

    PubMed

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-08-30

    Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collecting and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task are analyzed, and a detailed analysis report is made. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.

  12. A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform

    PubMed Central

    Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A.; Kiefer, Richard; Rasmussen, Luke V.; Pathak, Jyotishman; Denny, Joshua C.; Thompson, William K.

    2016-01-01

    The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. Informatics for Integrating Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows for complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages, and execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository. PMID:27570665

  13. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^-15 for periods of 30-100 days.

  14. Integrated Planning: Consolidating Annual Facility Planning - More Time for Execution

    SciTech Connect

    Nelson, J. G.; R., L. Morton; Ramirez, C.; Morris, P. S.; McSwain, J. T.

    2011-02-02

    Previously, annual planning for Readiness in Technical Base and Facilities (RTBF) at the Nevada National Security Site (NNSS) was fragmented, disconnected, circular, and occurred constantly throughout the fiscal year (FY) comprising 9 of the 12 months, reducing the focus on implementation and execution. This required constant “looking back” instead of “looking forward.” In FY 2009, annual planning was consolidated into one comprehensive integrated plan (IP) for each facility/project, which comprised annual task planning/outyear budgeting, AMPs, and investment planning (i.e., TYIP). In FY 2010, the Risk Management Plans were added to the IPs. The integrated planning process achieved the following: 1) Eliminated fragmented, circular, planning and moved the plan to be more forward-looking; 2) Achieved a 90% reduction in schedule planning timeframe from 40 weeks (9 months) to 6 weeks; 3) Achieved an 80% reduction in cost from just under $1.0M to just over $200K, for a cost savings of nearly $800K (reduced combined effort from over 200 person-weeks to less than 40); 4) Reduced the number of plans generated from 21 plans (1 per facility per plan) per year to 8 plans per year (1 per facility plus 1 program-level IP); 5) Eliminated redundancy in common content between plans and improved consistency and overall quality; 6) Reduced the preparation time and cost of the FY 2010 SEP by 50% due to information provided in the IP; 7) Met the requirements for annual task planning, annual maintenance planning, ten-year investment planning, and risk management plans.

  15. Enhancing real-time flight simulation execution by intercepting Run-Time Library calls

    NASA Technical Reports Server (NTRS)

    Reinbachs, Namejs

    1993-01-01

    Standard operating system input-output (I/O) procedures impose a large time penalty on real-time program execution. These procedures are generally invoked by way of Run-Time Library (RTL) calls. To reduce the time penalty, as well as add flexibility, a technique has been developed to dynamically intercept these calls. The design and implementation of this technique, as applied to FORTRAN WRITE statements, are described. Measured performance gains using this RTL intercept technique are on the order of 1000 percent.
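
    The same interception idea can be illustrated in Python by wrapping a write routine so that many small writes are flushed in large chunks (a conceptual analog only; the paper's technique patched FORTRAN Run-Time Library entry points rather than Python objects):

      import io, sys

      class BatchedWriter:
          """Intercepts write() calls and flushes them in large chunks,
          trading immediacy for far fewer expensive I/O operations."""
          def __init__(self, target, limit=64 * 1024):
              self.target, self.limit, self.buf = target, limit, io.StringIO()
          def write(self, s):
              self.buf.write(s)
              if self.buf.tell() >= self.limit:
                  self.flush()
          def flush(self):
              self.target.write(self.buf.getvalue())
              self.target.flush()
              self.buf = io.StringIO()

      sys.stdout = BatchedWriter(sys.__stdout__)  # subsequent prints are batched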

  16. A simple executive for a fault-tolerant, real-time multiprocessor.

    NASA Technical Reports Server (NTRS)

    Filene, R. J.; Green, A. I.

    1971-01-01

    Description of a simple executive for operation with a fault-tolerant multiprocessor that is oriented toward application in an environment where the primary function is to provide real-time control. The primary executive function is to accept requests for jobs placed by other jobs or from peripheral equipment and then schedule their initiation in accordance with the request parameters. The executive is also brought into action when a processor fails, so that appropriate disposition may be made of the job that was running on the failed processor. Many architectural features intended to support this executive concept are included.

  17. Special Issue on Time Scale Algorithms

    DTIC Science & Technology

    2008-01-01

    This special issue of Metrologia (Metrologia 45 (2008), doi:10.1088/0026-1394/45/6/E01) presents selected papers from the Fifth International Time Scale Algorithm Symposium (VITSAS), including some of the tutorials presented on the first day. • 4th Symposium: in Paris at the BIPM in 2002 (see Metrologia 40 (3), 2003) • 5th Symposium: in San Fernando, Spain at the ROA in 2008. The early symposia were concerned

  18. Device and algorithms for camera timing evaluation

    NASA Astrophysics Data System (ADS)

    Masson, Lucie; Cao, Frédéric; Viard, Clément; Guichard, Frédéric

    2014-01-01

    This paper presents a novel device and algorithms for measuring the different timings of digital cameras shooting both still images and videos. These timings include exposure (or shutter) time, electronic rolling shutter (ERS), frame rate, vertical blanking, time lags, missing frames, and duplicated frames. The device, the DxO LED Universal Timer (or "timer"), is designed to allow remotely-controlled automated timing measurements using five synchronized lines of one hundred LEDs each to provide accurate results; each line can be independently controlled if needed. The device meets the requirements of ISO 15781[1]. Camera timings are measured by automatically counting the number of lit LEDs on each line in still and video images of the device and finding the positions of the LEDs within a single frame or between different frames. Measurement algorithms are completely automated: positional markers on the device facilitate automatic detection of the timer as well as the positions of lit LEDs in the images. No manual computation or positioning is required. We used this system to measure the timings of several smartphones under different lighting and setting parameters.
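
    The counting principle converts directly into a timing estimate: if the LEDs step at a known rate, the number lit within one frame is proportional to the exposure time. A toy Python sketch (the threshold and rate are hypothetical, not the DxO device's specification):

      import numpy as np

      def exposure_from_led_line(line_intensities, led_rate_hz, threshold=128):
          """Estimate exposure time from one LED line of the timer: every LED
          that was on at some point during the exposure appears lit."""
          lit = int(np.count_nonzero(np.asarray(line_intensities) > threshold))
          return lit / led_rate_hz  # seconds

      line = [255] * 10 + [0] * 90          # 10 of 100 LEDs captured lit
      print(exposure_from_led_line(line, led_rate_hz=10_000))  # -> 0.001 s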

  19. Response-Time Variability Is Related to Parent Ratings of Inattention, Hyperactivity, and Executive Function

    ERIC Educational Resources Information Center

    Gomez-Guerrero, Lorena; Martin, Cristina Dominguez; Mairena, Maria Angeles; Di Martino, Adriana; Wang, Jing; Mendelsohn, Alan L.; Dreyer, Benard P.; Isquith, Peter K.; Gioia, Gerard; Petkova, Eva; Castellanos, F. Xavier

    2011-01-01

    Objective: Individuals with ADHD are often characterized as inconsistent across many contexts. ADHD is also associated with deficits in executive function. We examined the relationships between response time (RT) variability on five brief computer tasks to parents' ratings of ADHD-related features and executive function in a group of children with…

  20. EDITORIAL: Special issue on time scale algorithms

    NASA Astrophysics Data System (ADS)

    Matsakis, Demetrios; Tavella, Patrizia

    2008-12-01

    This special issue of Metrologia presents selected papers from the Fifth International Time Scale Algorithm Symposium (VITSAS), including some of the tutorials presented on the first day. The symposium was attended by 76 persons, from every continent except Antarctica, by students as well as senior scientists, and hosted by the Real Instituto y Observatorio de la Armada (ROA) in San Fernando, Spain, whose staff further enhanced their nation's high reputation for hospitality. Although a timescale can be simply defined as a weighted average of clocks, whose purpose is to measure time better than any individual clock, timescale theory has long been and continues to be a vibrant field of research that has both followed and helped to create advances in the art of timekeeping. There is no perfect timescale algorithm, because every one embodies a compromise involving user needs. Some users wish to generate a constant frequency, perhaps not necessarily one that is well-defined with respect to the definition of a second. Other users might want a clock which is as close to UTC or a particular reference clock as possible, or perhaps wish to minimize the maximum variation from that standard. In contrast to the steered timescales that would be required by those users, other users may need free-running timescales, which are independent of external information. While no algorithm can meet all these needs, every algorithm can benefit from some form of tuning. The optimal tuning, and even the optimal algorithm, can depend on the noise characteristics of the frequency standards, or of their comparison systems, the most precise and accurate of which are currently Two Way Satellite Time and Frequency Transfer (TWSTFT) and GPS carrier phase time transfer. The interest in time scale algorithms and its associated statistical methodology began around 40 years ago when the Allan variance appeared and when the metrological institutions started realizing ensemble atomic time using more than
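
    The Allan variance mentioned at the end is the field's standard stability statistic: sigma_y^2(tau) = (1/2) <(ybar_{k+1} - ybar_k)^2>, the half mean-square difference of adjacent tau-averages of the fractional frequency y. A minimal non-overlapped NumPy sketch:

      import numpy as np

      def allan_deviation(y, m):
          """Non-overlapped Allan deviation of fractional-frequency samples y
          at averaging factor m (tau = m * tau0)."""
          n = len(y) // m
          ybar = np.reshape(y[:n * m], (n, m)).mean(axis=1)  # block averages
          return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

      rng = np.random.default_rng(1)
      y = rng.normal(0.0, 1e-13, 100_000)  # white FM noise
      # For white FM noise, the Allan deviation falls as 1/sqrt(tau):
      print(allan_deviation(y, 1), allan_deviation(y, 100))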

  1. Effective Time Management in the Project Office. Executive Summary.

    DTIC Science & Technology

    1976-11-05

    It can be concluded from the study that project managers have difficulty in managing their time and that, although neglected, time management should be taught to project managers so as to preclude spending an inordinate amount of time to accomplish their job. An understanding and use of the time management principles delineated in the study should allow the manager to make much more effective use of his available time. (Author)

  2. Less-structured time in children's daily lives predicts self-directed executive functioning.

    PubMed

    Barker, Jane E; Semenov, Andrei D; Michaelson, Laura; Provan, Lindsay S; Snyder, Hannah R; Munakata, Yuko

    2014-01-01

    Executive functions (EFs) in childhood predict important life outcomes. Thus, there is great interest in attempts to improve EFs early in life. Many interventions are led by trained adults, including structured training activities in the lab, and less-structured activities implemented in schools. Such programs have yielded gains in children's externally-driven executive functioning, where they are instructed on what goal-directed actions to carry out and when. However, it is less clear how children's experiences relate to their development of self-directed executive functioning, where they must determine on their own what goal-directed actions to carry out and when. We hypothesized that time spent in less-structured activities would give children opportunities to practice self-directed executive functioning, and lead to benefits. To investigate this possibility, we collected information from parents about their 6-7 year-old children's daily, annual, and typical schedules. We categorized children's activities as "structured" or "less-structured" based on categorization schemes from prior studies on child leisure time use. We assessed children's self-directed executive functioning using a well-established verbal fluency task, in which children generate members of a category and can decide on their own when to switch from one subcategory to another. The more time that children spent in less-structured activities, the better their self-directed executive functioning. The opposite was true of structured activities, which predicted poorer self-directed executive functioning. These relationships were robust (holding across increasingly strict classifications of structured and less-structured time) and specific (time use did not predict externally-driven executive functioning). We discuss implications, caveats, and ways in which potential interpretations can be distinguished in future work, to advance an understanding of this fundamental aspect of growing up.

  3. Less-structured time in children's daily lives predicts self-directed executive functioning

    PubMed Central

    Barker, Jane E.; Semenov, Andrei D.; Michaelson, Laura; Provan, Lindsay S.; Snyder, Hannah R.; Munakata, Yuko

    2014-01-01

    Executive functions (EFs) in childhood predict important life outcomes. Thus, there is great interest in attempts to improve EFs early in life. Many interventions are led by trained adults, including structured training activities in the lab, and less-structured activities implemented in schools. Such programs have yielded gains in children's externally-driven executive functioning, where they are instructed on what goal-directed actions to carry out and when. However, it is less clear how children's experiences relate to their development of self-directed executive functioning, where they must determine on their own what goal-directed actions to carry out and when. We hypothesized that time spent in less-structured activities would give children opportunities to practice self-directed executive functioning, and lead to benefits. To investigate this possibility, we collected information from parents about their 6–7 year-old children's daily, annual, and typical schedules. We categorized children's activities as “structured” or “less-structured” based on categorization schemes from prior studies on child leisure time use. We assessed children's self-directed executive functioning using a well-established verbal fluency task, in which children generate members of a category and can decide on their own when to switch from one subcategory to another. The more time that children spent in less-structured activities, the better their self-directed executive functioning. The opposite was true of structured activities, which predicted poorer self-directed executive functioning. These relationships were robust (holding across increasingly strict classifications of structured and less-structured time) and specific (time use did not predict externally-driven executive functioning). We discuss implications, caveats, and ways in which potential interpretations can be distinguished in future work, to advance an understanding of this fundamental aspect of growing up.

  4. Phase unwrapping algorithms for use in a true real-time optical body sensor system for use during radiotherapy.

    PubMed

    Parkhurst, James; Price, Gareth; Sharrock, Phil; Moore, Christopher

    2011-12-10

    An evaluation of the suitability of eight existing phase unwrapping algorithms for use in a real-time optical body surface sensor based on Fourier fringe profilometry is presented. The algorithms are assessed on both the robustness of the results they give and their speed of execution. The algorithms are evaluated using four sets of real human body surface data, each containing five hundred frames, obtained from patients undergoing radiotherapy, where fringe discontinuity is significant. We also present modifications to an existing algorithm, the noncontinuous quality-guided path algorithm (NCQUAL), that decrease its execution time by a factor of 4 to make it suitable for use in a real-time system. The results obtained from the modified algorithm are compared with those of the existing algorithms. Three suitable algorithms were identified: the two-stage noncontinuous quality-guided path algorithm (TSNCQUAL), the modified algorithm presented here, for online processing; and Flynn's minimum discontinuity algorithm (FLYNN) and the preconditioned conjugate gradient method (PCG) for enhanced accuracy in off-line processing.
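
    The operation all of these algorithms generalize is one-dimensional unwrapping: wherever the wrapped phase jumps by more than pi between neighbors, an integer multiple of 2*pi is restored. In Python this takes only a few lines (NumPy also ships it as np.unwrap); the 2-D quality-guided and minimum-discontinuity methods compared in the paper differ in the order in which such corrections are propagated across the surface:

      import numpy as np

      def unwrap_1d(phase):
          """Remove artificial 2*pi jumps from a wrapped 1-D phase signal."""
          d = np.diff(phase)
          wraps = np.round(d / (2.0 * np.pi))  # integer wraps at each step
          return phase - 2.0 * np.pi * np.concatenate([[0.0], np.cumsum(wraps)])

      true_phase = np.linspace(0.0, 12.0, 200)        # smooth ramp spanning > 2*pi
      wrapped = np.angle(np.exp(1j * true_phase))     # wrapped into (-pi, pi]
      print(np.allclose(unwrap_1d(wrapped), true_phase))  # -> True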

  5. Time Perception, Phonological Skills and Executive Function in Children with Dyslexia and/or ADHD Symptoms

    ERIC Educational Resources Information Center

    Gooch, Debbie; Snowling, Margaret; Hulme, Charles

    2011-01-01

    Background: Deficits in time perception (the ability to judge the duration of time intervals) have been found in children with both attention-deficit/hyperactivity disorder (ADHD) and dyslexia. This paper investigates time perception, phonological skills and executive functions in children with dyslexia and/or ADHD symptoms (AS). Method: Children…

  6. Why are they late? Timing abilities and executive control among students with learning disabilities.

    PubMed

    Grinblat, Nufar; Rosenblum, Sara

    2016-12-01

    While a deficient ability to perform daily tasks on time has been reported among students with learning disabilities (LD), the underlying mechanism behind their 'being late' is still unclear. This study aimed to evaluate the organization in time, time estimation abilities, and actual performance time pertaining to specific daily activities, as well as the executive functions of students with LD in comparison to those of controls, and to assess the relationships between these domains in each group. The participants were 27 students with LD, aged 20-30, and 32 gender- and age-matched controls who completed the Time Organization and Participation Scale (TOPS) and the Behavioral Rating Inventory of Executive Function-Adult version (BRIEF-A). In addition, their ability to estimate the time needed to complete the task of preparing a cup of coffee as well as their actual performance time were evaluated. The results indicated that in comparison to controls, students with LD showed significantly inferior organization in time (TOPS) and executive function abilities (BRIEF-A). Furthermore, their time estimation abilities were significantly inferior and they required significantly more time to prepare a cup of coffee. Regression analysis identified the variables that predicted organization in time and task performance time in each group. The significance of the results for both theory and clinical practice is discussed. What this paper adds? This study examines the underlying mechanism of the phenomenon of being late among students with LD. Following a recent call for using ecologically valid assessments, the functional daily ability of students with LD to prepare a cup of coffee and to organize time was investigated. Furthermore, their time estimation and executive control abilities were examined as a possible underlying mechanism for their lateness. Although previous studies have indicated executive control deficits among students with LD, to our knowledge, this

  7. Processing Time Shifts Affects the Execution of Motor Responses

    ERIC Educational Resources Information Center

    Sell, Andrea J.; Kaschak, Michael P.

    2011-01-01

    We explore whether time shifts in text comprehension are represented spatially. Participants read sentences involving past or future events and made sensibility judgment responses in one of two ways: (1) moving toward or away from their body and (2) pressing the toward or away buttons without moving. Previous work suggests that spatial…

  8. Discovering of execution patterns of subprograms in execution traces

    NASA Astrophysics Data System (ADS)

    Komorowski, Michał

    2015-09-01

    This article describes an approach to the analysis of historical debugger logs (execution traces). Historical debuggers are tools that provide insight into the history of a program's execution. The author focuses on finding execution patterns of subprograms in these logs in an efficient way and proposes a method of visualising them. Execution patterns are a form of automatically generated specification/documentation of software which shows the usage of subprograms. In order to discover them in execution traces, an existing algorithm was adapted. This algorithm is based on suffix arrays and finds patterns in text application logs in linear time with respect to the length of the logs. Additionally, Extended Call Graphs were introduced to visualise the execution patterns. They contain more information in comparison with standard call graphs.
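
    The pattern-discovery step can be sketched as follows; the suffix array is built naively here for clarity, whereas the adapted algorithm in the article achieves linear time (trace contents and function names are illustrative only):

```python
def longest_repeated_pattern(calls):
    """Longest subprogram-call sequence occurring at least twice in a trace.
    Suffix array built naively for clarity (the adapted algorithm in the
    article achieves linear time); the LCP step is Kasai's algorithm."""
    n = len(calls)
    sa = sorted(range(n), key=lambda i: calls[i:])
    rank = [0] * n
    for r, i in enumerate(sa):
        rank[i] = r
    lcp, h = [0] * n, 0
    for i in range(n):                  # Kasai: LCP with predecessor suffix
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and calls[i + h] == calls[j + h]:
                h += 1
            lcp[rank[i]] = h
            h = max(h - 1, 0)
        else:
            h = 0
    best = max(range(n), key=lcp.__getitem__)
    return calls[sa[best]:sa[best] + lcp[best]]

trace = ["open", "read", "parse", "read", "parse", "close"]
print(longest_repeated_pattern(trace))   # ['read', 'parse']
```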

  9. Two criteria for the selection of assembly plans - Maximizing the flexibility of sequencing the assembly tasks and minimizing the assembly time through parallel execution of assembly tasks

    NASA Technical Reports Server (NTRS)

    Homem De Mello, Luiz S.; Sanderson, Arthur C.

    1991-01-01

    The authors introduce two criteria for the evaluation and selection of assembly plans. The first criterion is to maximize the number of different sequences in which the assembly tasks can be executed. The second criterion is to minimize the total assembly time through simultaneous execution of assembly tasks. An algorithm that performs a heuristic search for the best assembly plan over the AND/OR graph representation of assembly plans is discussed. Admissible heuristics for each of the two criteria introduced are presented. Some implementation issues that affect the computational efficiency are addressed.

  10. Algorithms for Brownian first-passage-time estimation.

    PubMed

    Adib, Artur B

    2009-09-01

    A class of algorithms in discrete space and continuous time for Brownian first-passage-time estimation is considered. A simple algorithm is derived that yields exact mean first-passage times (MFPTs) for linear potentials in one dimension, regardless of the lattice spacing. When applied to nonlinear potentials and/or higher spatial dimensions, numerical evidence suggests that this algorithm yields MFPT estimates that either outperform or rival Langevin-based (discrete time and continuous space) estimates.
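
    The setting of the abstract, discrete space with continuous time, can be sketched as a Gillespie-style walk. The hopping rates below are one common detailed-balance discretization, not necessarily the paper's prescription:

```python
import numpy as np

rng = np.random.default_rng(0)

def mfpt_linear(force=1.0, diffusion=1.0, h=0.1, length=1.0, trials=2000):
    """Discrete-space, continuous-time (Gillespie-style) estimate of the
    mean first-passage time for Brownian motion in the linear potential
    U(x) = -force * x (kT = 1).  The detailed-balance hopping rates below
    are one common choice, not necessarily the paper's prescription:
        w(x -> x +/- h) = (diffusion / h**2) * exp(+/- force * h / 2)
    Reflecting wall at x = 0, absorbing boundary at x = length."""
    n_sites = int(round(length / h))
    w_up = diffusion / h ** 2 * np.exp(+force * h / 2)
    w_dn = diffusion / h ** 2 * np.exp(-force * h / 2)
    total = w_up + w_dn
    mean_t = 0.0
    for _ in range(trials):
        site, clock = 0, 0.0
        while site < n_sites:
            clock += rng.exponential(1.0 / total)   # waiting time to next hop
            site += 1 if rng.random() < w_up / total else -1
            site = max(site, 0)                     # reflect at the origin
        mean_t += clock / trials
    return mean_t

print(mfpt_linear())   # to be compared with a discrete-time Langevin estimate
```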

  11. GPU-accelerated phase extraction algorithm for interferograms: a real-time application

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoqiang; Wu, Yongqian; Liu, Fengwei

    2016-11-01

    Optical testing, having the merits of non-destructiveness and high sensitivity, provides a vital guideline for optical manufacturing. But the testing process is often computationally intensive and expensive, usually taking up to a few seconds, which is too slow for dynamic testing. In this paper, a GPU-accelerated phase extraction algorithm is proposed, which is based on the advanced iterative algorithm. The accelerated algorithm can extract the correct phase distribution from thirteen 1024x1024 fringe patterns with arbitrary phase shifts in 233 milliseconds on average using an NVIDIA Quadro 4000 graphics card, a 12.7x speedup over the same algorithm executed on the CPU and a 6.6x speedup over the Matlab implementation on a Dawning W5801 workstation. The performance improvement can fulfill the demands of computational accuracy and real-time application.
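
    For reference, the classic equal-step phase-shifting estimator that the advanced iterative algorithm generalizes can be written in a few lines of NumPy (a generic textbook sketch, not the authors' GPU code; the AIA's extra work is estimating the unknown, arbitrary shifts):

```python
import numpy as np

def extract_phase(frames):
    """Classic N-step phase-shifting estimator for *equally spaced, known*
    shifts.  For I_k = A + B*cos(phi + 2*pi*k/N), the least-squares phase is
        phi = atan2(-sum_k I_k*sin(d_k), sum_k I_k*cos(d_k))."""
    frames = np.asarray(frames, dtype=float)          # shape (N, H, W)
    d = 2 * np.pi * np.arange(frames.shape[0]) / frames.shape[0]
    num = -(frames * np.sin(d)[:, None, None]).sum(axis=0)
    den = (frames * np.cos(d)[:, None, None]).sum(axis=0)
    return np.arctan2(num, den)                       # wrapped phase

# Synthetic check on tilted-carrier fringes.
yy, xx = np.mgrid[0:64, 0:64]
phi = 0.1 * xx + 0.05 * yy
frames = [5 + 3 * np.cos(phi + 2 * np.pi * k / 4) for k in range(4)]
err = np.angle(np.exp(1j * (extract_phase(frames) - phi)))
assert np.allclose(err, 0.0, atol=1e-6)
```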

  12. The Time Course Effect of Moderate Intensity Exercise on Response Execution and Response Inhibition

    ERIC Educational Resources Information Center

    Joyce, Jennifer; Graydon, Jan; McMorris, Terry; Davranche, Karen

    2009-01-01

    This research aimed to investigate the time course effect of a moderate steady-state exercise session on response execution and response inhibition using a stop-task paradigm. Ten participants performed a stop-signal task whilst cycling at a carefully controlled workload intensity (40% of maximal aerobic power), immediately following exercise and…

  13. Extensions to Real-time Hierarchical Mine Detection Algorithm

    DTIC Science & Technology

    2002-09-01

    Extensions to Real-Time Hierarchical Mine Detection Algorithm, Final Report, by Sinh Duong and Mabo R. Ito, The University of British Columbia, Vancouver; prepared for Defence R&D Canada (Recherche et développement pour la défense Canada).

  14. Two linear time, low overhead algorithms for graph layout

    SciTech Connect

    Wylie, Brian; Baumes, Jeff

    2008-01-10

    The software comprises two algorithms designed to perform a 2D layout of a graph structure in time linear with respect to the vertices and edges in the graph, whereas most other layout algorithms have a running time that is quadratic with respect to the number of vertices or greater. Although these layout algorithms run in a fraction of the time required by their competitors, they provide competitive results when applied to most real-world graphs. These algorithms also have low constant factors in their running time and a small memory footprint, making them useful for small to large graphs.

  15. Algorithms for Real-Time Processing

    DTIC Science & Technology

    2003-04-01

    The algorithm has been mapped onto an application-specific prototyping platform which contains four VLSI CORDIC ASICs and some FPGAs (Field Programmable Gate Arrays)... makes less critical the implementation of a VLSI-based systolic array. A practical application of systolic processing for classical ground-based or ship... interferometry (ATI) SAR to detect moving targets [18]. It can be shown that this approach offers a considerable computational advantage; FPGA technology has...

  16. Executive Functions, Time Organization and Quality of Life among Adults with Learning Disabilities

    PubMed Central

    Sharfi, Kineret; Rosenblum, Sara

    2016-01-01

    Purpose This study compared the executive functions, organization in time and perceived quality of life (QoL) of 55 adults with learning disabilities (LD) with those of 55 matched controls (mean age 30 years). Furthermore, relationships and predictive relationships between these variables among the group with LD were examined. Methods All participants completed the Behavioral Rating Inventory of Executive Functions (BRIEF-A), the Time Organization and Participation (TOPS, A-C) and the World Health Organization Quality of Life (WHOQOL) questionnaires. Chi-square tests, independent t-tests and MANOVA were used to examine group differences in each of the subscale scores and ratings of each instrument. Pearson correlations and regression predictive models were used to examine the relationships between the variables in the group with LD. Results Adults with LD had significantly poorer executive functions (BRIEF-A) and deficient organization in time abilities (TOPS A-B), accompanied by a negative emotional response (TOPS-C), and lower perceived QoL (physical, psychological, social and environmental) in comparison to adults without LD. Regression analysis revealed that Initiation (BRIEF-A) significantly predicted approximately 15% of the participants' organization in time abilities (TOPS A, B scores) beyond group membership. Furthermore, initiation, emotional control (BRIEF-A subscales) and emotional responses following unsuccessful organization of time (TOPS-C) together accounted for 39% of the variance in psychological QoL beyond the contribution of group membership. Conclusions Deficits in initiation and emotional executive functions, as well as organization in time abilities and emotional responses to impairments in organizing time, affect the QoL of adults with LD and thus should be considered in further research as well as in clinical applications. PMID:27959913

  17. Implementation of an Ada real-time executive: A case study

    NASA Technical Reports Server (NTRS)

    Laird, James D.; Burton, Bruce A.; Koppes, Mary R.

    1986-01-01

    Current Ada language implementations and runtime environments are immature, unproven and a key risk area for real-time embedded computer systems (ECS). A test-case environment is provided in which the concerns of the real-time ECS community are addressed. A priority-driven executive is selected to be implemented in the Ada programming language. The model selected is representative of real-time executives tailored for embedded systems used in missile, spacecraft, and avionics applications. An Ada-based design methodology is utilized, and two designs are considered. The first of these designs requires the use of vendor-supplied runtime and tasking support. An alternative high-level design is also considered for an implementation requiring no vendor-supplied runtime or tasking support. The former approach is carried through to implementation.

  18. A fast and Robust Algorithm for general inequality/equality constrained minimum time problems

    SciTech Connect

    Briessen, B.; Sadegh, N.

    1995-12-01

    This paper presents a new algorithm for solving general inequality/equality constrained minimum time problems. The algorithm's solution time is linear in the number of Runge-Kutta steps and the number of parameters used to discretize the control input history. The method is being applied to a three-link redundant robotic arm with torque bounds, joint angle bounds, and a specified tip path. It solves case after case within a graphical user interface in which the user chooses the initial joint angles and the tip path with a mouse. Solve times are from 30 to 120 seconds on a Hewlett-Packard workstation. A zero torque history is always used in the initial guess, and the algorithm has never crashed, indicating its robustness. The algorithm solves for a feasible solution for a large trajectory execution time t_f, then reduces t_f by a small amount and re-solves. The fixed-time re-solve uses a new method of finding a near-minimum-2-norm solution to a set of linear equations and inequalities that achieves quadratic convergence to a feasible solution of the full nonlinear problem.
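
    The outer continuation loop (solve at a comfortably large t_f, shrink, re-solve) can be illustrated on a toy problem whose fixed-time feasibility test is available in closed form; the real algorithm replaces feasible() with the constrained Runge-Kutta solve:

```python
def feasible(tf, dist=1.0, u_max=2.0):
    """Closed-form feasibility test for a rest-to-rest double integrator:
    the bang-bang minimum time is 2*sqrt(dist/u_max).  This stands in for
    the paper's expensive fixed-time constrained solve."""
    return tf >= 2.0 * (dist / u_max) ** 0.5

def min_time(tf=10.0, shrink=0.95):
    """The continuation strategy described in the abstract: start from a
    comfortably feasible execution time, shrink it a little, re-solve,
    and stop when the reduced problem first becomes infeasible."""
    while feasible(tf * shrink):
        tf *= shrink
    return tf

print(min_time())   # approaches 2*sqrt(1/2) ~ 1.414 from above
```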

  19. Supporting Real-Time Operations and Execution through Timeline and Scheduling Aids

    NASA Technical Reports Server (NTRS)

    Marquez, Jessica J.; Pyrzak, Guy; Hashemi, Sam; Ahmed, Samia; McMillin, Kevin Edward; Medwid, Joseph Daniel; Chen, Diana; Hurtle, Esten

    2013-01-01

    Since 2003, the NASA Ames Research Center has been actively involved in researching and advancing the state of the art of planning and scheduling tools for NASA mission operations. Our planning toolkit SPIFe (Scheduling and Planning Interface for Exploration) has supported a variety of missions and field tests, scheduling activities for Mars rovers as well as crew on board the International Space Station and NASA earth analogs. The scheduled plan is the integration of all the activities for the day(s). In turn, the agents (rovers, landers, spaceships, crew) execute from this schedule while the mission support team members (e.g., flight controllers) follow the schedule during execution. Over the last couple of years, our team has begun to research and validate methods that will better support users during real-time operations and execution of scheduled activities. Our team utilizes human-computer interaction principles to research user needs, identify workflow processes, prototype software aids, and user-test them. This paper discusses three specific prototypes developed and user-tested to support real-time operations: Score Mobile, Playbook, and Mobile Assistant for Task Execution (MATE).

  20. Simulating the time-dependent Schrödinger equation with a quantum lattice-gas algorithm

    NASA Astrophysics Data System (ADS)

    Prezkuta, Zachary; Coffey, Mark

    2007-03-01

    Quantum computing algorithms promise remarkable improvements in speed or memory for certain applications. Currently, the Type II (or hybrid) quantum computer is the most feasible to build. This consists of a large number of small Type I (pure) quantum computers that compute with quantum logic, but communicate with nearest neighbors in a classical way. The arrangement thus formed is suitable for computations that execute a quantum lattice gas algorithm (QLGA). We report QLGA simulations for both the linear and nonlinear time-dependent Schrödinger equation. These evidence the stable, efficient, and at least second-order convergent properties of the algorithm. The simulation capability provides a computational tool for applications in nonlinear optics, superconducting and superfluid materials, Bose-Einstein condensates, and elsewhere.

  1. Fluid Intelligence as a Mediator of the Relationship between Executive Control and Balanced Time Perspective

    PubMed Central

    Zajenkowski, Marcin; Stolarski, Maciej; Witowska, Joanna; Maciantowicz, Oliwia; Łowicki, Paweł

    2016-01-01

    This study examined the cognitive foundations of the balanced time perspective (BTP) proposed by Zimbardo and Boyd (1999). Although BTP is defined as the mental ability to switch effectively between different temporal perspectives, its connection with cognitive functioning has not yet been established. We addressed this by exploring the relationships between time perspectives and both fluid intelligence (measured with Raven’s and Cattell’s tests) and executive control (Go/No-go and anti-saccade tasks). An investigation conducted among Polish adults (N = 233) revealed that more balanced TP profile was associated with higher fluid intelligence, and higher executive control. Moreover, we found that the relationship between executive control and BTP was completely mediated by fluid intelligence with the effect size (the ratio of the indirect effect to the total effect) of 0.75, which suggests that cognitive abilities play an important role in adoption of temporal balance. The findings have relevance to time perspective theory as they provide valuable insight into the mechanisms involved in assigning human experience to certain time frames. PMID:27920750

  2. A new real-time tsunami detection algorithm

    NASA Astrophysics Data System (ADS)

    Chierici, Francesco; Embriaco, Davide; Pignagnoli, Luca

    2017-01-01

    Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection based on the real-time tide removal and real-time band-pass filtering of seabed pressure recordings. The algorithm greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is designed to be used also in autonomous early warning systems with a set of input parameters and procedures which can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test the tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the occurrence rate of false alarms. Pressure data sets acquired by Bottom Pressure Recorders in different locations and environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami event which occurred at Haida Gwaii on 28 October 2012 using data recorded by the Bullseye underwater node of Ocean Networks Canada. The algorithm successfully ran for test purposes in year-long missions on board abyssal observatories, deployed in the Gulf of Cadiz and in the Western Ionian Sea.
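
    A toy version of the detection chain, with causal tide removal followed by thresholding of the residual, might look as follows (the constants and the exponential-moving-average tide model are our assumptions, not the published algorithm):

```python
import numpy as np

def toy_tsunami_detector(pressure, alpha=0.99, k=5.0, window=600):
    """Toy detector in the spirit of the abstract: causally track the slow
    tide with an exponential moving average, then flag residuals exceeding
    k running standard deviations of the recent background."""
    tide = np.empty(len(pressure))
    tide[0] = pressure[0]
    for i in range(1, len(pressure)):
        tide[i] = alpha * tide[i - 1] + (1 - alpha) * pressure[i]
    resid = np.asarray(pressure) - tide
    alarms = []
    for i in range(window, len(resid)):
        sigma = resid[i - window:i].std() + 1e-12
        if abs(resid[i]) > k * sigma:
            alarms.append(i)              # candidate tsunami onset sample
    return alarms

rng = np.random.default_rng(0)
t = np.arange(20000.0)
pressure = 0.2 * np.sin(2 * np.pi * t / 12600) + 0.005 * rng.normal(size=t.size)
pressure[15000:15060] += 0.08 * np.hanning(60)   # synthetic short event
print(toy_tsunami_detector(pressure)[:3])        # alarms near sample 15000
```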

  3. Separating essentials from incidentals: an execution architecture for real-time control systems

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel; Reinholtz, Kirk

    2004-01-01

    This paper describes an execution architecture that makes real-time control systems far more analyzable and verifiable by aggressive separation of concerns. The architecture separates two key software concerns: transformations of global state, as defined in pure functions; and sequencing/timing of transformations, as performed by an engine that enforces four prime invariants. The important advantage of this architecture, besides facilitating verification, is that it encourages formal specification of systems in a vocabulary that brings systems engineering closer to software engineering.

  4. Influence of the distance in a roundhouse kick's execution time and impact force in Taekwondo.

    PubMed

    Falco, Coral; Alvarez, Octavio; Castillo, Isabel; Estevan, Isaac; Martos, Julio; Mugarra, Fernando; Iradi, Antonio

    2009-02-09

    Taekwondo, originally a Korean martial art, is well known for its kicks. One of the most frequently used kicks in competition is the Bandal Chagui or roundhouse kick. Excellence in Taekwondo relies on the ability to make contact with the opponent's trunk or face with enough force in as little time as possible, while at the same time avoiding being hit. Thus, the distance between contestants is an important variable to be taken into consideration. Thirty-one Taekwondo athletes in two different groups (expert and novice, according to experience in competition) took part in this study. The purpose of this study was to examine both impact force and execution time in a Bandal Chagui or roundhouse kick, and to explore the effect of execution distance on these two variables. A new model was developed in order to measure the force exerted by the body on a load. A force platform and a contact platform were used to measure these variables. The results showed that there are no significant differences in impact force in relation to execution distance in expert competitors. Significant and positive correlations between body mass and impact force (p<.01) suggest that novice competitors use their body mass to generate high impact forces. Significant differences were found in competitive experience and execution time for the three different kicking distances considered in the study. Standing at a greater distance from the opponent should be an advantage for competitors who are used to kicking from a greater distance in their training.

  5. Effects of sleep inertia after daytime naps vary with executive load and time of day.

    PubMed

    Groeger, John A; Lo, June C Y; Burns, Christopher G; Dijk, Derk-Jan

    2011-04-01

    The effects of executive load on working memory performance during sleep inertia after morning or afternoon naps were assessed using a mixed design with nap/wake as a between-subjects factor and morning/afternoon condition as a within-subject factor. Thirty-two healthy adults (mean 22.5 ± 3.0 years) attended two laboratory sessions after a night of restricted sleep (6 hrs), and at first visit, were randomly assigned to the Nap or Wake group. Working memory (n-back) and subjective workload were assessed approximately 5 and 25 minutes after 90-minute morning and afternoon nap opportunities and at the corresponding times in the Wake condition. Actigraphically assessed nocturnal sleep duration, subjective sleepiness, and psychomotor vigilance performance before daytime assessments did not vary across conditions. Afternoon naps showed shorter EEG assessed sleep latencies, longer sleep duration, and more Slow Wave Sleep than morning naps. Working memory performance deteriorated, and subjective mental workload increased at higher executive loadings. After afternoon naps, participants performed less well on more executive-function intensive working memory tasks (i.e., 3-back), but waking and napping participants performed equally well on simpler tasks. After some 30 minutes of cognitive activity, there were no longer performance differences between the waking and napping groups. Subjective Task Difficulty and Mental Effort requirements were less affected by sleep inertia and dissociated from objective measures when participants had napped in the afternoon. We conclude that executive functions take longer to return to asymptotic performance after sleep than does performance of simpler tasks which are less reliant on executive functions.

  6. Time-based prospective memory in young children: Exploring executive functions as a developmental mechanism.

    PubMed

    Kretschmer, Anett; Voigt, Babett; Friedrich, Sylva; Pfeiffer, Kathrin; Kliegel, Matthias

    2014-01-01

    The present study investigated time-based prospective memory (PM) during the transition from kindergarten/preschool to school age and applied mediation models to test the impact of executive functions (working memory, inhibitory control) and time monitoring on time-based PM development. Twenty-five preschool (age: M = 5.75, SD = 0.28) and 22 primary school children (age: M = 7.83, SD = 0.39) participated. To examine time-based PM, children had to play a computer-based driving game requiring them to drive a car on a road without hitting other cars (ongoing task) and to refill the car regularly according to a fuel gauge, which served as a clock equivalent (PM task). The level of gas left in the fuel gauge was not displayed on the screen and children had to monitor it via a button press (time monitoring). Results revealed a developmental increase in time-based PM performance from preschool to school age. In the mediation models, only working memory was revealed to influence PM development. Neither inhibitory control alone nor the mediation paths leading from both executive functions to time monitoring could explain the link between age and time-based PM. Thus, the results of the present study suggest that working memory may be one key cognitive process driving the developmental growth of time-based PM during the transition from preschool to school age.

  7. A Real-Time Rover Executive Based on Model-Based Reactive Planning

    NASA Technical Reports Server (NTRS)

    Dias, M. Bernardine; Lemai, Solange; Muscettola, Nicola; Korsmeyer, David (Technical Monitor)

    2003-01-01

    This paper reports on the experimental verification of the ability of IDEA (Intelligent Distributed Execution Architecture) to operate effectively at multiple levels of abstraction in an autonomous control system. The basic hypothesis of IDEA is that a large control system can be structured as a collection of interacting control agents, each organized around the same fundamental structure. Two IDEA agents, a system-level agent and a mission-level agent, are designed and implemented to autonomously control the K9 rover in real time. The system is evaluated in a scenario where the rover must acquire images from a specified set of locations. The IDEA agents are responsible for enabling the rover to achieve its goals while monitoring the execution and safety of the rover and recovering from dangerous states when necessary. Experiments carried out both in simulation and on the physical rover produced highly promising results.

  8. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, David H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0. Beginning in 1977 several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
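
    PSLQ itself is intricate, but a mature implementation ships with the mpmath Python library, which makes the claim easy to exercise; here it recovers the defining relation of the golden ratio (illustrative usage, not the authors' original code):

```python
from mpmath import mp, mpf, sqrt, pslq

mp.dps = 50                        # work at 50 significant digits
phi = (1 + sqrt(5)) / 2            # golden ratio
# phi satisfies 1 + phi - phi^2 = 0, an integer relation among
# (1, phi, phi^2); pslq recovers the coefficient vector.
print(pslq([mpf(1), phi, phi**2]))   # [1, 1, -1] (up to overall sign)
```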

  9. IMPROVEMENTS TO THE TIME STEPPING ALGORITHM OF RELAP5-3D

    SciTech Connect

    Cumberland, R.; Mesina, G.

    2009-01-01

    The RELAP5-3D time step method is used to perform thermo-hydraulic and neutronic simulations of nuclear reactors and other devices. It discretizes time and space by numerically solving several differential equations. Previously, time step size was controlled by halving or doubling the size of a previous time step. This process caused the code to run slower than it potentially could. In this research project, the RELAP5-3D time step method was modified to allow a new method of changing time steps to improve execution speed and to control error. The new RELAP5-3D time step method being studied involves making the time step proportional to the material Courant limit (MCL), while ensuring that the time step does not increase by more than a factor of two between advancements. As before, if a step fails or mass error is excessive, the time step is cut in half. To examine the performance of the new method, a measure of run time and a measure of error were plotted against a changing MCL proportionality constant (m) in seven test cases. The removal of the upper time step limit produced a small increase in error, but a large decrease in execution time. The best value of m was found to be 0.9. The new algorithm is capable of producing a significant increase in execution speed, with a relatively small increase in mass error. The improvements made are now under consideration for inclusion as a special option in the RELAP5-3D production code.
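
    The stated step-selection rules are simple enough to sketch directly (a reading of the abstract, not the RELAP5-3D production code; parameter names are ours):

```python
def next_time_step(dt_prev, mcl, step_failed, m=0.9, mass_error_ok=True):
    """Step-size rule as stated in the abstract: halve on failure or
    excessive mass error; otherwise follow m * MCL, capped at doubling
    per advancement."""
    if step_failed or not mass_error_ok:
        return dt_prev / 2.0
    return min(m * mcl, 2.0 * dt_prev)

# The step tracks the material Courant limit as the transient relaxes:
dt = 1.0e-4
for mcl, failed in [(2e-4, False), (8e-4, False), (8e-4, False), (8e-4, True)]:
    dt = next_time_step(dt, mcl, failed)
    print(f"dt = {dt:.2e}")   # 1.80e-04, 3.60e-04, 7.20e-04, 3.60e-04
```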

  10. Validation of Accelerometer Wear and Nonwear Time Classification Algorithm

    PubMed Central

    Choi, Leena; Liu, Zhouwen; Matthews, Charles E.; Buchowski, Maciej S.

    2011-01-01

    Introduction The use of movement monitors (accelerometers) for measuring physical activity (PA) in intervention and population-based studies is becoming a standard methodology for the objective measurement of sedentary and active behaviors and for validation of subjective PA self-reports. A vital step in PA measurements is classification of daily time into accelerometer wear and nonwear intervals using its recordings (counts) and an accelerometer-specific algorithm. Purpose To validate and improve a commonly used algorithm for classifying accelerometer wear and nonwear time intervals using objective movement data obtained in the whole-room indirect calorimeter. Methods We conducted a validation study of a wear/nonwear automatic algorithm using data obtained from 49 adults and 76 youth wearing accelerometers during a strictly monitored 24-h stay in a room calorimeter. The accelerometer wear and nonwear time classified by the algorithm was compared with actual wearing time. Potential improvements to the algorithm were examined using the minimum classification error as an optimization target. Results The recommended elements in the new algorithm are: 1) zero-count threshold during a nonwear time interval, 2) 90-min time window for consecutive zero/nonzero counts, and 3) allowance of 2-min interval of nonzero counts with the up/downstream 30-min consecutive zero counts window for detection of artifactual movements. Compared to the true wearing status, improvements to the algorithm decreased nonwear time misclassification during the waking and the 24-h periods (all P < 0.001). Conclusions The accelerometer wear/nonwear time algorithm improvements may lead to more accurate estimation of time spent in sedentary and active behaviors. PMID:20581716
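
    A simplified implementation of the three recommended elements might look as follows (our reading of the rules; the validated algorithm involves additional bookkeeping):

```python
import numpy as np

def nonwear_mask(counts, window=90, spike_max=2, flank=30):
    """Simplified sketch of the recommended rules: nonwear = a run of zero
    counts >= `window` minutes, where runs of up to `spike_max` nonzero
    minutes are treated as movement artifacts when flanked by `flank`
    minutes of zeros on both sides."""
    c = np.asarray(counts).copy()
    n = len(c)
    i = 0
    while i < n:                              # pass 1: erase artifact spikes
        if c[i] != 0:
            j = i
            while j < n and c[j] != 0:
                j += 1
            before = c[i - flank:i] if i >= flank else c[:0]
            after = c[j:j + flank]
            if j - i <= spike_max and len(before) == flank \
                    and len(after) == flank \
                    and not before.any() and not after.any():
                c[i:j] = 0
            i = j
        else:
            i += 1
    mask = np.zeros(n, dtype=bool)
    i = 0
    while i < n:                              # pass 2: mark long zero runs
        j = i
        while j < n and c[j] == 0:
            j += 1
        if c[i] == 0 and j - i >= window:
            mask[i:j] = True
        i = max(j, i + 1)
    return mask

counts = [0] * 120 + [50] * 300 + [0] * 200   # one minute per sample
print(nonwear_mask(counts).sum())             # 320 nonwear minutes
```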

  11. Chronotype and time-of-day influences on the alerting, orienting, and executive components of attention.

    PubMed

    Matchock, Robert L; Mordkoff, J Toby

    2009-01-01

    Recent research on attention has identified three separable components, known as alerting, orienting, and executive functioning, which are thought to be subserved by distinct neural networks. Despite systematic investigation into their relatedness to each other and to psychopathology, little is known about how these three networks might be modulated by such factors as time-of-day and chronotype. The present study administered the Attentional Network Test (ANT) and a self-report measure of alertness to 80 participants at 0800, 1200, 1600, and 2000 hours on the same day. Participants were also chronotyped with a morningness/eveningness questionnaire and divided into evening versus morning/neither-type groups; morning chronotypes tend to perform better early in the day, while evening chronotypes show enhanced performance later in the day. The results replicated the lack of any correlations between alerting, orienting, and executive functioning, supporting the independence of these three networks. There was an effect of time-of-day on executive functioning with higher conflict scores at 1200 and 1600 hours for both chronotypes. The efficiency of the orienting system did not change as a function of time-of-day or chronotype. The alerting measure, however, showed an interaction between time-of-day and chronotype such that alerting scores increased only for the morning/neither-type participants in the latter half of the day. There was also an interaction between time-of-day and chronotype for self-reported alertness, such that it increased during the first half of the day for all participants, but then decreased for morning/neither types (only) toward evening. This is the first report to examine changes in the trinity of attentional networks measured by the ANT throughout a normal day in a large group of normal participants, and it encourages more integration between chronobiology and cognitive neuroscience for both theoretical and practical reasons.

  12. Real-Time ECG Algorithms for Ambulatory Patient Monitoring

    PubMed Central

    Pino, Esteban; Ohno-Machado, Lucila; Wiechmann, Eduardo; Curtis, Dorothy

    2005-01-01

    Brigham & Women’s Hospital is designing a wireless monitoring system for patients in the waiting area of the Emergency Department. A real–time ECG algorithm is required to monitor and alert changes in patients that have not yet been admitted to the Emergency Room. For this purpose, three simple algorithms are compared in terms of processing time, beat detection accuracy and heart rate (HR) estimation. Varying amounts of noise were added to records from the MIT-BIH Arrhythmia Database [1] to mimic expected waiting room conditions. Some recommendations regarding selection of an algorithm and further processing of HR series are presented. PMID:16779111
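
    The three compared algorithms are not spelled out in the abstract, but a minimal energy-threshold QRS detector of the same general family can be sketched as follows (a generic textbook chain, not the paper's candidates):

```python
import numpy as np

def detect_beats(ecg, fs, refractory=0.25):
    """Minimal energy-based QRS detector: differentiate, square, smooth,
    then threshold with a refractory period."""
    d = np.diff(ecg)                          # emphasize steep QRS slopes
    energy = d * d
    win = max(1, int(0.08 * fs))              # ~80 ms smoothing window
    smooth = np.convolve(energy, np.ones(win) / win, mode="same")
    thresh = 0.3 * smooth.max()
    beats, last = [], -fs                     # allow a beat at the start
    for i, v in enumerate(smooth):
        if v > thresh and (i - last) / fs > refractory:
            beats.append(i)
            last = i
    return np.array(beats)

# Heart rate (bpm) then follows from the beat-to-beat intervals:
#   hr = 60.0 * fs / np.diff(detect_beats(ecg, fs))
```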

  13. Execution and pauses in writing narratives: processing time, cognitive effort and typing skill.

    PubMed

    Alves, Rui Alexandre; Castro, São Luís; Olive, Thierry

    2008-12-01

    At the behavioural level, the activity of a writer can be described as periods of typing separated by pauses. Although some studies have been concerned with the functions of pauses, few have investigated motor execution periods. Precise estimates of the distribution of writing processes, and their cognitive demands, across periods of typing and pauses are lacking. Furthermore, it is uncertain how typing skill affects these aspects of writing. We addressed these issues, selecting writers of low and high typing skill who performed dictation and composition tasks. The occurrences of writing processes were assessed through directed verbalization, and their cognitive demands were measured through interference in reaction times (IRT). Before writing a narrative, 34 undergraduates learned to categorize examples of introspective thoughts as different types of activities related to writing (planning, translating, or revising). Then, while writing, they responded to random auditory probes, and reported their ongoing activity according to the learned categories. Convergent with previous findings, translating was most often reported, and revising and planning had fewer occurrences. Translating was mostly activated during motor execution, whereas revising and planning were mainly activated during pauses. However, none of the writing processes can be characterized as being typical of pauses, since translating was activated to a similar extent as the other two processes. Regarding cognitive demands, revising is likely to be the most demanding process in narrative writing. Typing skill had an impact on IRTs of motor execution. The demands of execution were greater in the low than in the high typing skill group, but these greater demands did not affect the strategy of writing processes activation. Nevertheless, low typing skill had a detrimental impact on text quality.

  14. Impacts of Time Delays on Distributed Algorithms for Economic Dispatch

    SciTech Connect

    Yang, Tao; Wu, Di; Sun, Yannan; Lian, Jianming

    2015-07-26

    The economic dispatch problem (EDP) is an important problem in power systems. It can be formulated as an optimization problem with the objective of minimizing the total generation cost subject to the power balance constraint and generator capacity limits. Recently, several consensus-based algorithms have been proposed to solve the EDP in a distributed manner. However, the impacts of communication time delays on these distributed algorithms are not fully understood, especially for the case where the communication network is directed, i.e., the information exchange is unidirectional. This paper investigates communication time delay effects on a distributed algorithm for directed communication networks. The algorithm has been tested by applying time delays to different types of information exchange. Several case studies are carried out to evaluate the effectiveness and performance of the algorithm in the presence of time delays in communication networks. It is found that time delays have negative effects on the convergence rate and can even cause the algorithm to converge to an incorrect value or fail to converge altogether.
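
    The flavor of such a delay study can be reproduced with a toy consensus iteration on a directed ring, where each node sees its neighbor's state a fixed number of iterations late (illustrative only, not the paper's dispatch algorithm):

```python
import numpy as np

def ring_consensus(x0, steps=300, delay=0, eps=0.3):
    """Average consensus on a directed ring where every node sees its
    upstream neighbor's value `delay` iterations late."""
    hist = [np.array(x0, dtype=float)]
    for _ in range(steps):
        delayed = hist[max(0, len(hist) - 1 - delay)]
        hist.append(hist[-1] + eps * (np.roll(delayed, 1) - hist[-1]))
    return hist[-1]

x0 = [10.0, 20.0, 30.0, 40.0]
for d in (0, 4, 12):
    x = ring_consensus(x0, delay=d)
    print(d, x.max() - x.min())   # spread grows with delay; long delays
                                  # can destabilize the iteration entirely
```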

  15. Influence of timing algorithm on brachial-ankle pulse wave velocity measurement.

    PubMed

    Sun, Xin; Li, Ke; Ren, Hongwei; Li, Peng; Wang, Xinpei; Liu, Changchun

    2014-01-01

    The baPWV measurement is a non-invasive and convenient technique for the assessment of arterial stiffness. Despite its widespread application, the influence of different timing algorithms is still unclear. The present study was conducted to investigate the influence of six timing algorithms (MIN, MAX, D1, D2, MDP and INS) on the baPWV measurement and to evaluate their performance. Forty-five CAD patients and fifty-five healthy subjects were recruited in this study. A PVR acquisition apparatus was built for baPWV measurement. The baPWV and other related parameters were calculated separately with the six timing algorithms, and the influence and performance of the six algorithms were analyzed. The six timing algorithms generate significantly different baPWV values (left: F=29.036, P<0.001; right: F=40.076, P<0.001). In terms of reproducibility, the MAX has a significantly higher CV value (≥ 18.6%) than the other methods, while the INS has the lowest CV value (≤ 2.7%). In terms of classification performance, the INS produces the highest AUC values (left: 0.854; right: 0.872). The MIN and D2 also have a passable performance (AUC > 0.8). The choice of timing algorithm affects baPWV values and the quality of measurement. The INS method is recommended for baPWV measurement.
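
    Of the six timing definitions, the intersecting-tangent method (INS) is the easiest to sketch: intersect the horizontal line through the waveform minimum with the tangent at the steepest systolic upstroke (our reading of the standard method; implementation details in the paper may differ):

```python
import numpy as np

def foot_time_ins(t, p):
    """Pulse-foot timing by the intersecting-tangent (INS) method: intersect
    the horizontal line through the waveform minimum with the tangent at the
    steepest point of the systolic upstroke."""
    i_min = int(np.argmin(p))
    dp = np.gradient(p, t)
    i_s = i_min + int(np.argmax(dp[i_min:]))       # steepest upstroke
    return t[i_s] + (p[i_min] - p[i_s]) / dp[i_s]  # tangent meets baseline

# baPWV = path-length difference / (ankle foot time - brachial foot time).
t = np.linspace(0.0, 1.0, 200)
p = np.where(t < 0.3, 0.0, 1.0 - np.exp(-(t - 0.3) / 0.05))
print(foot_time_ins(t, p))                         # close to 0.3
```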

  16. A software architecture for hard real-time execution of automatically synthesized plans or control laws

    NASA Technical Reports Server (NTRS)

    Schoppers, Marcel

    1994-01-01

    The design of a flexible, real-time software architecture for trajectory planning and automatic control of redundant manipulators is described. Emphasis is placed on a technique of designing control systems that are both flexible and robust yet have good real-time performance. The solution presented involves an artificial intelligence algorithm that dynamically reprograms the real-time control system while planning system behavior.

  17. Algorithmic properties of the midpoint predictor-corrector time integrator.

    SciTech Connect

    Rider, William J.; Love, Edward; Scovazzi, Guglielmo

    2009-03-01

    Algorithmic properties of the midpoint predictor-corrector time integration algorithm are examined. In the case of a finite number of iterations, the errors in angular momentum conservation and incremental objectivity are controlled by the number of iterations performed. Exact angular momentum conservation and exact incremental objectivity are achieved in the limit of an infinite number of iterations. A complete stability and dispersion analysis of the linearized algorithm is detailed. The main observation is that stability depends critically on the number of iterations performed.
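
    The integrator is easy to reproduce on a toy problem. The sketch below applies a fixed number of corrector iterations to a harmonic oscillator and shows the conservation error being controlled by the iteration count, which is the abstract's main observation (a generic sketch, not the authors' code):

```python
import numpy as np

def midpoint_pc_step(y, h, f, iters):
    """One step of the midpoint predictor-corrector with a fixed number of
    corrector iterations: y1 = y + h * f((y + y1) / 2).  In the limit of
    infinitely many iterations this is the implicit midpoint rule."""
    y1 = y + h * f(y)                       # predictor (forward Euler)
    for _ in range(iters):                  # fixed-point corrector passes
        y1 = y + h * f(0.5 * (y + y1))
    return y1

f = lambda y: np.array([y[1], -y[0]])       # harmonic oscillator, |y|^2 conserved
for iters in (1, 3, 5):
    y = np.array([1.0, 0.0])
    for _ in range(1000):
        y = midpoint_pc_step(y, 0.1, f, iters)
    # conservation error is controlled by the number of corrector iterations
    print(iters, abs(y @ y - 1.0))
```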

  18. Real-time Algorithms for Sparse Neuronal System Identification.

    PubMed

    Sheikhattar, Alireza; Babadi, Behtash

    2016-08-01

    We consider the problem of sparse adaptive neuronal system identification, where the goal is to estimate the sparse time-varying neuronal model parameters in an online fashion from neural spiking observations. We develop two adaptive filters based on greedy estimation techniques and regularized log-likelihood maximization. We apply the proposed algorithms to simulated spiking data as well as experimentally recorded data from the ferret's primary auditory cortex during performance of auditory tasks. Our results reveal significant performance gains achieved by the proposed algorithms in terms of sparse identification and trackability, compared to existing algorithms.

  19. A distributed scheduling algorithm for heterogeneous real-time systems

    NASA Technical Reports Server (NTRS)

    Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi

    1991-01-01

    Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other more complex load allocation policies. The effects of heterogeneity on scheduling algorithms for hard real-time systems are examined. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While a random task allocation is very sensitive to heterogeneities, the algorithm is shown to be robust to such non-uniformities in system components and load.

  1. Algorithm for precision subsample timing between Gaussian-like pulses.

    PubMed

    Lerche, R A; Golick, B P; Holder, J P; Kalantar, D H

    2010-10-01

    Moderately priced oscilloscopes available for the NIF power sensors and target diagnostics have 6 GHz bandwidths at 20-25 Gsamples/s (40 ps sample spacing). Some NIF experiments require cross timing between instruments be determined with accuracy better than 30 ps. A simple analysis algorithm for Gaussian-like pulses such as the 100-ps-wide NIF timing fiducial can achieve single-event cross-timing precision of 1 ps (1/50 of the sample spacing). The midpoint-timing algorithm is presented along with simulations that show why the technique produces good timing results. Optimum pulse width is found to be ∼2.5 times the sample spacing. Experimental measurements demonstrate use of the technique and highlight the conditions needed to obtain optimum timing performance.
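
    One plausible reading of the midpoint technique is to locate the rising and falling half-maximum crossings by linear interpolation and report their midpoint; the sketch below (our reconstruction; the NIF implementation may differ in detail) recovers the center of a 100-ps-wide Gaussian sampled every 40 ps:

```python
import numpy as np

def midpoint_time(t, v, frac=0.5):
    """Subsample pulse timing: interpolate the rising and falling crossings
    of frac * peak and return their midpoint."""
    level = frac * v.max()
    above = v >= level
    i_r = int(np.argmax(above))                     # first sample at/above level
    i_f = len(v) - 1 - int(np.argmax(above[::-1]))  # last sample at/above level
    t_rise = np.interp(level, [v[i_r - 1], v[i_r]], [t[i_r - 1], t[i_r]])
    t_fall = np.interp(level, [v[i_f + 1], v[i_f]], [t[i_f + 1], t[i_f]])
    return 0.5 * (t_rise + t_fall)

# A ~100 ps FWHM Gaussian sampled every 40 ps is timed to ~1 ps.
t = np.arange(0.0, 1000.0, 40.0)                    # ps
v = np.exp(-0.5 * ((t - 487.3) / 42.5) ** 2)
print(midpoint_time(t, v))                          # close to 487.3
```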

  2. Architecture and data processing alternatives for the TSE computer. Volume 3: Execution of a parallel counting algorithm using array logic (Tse) devices

    NASA Technical Reports Server (NTRS)

    Metcalfe, A. G.; Bodenheimer, R. E.

    1976-01-01

    A parallel algorithm for counting the number of logic-1 elements in a binary array or image, developed during preliminary investigation of the Tse concept, is described. The counting algorithm is implemented using a basic combinational structure. Modifications which improve the efficiency of the basic structure are also presented. A programmable Tse computer structure is proposed, along with a hardware control unit, Tse instruction set, and software program for execution of the counting algorithm. Finally, a comparison is made between the different structures in terms of their more important characteristics.

  3. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    SciTech Connect

    Ha, Taeyoung (tyha@math.snu.ac.kr); Shin, Changsoo (css@model.snu.ac.kr)

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the middle 1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of numerical Green's function derived from a mixed finite element method proposed by Nédélec for Maxwell's equation, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  4. Virtual instrumentation and real-time executive dashboards. Solutions for health care systems.

    PubMed

    Rosow, Eric; Adam, Joseph; Coulombe, Kathleen; Race, Kathleen; Anderson, Rhonda

    2003-01-01

    Successful organizations have the ability to measure and act on key indicators and events in real time. By leveraging the power of virtual instrumentation and open architecture standards, multidimensional executive dashboards can empower health care organizations to make better and faster data-driven decisions. This article will highlight how user-defined virtual instruments and dashboards can connect to hospital information systems (e.g., admissions/discharge/transfer systems, patient monitoring networks) and use statistical process control to "visualize" information and make timely, data-driven decisions. The case studies described will illustrate enterprisewide solutions for: bed management and census control, operational management, data mining and business intelligence applications, and clinical applications (physiological data acquisition and wound measurement and analysis).

  5. Time scale algorithms for an inhomogeneous group of atomic clocks

    NASA Technical Reports Server (NTRS)

    Jacques, C.; Boulanger, J.-S.; Douglas, R. J.; Morris, D.; Cundy, S.; Lam, H. F.

    1993-01-01

    Over the past 17 years, the time scale requirements at the National Research Council (NRC) have been met by the unsteered output of its primary laboratory cesium clocks, supplemented by hydrogen masers when short-term stability better than 2 × 10^-12 τ^(-1/2) has been required. NRC now operates three primary laboratory cesium clocks, three hydrogen masers, and two commercial cesium clocks. NRC has been using ensemble averages for internal purposes for the past several years, and has a real-time algorithm operating on the outputs of its high-resolution (2 × 10^-13 s at 1 s) phase comparators. The slow frequency drift of the hydrogen masers has presented difficulties in incorporating their short-term stability into the ensemble average, while retaining the long-term stability of the laboratory cesium frequency standards. We report on our work on algorithms for an inhomogeneous ensemble of atomic clocks, and on our initial work on time scale algorithms that could incorporate frequency calibrations at NRC from the next generation of Zacharias fountain cesium frequency standards having frequency accuracies that might surpass 10^-15, or from single-trapped-ion frequency standards (Ba+, Sr+, ...) with even higher potential accuracies. The requirements for redundancy in all the elements (including the algorithms) of an inhomogeneous ensemble that would give a robust real-time output of the algorithms are presented and discussed.

  6. Time for a Change: The Promise of Extended Time Schools for Promoting Student Achievement. Executive Summary

    ERIC Educational Resources Information Center

    Farbman, David; Kaplan, Claire

    2005-01-01

    Massachusetts 2020 is a nonprofit operating foundation with a mission to expand educational and economic opportunities for children and families across Massachusetts. Massachusetts 2020, with support from the L.G. Balfour Foundation, a Bank of America Company, set out to understand how a select group of extended-time schools in Massachusetts and…

  7. HMC algorithm with multiple time scale integration and mass preconditioning

    NASA Astrophysics Data System (ADS)

    Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.

    2006-01-01

    We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
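
    The two-scale (Sexton-Weingarten style) leapfrog at the heart of the scheme can be shown on a toy split potential, with the cheap stiff force on the fine scale and the expensive force, made small by preconditioning, on the coarse scale (an illustration of the integrator only, not the full HMC):

```python
def nested_leapfrog(x, p, f_outer, f_inner, dt, n_steps, m_inner):
    """Leapfrog on two time scales: the expensive force f_outer (made small
    by mass preconditioning) is applied with the coarse step dt, while the
    cheap force f_inner is integrated with step dt / m_inner."""
    for _ in range(n_steps):
        p += 0.5 * dt * f_outer(x)
        for _ in range(m_inner):
            p += 0.5 * (dt / m_inner) * f_inner(x)
            x += (dt / m_inner) * p
            p += 0.5 * (dt / m_inner) * f_inner(x)
        p += 0.5 * dt * f_outer(x)
    return x, p

# Stiff quadratic inner force plus a weak quartic outer force.
f_inner = lambda x: -100.0 * x        # from U1 = 50 x^2 (cheap, stiff)
f_outer = lambda x: -0.1 * x ** 3     # from U2 = 0.025 x^4 (expensive, weak)
x, p = nested_leapfrog(1.0, 0.0, f_outer, f_inner,
                       dt=0.05, n_steps=200, m_inner=10)
print(x, p)
```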

  8. Study on the Algorithm of Local Atomic Time

    NASA Astrophysics Data System (ADS)

    Li, B.; Qu, L. L.; Gao, Y. P.; Hu, Y. H.

    2010-10-01

    It is an endless pursuit for all time and frequency laboratories to develop, own and keep a stable, accurate and reliable time scale. As a comparatively mature algorithm, ALGOS, which is concerned with the long-term stability of the time scale, is widely used by the majority of time laboratories. In ALGOS, the weights are assigned on the basis of the clock frequencies over 12 months, and the present month's interval is included in the computation. This procedure uses clock measurements covering 12 months, so annual frequency variations and long-term drifts can lead to de-weighting. This helps to decrease the seasonal variation of the time scale and improve its long-term stability. However, the local atomic time scale is primarily concerned with stability over intervals of not more than 60 days. So when the local time scale is computed with ALGOS in time laboratories, it is necessary to modify ALGOS according to the performance of the contributing clocks, the stability requirements for the local time scale, and so on. There are 22 high-performance atomic clocks at the National Time Service Center, Chinese Academy of Sciences (NTSC): 18 cesium standards and 4 hydrogen masers. Because the hydrogen masers behave poorly, we consider only an ensemble of the 18 cesium clocks in our improved algorithm. The performances of these clocks are very similar, and their number is less than 20. By analyzing and studying the noise models of atomic clocks, this paper presents a complete improved algorithm for TA(NTSC). This improved TA(NTSC) algorithm addresses three aspects: the selection of the maximum weight, the selection of clocks taking part in the TA(NTSC) computation, and the estimation of the weights of contributing clocks. We validate the new algorithm with the 2008 comparison data of the NTSC atomic clocks participating in TAI computation. The results show that both the long-term and short-term stabilities of TA(NTSC) are improved. This conclusion is based on the clock
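
    A minimal weighted-ensemble step, showing where the maximum weight enters, might look as follows (a sketch under stated assumptions, not the actual TA(NTSC) algorithm; all names are ours):

```python
import numpy as np

def ensemble_time(readings, predictions, inv_allan_var, w_max=0.25):
    """Minimal weighted clock ensemble in the spirit of ALGOS: weights are
    proportional to 1/Allan-variance and capped at a maximum weight, and
    the ensemble offset is the weighted mean of (measured - predicted)
    clock offsets against a common reference."""
    w = np.asarray(inv_allan_var, dtype=float)
    w = w / w.sum()
    w = np.minimum(w, w_max)   # cap the maximum weight (one pass shown;
    w = w / w.sum()            # production algorithms iterate the cap)
    resid = np.asarray(readings) - np.asarray(predictions)
    return float(np.dot(w, resid))

# Three clocks: the third is noisier, so it is down-weighted.
print(ensemble_time([12.0e-9, 15.0e-9, 40.0e-9],
                    [11.5e-9, 14.8e-9, 20.0e-9],
                    inv_allan_var=[4.0, 4.0, 0.5]))
```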

  9. Lidar detection algorithm for time and range anomalies

    NASA Astrophysics Data System (ADS)

    Ben-David, Avishai; Davidson, Charles E.; Vanderbeek, Richard G.

    2007-10-01

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t1 to t2" is addressed, and for range anomaly where the question "is a target present at time t within ranges R1 and R2" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO2 lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.
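
    The Gaussian-mixture/expectation-maximization step can be sketched for one-dimensional detection scores as follows (a generic two-component EM standing in for the paper's probability model; initialization choices are our own):

```python
import numpy as np

def em_two_gaussians(scores, iters=100):
    """Fit a two-component 1-D Gaussian mixture by EM.  The crossing point
    of the two weighted component densities can then serve as a detection
    threshold separating 'background' from 'anomaly' scores."""
    x = np.asarray(scores, dtype=float)
    mu = np.percentile(x, [25.0, 75.0])          # crude initialization
    sd = np.array([x.std(), x.std()]) + 1e-9
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each score
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
            / (sd * np.sqrt(2 * np.pi))
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reweighted updates of the mixture parameters
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return pi, mu, sd

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0, 1, 900), rng.normal(4, 1, 100)])
print(em_two_gaussians(scores))   # weights near (0.9, 0.1), means near (0, 4)
```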

  10. Timescape: a simple space-time interpolation geostatistical algorithm

    NASA Astrophysics Data System (ADS)

    Ciolfi, Marco; Chiocchini, Francesca; Gravichkova, Olga; Pisanelli, Andrea; Portarena, Silvia; Scartazza, Andrea; Brugnoli, Enrico; Lauteri, Marco

    2016-04-01

    Environmental sciences include both time and space variability in their datasets. Some established tools exist for both spatial interpolation and time series analysis alone, but mixing space and time variability calls for compromise: researchers are often forced to choose which is the main source of variation, neglecting the other. We propose a simple algorithm, which can be used in many fields of Earth and environmental sciences when both time and space variability must be considered on equal grounds. The algorithm has already been implemented in the Java language and the software is currently available at https://sourceforge.net/projects/timescapeglobal/ (it is published under the GNU-GPL v3.0 Free Software License). The published version of the software, Timescape Global, is focused on continent- to Earth-wide spatial domains, using global longitude-latitude coordinates for sample localization. The companion Timescape Local software is currently under development and will be published with an open license as well; it will use projected coordinates for a local to regional space scale. The basic idea of the Timescape Algorithm consists in converting time into a sort of third spatial dimension, with the addition of some causal constraints, which drive the interpolation, including or excluding observations according to some user-defined rules. The algorithm is applicable, as a matter of principle, to anything that can be represented with a continuous variable (a scalar field, technically speaking). The input dataset should contain the position, time and observed value of all samples. Ancillary data can be included in the interpolation as well. After the time-space conversion, Timescape follows basically the old-fashioned IDW (Inverse Distance Weighted) interpolation algorithm, although users have a wide choice of customization options that, at least partially, overcome some of the known issues of IDW. The three-dimensional model produced by the Timescape Algorithm can be
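
    The core idea, treating scaled time as a third spatial coordinate and interpolating with IDW, fits in a few lines (the causal include/exclude rules of the published software are omitted; names and defaults are ours):

```python
import numpy as np

def timescape_idw(query, samples, values, c=1.0, power=2.0):
    """IDW interpolation in (x, y, c*t): time is converted into a third
    pseudo-spatial axis by the scale factor c (distance per unit time)."""
    scale = np.array([1.0, 1.0, c])
    d = np.linalg.norm((samples - query) * scale, axis=1)
    if d.min() == 0.0:
        return float(values[np.argmin(d)])       # exact hit on a sample
    w = 1.0 / d ** power
    return float(np.dot(w, values) / w.sum())

samples = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.5],
                    [0.0, 1.0, 1.0]])            # (x, y, t) of observations
values = np.array([10.0, 12.0, 11.0])
print(timescape_idw(np.array([0.5, 0.5, 0.5]), samples, values, c=2.0))
```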

  12. FPGA-based real-time phase measuring profilometry algorithm design and implementation

    NASA Astrophysics Data System (ADS)

    Zhan, Guomin; Tang, Hongwei; Zhong, Kai; Li, Zhongwei; Shi, Yusheng

    2016-11-01

    Phase measuring profilometry (PMP) has been widely used in many fields, such as Computer Aided Verification (CAV) and Flexible Manufacturing Systems (FMS). High frame-rate (HFR) real-time vision-based feedback control will be a common demand in the near future. However, the instruction time delay in the computer caused by numerous repetitive operations greatly limits the efficiency of data processing. An FPGA has the advantages of a pipelined architecture and parallel execution, making it well suited to the PMP algorithm. In this paper, we design a fully pipelined hardware architecture for PMP. The functions of the hardware architecture include rectification, phase calculation, phase shifting, and stereo matching. Experiments verified the performance of this method, and the factors that may influence the computation accuracy were analyzed.
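
    For orientation, the per-pixel kernel that a PMP phase-calculation stage computes is the phase-shifting arctangent. The sketch below shows the textbook four-step formula; the paper's FPGA datapath evaluates this kind of expression in pipelined form, so the formula here is an illustration of the algorithm, not the paper's circuit.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images with pi/2 phase shifts.

    With I_k = A + B*cos(phi + k*pi/2), the standard four-step formula is
    phi = atan2(I3 - I1, I0 - I2), evaluated independently per pixel.
    """
    return np.arctan2(i3 - i1, i0 - i2)
```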

  13. The role of sleep continuity and total sleep time in executive function across the adult lifespan

    PubMed Central

    Wilckens, Kristine A.; Woo, Sarah G.; Kirk, Afton R.; Erickson, Kirk I.; Wheeler, Mark E.

    2015-01-01

    The importance of sleep for cognition in young adults is well established, but the role of habitual sleep behavior in cognition across the adult lifespan remains unknown. We examined the relationship between sleep continuity and total sleep time assessed with a sleep detection device and cognitive performance using a battery of tasks in young (n = 59, mean age = 23.05) and older (n = 53, mean age = 62.68) adults. Across age groups, higher sleep continuity was associated with better cognitive performance. In the younger group, higher sleep continuity was associated with better working memory and inhibitory control. In the older group, higher sleep continuity was associated with better inhibitory control, memory recall, and verbal fluency. Very short and very long total sleep time was associated with poorer working memory and verbal fluency, specifically in the younger group. Total sleep time was not associated with cognitive performance in any domains for the older group. These findings reveal that sleep continuity is important for executive function in both young and older adults, but total sleep time may be more important for cognition in young adults. PMID:25244484

  14. Appropriate Algorithms for Nonlinear Time Series Analysis in Psychology

    NASA Astrophysics Data System (ADS)

    Scheier, Christian; Tschacher, Wolfgang

    Chaos theory has a strong appeal for psychology because it allows for the investigation of the dynamics and nonlinearity of psychological systems. Consequently, chaos-theoretic concepts and methods have recently gained increasing attention among psychologists, and positive claims for chaos have been published in nearly every field of psychology. Less attention, however, has been paid to the appropriateness of chaos-theoretic algorithms for psychological time series. An appropriate algorithm can deal with short, noisy data sets and yields `objective' results. In the present paper it is argued that most of the classical nonlinear techniques do not satisfy these constraints and are thus not appropriate for psychological data. A methodological approach is introduced that is based on nonlinear forecasting and the method of surrogate data. Using artificial data sets and empirical time series, we show that this methodology reliably assesses nonlinearity and chaos in time series even if they are short and contaminated by noise.
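
    The surrogate-data half of such a methodology can be sketched compactly: generate surrogates that preserve the linear (spectral) structure of the series, then compare a nonlinear statistic, for example nonlinear forecasting skill, on the data against the surrogate distribution. A minimal sketch with the statistic left as a user-supplied function:

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum but randomized Fourier phases."""
    spec = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0, 2 * np.pi, size=spec.size)
    phases[0] = 0.0                      # keep the mean component real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist component real
    surr = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))
    return surr + x.mean()

def nonlinearity_test(x, stat, n_surr=99, seed=0):
    """Rank a nonlinear statistic of the data within its surrogate
    distribution; an extreme rank suggests nonlinearity."""
    rng = np.random.default_rng(seed)
    s0 = stat(x)
    s_surr = [stat(phase_randomized_surrogate(x, rng)) for _ in range(n_surr)]
    rank = sum(s <= s0 for s in s_surr)  # one-sided rank of the data
    return s0, rank / (n_surr + 1)
```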

  15. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
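
    Given precomputed segment difference values, the optimal segmentation itself is a classic dynamic program. The sketch below assumes a dense matrix diff[i][j] of segment differences (computing these values efficiently for each measure function is the paper's contribution) and recovers an optimal k-segmentation; names are illustrative.

```python
def optimal_segmentation(diff, k):
    """Optimal k-segmentation via dynamic programming.

    diff[i][j] = segment difference of merging time points i..j (inclusive),
    assumed precomputed by a measure-function-specific routine.  Classic
    O(k * n^2) DP; returns (total difference, segment start indices).
    """
    n = len(diff)
    INF = float("inf")
    cost = [[INF] * n for _ in range(k + 1)]  # cost[s][j]: best s segments over 0..j
    back = [[-1] * n for _ in range(k + 1)]
    for j in range(n):
        cost[1][j] = diff[0][j]
    for s in range(2, k + 1):
        for j in range(s - 1, n):
            for i in range(s - 2, j):         # i ends the first s-1 segments
                c = cost[s - 1][i] + diff[i + 1][j]
                if c < cost[s][j]:
                    cost[s][j], back[s][j] = c, i
    bounds, j = [], n - 1                     # walk back-pointers to recover cuts
    for s in range(k, 1, -1):
        i = back[s][j]
        bounds.append(i + 1)                  # a new segment starts at i + 1
        j = i
    return cost[k][n - 1], [0] + sorted(bounds)
```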

  16. Executive management studies: the application of real-time science in health administration education.

    PubMed

    Stone, Tamara T; Brown, Gordon D; Mantese, Annamarie

    2005-01-01

    While sound scientific research, such as randomized controlled trials (RCTs), has produced findings leading to significant gains in healthcare, real-time science learning gives administrators and providers a way of responding to immediate need and rapid change while improving performance and the quality of care delivered. Real-time science learning is a cycle of team reflection on and exchange of theory and practical knowledge that produces many benefits for the individual, the organization, and the healthcare field. By questioning principles and analyzing information, teams generate recommendations for organizational improvement as well as develop their individual abilities to address other unforeseen demands in different contexts. All of this serves as a foundation for more rigorous scientific research that leads to the advancement of the healthcare field. This article shows how the Department of Health Management and Informatics at the University of Missouri-Columbia adapted real-time science into the Executive Management Study (EMS) requirement of the Master of Health Administration (M.H.A.) and the Master of Science in Health Informatics (M.S.) curricula. The process is represented by a cycle of Health Administration Education, experienced through a Practical Application, which leads to the creation and dissemination of information and Research Advancing the Field.

  17. Redundant and fault-tolerant algorithms for real-time measurement and control systems for weapon equipment.

    PubMed

    Li, Dan; Hu, Xiaoguang

    2017-03-01

    Because of the high availability requirements of weapon equipment, an in-depth study has been conducted on the real-time fault tolerance of the widely applied Compact PCI (CPCI) bus measurement and control system. A redundancy design method that uses heartbeat detection to connect the primary and alternate devices has been developed. To address the low successful execution rate and the relatively large waste of time slices in the primary version of the task software, an improved algorithm for real-time fault-tolerant scheduling is proposed based on the Basic Checking available time Elimination idle time (BCE) algorithm, applying a single-neuron self-adaptive proportion sum differential (PSD) controller. The experimental validation results indicate that this system has excellent redundancy and fault tolerance, and that the newly developed method can effectively improve system availability.
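
    The heartbeat-based redundancy design can be illustrated with a minimal failover monitor. The timeout value and switchover hook below are hypothetical placeholders for exposition, not the paper's CPCI implementation.

```python
import time

class HeartbeatMonitor:
    """Minimal primary/alternate failover driven by heartbeat timeouts."""

    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()
        self.active = "primary"

    def on_heartbeat(self):
        # Called whenever a heartbeat frame arrives from the primary device.
        self.last_beat = time.monotonic()

    def poll(self):
        # Called periodically on the alternate; switches over on timeout.
        if self.active == "primary" and \
           time.monotonic() - self.last_beat > self.timeout_s:
            self.active = "alternate"   # alternate takes over the tasks
        return self.active
```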

  18. Efficient quantum algorithm for computing n-time correlation functions.

    PubMed

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.
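
    As a point of reference, on a small system the same correlation functions can be computed classically by brute force, which is useful for checking a quantum implementation. Below is a sketch for a two-time correlation ⟨psi0|A(t)B|psi0⟩ with A(t) = U†(t)AU(t); this exponential-cost computation is precisely what the proposed quantum algorithm avoids.

```python
import numpy as np
from scipy.linalg import expm

def two_time_correlation(H, A, B, psi0, t):
    """Brute-force classical evaluation of <psi0| A(t) B |psi0>,
    where A(t) = U(t)^dagger A U(t) and U(t) = exp(-i H t).

    H, A, B: dense operator matrices; psi0: state vector.  Scales
    exponentially with system size; small-system reference only.
    """
    U = expm(-1j * H * t)
    At = U.conj().T @ A @ U
    return psi0.conj() @ (At @ (B @ psi0))
```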

  19. Pseudo-time algorithms for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, E.

    1986-01-01

    A pseudo-time method is introduced to integrate the compressible Navier-Stokes equations to a steady state. This method is a generalization of a method used by Crocco and also by Allen and Cheng. We show that for a simple heat equation this is just a renormalization of the time. For a convection-diffusion equation the renormalization depends only on the viscous terms. We implement the method for the Navier-Stokes equations using a Runge-Kutta type algorithm. This permits the time step to be chosen based on the inviscid model only. We also discuss the use of residual smoothing when viscous terms are present.
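
    The idea transfers readily to model problems. Below is a sketch of a multistage Runge-Kutta pseudo-time iteration driving a 1-D convection-diffusion discretization to steady state; the stage coefficients and the inviscid time-step rule are generic illustrations, not the paper's Navier-Stokes scheme.

```python
import numpy as np

def steady_state_rk(u, a=1.0, nu=0.01, dx=0.01, cfl=0.5,
                    tol=1e-8, max_iter=100000):
    """March du/dt = -a u_x + nu u_xx to steady state with a four-stage
    Runge-Kutta pseudo-time iteration; boundaries are held fixed."""
    dt = cfl * dx / abs(a)                  # time step from the inviscid model
    alphas = (1 / 4, 1 / 3, 1 / 2, 1.0)     # multistage coefficients

    def residual(v):
        r = np.zeros_like(v)
        r[1:-1] = (-a * (v[2:] - v[:-2]) / (2 * dx)
                   + nu * (v[2:] - 2 * v[1:-1] + v[:-2]) / dx ** 2)
        return r

    for _ in range(max_iter):
        u0 = u.copy()
        for alpha in alphas:                # low-storage multistage update
            u = u0 + alpha * dt * residual(u)
        if np.max(np.abs(u - u0)) < tol * dt:
            break                           # pseudo-time residual converged
    return u
```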

  20. Parallel machine scheduling with step-deteriorating jobs and setup times by a hybrid discrete cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Peng; Cheng, Wenming; Wang, Yi

    2015-11-01

    This article considers the parallel machine scheduling problem with step-deteriorating jobs and sequence-dependent setup times. The objective is to minimize the total tardiness by determining the allocation and sequence of jobs on identical parallel machines. In this problem, the processing time of each job is a step function of its starting time: an additional penalty time is incurred when the starting time of a job is later than its specific deterioration date. The possibility of job deterioration makes this parallel machine scheduling problem more challenging than ordinary ones. A mixed integer programming model for the optimal solution is derived. Due to its NP-hard nature, a hybrid discrete cuckoo search algorithm is proposed to solve the problem. In order to generate a good initial swarm, a modified Biskup-Hermann-Gupta (BHG) heuristic called MBHG is incorporated into the population initialization. Several discrete operators are proposed in the random walk of Lévy flights and the crossover search. Moreover, a local search procedure based on variable neighbourhood descent is integrated into the algorithm as a hybrid strategy in order to improve the quality of elite solutions. Computational experiments are executed on two sets of randomly generated test instances. The results show that the proposed hybrid algorithm can yield better solutions than the commercial solver CPLEX® with a one-hour time limit, the discrete cuckoo search algorithm, and the existing variable neighbourhood search algorithm.
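
    The step-deterioration effect is easy to state in code: a job's processing time jumps by a penalty if it starts after its deterioration date. The routine below scores a candidate allocation and sequence by total tardiness; setup times are omitted for brevity and all names are illustrative, so this is the objective being optimized, not the cuckoo search itself.

```python
def total_tardiness(machines, base, penalty, deter_date, due):
    """Total tardiness on identical parallel machines with step-deteriorating
    jobs: job j takes base[j] if started by deter_date[j], otherwise
    base[j] + penalty[j].  `machines` is a list of job index sequences."""
    tardiness = 0
    for seq in machines:
        t = 0                                   # machine-local clock
        for j in seq:
            p = base[j] if t <= deter_date[j] else base[j] + penalty[j]
            t += p
            tardiness += max(0, t - due[j])
    return tardiness

# Example: two machines, three jobs.
# total_tardiness([[0, 2], [1]], base=[3, 5, 2], penalty=[2, 1, 4],
#                 deter_date=[1, 0, 6], due=[4, 6, 7])
```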

  1. Measuring executive function in control subjects and TBI patients with question completion time (QCT)

    PubMed Central

    Woods, David L.; Yund, E. William; Wyma, John M.; Ruff, Ron; Herron, Timothy J.

    2015-01-01

    Questionnaire completion is a complex task that places demands on cognitive functions subserving reading, introspective memory, decision-making, and motor control. Although computerized questionnaires and surveys are used with increasing frequency in clinical practice, few studies have examined question completion time (QCT), the time required to complete each question. Here, we analyzed QCTs in 172 control subjects and 31 patients with traumatic brain injury (TBI) who completed two computerized questionnaires, the 17-question Post-Traumatic Stress Disorder (PTSD) Checklist (PCL) and the 25-question Cognitive Failures Questionnaire (CFQ). In control subjects, robust correlations were found between self-paced QCTs on the PCL and CFQ (r = 0.82). QCTs on individual questions correlated strongly with the number of words in the question, indicating the critical role of reading speed. QCTs increased significantly with age, and were reduced in females and in subjects with increased education and computer experience. QCT z-scores, corrected for age, education, computer use, and sex, correlated more strongly with each other than with the results of other cognitive tests. Patients with a history of severe TBI showed significantly delayed QCTs, but QCTs fell within the normal range in patients with a history of mild TBI. When questionnaires are used to gather relevant patient information, simultaneous QCT measures provide reliable and clinically sensitive measures of processing speed and executive function. PMID:26042021

  2. Real-time implementation of a traction control algorithm on a scaled roller rig

    NASA Astrophysics Data System (ADS)

    Bosso, N.; Zampieri, N.

    2013-04-01

    Traction control is a very important aspect of railway vehicle dynamics. Its optimisation improves the performance of a locomotive by working close to the limit of adhesion. On the other hand, if the adhesion limit is surpassed, the wheels are subjected to heavy wear and there is a significant risk of vibrations in the traction system. Similar considerations apply to braking. The development and optimisation of a traction/braking control algorithm is a complex activity, because it is usually performed on a real vehicle on the track, where many uncertainties are present due to environmental conditions and vehicle characteristics. This work shows the use of a scaled roller rig to develop and optimise a traction control algorithm on a single wheelset. Measurements performed on the wheelset are used to estimate the optimal adhesion forces by means of a wheel/rail contact algorithm executed in real time. This allows application of the optimal adhesion force.
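
    A traction control loop of this kind can be caricatured as slip regulation. The sketch below keeps wheel slip near a fixed target with a proportional torque correction; note that the actual work estimates the optimal adhesion point in real time from a wheel/rail contact model rather than using a fixed target, so this is a simplified stand-in.

```python
def traction_step(torque, v_wheel, v_vehicle, slip_target=0.1,
                  kp=50.0, dt=1e-3):
    """One step of a simple slip-ratio traction controller: reduce (or
    restore) motor torque to keep longitudinal creep near the target."""
    slip = (v_wheel - v_vehicle) / max(v_vehicle, 0.1)
    torque -= kp * (slip - slip_target) * dt     # proportional correction
    return max(torque, 0.0)                      # no negative drive torque
```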

  3. New Efficient Sparse Space Time Algorithms for Superparameterization on Mesoscales

    SciTech Connect

    Xing, Yulong; Majda, Andrew J.; Grabowski, Wojciech W.

    2009-12-01

    Superparameterization (SP) is a large-scale modeling system with explicit representation of small-scale and mesoscale processes provided by a cloud-resolving model (CRM) embedded in each column of a large-scale model. New efficient sparse space-time algorithms based on the original idea of SP are presented. The large-scale dynamics are unchanged, but the small-scale model is solved in a reduced spatially periodic domain to save the computation cost following a similar idea applied by one of the authors for aquaplanet simulations. In addition, the time interval of integration of the small-scale model is reduced systematically for the same purpose, which results in a different coupling mechanism between the small- and large-scale models. The new algorithms have been applied to a stringent two-dimensional test suite involving moist convection interacting with shear with regimes ranging from strong free and forced squall lines to dying scattered convection as the shear strength varies. The numerical results are compared with the CRM and original SP. It is shown here that for all of the regimes of propagation and dying scattered convection, the large-scale variables such as horizontal velocity and specific humidity are captured in a statistically accurate way (pattern correlations above 0.75) based on space-time reduction of the small-scale models by a factor of 1/3; thus, the new efficient algorithms for SP result in a gain of roughly a factor of 10 in efficiency while retaining a statistical accuracy on the large-scale variables. Even the models with 1/6 reduction in space-time with a gain of 36 in efficiency are able to distinguish between propagating squall lines and dying scattered convection with a pattern correlation above 0.6 for horizontal velocity and specific humidity. These encouraging results suggest the possibility of using these efficient new algorithms for limited-area mesoscale ensemble forecasting.

  4. Real-Time Distributed Algorithms for Visual and Battlefield Reasoning

    DTIC Science & Technology

    2006-08-01

    in many fields (e.g. designers of disk servers try to merge requests to read addresses on disk to reduce the search time on disk; designers of... designed algorithms to optimally split the set of N task conditions into such buckets. • We then analyzed the complexity of this problem and...in the preceding sections. The STM is a collection of modules documenting this effort. The STM modules themselves involve the design and

  5. Multiple mobile robots real-time visual search algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Caixia; Zhan, Qiang

    2010-08-01

    A real-time visual locating system for multiple mobile robots is introduced, in which a global search algorithm and a track search algorithm are combined to identify the real-time position and orientation (pose) of multiple mobile robots. A switching strategy between the two algorithms is given to ensure accuracy and improve retrieval speed. A grid search approach is used to identify targets during the global search. From the location in the previous frame, the maximum speed, and the frame time interval, the track search determines the area in which a target robot may appear in the next frame; a new search is then performed within that area. The global search is used if the target robot was not found in the previous search; otherwise the track search is used. Experiments on the static and dynamic recognition of three robots show the search method to be precise, fast, stable, and easy to extend, so that all the design requirements are well met.
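
    The track-search window follows directly from the kinematic bound described above: in one frame interval the robot can move at most v_max*dt from its previous position. A minimal sketch (the margin factor is an illustrative safety allowance):

```python
def track_search_window(prev_x, prev_y, v_max, dt, margin=1.0):
    """Bounding box in which the robot can appear in the next frame.

    The reachable set is a disc of radius v_max*dt around the previous
    position; searching only this window is what makes the track search
    faster than the global grid search."""
    r = v_max * dt * margin
    return (prev_x - r, prev_y - r, prev_x + r, prev_y + r)
```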

  6. Identifying Time Measurement Tampering in the Traversal Time and Hop Count Analysis (TTHCA) Wormhole Detection Algorithm

    PubMed Central

    Karlsson, Jonny; Dooley, Laurence S.; Pulkkis, Göran

    2013-01-01

    Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ΔT Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ΔT Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm. PMID:23686143

  7. Identifying time measurement tampering in the traversal time and hop count analysis (TTHCA) wormhole detection algorithm.

    PubMed

    Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran

    2013-05-17

    Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ∆T Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ∆T Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm.

  8. The vigilance decrement in executive function is attenuated when individual chronotypes perform at their optimal time of day.

    PubMed

    Lara, Tania; Madrid, Juan Antonio; Correa, Ángel

    2014-01-01

    Time of day modulates our cognitive functions, especially those related to executive control, such as the ability to inhibit inappropriate responses. However, the impact of individual differences in time of day preferences (i.e. morning vs. evening chronotype) had not been considered by most studies. It was also unclear whether the vigilance decrement (impaired performance with time on task) depends on both time of day and chronotype. In this study, morning-type and evening-type participants performed a task measuring vigilance and response inhibition (the Sustained Attention to Response Task, SART) in morning and evening sessions. The results showed that the vigilance decrement in inhibitory performance was accentuated at non-optimal as compared to optimal times of day. In the morning-type group, inhibition performance decreased linearly with time on task only in the evening session, whereas in the morning session it remained more accurate and stable over time. In contrast, inhibition performance in the evening-type group showed a linear vigilance decrement in the morning session, whereas in the evening session the vigilance decrement was attenuated, following a quadratic trend. Our findings imply that the negative effects of time on task in executive control can be prevented by scheduling cognitive tasks at the optimal time of day according to specific circadian profiles of individuals. Therefore, time of day and chronotype influences should be considered in research and clinical studies as well as real-world situations demanding executive control for response inhibition.

  9. The Vigilance Decrement in Executive Function Is Attenuated When Individual Chronotypes Perform at Their Optimal Time of Day

    PubMed Central

    Lara, Tania; Madrid, Juan Antonio; Correa, Ángel

    2014-01-01

    Time of day modulates our cognitive functions, especially those related to executive control, such as the ability to inhibit inappropriate responses. However, the impact of individual differences in time of day preferences (i.e. morning vs. evening chronotype) had not been considered by most studies. It was also unclear whether the vigilance decrement (impaired performance with time on task) depends on both time of day and chronotype. In this study, morning-type and evening-type participants performed a task measuring vigilance and response inhibition (the Sustained Attention to Response Task, SART) in morning and evening sessions. The results showed that the vigilance decrement in inhibitory performance was accentuated at non-optimal as compared to optimal times of day. In the morning-type group, inhibition performance decreased linearly with time on task only in the evening session, whereas in the morning session it remained more accurate and stable over time. In contrast, inhibition performance in the evening-type group showed a linear vigilance decrement in the morning session, whereas in the evening session the vigilance decrement was attenuated, following a quadratic trend. Our findings imply that the negative effects of time on task in executive control can be prevented by scheduling cognitive tasks at the optimal time of day according to specific circadian profiles of individuals. Therefore, time of day and chronotype influences should be considered in research and clinical studies as well as real-world situations demanding executive control for response inhibition. PMID:24586404

  10. Space-time spectral collocation algorithm for solving time-fractional Tricomi-type equations

    NASA Astrophysics Data System (ADS)

    Abdelkawy, M. A.; Ahmed, Engy A.; Alqahtani, Rubayyi T.

    2016-01-01

    We introduce a new numerical algorithm for solving one-dimensional time-fractional Tricomi-type equations (T-FTTEs). We use the shifted Jacobi polynomials as basis functions, and the fractional derivatives are evaluated using the Caputo definition. The shifted Jacobi Gauss-Lobatto algorithm is used for the spatial discretization, while the shifted Jacobi Gauss-Radau algorithm is applied for the temporal approximation. Substituting these approximations into the problem leads to a system of algebraic equations that greatly simplifies the problem. The proposed algorithm is successfully extended to solve the two-dimensional T-FTTEs. Extensive numerical tests illustrate the capability and high accuracy of the proposed methodologies.

  11. Efficient multiple time-stepping algorithms of higher order

    NASA Astrophysics Data System (ADS)

    Demirel, Abdullah; Niegemann, Jens; Busch, Kurt; Hochbruck, Marlis

    2015-03-01

    Multiple time-stepping (MTS) algorithms allow the efficient integration of large systems of ordinary differential equations, where a few stiff terms restrict the timestep of an otherwise non-stiff system. In this work, we discuss a flexible class of MTS techniques based on multistep methods. Our approach contains several popular methods as special cases, and it allows for the easy construction of novel and efficient higher-order MTS schemes. In addition, we demonstrate how to adapt the stability contour of the non-stiff time-integration to the physical system at hand. This allows significantly larger timesteps when compared to previously known multistep MTS approaches. As an example, we derive novel predictor-corrector (PCMTS) schemes specifically optimized for the time-integration of damped wave equations on locally refined meshes. In a set of numerical experiments, we demonstrate the performance of our scheme on discontinuous Galerkin time-domain (DGTD) simulations of Maxwell's equations.
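
    The basic MTS idea, independent of the paper's higher-order multistep constructions, is that the slow (non-stiff, often expensive) terms are evaluated once per macro step while the stiff terms are substepped. A simple first-order splitting sketch, purely for illustration:

```python
def mts_step(y, f_slow, f_fast, dt, substeps):
    """One multiple-time-stepping macro step: the slow force is evaluated
    twice per macro step (half kicks), the stiff force `substeps` times
    with step dt/substeps.  A low-order splitting shown only to convey the
    MTS idea, not the paper's scheme."""
    h = dt / substeps
    y = y + 0.5 * dt * f_slow(y)          # opening half kick (slow terms)
    for _ in range(substeps):
        y = y + h * f_fast(y)             # cheap inner steps for stiff terms
    return y + 0.5 * dt * f_slow(y)       # closing half kick
```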

  12. A Time Series Approach to Random Number Generation: Using Recurrence Quantification Analysis to Capture Executive Behavior

    PubMed Central

    Oomens, Wouter; Maes, Joseph H. R.; Hasselman, Fred; Egger, Jos I. M.

    2015-01-01

    The concept of executive functions plays a prominent role in contemporary experimental and clinical studies on cognition. One paradigm used in this framework is the random number generation (RNG) task, the execution of which demands aspects of executive functioning, specifically inhibition and working memory. Data from the RNG task are best seen as a series of successive events. However, traditional RNG measures that are used to quantify executive functioning are mostly summary statistics referring to deviations from mathematical randomness. In the current study, we explore the utility of recurrence quantification analysis (RQA), a non-linear method that keeps the entire sequence intact, as a better way to describe executive functioning compared to traditional measures. To this aim, 242 first- and second-year students completed a non-paced RNG task. Principal component analysis of their data showed that traditional and RQA measures convey more or less the same information. However, RQA measures do so more parsimoniously and have a better interpretation. PMID:26097449
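
    The RQA quantities used in such analyses can be computed directly from the embedded sequence. Below is a compact sketch of recurrence rate and determinism; the radius, embedding parameters, and minimum line length are generic analysis choices, not values taken from the study.

```python
import numpy as np

def recurrence_measures(x, radius, m=2, tau=1, lmin=2):
    """Recurrence rate (RR) and determinism (DET) of a sequence.

    Embeds x with dimension m and delay tau, thresholds pairwise distances
    to build the recurrence matrix, and counts recurrent points lying on
    diagonal lines of length >= lmin."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    rec = (d <= radius) & ~np.eye(n, dtype=bool)   # exclude the main diagonal
    rr = rec.sum() / (n * (n - 1))
    det_points = 0
    for k in range(1, n):                          # upper-triangle diagonals
        run = 0
        for v in list(np.diag(rec, k=k)) + [False]:  # sentinel flushes runs
            if v:
                run += 1
            else:
                if run >= lmin:
                    det_points += run
                run = 0
    det = 2 * det_points / rec.sum() if rec.sum() else 0.0
    return rr, det
```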

  13. Two algorithms to fill cloud gaps in LST time series

    NASA Astrophysics Data System (ADS)

    Frey, Corinne; Kuenzer, Claudia

    2013-04-01

    Cloud contamination is a challenge for optical remote sensing. This is especially true when recording a fast-changing radiative quantity like land surface temperature (LST). The substitution of cloud-contaminated pixels with estimated values (gap filling) is not straightforward, but it is possible to a certain extent, as this research shows for medium-resolution time series of MODIS data. The area of interest is the Upper Mekong Delta (UMD). The background for this work is an analysis of the temporal development of 1-km LST in the context of the WISDOM project. The climate of the UMD is characterized by peak rainfall in the summer months, which is also when cloud contamination is highest in the area. The average number of available daytime observations per pixel can drop below five in June, for example, whereas in winter it may reach 25 observations a month. This situation is not adequate for the calculation of long-term statistics; an appropriate gap-filling method should be applied beforehand. In this research, two different algorithms were tested on an 11-year time series: 1) a gradient-based algorithm and 2) a method based on ECMWF ERA-Interim re-analysis data. The first algorithm searches for stable inter-image gradients within a given environment and over a certain period of time. These gradients are then used to estimate LST for cloud-contaminated pixels in each acquisition. The estimated LSTs are clear-sky LSTs and are based solely on the MODIS LST time series. The second method estimates LST on the basis of adapted ECMWF ERA-Interim skin temperatures and creates a set of expected LSTs. The estimated values were used to fill the gaps in the original dataset, creating two new daily 1-km datasets. The maps filled with the gradient-based method had more than double the number of valid pixels of the original dataset. The second method (ECMWF ERA-Interim based) was able to fill all data gaps. From the gap-filled data sets then monthly

  14. A time-efficient algorithm for implementing the Catmull-Clark subdivision method

    NASA Astrophysics Data System (ADS)

    Ioannou, G.; Savva, A.; Stylianou, V.

    2015-10-01

    Splines are the most popular methods in figure modeling and CAGD (Computer Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and splines calculate the required number of points which, when displayed on a computer screen, result in a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures, such as human and animal bodies, whose complex structure does not allow them to be defined by a regular rectangular grid. On the other hand, surface subdivision methods, which are derived from splines, generate surfaces defined by an arbitrary topology of control points. This is the reason that during the last fifteen years subdivision methods have taken the lead over regular spline methods in all areas of modeling, in both industry and research. The cost of executing computer software developed to read control points and calculate the surface is its run time, owing to the fact that the surface structure required for handling arbitrary topological grids is very complicated. Many software programs related to the implementation of subdivision surfaces have been developed; however, not many algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. Catmull-Clark, the most popular of the subdivision methods, is employed to illustrate the algorithm.
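
    For reference, the Catmull-Clark rules themselves are short. The sketch below computes one refinement step's face points, edge points, and repositioned vertices for a closed quad mesh; connectivity assembly (each quad splits into four) and boundary handling, which efficient implementations must address, are omitted. This follows the standard published rules, not necessarily the data structures of the paper's algorithm.

```python
import numpy as np

def catmull_clark_points(verts, faces):
    """New point positions for one Catmull-Clark step on a closed quad mesh.

    verts: (V, 3) array; faces: list of lists of four vertex indices.
    Returns (face points, edge point dict, repositioned vertices)."""
    verts = np.asarray(verts, dtype=float)
    face_pts = np.array([verts[f].mean(axis=0) for f in faces])

    # Map each undirected edge to its two adjacent faces.
    edge_faces = {}
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces.setdefault(frozenset((a, b)), []).append(fi)

    # Edge point: average of the two endpoints and the two face points.
    edge_pts = {e: (verts[list(e)].sum(axis=0) +
                    face_pts[fs].sum(axis=0)) / 4.0
                for e, fs in edge_faces.items()}

    # Vertex point: (Q + 2R + (n - 3) P) / n, with Q the average adjacent
    # face point, R the average incident edge midpoint, n the valence.
    new_verts = np.empty_like(verts)
    for v in range(len(verts)):
        adj_f = [fi for fi, f in enumerate(faces) if v in f]
        adj_e = [e for e in edge_faces if v in e]
        q = face_pts[adj_f].mean(axis=0)
        r = np.mean([verts[list(e)].mean(axis=0) for e in adj_e], axis=0)
        n = len(adj_e)
        new_verts[v] = (q + 2 * r + (n - 3) * verts[v]) / n
    return face_pts, edge_pts, new_verts
```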

  15. Executive Function and Mathematics Achievement: Are Effects Construct- and Time-General or Specific?

    ERIC Educational Resources Information Center

    Duncan, Robert; Nguyen, Tutrang; Miao, Alicia; McClelland, Megan; Bailey, Drew

    2016-01-01

    Executive function (EF) is considered a set of interrelated cognitive processes, including inhibitory control, working memory, and attentional shifting, that are connected to the development of the prefrontal cortex and contribute to children's problem solving skills and self regulatory behavior (Best & Miller, 2010; Garon, Bryson, &…

  16. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    NASA Technical Reports Server (NTRS)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.; Nowak, M. A.

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded-mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii-lc-extract) and from calculating the pixel-illuminated fraction (ii-light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii-light produces meaningful results, although the overall variance of the light curves is not preserved.

  17. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    PubMed Central

    Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng

    2015-01-01

    This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of the phase-space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the change trend of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt optimal actions. The model presented in this paper is used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, it is compared with similar conventional models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best on three error measures, namely normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249
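
    The phase-space reconstruction being optimized here is plain delay embedding. A sketch, with a comment showing how training pairs for one-step prediction (e.g., by an LS-SVM) would be formed:

```python
import numpy as np

def delay_embed(x, m, tau):
    """Phase-space reconstruction of a scalar series with embedding
    dimension m and delay tau (the two parameters optimized jointly with
    the LS-SVM hyperparameters).  Row i is the state vector
    (x[i], x[i+tau], ..., x[i+(m-1)*tau])."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

# Training pairs for one-step prediction: states X and next values y.
# X = delay_embed(x, m, tau)[:-1]
# y = x[(m - 1) * tau + 1:]
```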

  18. Enhancing time-series detection algorithms for automated biosurveillance.

    PubMed

    Tokars, Jerome I; Burkom, Howard; Xing, Jian; English, Roseanne; Bloom, Steven; Cox, Kenneth; Pavlin, Julie A

    2009-04-01

    BioSense is a US national system that uses data from health information systems for automated disease surveillance. We studied 4 time-series algorithm modifications designed to improve sensitivity for detecting artificially added data. To test these modified algorithms, we used reports of daily syndrome visits from 308 Department of Defense (DoD) facilities and 340 hospital emergency departments (EDs). At a constant alert rate of 1%, sensitivity was improved for both datasets by using a minimum standard deviation (SD) of 1.0, a 14-28 day baseline duration for calculating mean and SD, and an adjustment for total clinic visits as a surrogate denominator. Stratifying baseline days into weekdays versus weekends to account for day-of-week effects increased sensitivity for the DoD data but not for the ED data. These enhanced methods may increase sensitivity without increasing the alert rate and may improve the ability to detect outbreaks by using automated surveillance system data.
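
    Three of the four modifications are simple to express as an adaptive alerting rule. The sketch below uses a baseline window 14-28 days before the test day, a minimum SD, and total visits as a surrogate denominator; day-of-week stratification would simply restrict the window to matching weekdays or weekend days. Thresholds and data layout are illustrative, not BioSense internals.

```python
import numpy as np

def alert(counts, totals, day_index, min_sd=1.0, z_thresh=3.0):
    """Flag `day_index` if its syndrome count exceeds the adaptive baseline
    by z_thresh standard deviations.

    counts/totals: daily syndrome counts and total clinic visits per day;
    requires day_index >= 28 so the full baseline window exists."""
    lo, hi = day_index - 28, day_index - 14          # 14-28 days back
    base_prop = counts[lo:hi] / np.maximum(totals[lo:hi], 1)
    expected = base_prop.mean() * totals[day_index]  # surrogate denominator
    sd = max(base_prop.std(ddof=1) * totals[day_index], min_sd)
    z = (counts[day_index] - expected) / sd
    return z > z_thresh, z
```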

  19. Computer Algorithms and Architectures for Three-Dimensional Eddy-Current Nondestructive Evaluation. Volume 1. Executive Summary

    DTIC Science & Technology

    1989-01-20

    SA/TR-2/89 A003: Final report. [Only the report identifier and title, "Computer Algorithms and Architectures for Three-Dimensional Eddy-Current Nondestructive Evaluation, Volume 1: Executive Summary", are recoverable from the scanned cover-page text.]

  20. Cable Damage Detection System and Algorithms Using Time Domain Reflectometry

    SciTech Connect

    Clark, G A; Robbins, C L; Wade, K A; Souza, P R

    2009-03-24

    This report describes the hardware system and the set of algorithms we have developed for detecting damage in cables for the Advanced Development and Process Technologies (ADAPT) Program. This program is part of the W80 Life Extension Program (LEP). The system could be generalized for application to other systems in the future. Critical cables can undergo various types of damage (e.g. short circuits, open circuits, punctures, compression) that manifest as changes in the dielectric/impedance properties of the cables. For our specific problem, only one end of the cable is accessible, and no exemplars of actual damage are available. This work addresses the detection of dielectric/impedance anomalies in transient time domain reflectometry (TDR) measurements on the cables. The approach is to interrogate the cable using TDR techniques, in which a known pulse is inserted into the cable, and reflections from the cable are measured. The key operating principle is that any important cable damage will manifest itself as an electrical impedance discontinuity that can be measured in the TDR response signal. Machine learning classification algorithms are effectively eliminated from consideration because only a small number of cables is available for testing, so a sufficient sample size is not attainable. Nonetheless, a key requirement is to achieve a very high probability of detection and a very low probability of false alarm. The approach is to compare TDR signals from possibly damaged cables to signals or an empirical model derived from reference cables that are known to be undamaged. This requires that the TDR signals be reasonably repeatable from test to test on the same cable, and from cable to cable. Empirical studies show that the repeatability issue is the 'long pole in the tent' for damage detection, because it has been difficult to achieve reasonable repeatability. This one factor dominated the project. The two-step model-based approach is
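
    The reference-comparison step can be sketched as a pointwise z-score against known-undamaged traces. This presumes exactly the test-to-test repeatability that the report identifies as the dominant practical difficulty; names and the threshold are illustrative.

```python
import numpy as np

def tdr_anomaly(test_trace, reference_traces, z_thresh=4.0):
    """Flag samples where a TDR trace deviates from the pointwise mean of
    reference (known-undamaged) traces by more than z_thresh spreads;
    such deviations mark candidate impedance discontinuities."""
    refs = np.asarray(reference_traces, dtype=float)
    mean = refs.mean(axis=0)
    sd = refs.std(axis=0, ddof=1) + 1e-12   # avoid division by zero
    z = np.abs(np.asarray(test_trace) - mean) / sd
    return np.where(z > z_thresh)[0]        # sample indices of anomalies
```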

  1. Run-time scheduling and execution of loops on message passing machines

    NASA Technical Reports Server (NTRS)

    Saltz, Joel; Crowley, Kathleen; Mirchandaney, Ravi; Berryman, Harry

    1990-01-01

    Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways. Naive implementations of programs on distributed memory machines requiring general loop partitions can be extremely inefficient. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data, in an optimized manner. The design is presented for a general preprocessor and schedule executer, the structures of which do not vary, even though the details of the computation and of the type of information are problem dependent.
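
    The inspector/executor pattern described here can be sketched in a few lines: inspect the index set once, derive a communication schedule and locally renumbered indices, then reuse both on every iteration. This toy single-array version is a simplification for illustration only, not the actual PARTI interface.

```python
def inspect(global_indices, owner, my_rank):
    """Inspector phase: given the global indices a loop will read and an
    ownership map, precompute which off-processor elements must be fetched
    and localise the index list.  For simplicity, owned data is assumed to
    be addressable by its global index."""
    off_proc = sorted({g for g in global_indices if owner[g] != my_rank})
    fetch_slot = {g: i for i, g in enumerate(off_proc)}   # ghost-buffer slots
    local = [(owner[g] == my_rank, fetch_slot.get(g, g))
             for g in global_indices]
    return off_proc, local        # communication schedule + localised indices

def execute(local, own_data, ghost_buffer):
    """Executor phase: run the loop body with locally renumbered indices;
    ghost_buffer holds values gathered per the precomputed schedule."""
    return [own_data[i] if is_own else ghost_buffer[i]
            for is_own, i in local]
```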

  2. Run-time scheduling and execution of loops on message passing machines

    NASA Technical Reports Server (NTRS)

    Crowley, Kay; Saltz, Joel; Mirchandaney, Ravi; Berryman, Harry

    1989-01-01

    Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways. Naive implementations of programs on distributed memory machines requiring general loop partitions can be extremely inefficient. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data, in an optimized manner. The design is presented for a general preprocessor and schedule executer, the structures of which do not vary, even though the details of the computation and of the type of information are problem dependent.

  3. Solving the time dependent vehicle routing problem by metaheuristic algorithms

    NASA Astrophysics Data System (ADS)

    Johar, Farhana; Potts, Chris; Bennell, Julia

    2015-02-01

    The problem we consider in this study is the Time Dependent Vehicle Routing Problem (TDVRP), which has been categorized as a non-classical VRP. It is motivated by the fact that multinational companies currently not only manufacture the demanded products but also distribute them to the customer locations, which implies an efficient synchronization of production and distribution activities. Hence, this study looks into the routing of vehicles that depart from the depot at various times due to variation in the manufacturing process. We consider a single production line where demanded products are processed one at a time once orders have been received from the customers. It is assumed that each order released from the production line is loaded into a scheduled vehicle ready for delivery. However, delivery can only begin once all orders scheduled in the vehicle have been released from the production line; the delivery process may therefore be delayed while awaiting the release of all the customer orders on the route. Our objective is to determine a schedule for vehicle routing that minimizes the solution cost, including travelling and tardiness costs. A mathematical formulation is developed to represent the problem, which is solved by two metaheuristics, Variable Neighborhood Search (VNS) and Tabu Search (TS). These algorithms are coded in C++ and run on Solomon's 56 instances with some modifications. The outcome of this experiment can be interpreted as a quality criterion for the different approximation methods. The comparison shows that VNS gave better results while consuming reasonable computational effort.
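
    The coupling between production and distribution reduces to a simple rule when evaluating a candidate route: the vehicle departs only after the last of its orders is released. A sketch of the resulting route cost; the data layout (depot as index 0, dense travel-time matrix) is illustrative.

```python
def route_cost(route, release, travel, due, tardiness_weight=1.0):
    """Cost of one route when the vehicle may only depart after every order
    scheduled on it has been released from the production line.

    route: customer visit sequence; release[c]: production release time of
    customer c's order; travel[a][b]: travel time (0 = depot);
    due[c]: due time of customer c."""
    depart = max(release[c] for c in route)      # wait for the last release
    t, cost, prev = depart, 0.0, 0
    for c in route:
        t += travel[prev][c]
        cost += travel[prev][c] + tardiness_weight * max(0.0, t - due[c])
        prev = c
    return cost
```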

  4. O(1) time algorithms for computing histogram and Hough transform on a cross-bridge reconfigurable array of processors

    SciTech Connect

    Kao, T.; Horng, S.; Wang, Y.

    1995-04-01

    Instead of using the base-2 number system, we use a base-m number system to represent the numbers used in the proposed algorithms. Such a strategy can be used to design an O(T)-time, T = (log_m N) + 1, prefix sum algorithm for an N-bit binary sequence on a cross-bridge reconfigurable array of processors using N processors, where the data bus is m bits wide. This basic operation can then be used to compute the histogram of an n x n image with G gray levels in constant time using G x n x n processors, and the Hough transform of an image with N edge pixels and an n x n parameter space in constant time using n x n x N processors, respectively. This result is better than the previously known results proposed in the literature. Moreover, the execution time of the proposed algorithms is tunable via the bus bandwidth. 43 refs.

  5. Reconciling fault-tolerant distributed algorithms and real-time computing.

    PubMed

    Moser, Heinrich; Schmid, Ulrich

    We present generic transformations that allow classic fault-tolerant distributed algorithms and their correctness proofs to be translated into a real-time distributed computing model (and vice versa). Owing to the non-zero-time, non-preemptible state transitions employed in our real-time model, scheduling and queuing effects (which are inherently abstracted away in classic zero-step-time models, sometimes leading to overly optimistic time complexity results) can be accurately modeled. Our results thus make fault-tolerant distributed algorithms amenable to a sound real-time analysis, without sacrificing the wealth of algorithms and correctness proofs established in classic distributed computing research. By means of an example, we demonstrate that real-time algorithms generated by transforming classic algorithms can be competitive even with respect to optimal real-time algorithms, despite their comparatively simple real-time analysis.

  6. Computationally efficient algorithms for real-time attitude estimation

    NASA Technical Reports Server (NTRS)

    Pringle, Steven R.

    1993-01-01

    For many practical spacecraft applications, algorithms for determining spacecraft attitude must combine inputs from diverse sensors and provide redundancy in the event of sensor failure. A Kalman filter is suitable for this task; however, it may impose a computational burden that can be avoided by suboptimal methods. A suboptimal estimator is presented that was implemented successfully on the Delta Star spacecraft, which performed a 9-month SDI flight experiment in 1989. This design sought to minimize algorithm complexity to accommodate the limitations of an 8K guidance computer. The algorithm used is interpreted in the framework of Kalman filtering, and a derivation is given for the computation.

  7. A polynomial time biclustering algorithm for finding approximate expression patterns in gene expression time series

    PubMed Central

    Madeira, Sara C; Oliveira, Arlindo L

    2009-01-01

    Background The ability to monitor the change in expression patterns over time, and to observe the emergence of coherent temporal responses using gene expression time series, obtained from microarray experiments, is critical to advance our understanding of complex biological processes. In this context, biclustering algorithms have been recognized as an important tool for the discovery of local expression patterns, which are crucial to unravel potential regulatory mechanisms. Although most formulations of the biclustering problem are NP-hard, when working with time series expression data the interesting biclusters can be restricted to those with contiguous columns. This restriction leads to a tractable problem and enables the design of efficient biclustering algorithms able to identify all maximal contiguous column coherent biclusters. Methods In this work, we propose e-CCC-Biclustering, a biclustering algorithm that finds and reports all maximal contiguous column coherent biclusters with approximate expression patterns in time polynomial in the size of the time series gene expression matrix. This polynomial time complexity is achieved by manipulating a discretized version of the original matrix using efficient string processing techniques. We also propose extensions to deal with missing values, discover anticorrelated and scaled expression patterns, and different ways to compute the errors allowed in the expression patterns. We propose a scoring criterion combining the statistical significance of expression patterns with a similarity measure between overlapping biclusters. Results We present results in real data showing the effectiveness of e-CCC-Biclustering and its relevance in the discovery of regulatory modules describing the transcriptomic expression patterns occurring in Saccharomyces cerevisiae in response to heat stress. In particular, the results show the advantage of considering approximate patterns when compared to state of the art methods that require

  8. Bayesian Optimization Algorithm, Population Sizing, and Time to Convergence

    SciTech Connect

    Pelikan, M.; Goldberg, D.E.; Cantu-Paz, E.

    2000-01-19

    This paper analyzes convergence properties of the Bayesian optimization algorithm (BOA). It settles the BOA into the framework of problem decomposition used frequently in order to model and understand the behavior of simple genetic algorithms. The growth of the population size and the number of generations until convergence with respect to the size of a problem is theoretically analyzed. The theoretical results are supported by a number of experiments.

  9. Identification of regulatory modules in time series gene expression data using a linear time biclustering algorithm.

    PubMed

    Madeira, Sara C; Teixeira, Miguel C; Sá-Correia, Isabel; Oliveira, Arlindo L

    2010-01-01

    Although most biclustering formulations are NP-hard, in time series expression data analysis, it is reasonable to restrict the problem to the identification of maximal biclusters with contiguous columns, which correspond to coherent expression patterns shared by a group of genes in consecutive time points. This restriction leads to a tractable problem. We propose an algorithm that finds and reports all maximal contiguous column coherent biclusters in time linear in the size of the expression matrix. The linear time complexity of CCC-Biclustering relies on the use of a discretized matrix and efficient string processing techniques based on suffix trees. We also propose a method for ranking biclusters based on their statistical significance and a methodology for filtering highly overlapping and, therefore, redundant biclusters. We report results in synthetic and real data showing the effectiveness of the approach and its relevance in the discovery of regulatory modules. Results obtained using the transcriptomic expression patterns occurring in Saccharomyces cerevisiae in response to heat stress show not only the ability of the proposed methodology to extract relevant information compatible with documented biological knowledge but also the utility of using this algorithm in the study of other environmental stresses and of regulatory modules in general.

  10. Reduced variability and execution time to reach a target with a needle GPS system: Comparison between physicians, residents and nurse anaesthetists.

    PubMed

    Fevre, Marie-Cécile; Vincent, Caroline; Picard, Julien; Vighetti, Arnaud; Chapuis, Claire; Detavernier, Maxime; Allenet, Benoît; Payen, Jean-François; Bosson, Jean-Luc; Albaladejo, Pierre

    2016-09-19

    Ultrasound (US) guided needle positioning is safer than anatomical landmark techniques for central venous access. Hand-eye coordination and execution time depend on the professional's ability, previous training, and personal skills. Needle guidance positioning systems (GPS) may theoretically reduce execution time and facilitate needle positioning in specific targets, thus improving patient comfort and safety. Three groups of healthcare professionals (41 anaesthesiologists and intensivists, 41 residents in anaesthesiology and intensive care, 39 nurse anaesthetists) were included and required to perform 3 tasks (positioning the tip of a needle in three different targets in a silicone phantom) by successively using conventional US-guided needle positioning and a needle GPS. We measured execution times for the tasks, hand-eye coordination, and the number of repositioning occurrences or errors in handling the needle or the probe. Without the GPS system, we observed a significant inter-individual difference in execution time (P<0.05), hand-eye coordination, and the number of errors/needle repositionings between physicians, residents, and nurse anaesthetists. US training and video gaming were found to be independent factors associated with a shorter execution time. Use of the GPS attenuated the inter-individual and between-group variability. We observed a reduced execution time and improved hand-eye coordination in all groups as compared to US without GPS. Neither US training, video gaming, nor demographic, personal, or professional factors were found to be significantly associated with reduced execution time when the GPS was used. US associated with GPS systems may improve safety and decrease execution time by reducing inter-individual variability between professionals in needle-handling procedures.

  11. Executive Functions

    PubMed Central

    Diamond, Adele

    2014-01-01

    Executive functions (EFs) make possible mentally playing with ideas; taking the time to think before acting; meeting novel, unanticipated challenges; resisting temptations; and staying focused. Core EFs are inhibition [response inhibition (self-control—resisting temptations and resisting acting impulsively) and interference control (selective attention and cognitive inhibition)], working memory, and cognitive flexibility (including creatively thinking “outside the box,” seeing anything from different perspectives, and quickly and flexibly adapting to changed circumstances). The developmental progression and representative measures of each are discussed. Controversies are addressed (e.g., the relation between EFs and fluid intelligence, self-regulation, executive attention, and effortful control, and the relation between working memory and inhibition and attention). The importance of social, emotional, and physical health for cognitive health is discussed because stress, lack of sleep, loneliness, or lack of exercise each impair EFs. That EFs are trainable and can be improved with practice is addressed, including diverse methods tried thus far. PMID:23020641

  12. Scaling Time Warp-based Discrete Event Execution to 10^4 Processors on Blue Gene Supercomputer

    SciTech Connect

    Perumalla, Kalyan S

    2007-01-01

    Lately, important large-scale simulation applications, such as emergency/event planning and response, are emerging that are based on discrete event models. The applications are characterized by their scale (several millions of simulated entities), their fine-grained nature of computation (microseconds per event), and their highly dynamic inter-entity event interactions. The desired scale and speed together call for highly scalable parallel discrete event simulation (PDES) engines. However, few such parallel engines have been designed or tested on platforms with thousands of processors. Here an overview is given of a unique PDES engine that has been designed to support Time Warp-style optimistic parallel execution as well as a more generalized mixed, optimistic-conservative synchronization. The engine is designed to run on massively parallel architectures with minimal overheads. A performance study of the engine is presented, including the first results to date of PDES benchmarks demonstrating scalability to as many as 16,384 processors, on an IBM Blue Gene supercomputer. The results show, for the first time, the promise of effectively sustaining very large scale discrete event execution on up to 10^4 processors.

  13. Fast time-reversible algorithms for molecular dynamics of rigid-body systems

    NASA Astrophysics Data System (ADS)

    Kajima, Yasuhiro; Hiyama, Miyabi; Ogata, Shuji; Kobayashi, Ryo; Tamura, Tomoyuki

    2012-06-01

    In this paper, we present time-reversible simulation algorithms for rigid bodies in the quaternion representation. By advancing a time-reversible algorithm [Y. Kajima, M. Hiyama, S. Ogata, and T. Tamura, J. Phys. Soc. Jpn. 80, 114002 (2011), 10.1143/JPSJ.80.114002] that requires iterations in calculating the angular velocity at each time step, we propose two kinds of iteration-free, fast, time-reversible algorithms. They are easily implemented in code. The codes are compared with those of existing algorithms through a demonstrative simulation of a nanometer-sized water droplet, assessing the stability of the total energy and the computation speed.

  14. Execution Time Requirements of Petri Net Programs in a Sun Workstation Environment

    DTIC Science & Technology

    1990-09-21

    Sun workstation in a reasonable time. The time for the solution of the SIMNET models was measured using two different configurations of the Sun... reasonable time. The degradation in performance with an increasing number of markings is more gradual with larger memory. A number of tables are included

  15. Three list scheduling temporal partitioning algorithm of time space characteristic analysis and compare for dynamic reconfigurable computing

    NASA Astrophysics Data System (ADS)

    Chen, Naijin

    2013-03-01

    Level Based Partitioning (LBP), Cluster Based Partitioning (CBP) and Enhanced Static List (ESL) temporal partitioning algorithms, based on adjacency matrices and adjacency tables, are designed and implemented in this paper. The partitioning time and memory occupation of the three algorithms are also compared. Experimental results show that the LBP algorithm has the shortest partitioning time and better parallelism; with regard to memory occupation and partitioning time, algorithms based on adjacency tables require less partitioning time and less memory.
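
    For readers unfamiliar with the level-based idea, it can be sketched briefly: compute the topological level of every node in the task graph, then pack nodes level by level into temporal partitions that fit the device. The Python sketch below uses an adjacency list and a simple node-count capacity model; both are illustrative assumptions, not the paper's implementation.

        from collections import defaultdict, deque

        def level_based_partition(edges, num_nodes, capacity):
            """Group DAG nodes by topological level, then pack them in
            level order into partitions of at most `capacity` nodes."""
            succ = defaultdict(list)
            indeg = [0] * num_nodes
            for u, v in edges:
                succ[u].append(v)
                indeg[v] += 1
            # Kahn's algorithm, tracking each node's level.
            level = [0] * num_nodes
            queue = deque(u for u in range(num_nodes) if indeg[u] == 0)
            while queue:
                u = queue.popleft()
                for v in succ[u]:
                    level[v] = max(level[v], level[u] + 1)
                    indeg[v] -= 1
                    if indeg[v] == 0:
                        queue.append(v)
            # Packing in level order keeps every predecessor in the same
            # or an earlier partition.
            order = sorted(range(num_nodes), key=lambda n: level[n])
            partitions, current = [], []
            for node in order:
                if len(current) == capacity:
                    partitions.append(current)
                    current = []
                current.append(node)
            if current:
                partitions.append(current)
            return partitions

        # Example: a small task graph split into partitions of 3 nodes.
        edges = [(0, 2), (1, 2), (2, 3), (2, 4), (3, 5), (4, 5)]
        print(level_based_partition(edges, 6, capacity=3))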

  16. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1983-01-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  17. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: algorithm development and flight test results

    SciTech Connect

    Knox, C.E.

    1983-03-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  18. The capability of time- and frequency-domain algorithms for bistatic SAR processing

    NASA Astrophysics Data System (ADS)

    Vu, Viet T.; Sjögren, Thomas K.; Pettersson, Mats I.

    2013-05-01

    The paper presents a study of the capability of time- and frequency-domain algorithms for bistatic SAR processing. Two representative algorithms, Bistatic Fast Backprojection (BiFBP) and Bistatic Range Doppler (BiRDA), both applicable to general bistatic geometry, are selected as the examples of time- and frequency-domain algorithms in this study. Their capability is evaluated against criteria such as the processing time required to reconstruct SAR images from bistatic SAR data and the quality of the resulting images.

  19. Intra-individual lap time variation of the 400-m walk, an early mobility indicator of executive function decline in high-functioning older adults?

    PubMed

    Tian, Qu; Resnick, Susan M; Ferrucci, Luigi; Studenski, Stephanie A

    2015-12-01

    Higher intra-individual lap time variation (LTV) of the 400-m walk is cross-sectionally associated with poorer attention in older adults. Whether higher LTV predicts decline in executive function and whether the relationship is accounted for by slower walking remain unanswered. The main objective of this study was to examine the relationship between baseline LTV and longitudinal change in executive function. We used data from 347 participants aged 60 years and older (50.7% female) from the Baltimore Longitudinal Study of Aging. Longitudinal assessments of executive function were conducted between 2007 and 2013, including attention (Trails A, Digit Span Forward Test), cognitive flexibility and set shifting (Trails B, Delta TMT: Trials B minus Trials A), visuoperceptual speed (Digit Symbol Substitution Test), and working memory (Digit Span Backward Test). LTV and mean lap time (MLT) were obtained from the 400-m walk test concurrent with the baseline executive function assessment. LTV was computed as variability of lap time across ten 40-m laps based on individual trajectories. A linear mixed-effects model was used to examine LTV in relation to changes in executive function, adjusted for age, sex, education, and MLT. Higher LTV was associated with greater decline in performance on Trails B (β = 4.322, p < 0.001) and delta TMT (β = 4.230, p < 0.001), independent of covariates. Findings remained largely unchanged after further adjustment for MLT. LTV was not associated with changes in other executive function measures (all p > 0.05). In high-functioning older adults, higher LTV in the 400-m walk predicts executive function decline involving cognitive flexibility and set shifting over a long period of time. High LTV may be an early indicator of executive function decline independent of MLT.
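
    For readers unfamiliar with the measure, LTV here is essentially the spread of the ten 40-m lap times around each walker's own trend. The Python sketch below shows one plausible computation (standard deviation of residuals from a per-person linear trend); the study's exact definition may differ, and the example data are invented.

        import numpy as np

        def lap_time_variation(lap_times):
            """LTV: variability of lap times across the ten 40-m laps,
            computed here as the SD of residuals from the individual's
            linear trend (one plausible reading of 'based on individual
            trajectories')."""
            laps = np.arange(len(lap_times))
            slope, intercept = np.polyfit(laps, lap_times, 1)
            residuals = lap_times - (slope * laps + intercept)
            return residuals.std(ddof=1)

        def mean_lap_time(lap_times):
            return float(np.mean(lap_times))

        # Example: ten 40-m lap times in seconds.
        laps = np.array([38.1, 38.4, 38.2, 38.9, 38.6,
                         39.0, 38.8, 39.3, 39.1, 39.5])
        print(lap_time_variation(laps), mean_lap_time(laps))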

  20. Time parallelization of plasma simulations using the parareal algorithm

    SciTech Connect

    Samaddar, D.; Houlberg, Wayne A; Berry, Lee A; Elwasif, Wael R; Huysmans, G; Batchelor, Donald B

    2011-01-01

    Simulation of fusion plasmas involves a broad range of timescales. In magnetically confined plasmas, such as in ITER, the timescales associated with the microturbulence responsible for transport and with overall confinement vary by a factor of 10^6 to 10^9. Simulating this entire range of timescales is currently impossible, even on the most powerful supercomputers available. Space parallelization has so far been the most common approach to solving partial differential equations. Space parallelization alone has led to computational saturation for fluid codes, meaning that the wall time for computation no longer decreases linearly with an increasing number of processors. The application of the parareal algorithm to simulations of fusion plasmas ushers in a new avenue of parallelization, namely temporal parallelization. The algorithm has been successfully applied to plasma turbulence simulations, having previously been applied to other, relatively simpler problems. This work explores extending the applicability of the parareal algorithm to ITER-relevant problems, starting with a diffusion-convection model.
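
    The parareal iteration itself is compact: a cheap coarse propagator G sweeps serially, an expensive fine propagator F runs over all time windows in parallel, and the correction U[n+1] = G_new(U[n]) + F_old(U[n]) - G_old(U[n]) is repeated until convergence. A minimal serial Python sketch follows (illustrative names; the fine sweep is the part that would be distributed across processors):

        import numpy as np

        def parareal(f_coarse, f_fine, u0, n_windows, n_iters):
            """Parareal time-parallel iteration.  f_coarse and f_fine
            each advance a state across one time window."""
            U = [u0] * (n_windows + 1)
            # Initial serial coarse sweep.
            for n in range(n_windows):
                U[n + 1] = f_coarse(U[n])
            for _ in range(n_iters):
                # Fine sweep: in a real code each window runs on its own
                # processor, since it only needs last iteration's U[n].
                F = [f_fine(U[n]) for n in range(n_windows)]
                G_old = [f_coarse(U[n]) for n in range(n_windows)]
                # Serial correction sweep.
                for n in range(n_windows):
                    U[n + 1] = f_coarse(U[n]) + F[n] - G_old[n]
            return U

        # Example: dy/dt = -y; coarse = 1 Euler step per window,
        # fine = 100 Euler substeps per window.
        dt = 0.1
        coarse = lambda y: y + dt * (-y)
        fine = lambda y: y * (1 - dt / 100) ** 100
        print(parareal(coarse, fine, np.array([1.0]),
                       n_windows=10, n_iters=3)[-1])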

  1. Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT

    PubMed Central

    Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan

    2016-01-01

    This paper proposes a time-frequency algorithm based on short-time fractional order Fourier transformation (STFRFT) for identification of complicated moving targets. This algorithm, consisting of an STFRFT order-changing and quick selection method, is effective in reducing the computational load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT’s performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) to improve the probability of target recognition. PMID:27669265

  2. Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.

    PubMed

    Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan

    2016-09-24

    This paper proposes a time-frequency algorithm based on short-time fractional order Fourier transformation (STFRFT) for identification of complicated moving targets. This algorithm, consisting of an STFRFT order-changing and quick selection method, is effective in reducing the computational load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) to improve the probability of target recognition.

  3. Polynomial-time quantum algorithms for finding the linear structures of Boolean function

    NASA Astrophysics Data System (ADS)

    Wu, WanQing; Zhang, HuanGuo; Wang, HouZhen; Mao, ShaoWu

    2015-04-01

    In this paper, we present quantum algorithms to solve the linear structures of Boolean functions: "Suppose a Boolean function f: {0,1}^n -> {0,1} is given as a black box. There exists an unknown n-bit string a such that f(x ⊕ a) = f(x) for all x. We do not know the n-bit string a, except for its Hamming weight W(a). Find the string a." In the case W(a) = 1, we present an efficient quantum algorithm that solves this linear structure for general f. In the case W(a) > 1, we present an efficient quantum algorithm that solves it for most cases. So, we show that the problem can be "solved nearly" in quantum polynomial time. From this view, the quantum algorithm is more efficient than any classical algorithm.

  4. Non-divergence of stochastic discrete time algorithms for PCA neural networks.

    PubMed

    Lv, Jian Cheng; Yi, Zhang; Li, Yunxia

    2015-02-01

    Learning algorithms play an important role in the practical application of neural networks based on principal component analysis, often determining the success, or otherwise, of these applications. These algorithms must not diverge, but it is very difficult to study their convergence properties directly, because they are described by stochastic discrete time (SDT) algorithms. This brief analyzes the original SDT algorithms directly and derives invariant sets that guarantee the nondivergence of these algorithms in a stochastic environment, provided the learning parameters are selected appropriately. Our theoretical results are verified by a series of simulation examples.
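
    A concrete instance of such a stochastic discrete time PCA rule is Oja's single-neuron algorithm, w <- w + eta * y * (x - y * w) with y = w'x; the brief's point is that suitably chosen learning parameters keep the weights inside an invariant set, so the iteration cannot diverge. The Python sketch below uses Oja's rule as a stand-in for the class of SDT algorithms the paper analyzes.

        import numpy as np

        def oja_pca(X, eta=0.01, epochs=50, seed=0):
            """Oja's stochastic discrete-time rule for the first
            principal component.  A small, fixed learning rate keeps
            the weight norm in a bounded (invariant) region,
            illustrating non-divergence."""
            rng = np.random.default_rng(seed)
            w = rng.normal(size=X.shape[1])
            w /= np.linalg.norm(w)
            for _ in range(epochs):
                for x in rng.permutation(X):      # one sample at a time
                    y = w @ x
                    w += eta * y * (x - y * w)    # Oja's update
            return w / np.linalg.norm(w)

        # Example: the learned direction approximates the top eigenvector.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 3)) @ np.diag([3.0, 1.0, 0.3])
        print(oja_pca(X - X.mean(axis=0)))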

  5. A Simple Algorithm for Approximating Confidence on the Modified Allan Variance and the Time Variance

    NASA Technical Reports Server (NTRS)

    Weiss, Marc A.; Greenhall, Charles A.

    1996-01-01

    An approximating algorithm for computing the equivalent degrees of freedom of the Modified Allan Variance and its square root, the Modified Allan Deviation (MVAR and MDEV), and the Time Variance and Time Deviation (TVAR and TDEV) is presented, along with an algorithm for approximating the inverse chi-square distribution.

  6. Minimum time acceleration of aircraft turbofan engines by using an algorithm based on nonlinear programming

    NASA Technical Reports Server (NTRS)

    Teren, F.

    1977-01-01

    Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.

  7. A novel algorithm for real-time adaptive signal detection and identification

    SciTech Connect

    Sleefe, G.E.; Ladd, M.D.; Gallegos, D.E.; Sicking, C.W.; Erteza, I.A.

    1998-04-01

    This paper describes a novel digital signal processing algorithm for adaptively detecting and identifying signals buried in noise. The algorithm continually computes and updates the long-term statistics and spectral characteristics of the background noise. Using this noise model, a set of adaptive thresholds and matched digital filters are implemented to enhance and detect signals that are buried in the noise. The algorithm furthermore automatically suppresses coherent noise sources and adapts to time-varying signal conditions. Signal detection is performed in both the time-domain and the frequency-domain, thereby permitting the detection of both broad-band transients and narrow-band signals. The detection algorithm also provides for the computation of important signal features such as amplitude, timing, and phase information. Signal identification is achieved through a combination of frequency-domain template matching and spectral peak picking. The algorithm described herein is well suited for real-time implementation on digital signal processing hardware. This paper presents the theory of the adaptive algorithm, provides an algorithmic block diagram, and demonstrates its implementation and performance with real-world data. The computational efficiency of the algorithm is demonstrated through benchmarks on specific DSP hardware. The applications for this algorithm, which range from vibration analysis to real-time image processing, are also discussed.
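
    The core of such a detector is a running estimate of the background noise statistics and a threshold tied to them. A minimal time-domain Python sketch follows (exponentially weighted mean/variance and a k-sigma threshold; the parameter names are illustrative, and the real algorithm adds matched filtering and frequency-domain tests):

        import numpy as np

        def adaptive_detect(signal, alpha=0.01, k=5.0):
            """Flag samples exceeding an adaptive k-sigma threshold
            computed from exponentially weighted noise statistics."""
            mean, var = 0.0, 1.0
            detections = []
            for i, x in enumerate(signal):
                sigma = np.sqrt(var)
                if abs(x - mean) > k * sigma:
                    detections.append(i)      # signal: do not adapt on it
                else:
                    # Update the long-term noise model on noise-only samples.
                    mean = (1 - alpha) * mean + alpha * x
                    var = (1 - alpha) * var + alpha * (x - mean) ** 2
            return detections

        # Example: Gaussian noise with a transient at sample 500.
        rng = np.random.default_rng(0)
        x = rng.normal(0, 1, 1000)
        x[500:505] += 12.0
        print(adaptive_detect(x))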

  8. Fast Algorithms for Mining Co-evolving Time Series

    DTIC Science & Technology

    2011-09-01

    Multi-resolution methods: Fourier and wavelets... Time series forecasting... categorical data. Our work is based on two key properties in those co-evolving time series, dynamics and correlation. Dynamics captures the temporal... applications. A survey on time series methods: there is a lot of work on time series analysis, on indexing, dimensionality reduction, forecasting

  9. Directed Incremental Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Person, Suzette; Yang, Guowei; Rungta, Neha; Khurshid, Sarfraz

    2011-01-01

    The last few years have seen a resurgence of interest in the use of symbolic execution -- a program analysis technique developed more than three decades ago to analyze program execution paths. Scaling symbolic execution and other path-sensitive analysis techniques to large systems remains challenging despite recent algorithmic and technological advances. An alternative to solving the problem of scalability is to reduce the scope of the analysis. One approach that is widely studied in the context of regression analysis is to analyze the differences between two related program versions. While such an approach is intuitive in theory, finding efficient and precise ways to identify program differences, and characterize their effects on how the program executes, has proved challenging in practice. In this paper, we present Directed Incremental Symbolic Execution (DiSE), a novel technique for detecting and characterizing the effects of program changes. The novelty of DiSE is to combine the efficiencies of static analysis techniques to compute program difference information with the precision of symbolic execution to explore program execution paths and generate path conditions affected by the differences. DiSE is a complementary technique to other reduction or bounding techniques developed to improve symbolic execution. Furthermore, DiSE does not require analysis results to be carried forward as the software evolves -- only the source code for two related program versions is required. A case study of our implementation of DiSE illustrates its effectiveness at detecting and characterizing the effects of program changes.

  10. A Hybrid Procedural/Deductive Executive for Autonomous Spacecraft

    NASA Technical Reports Server (NTRS)

    Pell, Barney; Gamble, Edward B.; Gat, Erann; Kessing, Ron; Kurien, James; Millar, William; Nayak, P. Pandurang; Plaunt, Christian; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    The New Millennium Remote Agent (NMRA) will be the first AI system to control an actual spacecraft. The spacecraft domain places a strong premium on autonomy and requires dynamic recoveries and robust concurrent execution, all in the presence of tight real-time deadlines, changing goals, scarce resource constraints, and a wide variety of possible failures. To achieve this level of execution robustness, we have integrated a procedural executive based on generic procedures with a deductive model-based executive. A procedural executive provides sophisticated control constructs such as loops, parallel activity, locks, and synchronization which are used for robust schedule execution, hierarchical task decomposition, and routine configuration management. A deductive executive provides algorithms for sophisticated state inference and optimal failure recovery planning. The integrated executive enables designers to code knowledge via a combination of procedures and declarative models, yielding a rich modeling capability suitable to the challenges of real spacecraft control. The interface between the two executives ensures both that recovery sequences are smoothly merged into high-level schedule execution and that a high degree of reactivity is retained to effectively handle additional failures during recovery.

  11. Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed

    NASA Technical Reports Server (NTRS)

    Tian, Ye; Song, Qi; Cattafesta, Louis

    2005-01-01

    This report summarizes the activities on "Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed." The work summarized consists primarily of two parts. The first part summarizes our previous work and the extensions to adaptive ID and control algorithms. The second part concentrates on the validation of adaptive algorithms by applying them to a vibration beam test bed. Extensions to flow control problems are discussed.

  12. Circadian Rhythms in Executive Function during the Transition to Adolescence: The Effect of Synchrony between Chronotype and Time of Day

    ERIC Educational Resources Information Center

    Hahn, Constanze; Cowell, Jason M.; Wiprzycka, Ursula J.; Goldstein, David; Ralph, Martin; Hasher, Lynn; Zelazo, Philip David

    2012-01-01

    To explore the influence of circadian rhythms on executive function during early adolescence, we administered a battery of executive function measures (including a Go-Nogo task, the Iowa Gambling Task, a Self-ordered Pointing task, and an Intra/Extradimensional Shift task) to Morning-preference and Evening-preference participants (N = 80) between…

  13. The differential recruitment of short-term memory and executive functions during time, number, and length perception: An individual differences approach.

    PubMed

    Ogden, Ruth S; Samuels, Michael; Simmons, Fiona; Wearden, John; Montgomery, Catharine

    2017-01-16

    Developmental, behavioural, and neurological similarities in the processing of different magnitudes (time, number, space) support the existence of a common magnitude processing system (e.g., a theory of magnitude, ATOM). It is, however, unclear whether the recruitment of wider cognitive resources (short-term memory, STM; and executive function) during magnitude processing is similar across magnitude domains or is domain specific. The current study used an individual differences approach to examine the relationship between STM, executive function, and magnitude processing. In two experiments, participants completed number, length, and duration bisection tasks to assess magnitude processing and tasks that have been shown to assess STM span and executive component processes. The results suggest that the recruitment of STM and executive resources differed for the different magnitude domains. Duration perception was associated with access, inhibition, and STM span. Length processing was associated with updating, and number processing was associated with access to semantic memory. For duration and length, greater difficulty in the magnitude judgement task resulted in more relationships to STM and executive function. It is suggested that duration perception may be more demanding of STM and executive resources because it is represented sequentially, unlike length and number which can be represented nonsequentially.

  14. Numerical stability analysis of the pseudo-spectral analytical time-domain PIC algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc; Haber, Irving

    2014-02-01

    The pseudo-spectral analytical time-domain (PSATD) particle-in-cell (PIC) algorithm solves the vacuum Maxwell's equations exactly, has no Courant time-step limit (as conventionally defined), and offers substantial flexibility in plasma and particle beam simulations. It is, however, not free of the usual numerical instabilities, including the numerical Cherenkov instability, when applied to relativistic beam simulations. This paper derives and solves the numerical dispersion relation for the PSATD algorithm and compares the results with corresponding behavior of the more conventional pseudo-spectral time-domain (PSTD) and finite difference time-domain (FDTD) algorithms. In general, PSATD offers superior stability properties over a reasonable range of time steps. More importantly, one version of the PSATD algorithm, when combined with digital filtering, is almost completely free of the numerical Cherenkov instability for time steps (scaled to the speed of light) comparable to or smaller than the axial cell size.

  15. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    NASA Astrophysics Data System (ADS)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-05-01

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank-Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. Subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
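
    Time step subcycling of the sort evaluated here advances fast-moving degrees of freedom with several small substeps inside each global step, while slow ones take the full step. The generic one-dimensional Python sketch below uses forward Euler for brevity (the paper uses a higher-order explicit method, and the threshold and names are illustrative):

        import numpy as np

        def step_with_subcycling(x, velocity, dt, v_fast=1.0, max_sub=16):
            """Advance state x by one global step dt with forward Euler.
            Components moving faster than v_fast take proportionally
            more, smaller substeps, so dt need not shrink globally."""
            x = np.asarray(x, dtype=float).copy()
            speed = np.abs(velocity(x))
            n_sub = np.clip(np.ceil(speed / v_fast), 1, max_sub).astype(int)
            for i in range(len(x)):
                h = dt / n_sub[i]
                for _ in range(n_sub[i]):
                    x[i] += h * velocity(x)[i]   # substep for component i
            return x

        # Example: components with very different rates.
        vel = lambda x: np.array([-0.1 * x[0], -30.0 * x[1]])
        x = np.array([1.0, 1.0])
        for _ in range(10):
            x = step_with_subcycling(x, vel, dt=0.1, v_fast=0.5)
        print(x)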

  16. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGES

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  17. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    SciTech Connect

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  18. The effects of learning a new algorithm on asymptotic accuracy and execution speed in old age: a reanalysis.

    PubMed

    Verhaeghen, P; Kliegl, R

    2000-12-01

    Time-accuracy curves were derived for 16 younger and 19 older persons who participated in a study on training in the method of loci (Baltes & Kliegl, 1992). The effects of instruction were to immediately and permanently boost asymptotic performance and initially slow down the rate of approach to the asymptote. After extensive practice, rate of approach returned to the initial fast level. Age differences were found in both asymptotic performance and rate of approach. The effects of instruction and practice, however, were similar in younger and older adults, but older adults needed 1 session of instruction more than younger adults did before the intervention showed its full effect.

  19. Design and FPGA implementation of real-time automatic image enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Dong, GuoWei; Hou, ZuoXun; Tang, Qi; Pan, Zheng; Li, Xin

    2016-11-01

    In order to improve image processing quality and boost processing rate, this paper proposes a real-time automatic image enhancement algorithm. It is based on the histogram equalization algorithm and the piecewise linear enhancement algorithm, and it calculates the relationship between the histogram and the piecewise linear function by analyzing the histogram distribution for adaptive image enhancement. Furthermore, the corresponding FPGA processing modules are designed to implement the method. In particular, high-performance parallel pipelined technology and the inherent parallel processing ability of the modules receive special attention to ensure the real-time processing ability of the complete system. Simulations and experiments show that the FPGA-based hardware implementation of the algorithm has low hardware cost, high real-time performance, and good processing performance in different scenes. The algorithm can effectively improve image quality and has broad prospects in the image processing field.
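
    The software equivalent of the described pipeline is straightforward: build the histogram, derive the cumulative distribution, and map gray levels through it, optionally combined with a piecewise-linear stretch. A minimal NumPy sketch of both halves follows (the FPGA pipelining is of course not captured here, and the percentile bounds are an illustrative choice):

        import numpy as np

        def equalize_histogram(img):
            """Global histogram equalization for an 8-bit grayscale image."""
            hist = np.bincount(img.ravel(), minlength=256)
            cdf = hist.cumsum()
            cdf_min = cdf[cdf > 0][0]
            # Map each gray level through the normalized CDF.
            lut = np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255)
            lut = np.clip(lut, 0, 255).astype(np.uint8)
            return lut[img]

        def piecewise_linear_stretch(img, low, high):
            """Linear contrast stretch between chosen histogram bounds."""
            out = (img.astype(float) - low) / max(high - low, 1) * 255
            return np.clip(out, 0, 255).astype(np.uint8)

        # Example: stretch between the 1st and 99th percentile, then equalize.
        rng = np.random.default_rng(0)
        img = rng.integers(80, 160, size=(64, 64), dtype=np.uint8)
        lo, hi = np.percentile(img, [1, 99])
        print(equalize_histogram(piecewise_linear_stretch(img, lo, hi)).mean())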

  20. Hardware acceleration based connected component labeling algorithm in real-time ATR system

    NASA Astrophysics Data System (ADS)

    Zhao, Fei; Zhang, Zhi-yong

    2013-03-01

    Aiming at the real-time processing requirements of a Real-Time Automatic Target Recognition (RTATR) system, this paper presents a hardware-accelerated two-scan connected-component labeling algorithm. The merits of conventional pixel-based and run-based algorithms are combined: in the first scan, pixels are the scan unit while lines are the label unit, and label equivalences are recorded while scanning the image pixel by pixel. Lines with provisional labels are output as the connected-component labeling result. The union-find algorithm is then used to resolve label equivalences and find the representative label for each provisional label after the first scan. The labels are replaced in the second scan to complete the connected-component labeling. Experiments on an RTATR platform demonstrate that the hardware-accelerated implementation of the algorithm achieves high performance and efficiency while consuming few resources. The implementation of the proposed algorithm meets the demands of real-time processing and offers good practicability.
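
    In software form, the two-scan labeling with union-find that the paper accelerates in hardware looks roughly like the following Python sketch (4-connectivity on a binary image; a compact pixel-based version rather than the paper's run/line-based hardware variant):

        import numpy as np

        def two_scan_label(binary):
            """Two-scan connected-component labeling with union-find
            (4-connectivity).  The first scan assigns provisional labels
            and records equivalences; the second scan replaces them
            with representative labels."""
            parent = [0]                        # union-find forest; 0 = background

            def find(a):
                while parent[a] != a:
                    parent[a] = parent[parent[a]]   # path halving
                    a = parent[a]
                return a

            def union(a, b):
                ra, rb = find(a), find(b)
                parent[max(ra, rb)] = min(ra, rb)

            h, w = binary.shape
            labels = np.zeros((h, w), dtype=int)
            for y in range(h):
                for x in range(w):
                    if not binary[y, x]:
                        continue
                    up = labels[y - 1, x] if y else 0
                    left = labels[y, x - 1] if x else 0
                    if up == 0 and left == 0:
                        parent.append(len(parent))  # new provisional label
                        labels[y, x] = len(parent) - 1
                    else:
                        labels[y, x] = max(up, left)
                        if up and left and up != left:
                            union(up, left)         # record equivalence
            # Second scan: replace provisional labels by representatives.
            for y in range(h):
                for x in range(w):
                    if labels[y, x]:
                        labels[y, x] = find(labels[y, x])
            return labels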

  1. A real-time implementation of an advanced sensor failure detection, isolation, and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.; Merrill, W. C.

    1983-01-01

    A sensor failure detection, isolation, and accommodation algorithm was developed which incorporates analytic sensor redundancy through software. This algorithm was implemented in a high level language on a microprocessor based controls computer. Parallel processing and state-of-the-art 16-bit microprocessors are used along with efficient programming practices to achieve real-time operation.

  2. Real-time robot deliberation by compilation and monitoring of anytime algorithms

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo

    1994-01-01

    Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off between decision quality and the computational resources used to produce it. Anytime algorithms were designed to offer such a trade-off. A model of the compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and provides independent real-time robotic systems that automatically adjust resource allocation to yield optimum performance.

  3. An efficient algorithm for time propagation as applied to linearized augmented plane wave method

    NASA Astrophysics Data System (ADS)

    Dewhurst, J. K.; Krieger, K.; Sharma, S.; Gross, E. K. U.

    2016-12-01

    An algorithm for time propagation of the time-dependent Kohn-Sham equations is presented. The algorithm is based on dividing the time evolution into small steps and assuming that the Hamiltonian is constant over each step. This allows the time-propagating Kohn-Sham wave function to be expanded in the instantaneous eigenstates of the Hamiltonian. The method is particularly efficient for basis sets which allow for a full diagonalization of the Hamiltonian matrix. One such basis is the linearized augmented plane waves. In this case we find it is sufficient to perform the evolution as a second-variational step alone, so long as a sufficient number of first-variational states is used. The algorithm is tested not just for non-magnetic but also for fully non-collinear magnetic systems. We show that even for delicate properties, like the magnetization density, fairly large time-step sizes can be used, demonstrating the stability and efficiency of the algorithm.
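
    The propagation step described (diagonalizing the instantaneous Hamiltonian and expanding the wave function in its eigenstates) amounts to psi(t+dt) = V exp(-i E dt) V^dagger psi(t) for each small step over which H is taken constant. A dense-matrix Python sketch of that single step follows; it is illustrative only, since the paper performs this within the LAPW second-variational machinery.

        import numpy as np

        def propagate(psi, H, dt, hbar=1.0):
            """One step of psi(t+dt) = V exp(-i E dt / hbar) V^dag psi(t),
            assuming H (dense Hermitian) is constant over the step."""
            E, V = np.linalg.eigh(H)            # full diagonalization
            phases = np.exp(-1j * E * dt / hbar)
            return V @ (phases * (V.conj().T @ psi))

        # Example: two-level system; the norm is conserved exactly.
        H = np.array([[0.0, 0.5], [0.5, 1.0]])
        psi = np.array([1.0, 0.0], dtype=complex)
        for _ in range(100):
            psi = propagate(psi, H, dt=0.05)
        print(np.linalg.norm(psi))              # ~1.0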

  4. Towards Run-time Assurance of Advanced Propulsion Algorithms

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy

    2014-01-01

    This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.

  5. Executive dysfunction in Korsakoff's syndrome: Time to revise the DSM criteria for alcohol-induced persisting amnestic disorder?

    PubMed

    Van Oort, Roos; Kessels, Roy P C

    2009-01-01

    Objective. This study examines the profile of executive dysfunction in Korsakoff's syndrome. There is accumulating evidence of executive deficits in Korsakoff patients that may greatly affect activities of daily living. However, the DSM-IV criteria for "alcohol-induced persisting amnestic disorder" do not take this into account. In addition, existing studies have failed to determine the type of executive deficits in this syndrome. Methods. Executive functioning was assessed in 20 Korsakoff patients using the Behavioural Assessment of the Dysexecutive Syndrome (BADS), an ecologically valid neuropsychological assessment battery consisting of various subtests that assess planning, organisation, inhibition, shifting, cognitive estimation and monitoring. Results. Sixteen patients (80%) had executive deficits, i.e. impairments on at least one BADS subtest compared to a normative control group. Overall, the profile is characterized by planning deficits on unstructured tasks. Conclusions. Next to amnesia, executive deficits are a prominent characteristic of cognitive impairment in Korsakoff patients. It is argued that the new DSM criteria should consider incorporating executive dysfunction as an important feature of alcohol-induced persistent cognitive disorder.

  6. A realization of semi-global matching stereo algorithm on GPU for real-time application

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Chen, He-ping

    2011-11-01

    Real-time stereo vision systems have many applications in fields such as automotive systems and robotics. According to the Middlebury Stereo Database, Semi-Global Matching (SGM) is commonly regarded as the most efficient algorithm among the top-performing stereo algorithms. Recently, the most effective real-time implementations of this algorithm have been based on reconfigurable hardware (FPGAs). However, with the development of general-purpose computation on graphics processing units, an effective real-time implementation on general-purpose PCs can be expected. In this paper, a real-time SGM realization on a graphics processing unit (GPU) is introduced. CUDA, a general-purpose parallel computing architecture introduced by NVIDIA in November 2006, has been used to realize the algorithm. Some important optimizations for CUDA and Fermi (the latest architecture of NVIDIA GPUs at the time) are also introduced in this paper.
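
    The heart of SGM is the per-path cost recursion L_r(p,d) = C(p,d) + min(L_r(p-r,d), L_r(p-r,d±1)+P1, min_k L_r(p-r,k)+P2) - min_k L_r(p-r,k), aggregated over several scan directions. A single-direction (left-to-right) NumPy sketch of that recursion follows; the GPU version parallelizes over rows and disparities, and the P1/P2 values here are illustrative.

        import numpy as np

        def sgm_aggregate_lr(cost, P1=10, P2=120):
            """Aggregate a matching-cost volume cost[row, col, disp]
            along the left-to-right path per the SGM recursion."""
            h, w, d = cost.shape
            L = np.empty_like(cost, dtype=np.float64)
            L[:, 0] = cost[:, 0]
            for x in range(1, w):
                prev = L[:, x - 1]                       # shape (h, d)
                min_prev = prev.min(axis=1, keepdims=True)
                same = prev
                plus = np.pad(prev[:, 1:], ((0, 0), (0, 1)),
                              constant_values=np.inf) + P1   # d+1 neighbor
                minus = np.pad(prev[:, :-1], ((0, 0), (1, 0)),
                               constant_values=np.inf) + P1  # d-1 neighbor
                jump = min_prev + P2                         # any disparity
                L[:, x] = (cost[:, x]
                           + np.minimum.reduce([same, plus, minus, jump])
                           - min_prev)
            return L

        # Example: random cost volume; disparity = argmin of aggregated cost.
        rng = np.random.default_rng(0)
        disp = sgm_aggregate_lr(rng.random((4, 8, 16))).argmin(axis=2)
        print(disp)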

  7. Time-oriented hierarchical method for computation of principal components using subspace learning algorithm.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2004-10-01

    Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modifying one of the most famous PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On the faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On the slower scale, output neurons compete for the fulfillment of their "own interests": on this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper it is briefly analyzed how (and why) the time-oriented hierarchical method can be used to transform any of the existing neural network PSA methods into a PCA method.

  8. A compensatory algorithm for the slow-down effect on constant-time-separation approaches

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.

    1991-01-01

    In seeking methods to improve airport capacity, the question arose as to whether an electronic display could provide information which would enable the pilot to be responsible for self-separation under instrument conditions to allow for the practical implementation of reduced separation, multiple glide path approaches. A time based, closed loop algorithm was developed and simulator validated for in-trail (one aircraft behind the other) approach and landing. The algorithm was designed to reduce the effects of approach speed reduction prior to landing for the trailing aircraft as well as the dispersion of the interarrival times. The operational task for the validation was an instrument approach to landing while following a single lead aircraft on the same approach path. The desired landing separation was 60 seconds for these approaches. An open loop algorithm, previously developed, was used as a basis for comparison. The results showed that relative to the open loop algorithm, the closed loop one could theoretically provide for a 6 pct. increase in runway throughput. Also, the use of the closed loop algorithm did not affect the path tracking performance and pilot comments indicated that the guidance from the closed loop algorithm would be acceptable from an operational standpoint. From these results, it is concluded that by using a time based, closed loop spacing algorithm, precise interarrival time intervals may be achievable with operationally acceptable pilot workload.

  9. Higher-Order, Space-Time Adaptive Finite Volume Methods: Algorithms, Analysis and Applications

    SciTech Connect

    Minion, Michael

    2014-04-29

    The four main goals outlined in the proposal for this project were: 1. Investigate the use of higher-order (in space and time) finite-volume methods for fluid flow problems. 2. Explore the embedding of iterative temporal methods within traditional block-structured AMR algorithms. 3. Develop parallel in time methods for ODEs and PDEs. 4. Work collaboratively with the Center for Computational Sciences and Engineering (CCSE) at Lawrence Berkeley National Lab towards incorporating new algorithms within existing DOE application codes.

  10. GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.

    PubMed

    Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim

    2016-08-01

    In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and we optimized the beat classification algorithm with K-Nearest Neighbors (K-NN). To support high-performance beat classification on the system, we parallelized the beat classification algorithm with CUDA to execute it on virtualized GPU devices in the cloud system. The MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous studies, while our algorithm shows 2.5 times faster execution time compared to a CPU-only detection algorithm.

  11. Parallel algorithms for simulating continuous time Markov chains

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
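
    Uniformization converts a CTMC with generator Q into a discrete-time chain P = I + Q/Lambda (with Lambda at least the largest exit rate) subordinated to a Poisson process of rate Lambda, which is what gives the parallel methods a common event-time structure to synchronize on. A serial Python sketch of the closely related transient-distribution computation by uniformization (illustrative names, not the paper's parallel machinery):

        import numpy as np

        def uniformized_transient(Q, p0, t, tol=1e-12):
            """Transient distribution p(t) of a CTMC via uniformization:
            p(t) = sum_k Poisson(Lambda*t; k) * p0 @ P^k, P = I + Q/Lambda."""
            Lam = max(-Q.diagonal().min(), 1e-30)   # uniformization rate
            P = np.eye(Q.shape[0]) + Q / Lam
            weight = np.exp(-Lam * t)               # Poisson term, k = 0
            term = p0.copy()
            result = weight * term
            k, accumulated = 0, weight
            while 1.0 - accumulated > tol and k < 10_000:
                k += 1
                term = term @ P                     # p0 @ P^k, built iteratively
                weight *= Lam * t / k               # Poisson pmf recursion
                result += weight * term
                accumulated += weight
            return result

        # Example: two-state birth-death chain approaching steady state.
        Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
        print(uniformized_transient(Q, np.array([1.0, 0.0]), t=5.0))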

  12. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    PubMed

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as stock markets, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for the users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which can segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE in segmenting real-time stream datasets from ChinaFLUX sensor network data streams.

  13. Multiresolution constrained least-squares algorithm for direct estimation of time activity curves from dynamic ECT projection data

    NASA Astrophysics Data System (ADS)

    Maltz, Jonathan S.

    2000-06-01

    We present an algorithm which is able to reconstruct dynamic emission computed tomography (ECT) image series directly from inconsistent projection data that have been obtained using a rotating camera. By finding a reduced dimension time-activity curve (TAC) basis with which all physiologically feasible TAC's in an image may be accurately approximated, we are able to recast this large non-linear problem as one of constrained linear least squares (CLLSQ) and to reduce parameter vector dimension by a factor of 20. Implicit is the assumption that each pixel may be modeled using a single compartment model, as is typical in 99mTc teboroxime wash-in wash-out studies; and that the blood input function is known. A disadvantage of the change of basis is that TAC non-negativity is no longer ensured. As a consequence, non-negativity constraints must appear in the CLLSQ formulation. A warm-start multiresolution approach is proposed, whereby the problem is initially solved at a resolution below that finally desired. At the next iteration, the number of reconstructed pixels is increased and the solution of the lower resolution problem is then used to warm-start the estimation of the higher resolution kinetic parameters. We demonstrate the algorithm by applying it to dynamic myocardial slice phantom projection data at resolutions of 16 × 16 and 32 × 32 pixels. We find that the warm-start method employed leads to computational savings of between 2 and 4 times when compared to cold start execution times. A 20% RMS error in the reconstructed TAC's is achieved for a total number of detected sinogram counts of 1 × 10^5 for the 16 × 16 problem and at 1 × 10^6 counts for the 32 × 32 grid. These errors are 1.5-2 times greater than those obtained in conventional (consistent projection) SPECT imaging at similar count levels.

  14. Many roads to synchrony: Natural time scales and their algorithms

    NASA Astrophysics Data System (ADS)

    James, Ryan G.; Mahoney, John R.; Ellison, Christopher J.; Crutchfield, James P.

    2014-04-01

    We consider two important time scales—the Markov and cryptic orders—that monitor how an observer synchronizes to a finitary stochastic process. We show how to compute these orders exactly and that they are most efficiently calculated from the ɛ-machine, a process's minimal unifilar model. Surprisingly, though the Markov order is a basic concept from stochastic process theory, it is not a probabilistic property of a process. Rather, it is a topological property and, moreover, it is not computable from any finite-state model other than the ɛ-machine. Via an exhaustive survey, we close by demonstrating that infinite Markov and infinite cryptic orders are a dominant feature in the space of finite-memory processes. We draw out the roles played in statistical mechanical spin systems by these two complementary length scales.

  15. A contourlet transform based algorithm for real-time video encoding

    NASA Astrophysics Data System (ADS)

    Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris

    2012-06-01

    In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to

  16. Multiprocessor execution of functional programs

    SciTech Connect

    Goldberg, B.F.

    1988-01-01

    Functional languages have recently gained attention as vehicles for programming in a concise and elegant manner. In addition, it has been suggested that functional programming provides a natural methodology for programming multiprocessor computers. This dissertation demonstrates that multiprocessor execution of functional programs is feasible, and results in a significant reduction in their execution times. Two implementations of the functional language ALFL were built on commercially available multiprocessors. Alfalfa is an implementation on the Intel iPSC hypercube multiprocessor, and Buckwheat is an implementation on the Encore Multimax shared-memory multiprocessor. Each implementation includes a compiler that performs automatic decomposition of ALFL programs. The compiler is responsible for detecting the inherent parallelism in a program, and decomposing the program into a collection of tasks, called serial combinators, that can be executed in parallel. One of the primary goals of the compiler is to generate serial combinators exhibiting the coarsest granularity possible without sacrificing useful parallelism. This dissertation describes the algorithms used by the compiler to analyze, decompose, and optimize functional programs. The abstract machine model supported by Alfalfa and Buckwheat is called heterogeneous graph reduction, which is a hybrid of graph reduction and conventional stack-oriented execution. This model supports parallelism, lazy evaluation, and higher order functions while at the same time making efficient use of the processors in the system. The Alfalfa and Buckwheat run-time systems support dynamic load balancing, interprocessor communication (if required) and storage management. A large number of experiments were performed on Alfalfa and Buckwheat for a variety of programs. The results of these experiments, as well as the conclusions drawn from them, are presented.

  17. Algorithmic recognition of anomalous time intervals in sea-level observations

    NASA Astrophysics Data System (ADS)

    Getmanov, V. G.; Gvishiani, A. D.; Kamaev, D. A.; Kornilov, A. S.

    2016-03-01

    The problem of the algorithmic recognition of anomalous time intervals in the time series of the sea-level observations conducted by the Russian Tsunami Warning Survey (RTWS) is considered. The normal and anomalous sea-level observations are described. Polyharmonic models describing the sea-level fluctuations on short time intervals are constructed, and sea-level forecasting based on these models is suggested. An algorithm for the recognition of anomalous time intervals is developed, and its performance is tested on real RTWS data.

  18. The use of knowledge-based Genetic Algorithm for starting time optimisation in a lot-bucket MRP

    NASA Astrophysics Data System (ADS)

    Ridwan, Muhammad; Purnomo, Andi

    2016-01-01

    In production planning, Material Requirement Planning (MRP) is usually developed based on a time-bucket system, in which a period in the MRP represents a time interval, usually a week. MRP has been successfully implemented in Make To Stock (MTS) manufacturing, where production activity must be started before customer demand is received. However, to be implemented successfully in Make To Order (MTO) manufacturing, the conventional MRP requires modification to bring it in line with the real situation. In MTO manufacturing, the delivery schedule to customers is defined strictly and must be fulfilled in order to increase customer satisfaction. On the other hand, a company prefers to keep a constant number of workers, hence the production lot size should be constant as well. Since a bucket in the conventional MRP system represents time, usually a week, a strict delivery schedule cannot be accommodated. Fortunately, there is a modified time-bucket MRP system, called the lot-bucket MRP system, proposed by Casimir in 1999. In the lot-bucket MRP system, a bucket represents a lot, and the lot size is preferably constant. The time to finish each lot can vary depending on the due date of the lot. The starting time of a lot must be determined so that every lot has a reasonable production time. So far there is no formal method to determine the optimum starting time in the lot-bucket MRP system. A trial-and-error process is usually used, but it sometimes causes several lots to have very short production times, making the lot-bucket MRP infeasible to execute. This paper presents the use of a Genetic Algorithm (GA) for optimisation of starting time in a lot-bucket MRP system. Even though GA is well known as a powerful searching algorithm, improvement is still required to increase the possibility of GA finding the optimum solution in a shorter time. A knowledge-based system has been embedded in the proposed GA as the improvement effort, and it is proven that the

  19. Dwell time algorithm for multi-mode optimization in manufacturing large optical mirrors

    NASA Astrophysics Data System (ADS)

    Liu, Zhenyu

    2014-08-01

    CCOS (Computer Controlled Optical Surfacing) is one of the most important methods for manufacturing optical surfaces. By controlling the dwell time of a polishing tool on the mirror, we can obtain the desired material removal. As optical surfaces become larger, the traditional CCOS method cannot meet the demand of manufacturing mirrors with higher efficiency and precision. This paper presents a new method using multi-mode optimization. By calculating the dwell time maps of different tools in one optimization cycle, the larger tool and the smaller one have complementary advantages, yielding a global optimization over multiple tools and processing cycles. To calculate the dwell time of different tools simultaneously, we use a multi-mode dwell time algorithm based on matrix calculation. Simulation experiments with this algorithm show that the multi-mode optimization algorithm can improve efficiency while maintaining good precision.
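
    The matrix formulation mentioned typically stacks the influence functions of all tools into one matrix A so that A·t equals the desired removal map, then solves for a non-negative dwell-time vector t covering every tool at once. A small Python sketch using a non-negative least-squares solve follows (it assumes SciPy is available, the data are random placeholders, and the paper's actual optimization may differ):

        import numpy as np
        from scipy.optimize import nnls

        def multi_tool_dwell_time(influence_large, influence_small, removal):
            """Solve for dwell times of two tools simultaneously: stack
            the per-tool influence matrices (removal per unit dwell at
            each surface point) and find non-negative dwell times that
            minimize the residual removal error."""
            A = np.hstack([influence_large, influence_small])
            t, residual = nnls(A, removal)
            n = influence_large.shape[1]
            return t[:n], t[n:], residual

        # Example with random influence matrices (100 surface points,
        # 40 dwell positions for the large tool, 60 for the small one).
        rng = np.random.default_rng(0)
        A_large = rng.random((100, 40))
        A_small = 0.3 * rng.random((100, 60))
        removal = rng.random(100)
        t_large, t_small, err = multi_tool_dwell_time(A_large, A_small, removal)
        print(err, t_large.max(), t_small.max())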

  20. Two neural network algorithms for designing optimal terminal controllers with open final time

    NASA Technical Reports Server (NTRS)

    Plumer, Edward S.

    1992-01-01

    Multilayer neural networks, trained by the backpropagation through time algorithm (BPTT), have been used successfully as state-feedback controllers for nonlinear terminal control problems. Current BPTT techniques, however, are not able to deal systematically with open final-time situations such as minimum-time problems. Two approaches which extend BPTT to open final-time problems are presented. In the first, a neural network learns a mapping from initial-state to time-to-go. In the second, the optimal number of steps for each trial run is found using a line-search. Both methods are derived using Lagrange multiplier techniques. This theoretical framework is used to demonstrate that the derived algorithms are direct extensions of forward/backward sweep methods used in N-stage optimal control. The two algorithms are tested on a Zermelo problem and the resulting trajectories compare favorably to optimal control results.

  1. TaDb: A time-aware diffusion-based recommender algorithm

    NASA Astrophysics Data System (ADS)

    Li, Wen-Jun; Xu, Yuan-Yuan; Dong, Qiang; Zhou, Jun-Lin; Fu, Yan

    2015-02-01

    Traditional recommender algorithms usually employ the early and recent records indiscriminately, which overlooks the change of user interests over time. In this paper, we show that the interests of a user remain stable in a short-term interval and drift during a long-term period. Based on this observation, we propose a time-aware diffusion-based (TaDb) recommender algorithm, which assigns different temporal weights to the leading links existing before the target user's collection and the following links appearing after that in the diffusion process. Experiments on four real datasets, Netflix, MovieLens, FriendFeed and Delicious show that TaDb algorithm significantly improves the prediction accuracy compared with the algorithms not considering temporal effects.

  2. Research on Prediction Model of Time Series Based on Fuzzy Theory and Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Xiao-qin, Wu

    Fuzzy theory is one of the newly adduced self-adaptive strategies, applied to dynamically adjust the parameters of genetic algorithms for the purpose of enhancing performance. In this paper, financial time series analysis and forecasting serve as the main case study within a soft computing framework that focuses on integrating fuzzy theory and genetic algorithms (FGA). A financial time series forecasting model based on fuzzy theory and genetic algorithms was built, taking the ShangZheng index as an example. The experimental results show that FGA performs much better than a BP neural network, not only in precision but also in searching speed. The hybrid algorithm has strong feasibility and superiority.

  3. Increasing lateralized motor activity in younger and older adults using Real-time fMRI during executed movements.

    PubMed

    Neyedli, Heather F; Sampaio-Baptista, Cassandra; Kirkman, Matthew A; Havard, David; Lührs, Michael; Ramsden, Katie; Flitney, David D; Clare, Stuart; Goebel, Rainer; Johansen-Berg, Heidi

    2017-02-15

    Neurofeedback training involves presenting an individual with a representation of their brain activity and instructing them to alter the activity using the feedback. One potential application of neurofeedback is for patients to alter neural activity to improve function. For example, there is evidence that greater laterality of movement-related activity is associated with better motor outcomes after stroke, so using neurofeedback to increase laterality may provide a novel route for improving outcomes. However, we must demonstrate that individuals can control relevant neurofeedback signals. Here, we performed two proof-of-concept studies, one in younger (median age: 26 years) and one in older healthy volunteers (median age: 67.5 years). The purpose was to determine if participants could manipulate laterality of activity between the motor cortices using real-time fMRI neurofeedback while performing simple hand movements. The younger cohort trained using their left and right hands; the older group trained using their left hand only. In both studies participants in a neurofeedback group were able to achieve more lateralized activity than those in a sham group (younger adults: F(1,23)=4.37, p<0.05; older adults: F(1,15)=9.08, p<0.01). Moreover, the younger cohort was able to maintain the lateralized activity for right hand movements once neurofeedback was removed. The older cohort did not maintain lateralized activity upon feedback removal, with the limitation being that they did not train with their right hand. The results provide evidence that neurofeedback can be used with executed movements to promote lateralized brain activity and thus is amenable for testing as a therapeutic intervention for patients following stroke.

  4. Towards Real-Time Detection of Gait Events on Different Terrains Using Time-Frequency Analysis and Peak Heuristics Algorithm

    PubMed Central

    Zhou, Hui; Ji, Ning; Samuel, Oluwarotimi Williams; Cao, Yafei; Zhao, Zheyi; Chen, Shixiong; Li, Guanglin

    2016-01-01

    Real-time detection of gait events can be applied as a reliable input to control drop foot correction devices and lower-limb prostheses. Among the different sensors used to acquire the signals associated with walking for gait event detection, the accelerometer is considered a preferable sensor due to its convenience of use, small size, low cost, reliability, and low power consumption. Based on the acceleration signals, different algorithms have been proposed to detect toe off (TO) and heel strike (HS) gait events in previous studies. While these algorithms could achieve a relatively reasonable performance in gait event detection, they suffer from limitations such as poor real-time performance and reduced reliability on stair ascent and descent terrains. In this study, a new algorithm is proposed to detect the gait events on three walking terrains in real time, based on the analysis of acceleration jerk signals with a time-frequency method to obtain gait parameters, and then the determination of the peaks of jerk signals using peak heuristics. The performance of the newly proposed algorithm was evaluated with eight healthy subjects when they were walking on level ground, upstairs, and downstairs. Our experimental results showed that the mean F1 scores of the proposed algorithm were above 0.98 for HS event detection and 0.95 for TO event detection on the three terrains. This indicates that the current algorithm would be robust and accurate for gait event detection on different terrains. Findings from the current study suggest that the proposed method may be a preferable option in some applications such as drop foot correction devices and leg prostheses. PMID:27706086
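    A minimal sketch of the jerk-plus-peak-heuristics idea follows: differentiate the acceleration to obtain jerk, then select salient peaks as candidate gait events. The threshold and refractory period are illustrative assumptions; the paper's time-frequency stage is omitted.

```python
# Candidate gait events from peaks of the jerk signal.
import numpy as np
from scipy.signal import find_peaks

def detect_gait_events(acc, fs, min_step_time=0.4):
    """acc: 1-D shank/foot acceleration (m/s^2); fs: sampling rate (Hz).
    Returns candidate event indices from peaks of the jerk signal."""
    jerk = np.gradient(acc) * fs                  # numerical derivative
    distance = int(min_step_time * fs)            # refractory period
    height = 0.5 * np.std(jerk)                   # adaptive threshold
    peaks, _ = find_peaks(np.abs(jerk), height=height, distance=distance)
    return peaks
```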

  5. Applications and development of new algorithms for displacement analysis using InSAR time series

    NASA Astrophysics Data System (ADS)

    Osmanoglu, Batuhan

    Time series analysis of Synthetic Aperture Radar Interferometry (InSAR) data has become an important scientific tool for monitoring and measuring the displacement of Earth's surface due to a wide range of phenomena, including earthquakes, volcanoes, landslides, changes in ground water levels, and wetlands. Time series analysis is a product of interferometric phase measurements, which become ambiguous when the observed motion is larger than half of the radar wavelength. Thus, phase observations must first be unwrapped in order to obtain physically meaningful results. Persistent Scatterer Interferometry (PSI), Stanford Method for Persistent Scatterers (StaMPS), Small Baseline Subset (SBAS) and Small Temporal Baseline Subset (STBAS) algorithms solve for this ambiguity using a series of spatio-temporal unwrapping algorithms and filters. In this dissertation, I improve upon current phase unwrapping algorithms, and apply the PSI method to study subsidence in Mexico City. PSI was used to obtain unwrapped deformation rates in Mexico City (Chapter 3), where ground water withdrawal in excess of natural recharge causes subsurface, clay-rich sediments to compact. This study is based on 23 satellite SAR scenes acquired between January 2004 and July 2006. Time series analysis of the data reveals a maximum line-of-sight subsidence rate of 300 mm/yr at a high enough resolution that individual subsidence rates for large buildings can be determined. Differential motion and related structural damage along an elevated metro rail were evident from the results. Comparison of PSI subsidence rates with data from permanent GPS stations indicates root mean square (RMS) agreement of 6.9 mm/yr, about the level expected based on joint data uncertainty. The Mexico City results suggest negligible recharge, implying continuing degradation and loss of the aquifer in the third largest metropolitan area in the world. Chapters 4 and 5 illustrate the link between time series analysis and three

  6. An algorithm for a single machine scheduling problem with sequence dependent setup times and scheduling windows

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1975-01-01

    An enumeration algorithm is presented for solving a scheduling problem similar to the single machine job shop problem with sequence dependent setup times. The scheduling problem differs from the job shop problem in two ways. First, its objective is to select an optimum subset of the available tasks to be performed during a fixed period of time. Secondly, each task scheduled is constrained to occur within its particular scheduling window. The algorithm is currently being used to develop typical observational timelines for a telescope that will be operated in earth orbit. Computational times associated with timeline development are presented.

  7. Monte Carlo algorithm for efficient simulation of time-resolved fluorescence in layered turbid media.

    PubMed

    Liebert, A; Wabnitz, H; Zołek, N; Macdonald, R

    2008-08-18

    We present an efficient Monte Carlo algorithm for simulation of time-resolved fluorescence in a layered turbid medium. It is based on the propagation of excitation and fluorescence photon bundles and the assumption of equal reduced scattering coefficients at the excitation and emission wavelengths. In addition to distributions of times of arrival of fluorescence photons at the detector, 3-D spatial generation probabilities were calculated. The algorithm was validated by comparison with the analytical solution of the diffusion equation for time-resolved fluorescence from a homogeneous semi-infinite turbid medium. It was applied to a two-layered model mimicking intra- and extracerebral compartments of the adult human head.

  8. Executive Dysfunction

    PubMed Central

    Rabinovici, Gil D.; Stephens, Melanie L.; Possin, Katherine L.

    2015-01-01

    Purpose of Review: Executive functions represent a constellation of cognitive abilities that drive goal-oriented behavior and are critical to the ability to adapt to an ever-changing world. This article provides a clinically oriented approach to classifying, localizing, diagnosing, and treating disorders of executive function, which are pervasive in clinical practice. Recent Findings: Executive functions can be split into four distinct components: working memory, inhibition, set shifting, and fluency. These components may be differentially affected in individual patients and act together to guide higher-order cognitive constructs such as planning and organization. Specific bedside and neuropsychological tests can be applied to evaluate components of executive function. While dysexecutive syndromes were first described in patients with frontal lesions, intact executive functioning relies on distributed neural networks that include not only the prefrontal cortex, but also the parietal cortex, basal ganglia, thalamus, and cerebellum. Executive dysfunction arises from injury to any of these regions, their white matter connections, or neurotransmitter systems. Dysexecutive symptoms therefore occur in most neurodegenerative diseases and in many other neurologic, psychiatric, and systemic illnesses. Management approaches are patient specific and should focus on treatment of the underlying cause in parallel with maximizing patient function and safety via occupational therapy and rehabilitation. Summary: Executive dysfunction is extremely common in patients with neurologic disorders. Diagnosis and treatment hinge on familiarity with the clinical components and neuroanatomic correlates of these complex, high-order cognitive processes. PMID:26039846

  9. Retention Time Alignment of LC/MS Data by a Divide-and-Conquer Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Zhongqi

    2012-04-01

    Liquid chromatography-mass spectrometry (LC/MS) has become the method of choice for characterizing complex mixtures. These analyses often involve quantitative comparison of components in multiple samples. To achieve automated sample comparison, the components of interest must be detected and identified, and their retention times aligned and peak areas calculated. This article describes a simple pairwise iterative retention time alignment algorithm, based on the divide-and-conquer approach, for alignment of ion features detected in LC/MS experiments. In this iterative algorithm, ion features in the sample run are first aligned with features in the reference run by applying a single constant shift of retention time. The sample chromatogram is then divided into two shorter chromatograms, which are aligned to the reference chromatogram the same way. Each shorter chromatogram is further divided into even shorter chromatograms. This process continues until each chromatogram is sufficiently narrow so that ion features within it have a similar retention time shift. In six pairwise LC/MS alignment examples containing a total of 6507 confirmed true corresponding feature pairs with retention time shifts up to five peak widths, the algorithm successfully aligned these features with an error rate of 0.2%. The alignment algorithm is demonstrated to be fast, robust, fully automatic, and superior to other algorithms. After alignment and gap-filling of detected ion features, their abundances can be tabulated for direct comparison between samples.
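    The recursive structure can be sketched as follows: estimate one constant shift for a segment, apply it, split the segment, and recurse. The correlation-based shift scoring and the stopping length are illustrative assumptions rather than the article's exact criteria.

```python
# A minimal sketch of divide-and-conquer retention-time alignment.
import numpy as np

def best_shift(sample, reference, max_shift):
    """Constant shift (in scans) maximizing correlation with reference."""
    def score(s):
        a, b = (sample[s:], reference[:len(sample) - s]) if s >= 0 \
               else (sample[:s], reference[-s:len(sample)])
        n = min(len(a), len(b))
        if n < 8:                          # too little overlap to score
            return -np.inf
        c = np.corrcoef(a[:n], b[:n])[0, 1]
        return -np.inf if np.isnan(c) else c
    return max(range(-max_shift, max_shift + 1), key=score)

def align(sample, reference, max_shift=50, min_len=64):
    """Return per-scan shifts; segments are halved until short enough
    that one constant shift describes them well."""
    shifts = np.zeros(len(sample))
    def recurse(lo, hi):
        shifts[lo:hi] = best_shift(sample[lo:hi], reference[lo:hi], max_shift)
        if hi - lo > min_len:
            mid = (lo + hi) // 2
            recurse(lo, mid)
            recurse(mid, hi)
    recurse(0, len(sample))
    return shifts
```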

  10. Speedup properties of phases in the execution profile of distributed parallel programs

    SciTech Connect

    Carlson, B.M.; Wagner, T.D.; Dowdy, L.W.; Worley, P.H.

    1992-08-01

    The execution profile of a distributed-memory parallel program specifies the number of busy processors as a function of time. Periods of homogeneous processor utilization are manifested in many execution profiles. These periods can usually be correlated with the algorithms implemented in the underlying parallel code. Three families of methods for smoothing execution profile data are presented. These approaches simplify the problem of detecting end points of periods of homogeneous utilization. These periods, called phases, are then examined in isolation, and their speedup characteristics are explored. A specific workload executed on an Intel iPSC/860 is used for validation of the techniques described.

  11. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    NASA Astrophysics Data System (ADS)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-03-01

    The flow shop scheduling problem with time lags is a practical scheduling problem that attracts many studies. The permutation problem (PFSP with time lags) has received much attention, but the non-permutation problem (non-PFSP with time lags) seems to have been neglected. With the aim of minimizing the makespan while satisfying time lag constraints, efficient algorithms for the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within roughly 11% of the computational time of the traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of the traditional GA approach. The proposed research treats the PFSP and non-PFSP together with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
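    For orientation, a generic iterated greedy loop looks like the sketch below: destruction removes a few jobs at random, and greedy best-insertion reconstructs the sequence. The makespan function is a plain permutation flow-shop evaluation; the time lag constraints of IGTLP/IGTLNP are omitted for brevity.

```python
# A minimal sketch of the iterated greedy (IG) scheme.
import random

def makespan(seq, proc):
    """proc[j][m]: processing time of job j on machine m."""
    m = len(proc[0])
    completion = [0.0] * m
    for j in seq:
        for k in range(m):
            start = max(completion[k], completion[k - 1] if k else 0.0)
            completion[k] = start + proc[j][k]
    return completion[-1]

def iterated_greedy(jobs, proc, d=2, iters=500):
    seq = sorted(jobs, key=lambda j: -sum(proc[j]))   # NEH-style seed order
    best = seq[:]
    for _ in range(iters):
        partial = seq[:]
        removed = [partial.pop(random.randrange(len(partial)))
                   for _ in range(d)]                 # destruction
        for j in removed:                             # greedy reinsertion
            pos = min(range(len(partial) + 1),
                      key=lambda p: makespan(partial[:p] + [j] + partial[p:],
                                             proc))
            partial.insert(pos, j)
        if makespan(partial, proc) <= makespan(best, proc):
            best, seq = partial[:], partial[:]
    return best
```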

  12. Generic architecture for real-time multisensor fusion tracking algorithm development and evaluation

    NASA Astrophysics Data System (ADS)

    Queeney, Tom; Woods, Edward

    1994-10-01

    Westinghouse has developed and demonstrated a system for the rapid prototyping of Sensor Fusion Tracking (SFT) algorithms. The system provides an object-oriented envelope with three sets of generic software objects to aid in the development and evaluation of SFT algorithms. The first is a generic tracker model that encapsulates the idea of a tracker being a series of SFT algorithms along with the data manipulated by those algorithms and is capable of simultaneously supporting multiple, independent trackers. The second is a set of flexible, easily extensible sensor and target models which allows many types of sensors and targets to be used. Live, recorded and simulated sensors and combinations thereof can be utilized as sources for the trackers. The sensor models also provide an easily extensible interface to the generic tracker model so that all sensors provide input to the SFT algorithms in the same fashion. The third is a highly versatile display and user interface that allows easy access to many of the performance measures for sensors and trackers for easy evaluation and debugging of the SFT algorithms. The system is an object-oriented design programmed in C++. This system with several of the SFT algorithms developed for it has been used with live sensors as a real-time tracking system. This paper outlines the salient features of the sensor fusion architecture and programming environment.

  13. Sequential Insertion Heuristic with Adaptive Bee Colony Optimisation Algorithm for Vehicle Routing Problem with Time Windows.

    PubMed

    Jawarneh, Sana; Abdullah, Salwani

    2015-01-01

    This paper presents a bee colony optimisation (BCO) algorithm to tackle the vehicle routing problem with time window (VRPTW). The VRPTW involves recovering an ideal set of routes for a fleet of vehicles serving a defined number of customers. The BCO algorithm is a population-based algorithm that mimics the social communication patterns of honeybees in solving problems. The performance of the BCO algorithm is dependent on its parameters, so the online (self-adaptive) parameter tuning strategy is used to improve its effectiveness and robustness. Compared with the basic BCO, the adaptive BCO performs better. Diversification is crucial to the performance of the population-based algorithm, but the initial population in the BCO algorithm is generated using a greedy heuristic, which has insufficient diversification. Therefore the ways in which the sequential insertion heuristic (SIH) for the initial population drives the population toward improved solutions are examined. Experimental comparisons indicate that the proposed adaptive BCO-SIH algorithm works well across all instances and is able to obtain 11 best results in comparison with the best-known results in the literature when tested on Solomon's 56 VRPTW 100 customer instances. Also, a statistical test shows that there is a significant difference between the results.

  14. Sequential Insertion Heuristic with Adaptive Bee Colony Optimisation Algorithm for Vehicle Routing Problem with Time Windows

    PubMed Central

    Jawarneh, Sana; Abdullah, Salwani

    2015-01-01

    This paper presents a bee colony optimisation (BCO) algorithm to tackle the vehicle routing problem with time window (VRPTW). The VRPTW involves recovering an ideal set of routes for a fleet of vehicles serving a defined number of customers. The BCO algorithm is a population-based algorithm that mimics the social communication patterns of honeybees in solving problems. The performance of the BCO algorithm is dependent on its parameters, so the online (self-adaptive) parameter tuning strategy is used to improve its effectiveness and robustness. Compared with the basic BCO, the adaptive BCO performs better. Diversification is crucial to the performance of the population-based algorithm, but the initial population in the BCO algorithm is generated using a greedy heuristic, which has insufficient diversification. Therefore the ways in which the sequential insertion heuristic (SIH) for the initial population drives the population toward improved solutions are examined. Experimental comparisons indicate that the proposed adaptive BCO-SIH algorithm works well across all instances and is able to obtain 11 best results in comparison with the best-known results in the literature when tested on Solomon’s 56 VRPTW 100 customer instances. Also, a statistical test shows that there is a significant difference between the results. PMID:26132158

  15. Real-time Imaging Orientation Determination System to Verify Imaging Polarization Navigation Algorithm

    PubMed Central

    Lu, Hao; Zhao, Kaichun; Wang, Xiaochu; You, Zheng; Huang, Kaoli

    2016-01-01

    Bio-inspired imaging polarization navigation, which can provide navigation information by sensing polarization information, has advantages in precision and interference resistance over polarization navigation sensors that use photodiodes. Although many types of imaging polarimeters exist, they may not be suitable for research on imaging polarization navigation algorithms. To verify the algorithm, a real-time imaging orientation determination system was designed and implemented. Essential calibration procedures for this type of system, covering camera parameter calibration and complementary metal oxide semiconductor inconsistency calibration, were discussed, designed, and implemented. Calibration results were used to undistort and rectify the multi-camera system. An orientation determination experiment was conducted, and the results indicated that the system could acquire and process polarized skylight images through the calibrations and resolve orientation with the algorithm under verification in real time. An orientation determination algorithm based on image processing was tested on the system, and the performance and properties of the algorithm were evaluated. The rate of the algorithm was over 1 Hz, the error was over 0.313°, and the population standard deviation was 0.148° without any data filter. PMID:26805851

  16. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    PubMed

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time consuming to solve fractional differential equations. The computational complexity of solving a two-dimensional time fractional diffusion equation (2D-TFDE) with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of it. A task distribution model and a data layout with virtual boundaries are designed for this parallel algorithm. The experimental results show that the parallel algorithm agrees well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency with 81 processes is up to 88.24% compared with 9 processes on a distributed-memory cluster system. We believe that parallel computing will become a basic method for computationally intensive fractional applications in the near future.

  17. A Combination of Genetic Algorithm and Particle Swarm Optimization for Vehicle Routing Problem with Time Windows

    PubMed Central

    Xu, Sheng-Hua; Liu, Ji-Ping; Zhang, Fu-Hao; Wang, Liang; Sun, Li-Jian

    2015-01-01

    A combination of genetic algorithm and particle swarm optimization (PSO) for vehicle routing problems with time windows (VRPTW) is proposed in this paper. The improvements of the proposed algorithm include: using the particle real-number encoding method to decode the route and alleviate the computational burden, applying a linear decreasing function based on the number of iterations to balance global and local exploration abilities, and integrating the crossover operator of the genetic algorithm to avoid premature convergence and local minima. The experimental results show that the proposed algorithm is not only efficient and competitive with other published results but can also obtain better solutions for the VRPTW. One new best-known solution for this benchmark problem is also reported. PMID:26343655
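    The linear decreasing function mentioned above is commonly an inertia weight of the following form (the bounds 0.9 and 0.4 are conventional choices, assumed here rather than taken from the paper):

```python
# A minimal sketch of the linearly decreasing weight used to balance
# global and local exploration over iterations.
def linear_decreasing_weight(it, max_it, w_start=0.9, w_end=0.4):
    """Inertia weight decreases linearly from w_start to w_end."""
    return w_start - (w_start - w_end) * it / max_it

# This weight w then scales the velocity term of a standard PSO update:
# v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
```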

  18. A real-time ECG data compression algorithm for a digital holter system.

    PubMed

    Lee, Sangjoon; Lee, Myoungho

    2008-01-01

    This paper describes a real-time ECG compression algorithm for a digital Holter system. The proposed algorithm consists of five main procedures: first, differentiate the signals; second, choose a period of the differentiated signals and store it in memory; third, perform the DCT (Discrete Cosine Transform) on the stored data; fourth, apply a window filter; and fifth, apply the Huffman coding compression method to the data. The developed algorithm has been tested by applying 12 ECGs (electrocardiograms) from the MIT-BIH database, and the PRD (Percent RMS Difference) and the CR (Compression Ratio) were calculated. It is found that the algorithm achieved a high level of compression performance, with a PRD of 1.82 and a CR of 8.82:1 on average.
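    A minimal sketch of the pipeline's transform stages follows (block length, window width, and quantization step are illustrative assumptions; the Huffman step is reduced to computing code lengths):

```python
# Differentiate, DCT, window (zero small coefficients), then
# Huffman-code the quantized result.
import heapq
from collections import Counter
import numpy as np
from scipy.fft import dct

def huffman_code_lengths(symbols):
    """Return {symbol: code length} from a Huffman tree built on counts."""
    heap = [(n, i, [s]) for i, (s, n) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in set(symbols)}
    while len(heap) > 1:
        n1, _, s1 = heapq.heappop(heap)
        n2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            lengths[s] += 1
        heapq.heappush(heap, (n1 + n2, id(s1), s1 + s2))
    return lengths

def compress_block(ecg, keep=64, q=8):
    diff = np.diff(ecg)                        # step 1: differentiate
    coeffs = dct(diff, norm='ortho')           # step 3: DCT of the block
    coeffs[keep:] = 0.0                        # step 4: window filter
    symbols = np.round(coeffs[:keep] * q).astype(int).tolist()
    lengths = huffman_code_lengths(symbols)    # step 5: entropy coding
    bits = sum(lengths[s] for s in symbols)
    # Fraction of the original 11-bit sample budget (lower is better).
    return bits / (len(ecg) * 11)
```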

  19. A Combination of Genetic Algorithm and Particle Swarm Optimization for Vehicle Routing Problem with Time Windows.

    PubMed

    Xu, Sheng-Hua; Liu, Ji-Ping; Zhang, Fu-Hao; Wang, Liang; Sun, Li-Jian

    2015-08-27

    A combination of genetic algorithm and particle swarm optimization (PSO) for vehicle routing problems with time windows (VRPTW) is proposed in this paper. The improvements of the proposed algorithm include: using the particle real-number encoding method to decode the route and alleviate the computational burden, applying a linear decreasing function based on the number of iterations to balance global and local exploration abilities, and integrating the crossover operator of the genetic algorithm to avoid premature convergence and local minima. The experimental results show that the proposed algorithm is not only efficient and competitive with other published results but can also obtain better solutions for the VRPTW. One new best-known solution for this benchmark problem is also reported.

  20. Study on algorithm and real-time implementation of infrared image processing based on FPGA

    NASA Astrophysics Data System (ADS)

    Pang, Yulin; Ding, Ruijun; Liu, Shanshan; Chen, Zhe

    2010-10-01

    With the fast development of Infrared Focal Plane Array (IRFPA) detectors, high-quality real-time image processing becomes more important in infrared imaging systems. Facing the demand for better visual effect and good performance, we find the FPGA an ideal hardware choice for realizing image processing algorithms, taking full advantage of its high speed, high reliability, and ability to process large amounts of data in parallel. In this paper, a new dynamic linear extension algorithm is introduced, which automatically finds the proper extension range. This image enhancement algorithm is designed in Verilog HDL and realized on an FPGA. It works at higher speed than serial processing devices like CPUs and DSPs. Experiments show that this hardware unit for dynamic linear extension effectively enhances the visual effect of infrared images.
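    Dynamic linear extension amounts to a contrast stretch whose input range is found automatically; a minimal software sketch, assuming percentile-based range selection, is:

```python
# Contrast stretch with an automatically chosen range
# (percentile clipping is an illustrative assumption).
import numpy as np

def dynamic_linear_extension(img, lo_pct=1.0, hi_pct=99.0, out_max=255):
    """Map the [lo, hi] percentile range of a raw IR frame linearly
    onto [0, out_max], clipping outliers at both ends."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    stretched = (img.astype(np.float32) - lo) / max(hi - lo, 1e-6)
    return np.clip(stretched * out_max, 0, out_max).astype(np.uint8)
```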

  1. Visualizing and quantifying movement from pre-recorded videos: The spectral time-lapse (STL) algorithm.

    PubMed

    Madan, Christopher R; Spetch, Marcia L

    2014-01-01

    When studying animal behaviour within an open environment, movement-related data are often important for behavioural analyses. Therefore, simple and efficient techniques are needed to present and analyze the data of such movements. However, it is challenging to present both spatial and temporal information of movements within a two-dimensional image representation. To address this challenge, we developed the spectral time-lapse (STL) algorithm that re-codes an animal's position at every time point with a time-specific color, and overlays it with a reference frame of the video, to produce a summary image. We additionally incorporated automated motion tracking, such that the animal's position can be extracted and summary statistics such as path length and duration can be calculated, as well as instantaneous velocity and acceleration. Here we describe the STL algorithm and offer a freely available MATLAB toolbox that implements the algorithm and allows for a large degree of end-user control and flexibility.
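    A minimal sketch of the STL idea (in Python rather than the authors' MATLAB toolbox): plot the tracked positions over a reference frame, colored by normalized time, and report the path length. Tracking itself is assumed done (positions given).

```python
# Time-coded trajectory overlay and a simple path-length statistic.
import numpy as np
import matplotlib.pyplot as plt

def spectral_time_lapse(reference_frame, positions):
    """reference_frame: (H, W) grayscale image; positions: (N, 2) array
    of (x, y) per frame. Early samples plot cool, late samples warm."""
    t = np.linspace(0.0, 1.0, len(positions))
    plt.imshow(reference_frame, cmap='gray')
    plt.scatter(positions[:, 0], positions[:, 1], c=t, cmap='jet', s=8)
    plt.colorbar(label='normalized time')
    steps = np.diff(positions, axis=0)
    print('path length:', np.hypot(steps[:, 0], steps[:, 1]).sum())
    plt.show()
```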

  2. Real Time Optima Tracking Using Harvesting Models of the Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Baskaran, Subbiah; Noever, D.

    1999-01-01

    Tracking optima in real-time propulsion control, particularly for non-stationary optimization problems, is a challenging task. Several approaches have been put forward for such a study, including the numerical method called the genetic algorithm. In brief, this approach is built upon Darwinian-style competition between numerical alternatives displayed in the form of binary strings, or by analogy to 'pseudogenes'. Breeding of improved solutions is an often-cited parallel to natural selection in evolutionary or soft computing. In this report we present our results of applying a novel model of a genetic algorithm for tracking optima in propulsion engineering and in real-time control. We specialize the algorithm to mission profiling and planning optimizations, both to select reduced propulsion needs through trajectory planning and to explore time or fuel conservation strategies.

  3. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    NASA Astrophysics Data System (ADS)

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use time of arrivals (TOAs) and time difference of arrivals (TDOAs) to achieve high signal source position estimation accuracy when a transmitter is in the line-of-sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers and unknown locations of transmitters. Estimation of the location of transmitters using the standard nonlinear equations may not be very accurate because of receiver location errors, receiver measurement errors, and computational efficiency challenges that result in high computational burdens. Least squares and maximum likelihood based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time-of-arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
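    As a concrete instance of the circular (TOA) approach, the sketch below fits a 2-D transmitter position by nonlinear least squares on range residuals; the receiver layout and noise level are illustrative.

```python
# Circular (TOA) source localization as a nonlinear least-squares fit.
import numpy as np
from scipy.optimize import least_squares

def locate_toa(receivers, ranges, x0=None):
    """receivers: (N, 2) known positions; ranges: (N,) measured
    transmitter-receiver distances (propagation speed * time of arrival)."""
    def residuals(x):
        return np.linalg.norm(receivers - x, axis=1) - ranges
    x0 = receivers.mean(axis=0) if x0 is None else x0
    return least_squares(residuals, x0).x

receivers = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([30.0, 60.0])
ranges = np.linalg.norm(receivers - true_pos, axis=1) \
         + np.random.normal(0, 0.5, 4)          # noisy range measurements
print(locate_toa(receivers, ranges))
```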

  4. Executive functions.

    PubMed

    Miller, Karen J

    2005-04-01

    Executive functions are higher-order cognitive processes that continue to develop well into adulthood. They are critically important to behavioral self-control and task performance, and deficits can have serious effects on a student's functioning in many areas. Primary care pediatricians can play an important role by being aware of this evolving field of research, current assessment strategies, and by encouraging families, schools, and students to adopt a positive and problem-solving approach to improve executive functions.

  5. Fourth-order algorithms for solving the imaginary-time Gross-Pitaevskii equation in a rotating anisotropic trap

    SciTech Connect

    Chin, Siu A.; Krotscheck, Eckhard

    2005-09-01

    By implementing the exact density matrix for the rotating anisotropic harmonic trap, we derive a class of very fast and accurate fourth-order algorithms for evolving the Gross-Pitaevskii equation in imaginary time. Such fourth-order algorithms are possible only with the use of forward, positive time step factorization schemes. These fourth-order algorithms converge at time-step sizes an order-of-magnitude larger than conventional second-order algorithms. Our use of time-dependent factorization schemes provides a systematic way of devising algorithms for solving this type of nonlinear equations.
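    For orientation, the factorization structure at issue can be sketched as follows; the fourth-order form shown is the standard forward scheme from the factorization literature, given here for context rather than as the paper's exact algorithm.

```latex
% Second-order symmetric splitting of the imaginary-time propagator
% (T: kinetic operator, V: potential operator, \epsilon: time step):
e^{-\epsilon (T+V)} = e^{-\frac{\epsilon}{2} V}\, e^{-\epsilon T}\,
  e^{-\frac{\epsilon}{2} V} + O(\epsilon^3)

% A forward (all-positive-time-step) fourth-order factorization uses a
% gradient-corrected potential \widetilde{V}:
e^{-\epsilon (T+V)} = e^{-\frac{\epsilon}{6} V}\, e^{-\frac{\epsilon}{2} T}\,
  e^{-\frac{2\epsilon}{3} \widetilde{V}}\, e^{-\frac{\epsilon}{2} T}\,
  e^{-\frac{\epsilon}{6} V} + O(\epsilon^5),
\qquad \widetilde{V} = V + \frac{\epsilon^2}{48}\,[V,[T,V]]
```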

  6. Enhancing Sensitivity of a Miniature Spectrometer Using a Real-Time Image Processing Algorithm.

    PubMed

    Chandramohan, Sabarish; Avrutsky, Ivan

    2016-05-01

    A real-time image processing algorithm is developed to enhance the sensitivity of a planar single-mode waveguide miniature spectrometer with integrated waveguide gratings. A novel approach of averaging along the arcs in a curved coordinate system is introduced which allows for collecting more light, thereby enhancing the sensitivity. The algorithm is tested using CdSeS/ZnS quantum dots drop casted on the surface of a single-mode waveguide. Measurements indicate that a monolayer of quantum dots is expected to produce guided mode attenuation approximately 11 times above the noise level.

  7. The FPGA realization of a real-time Bayer image restoration algorithm with better performance

    NASA Astrophysics Data System (ADS)

    Ma, Huaping; Liu, Shuang; Zhou, Jiangyong; Tang, Zunlie; Deng, Qilin; Zhang, Hongliu

    2014-11-01

    As FPGA implementations of Bayer color interpolation algorithms have come into wide use, better performance, real-time processing, and lower resource consumption have become the users' goals. In order to realize high-speed, high-quality Bayer image restoration with low resource consumption, the color reconstruction is designed and optimized in this article at both the interpolation-algorithm and FPGA-realization levels. The hardware realization is then completed on an FPGA development platform, achieving real-time, high-fidelity image processing with low resource consumption in embedded image acquisition systems.

  8. Real time tracking with a silicon telescope prototype using the "artificial retina" algorithm

    NASA Astrophysics Data System (ADS)

    Abba, A.; Bedeschi, F.; Caponio, F.; Cenci, R.; Citterio, M.; Coelli, S.; Fu, J.; Geraci, A.; Grizzuti, M.; Lusardi, N.; Marino, P.; Monti, M.; Morello, M. J.; Neri, N.; Ninci, D.; Petruzzo, M.; Piucci, A.; Punzi, G.; Ristori, L.; Spinella, F.; Stracka, S.; Tonelli, D.; Walsh, J.

    2016-07-01

    We present the first prototype of a silicon tracker using the artificial retina algorithm for fast track finding. The algorithm is inspired by the neurobiological mechanism of edge recognition in the mammalian visual cortex. It is based on extensive parallelization and is implemented on commercial FPGAs, allowing real-time track reconstruction with offline-like quality and latencies below 1 μs. The practical device consists of a telescope with 8 single-sided silicon strip sensors and custom DAQ boards equipped with Xilinx Kintex 7 FPGAs that perform the readout of the sensors and the track reconstruction in real time.

  9. Cognitive programs: software for attention's executive

    PubMed Central

    Tsotsos, John K.; Kruijne, Wouter

    2014-01-01

    What are the computational tasks that an executive controller for visual attention must solve? This question is posed in the context of the Selective Tuning model of attention. The range of required computations go beyond top-down bias signals or region-of-interest determinations, and must deal with overt and covert fixations, process timing and synchronization, information routing, memory, matching control to task, spatial localization, priming, and coordination of bottom-up with top-down information. During task execution, results must be monitored to ensure the expected results. This description includes the kinds of elements that are common in the control of any kind of complex machine or system. We seek a mechanistic integration of the above, in other words, algorithms that accomplish control. Such algorithms operate on representations, transforming a representation of one kind into another, which then forms the input to yet another algorithm. Cognitive Programs (CPs) are hypothesized to capture exactly such representational transformations via stepwise sequences of operations. CPs, an updated and modernized offspring of Ullman's Visual Routines, impose an algorithmic structure to the set of attentional functions and play a role in the overall shaping of attentional modulation of the visual system so that it provides its best performance. This requires that we consider the visual system as a dynamic, yet general-purpose processor tuned to the task and input of the moment. This differs dramatically from the almost universal cognitive and computational views, which regard vision as a passively observing module to which simple questions about percepts can be posed, regardless of task. Differing from Visual Routines, CPs explicitly involve the critical elements of Visual Task Executive (vTE), Visual Attention Executive (vAE), and Visual Working Memory (vWM). Cognitive Programs provide the software that directs the actions of the Selective Tuning model of visual

  10. A Utility Accrual Scheduling Algorithm for Real-Time Activities With Mutual Exclusion Resource Constraints

    DTIC Science & Technology

    2006-01-01

    system. Our simulation studies and implementation measurements reveal that GUS performs close to, if not better than, the existing algorithms for the...satisfying application time constraints. The most widely studied time constraint is the deadline. A deadline time constraint for an application...optimality criteria, such as resource dependencies and precedence constraints. Scheduling tasks with non-step TUFs has been studied in the past

  11. Executive functions in synesthesia.

    PubMed

    Rouw, Romke; van Driel, Joram; Knip, Koen; Richard Ridderinkhof, K

    2013-03-01

    In grapheme-color synesthesia, a number or letter can evoke two different and possibly conflicting (real and synesthetic) color sensations at the same time. In this study, we investigate the relationship between synesthesia and executive control functions. First, no general skill differences were obtained between synesthetes and non-synesthetes in classic executive control paradigms. Furthermore, classic executive control effects did not interact with synesthetic behavioral effects. Third, we found support for our hypothesis that inhibition of a synesthetic color takes effort and time. Finally, individual differences analyses showed no relationship between the two skills; performance on a 'normal' Stroop task does not predict performance on a synesthetic Stroop task. Across four studies, the current results consistently show no clear relationship between executive control functions and synesthetic behavioral effects. This raises the question of which mechanisms are at play in synesthetic 'management' during the presence of two conflicting (real and synesthetic) sensations.

  12. Serotoninergic and dopaminergic modulation of cortico-striatal circuit in executive and attention deficits induced by NMDA receptor hypofunction in the 5-choice serial reaction time task.

    PubMed

    Carli, Mirjana; Invernizzi, Roberto W

    2014-01-01

    Executive functions are an emergent property of neuronal processing in circuits encompassing the frontal cortex and other cortical and subcortical brain regions such as the basal ganglia and thalamus. Glutamate serves as the major neurotransmitter in these circuits, where glutamate receptors of the NMDA type play a key role. Serotonin and dopamine afferents are in a position to modulate intrinsic glutamate neurotransmission along these circuits and, in turn, to optimize circuit performance for specific aspects of executive control over behavior. In this review, we focus on the 5-choice serial reaction time task, which is able to provide various measures of attention and executive control over performance in rodents, and the ability of prefrontocortical and striatal serotonin 5-HT1A, 5-HT2A, and 5-HT2C as well as dopamine D1- and D2-like receptors to modulate different aspects of executive and attention disturbances induced by NMDA receptor hypofunction in the prefrontal cortex. These behavioral studies are integrated with findings from microdialysis studies. These studies illustrate the control of attention selectivity by serotonin 5-HT1A, 5-HT2A, 5-HT2C, and dopamine D1- but not D2-like receptors, and a distinct contribution of these cortical and striatal serotonin and dopamine receptors to the control of different aspects of executive control over performance, such as impulsivity and compulsivity. An association between the NMDA antagonist-induced increase in glutamate release in the prefrontal cortex and attention is suggested. Collectively, this review highlights the functional interaction of serotonin and dopamine with NMDA-dependent glutamate neurotransmission in the cortico-striatal circuitry for specific cognitive demands and may shed some light on how dysregulation of neuronal processing in these circuits may be implicated in specific neuropsychiatric disorders.

  13. Detrending Algorithms in Large Time Series: Application to TFRM-PSES Data

    NASA Astrophysics Data System (ADS)

    del Ser, D.; Fors, O.; Núñez, J.; Voss, H.; Rosich, A.; Kouprianov, V.

    2015-07-01

    Certain instrumental effects and data reduction anomalies introduce systematic errors in photometric time series. Detrending algorithms such as the Trend Filtering Algorithm (TFA; Kovács et al. 2004) have played a key role in minimizing the effects caused by these systematics. Here we present the results obtained after applying the TFA, Savitzky & Golay (1964) detrending algorithms, and the Box Least Square phase-folding algorithm (Kovács et al. 2002) to the TFRM-PSES data (Fors et al. 2013). Tests performed on these data show that by applying these two filtering methods together the photometric RMS is on average improved by a factor of 3-4, with better efficiency towards brighter magnitudes, while applying TFA alone yields an improvement of a factor 1-2. As a result of this improvement, we are able to detect and analyze a large number of stars per TFRM-PSES field which present some kind of variability. Also, after porting these algorithms to Python and parallelizing them, we have improved, even for large data samples, the computational performance of the overall detrending+BLS algorithm by a factor of ˜10 with respect to Kovács et al. (2004).
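    A minimal sketch of the Savitzky-Golay detrending step (window length and polynomial order are illustrative choices):

```python
# Savitzky-Golay detrending of a photometric time series.
import numpy as np
from scipy.signal import savgol_filter

def detrend_lightcurve(flux, window=101, polyorder=3):
    """Divide out a smooth SG trend, preserving short transit-like dips
    while removing slow systematics."""
    trend = savgol_filter(flux, window_length=window, polyorder=polyorder)
    return flux / trend

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
flux = 1.0 + 0.05 * np.sin(0.3 * t) + rng.normal(0, 0.002, t.size)
print(np.std(detrend_lightcurve(flux)))   # RMS after detrending
```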

  14. Performance of QoS-based multicast routing algorithms for real-time communication

    NASA Astrophysics Data System (ADS)

    Verma, Sanjeev; Pankaj, Rajesh K.; Leon-Garcia, Alberto

    1997-10-01

    In recent years, there has been a lot of interest in providing real-time multimedia services like digital audio and video over packet-switched networks such as the Internet and ATM. These services require a certain quality of service (QoS) from the network. The routing algorithm should take an application's QoS requirements into account while selecting the most suitable route for the application. In this paper, we introduce a new routing metric and use it with two different heuristics to compute the multicast tree for guaranteed-QoS applications that need a firm end-to-end delay bound. We then compare the performance of our algorithms with the other proposed QoS-based routing algorithms. Simulations were run over a number of random networks to measure the performance of different algorithms. We studied routing algorithms along with resource reservation and admission control to measure the call throughput over a number of random networks. Simulation results show that our algorithms give a much better performance in terms of call throughput than other proposed schemes like QOSPF.

  15. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    SciTech Connect

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much interest. In this paper, we present localization algorithms using time of arrivals (TOA) and time difference of arrivals (TDOA) to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations present accuracy challenges because of measurement errors, and efficiency challenges that lead to high computational burdens. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.

  16. Time-sequenced adaptive filtering using a modified P-vector algorithm

    NASA Astrophysics Data System (ADS)

    Williams, Robert L.

    1996-10-01

    An adaptive algorithm and a two-stage filter structure were developed for adaptive filtering of certain classes of signals that exhibit cyclostationary characteristics. The new modified P-vector algorithm (mPa) eliminates the need for a separate desired signal, which is typically required by conventional adaptive algorithms. It is then implemented in a time-sequenced manner to counteract the nonstationary characteristics typically found in certain radar and bioelectromagnetic signals. Initial algorithm testing is performed on evoked responses generated by the visual cortex of the human brain, with the objective, ultimately, of transitioning the results to radar signals. Each sample of the evoked response is modeled as the sum of three uncorrelated signal components: a time-varying mean (M), a noise component (N), and a random jitter component (Q). A two-stage single-channel time-sequenced adaptive filter structure was developed which improves convergence characteristics by decoupling the time-varying mean component from the 'Q' and noise components in the first stage. The EEG statistics must be known a priori and are adaptively estimated from the pre-stimulus data. The performance of the two-stage mPa time-sequenced adaptive filter approaches the performance for the ideal case of an adaptive filter having a noiseless desired response.

  17. Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm

    SciTech Connect

    Thanh, Vo Hong; Priami, Corrado

    2015-08-07

    We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The computation for selecting next reaction firings by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is very computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving the exactness.
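    The rejection mechanism can be sketched with classic thinning: draw candidate firing times from a constant upper bound on the propensity and accept each with probability a(t)/a_max. This is a simplified stand-in for tRSSA's bounded-propensity machinery, not the authors' exact procedure.

```python
# Thinning for a single reaction with a time-dependent propensity a(t).
import math, random

def next_firing_time(a_of_t, a_max, t0):
    """Sample the next firing time of a reaction whose propensity a(t)
    never exceeds a_max after t0 (Lewis-Shedler/Ogata thinning)."""
    t = t0
    while True:
        t += -math.log(random.random()) / a_max    # candidate from bound
        if random.random() <= a_of_t(t) / a_max:    # rejection test
            return t

# Example: sinusoidally modulated rate, bounded by its maximum.
rate = lambda t: 2.0 + math.sin(t)
print(next_firing_time(rate, a_max=3.0, t0=0.0))
```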

  18. A Parametric Study of the Ibrahim Time Domain Modal Identification Algorithm

    NASA Technical Reports Server (NTRS)

    Pappa, R. S.; Ibrahim, S. R.

    1985-01-01

    The accuracy of the Ibrahim Time Domain (ITD) identification algorithm in extracting structural modal parameters from free response functions was studied using computer-simulated data for 65 positions on an isotropic, uniform-thickness plate with mode shapes obtained by NASTRAN analysis. Natural frequencies were used to study identification results over ranges of modal parameter values and user-selectable algorithm constants. The effects of superimposing various levels of noise onto the functions were investigated. No detrimental effects were observed when the number of computational degrees of freedom allowed in the algorithm was made many times larger than the minimum necessary for adequate identification. The use of a high number of degrees of freedom when analyzing experimental data, for the simultaneous identification of many modes in one computer run, is suggested.

  19. A Linear Time Algorithm for the Minimum Spanning Caterpillar Problem for Bounded Treewidth Graphs

    NASA Astrophysics Data System (ADS)

    Dinneen, Michael J.; Khosravani, Masoud

    We consider the Minimum Spanning Caterpillar Problem (MSCP) in a graph where each edge has two costs, spine (path) cost and leaf cost, depending on whether it is used as a spine or a leaf edge. The goal is to find a spanning caterpillar in which the sum of its edge costs is the minimum. We show that the problem has a linear time algorithm when a tree decomposition of the graph is given as part of the input. Despite the fast growing constant factor of the time complexity of our algorithm, it is still practical and efficient for some classes of graphs, such as outerplanar, series-parallel (K 4 minor-free), and Halin graphs. We also briefly explain how one can modify our algorithm to solve the Minimum Spanning Ring Star and the Dual Cost Minimum Spanning Tree Problems.

  20. A Two-Phase Time Synchronization-Free Localization Algorithm for Underwater Sensor Networks.

    PubMed

    Luo, Junhai; Fan, Liying

    2017-03-30

    Underwater Sensor Networks (UWSNs) can enable a broad range of applications such as resource monitoring, disaster prevention, and navigation assistance. The localization of sensor nodes in UWSNs is an especially relevant topic. Global Positioning System (GPS) information is not suitable for use in UWSNs because of underwater propagation problems. Hence, localization algorithms proposed for UWSNs that rely on precise time synchronization between sensor nodes are not feasible. In this paper, we propose a localization algorithm called the Two-Phase Time Synchronization-Free Localization Algorithm (TP-TSFLA). TP-TSFLA contains two phases, namely a range-based estimation phase and a range-free evaluation phase. In the first phase, we address a time synchronization-free localization scheme based on the Particle Swarm Optimization (PSO) algorithm to obtain the coordinates of the unknown sensor nodes. In the second phase, we propose a Circle-based Range-Free Localization Algorithm (CRFLA) to locate the unlocalized sensor nodes which cannot obtain location information through the first phase. In the second phase, sensor nodes which are localized in the first phase act as new anchor nodes to help realize localization. Hence, in this algorithm, we use a small number of mobile beacons to obtain the location information without any other anchor nodes. In addition, to improve the precision of the range-free method, CRFLA is extended with a coordinate adjustment scheme. The simulation results show that TP-TSFLA can achieve a relatively high localization ratio without time synchronization.

  1. [A study for time-history waveform synthesis of algorithm in shock response spectrum (SRS)].

    PubMed

    Liu, Hong-ying; Ma, Ai-jun

    2002-12-01

    Objective. To present an effective on-line SRS time-history waveform synthesis method for simulating pyrotechnic shock environments with electrodynamic shakers. Method. A procedure was developed for synthesizing an SRS time-history waveform according to a general principle. The effects of three main parameters on the waveform's shape, acceleration amplitude, and duration were investigated. A modification method for the SRS amplitude and an optimization algorithm for the time-history waveform are presented. Result. The algorithm was used to generate a time-history waveform that satisfies the SRS accuracy requirement and the electrodynamic shaker's acceleration limitation. Conclusion. The numerical example indicates that the developed method is effective. The synthesized time-history waveform can be used to simulate pyrotechnic shock environments using electrodynamic shakers.

  2. Online Planning Algorithm

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribes available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. Thereby enabling shorter response times and greater autonomy for the system under control.
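    A minimal sketch of strict-priority selection under a single shared resource cap (field names and the single-resource simplification are illustrative):

```python
# Greedy strict-priority admission of goals under a resource cap.
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    priority: int      # lower number = higher priority
    resource: float    # amount of the shared resource it consumes

def select_goals(goals, capacity):
    """Higher-priority goals are admitted first and can never be
    displaced by lower-priority ones; returns the executable subset."""
    selected, used = [], 0.0
    for g in sorted(goals, key=lambda g: g.priority):
        if used + g.resource <= capacity:
            selected.append(g)
            used += g.resource
    return selected

goals = [Goal("downlink", 1, 40.0), Goal("image-A", 2, 50.0),
         Goal("image-B", 3, 30.0)]
print([g.name for g in select_goals(goals, capacity=100.0)])
```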

  3. Independent component analysis algorithm FPGA design to perform real-time blind source separation

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke

    2015-05-01

    The conditions that arise in the cocktail party problem prevail across many fields, creating a need for blind source separation (BSS). The need for BSS has become prevalent in several fields of work, including array processing, communications, medical signal processing, speech processing, wireless communication, audio and acoustics, and biomedical engineering. The cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms, which prove useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms to perform blind source separation of mixed signals, both in software and in a hardware implementation on a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA algorithms were examined and compared. The best algorithm, requiring the least complexity and fewest resources while effectively separating mixed sources, was the EASI algorithm. The EASI ICA was implemented on FPGA hardware to analyze its performance in real time.
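    For reference, the serial EASI update has the well-known equivariant form W <- W - mu [y y^T - I + g(y) y^T - y g(y)^T] W; a minimal sketch, assuming a cubic nonlinearity and whitened zero-mean input, is:

```python
# Serial EASI update (Cardoso & Laheld style); the cubic nonlinearity
# and step size are illustrative choices.
import numpy as np

def easi_separate(X, mu=0.001):
    """X: (n_sources, n_samples) mixed signals. Returns unmixed estimate."""
    n = X.shape[0]
    W = np.eye(n)                              # separating matrix
    for x in X.T:                              # one sample at a time
        y = W @ x
        g = y ** 3                             # nonlinearity g(y)
        G = (np.outer(y, y) - np.eye(n)
             + np.outer(g, y) - np.outer(y, g))
        W = W - mu * G @ W                     # equivariant update
    return W @ X
```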

  4. A Concurrent Implementation of the Cascade-Correlation Algorithm, Using the Time Warp Operating System

    NASA Technical Reports Server (NTRS)

    Springer, P.

    1993-01-01

    This paper discusses the method by which the Cascade-Correlation algorithm was parallelized so that it could be run using the Time Warp Operating System (TWOS). TWOS is a special-purpose operating system designed to run parallel discrete event simulations with maximum efficiency on parallel or distributed computers.

  5. Maximum Principles and Application to the Analysis of An Explicit Time Marching Algorithm

    NASA Technical Reports Server (NTRS)

    LeTallec, Patrick; Tidriri, Moulay D.

    1996-01-01

    In this paper we develop local and global estimates for the solution of convection-diffusion problems. We then study the convergence properties of a Time Marching Algorithm solving Advection-Diffusion problems on two domains using incompatible discretizations. This study is based on a De-Giorgi-Nash maximum principle.

  6. Image/Time Series Mining Algorithms: Applications to Developmental Biology, Document Processing and Data Streams

    ERIC Educational Resources Information Center

    Tataw, Oben Moses

    2013-01-01

    Interdisciplinary research in computer science requires the development of computational techniques for practical application in different domains. This usually requires careful integration of different areas of technical expertise. This dissertation presents image and time series analysis algorithms, with practical interdisciplinary applications…

  7. Validation of Learning Effort Algorithm for Real-Time Non-Interfering Based Diagnostic Technique

    ERIC Educational Resources Information Center

    Hsu, Pi-Shan; Chang, Te-Jeng

    2011-01-01

    The objective of this research is to validate the algorithm of learning effort, which is an indicator of a new real-time non-interfering based diagnostic technique. IC3 Mentor, an adaptive e-learning platform fulfilling the requirements of an intelligent tutoring system, was applied to 165 university students. The learning records of the subjects…

  8. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  9. Efficient Fourier-based algorithms for time-periodic unsteady problems

    NASA Astrophysics Data System (ADS)

    Gopinath, Arathi Kamath

    2007-12-01

    This dissertation work proposes two algorithms for the simulation of time-periodic unsteady problems via the solution of Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations. These algorithms use a Fourier representation in time and hence solve for the periodic state directly without resolving transients (which consume most of the resources in a time-accurate scheme). In contrast to conventional Fourier-based techniques which solve the governing equations in frequency space, the new algorithms perform all the calculations in the time domain, and hence require minimal modifications to an existing solver. The complete space-time solution is obtained by iterating in a fifth pseudo-time dimension. Various time-periodic problems such as helicopter rotors, wind turbines, turbomachinery and flapping-wings can be simulated using the Time Spectral method. The algorithm is first validated using pitching airfoil/wing test cases. The method is further extended to turbomachinery problems, and computational results verified by comparison with a time-accurate calculation. The technique can be very memory intensive for large problems, since the solution is computed (and hence stored) simultaneously at all time levels. Often, the blade counts of a turbomachine are rescaled such that a periodic fraction of the annulus can be solved. This approximation enables the solution to be obtained at a fraction of the cost of a full-scale time-accurate solution. For a viscous computation over a three-dimensional single-stage rescaled compressor, an order of magnitude savings is achieved. The second algorithm, the reduced-order Harmonic Balance method is applicable only to turbomachinery flows, and offers even larger computational savings than the Time Spectral method. It simulates the true geometry of the turbomachine using only one blade passage per blade row as the computational domain. In each blade row of the turbomachine, only the dominant frequencies are resolved, namely
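
    Both methods rest on the same building block: given values of a T-periodic quantity at N equally spaced time levels, the time derivative is evaluated spectrally rather than by finite differences. Below is a minimal sketch of that operation; the FFT-based formulation and all names are illustrative (production solvers typically apply an equivalent dense derivative matrix).

    ```python
    import numpy as np

    def spectral_time_derivative(u, T):
        """du/dt at N equally spaced time levels of a T-periodic signal u.

        This is the exact Fourier differentiation underlying Time Spectral
        and Harmonic Balance methods: transform to frequency space,
        multiply mode k by i*k*omega, and transform back.
        """
        N = len(u)
        k = np.fft.fftfreq(N, d=1.0 / N)     # integer wavenumbers 0, 1, ..., -1
        omega = 2.0 * np.pi / T
        return np.real(np.fft.ifft(1j * k * omega * np.fft.fft(u)))

    # Check on u(t) = sin(2*pi*t/T): the derivative is recovered exactly.
    T, N = 2.0, 16
    t = np.arange(N) * T / N
    du = spectral_time_derivative(np.sin(2 * np.pi * t / T), T)
    assert np.allclose(du, (2 * np.pi / T) * np.cos(2 * np.pi * t / T))
    ```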

  10. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1984-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lower memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  11. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1982-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lower memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  12. Dynamic acoustics for the STAR-100. [computer algorithms for time dependent sound waves in jet

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Turkel, E.

    1979-01-01

    An algorithm is described to compute time dependent acoustic waves in a jet. The method differs from previous methods in that no harmonic time dependence is assumed, thus permitting the study of nonharmonic acoustical behavior. Large grids are required to resolve the acoustic waves. Since the problem is nonstiff, explicit high order schemes can be used. These have been adapted to the STAR-100 with great efficiency, permitting the solution of problems which would not be feasible on a scalar machine.

  13. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    NASA Astrophysics Data System (ADS)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low-data-rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.

  14. Replacing sedentary time with sleep, light, or moderate-to-vigorous physical activity: effects on self-regulation and executive functioning.

    PubMed

    Fanning, J; Porter, G; Awick, E A; Ehlers, D K; Roberts, S A; Cooke, G; Burzynska, A Z; Voss, M W; Kramer, A F; McAuley, E

    2017-04-01

    Recent attention has highlighted the importance of reducing sedentary time for maintaining health and quality of life. However, it is unclear how changing sedentary behavior may influence executive functions and self-regulatory strategy use, which are vital for the long-term maintenance of a health behavior regimen. The purpose of this cross-sectional study is to examine the estimated self-regulatory and executive functioning effects of substituting 30 min of sedentary behavior with 30 min of light activity, moderate-to-vigorous physical activity (MVPA), or sleep in a sample of older adults. This study reports baseline data collected from low-active healthy older adults (N = 247, mean age 65.4 ± 4.6 years) recruited to participate in a 6-month randomized controlled exercise trial examining the effects of various modes of exercise on brain health and function. Each participant completed assessments of physical activity self-regulatory strategy use (i.e., self-monitoring, goal-setting, social support, reinforcement, time management, and relapse prevention) and executive functioning. Physical activity and sedentary behaviors were measured using accelerometers during waking hours for seven consecutive days at each time point. Isotemporal substitution analyses were conducted to examine the effect on self-regulation and executive functioning should an individual substitute sedentary time with light activity, MVPA, or sleep. The substitution of sedentary time with both sleep and MVPA influenced both self-regulatory strategy use and executive functioning. Sleep was associated with greater self-monitoring (B = .23, p = .02), goal-setting (B = .32, p < .01), and social support (B = .18, p = .01) behaviors. Substitution of sedentary time with MVPA was associated with higher accuracy on 2-item (B = .03, p = .01) and 3-item (B = .02, p = .04) spatial working memory tasks, and with faster reaction times on single (B = -23.12, p = .03) and mixed

  15. Online learning algorithm for time series forecasting suitable for low cost wireless sensor networks nodes.

    PubMed

    Pardo, Juan; Zamora-Martínez, Francisco; Botella-Rocamora, Paloma

    2015-04-21

    Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. To that end, this paper describes how to implement an Artificial Neural Network (ANN) algorithm in a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An online learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It trains the model with each new data sample that arrives, without saving large quantities of data in a historical database, i.e., without previous knowledge. To validate the approach, a simulation study against a Bayesian baseline model was run on a database from a real application in order to assess performance and accuracy. The core of the paper is a new algorithm, based on BP, which is described in detail; the challenge was how to implement a computationally demanding algorithm in a simple architecture with very few hardware resources.
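
    To make the online idea concrete, here is a minimal sketch of one-sample-at-a-time backpropagation for a tiny one-hidden-layer network that forecasts the next reading from a short window of past readings. The paper targets an 8051-class microcontroller in scarce memory; this NumPy version, and every name and rate in it, is purely illustrative.

    ```python
    import numpy as np

    class OnlineMLP:
        """Tiny 1-hidden-layer network trained one sample at a time (online BP)."""
        def __init__(self, n_in, n_hidden, lr=0.05, seed=0):
            rng = np.random.default_rng(seed)
            self.W1 = rng.standard_normal((n_hidden, n_in)) * 0.5
            self.b1 = np.zeros(n_hidden)
            self.w2 = rng.standard_normal(n_hidden) * 0.5
            self.b2 = 0.0
            self.lr = lr

        def predict(self, x):
            self.h = np.tanh(self.W1 @ x + self.b1)   # hidden activations
            return self.w2 @ self.h + self.b2

        def update(self, x, target):
            """One backpropagation step on a single (x, target) pair."""
            err = self.predict(x) - target
            grad_h = err * self.w2 * (1.0 - self.h ** 2)   # backprop through tanh
            self.w2 -= self.lr * err * self.h
            self.b2 -= self.lr * err
            self.W1 -= self.lr * np.outer(grad_h, x)
            self.b1 -= self.lr * grad_h
            return err

    # Forecast the next temperature from the last 3 readings, learning as data arrive.
    net = OnlineMLP(n_in=3, n_hidden=4)
    history = [21.0, 21.2, 21.5, 21.9, 22.1, 22.2]
    for i in range(len(history) - 3):
        net.update(np.array(history[i:i + 3]), history[i + 3])
    ```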

  16. Closed form and geometric algorithms for real-time control of an avatar

    SciTech Connect

    Semwall, S.K.; Hightower, R.; Stansfield, S.

    1995-12-31

    In a virtual environment with multiple participants, it is necessary that the user's actions be replicated by synthetic human forms. Whole-body digitizers would be the most realistic solution for capturing each participant's human form; however, even the best available digitizers are not interactive and are therefore not suitable for real-time interaction. Usually, a limited number of sensors are used as constraints on the synthetic human form, and inverse kinematics algorithms are applied to satisfy these sensor constraints. These algorithms result in slower interaction because of their iterative nature, especially when there are a large number of participants. To support real-time interaction in a virtual environment, there is a need for closed-form solutions and fast searching algorithms. In this paper, a new closed-form solution for the arms (and legs) is developed using two magnetic sensors. In developing this solution, we use the biomechanical relationship between the lower arm and the upper arm to provide an analytical, non-iterative solution. We also outline a solution for the whole human body that uses up to ten magnetic sensors to break the human skeleton into smaller kinematic chains. In developing our algorithms, we use knowledge of natural body postures to generate faster solutions for real-time interaction.

  17. Online Learning Algorithm for Time Series Forecasting Suitable for Low Cost Wireless Sensor Networks Nodes

    PubMed Central

    Pardo, Juan; Zamora-Martínez, Francisco; Botella-Rocamora, Paloma

    2015-01-01

    Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. To that end, this paper describes how to implement an Artificial Neural Network (ANN) algorithm in a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An online learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It trains the model with each new data sample that arrives, without saving large quantities of data in a historical database, i.e., without previous knowledge. To validate the approach, a simulation study against a Bayesian baseline model was run on a database from a real application in order to assess performance and accuracy. The core of the paper is a new algorithm, based on BP, which is described in detail; the challenge was how to implement a computationally demanding algorithm in a simple architecture with very few hardware resources. PMID:25905698

  18. A real-time phoneme counting algorithm and application for speech rate monitoring.

    PubMed

    Aharonson, Vered; Aharonson, Eran; Raichlin-Levi, Katia; Sotzianu, Aviv; Amir, Ofer; Ovadia-Blechman, Zehava

    2017-03-01

    Adults who stutter can learn to control and improve their speech fluency by modifying their speaking rate. Existing speech therapy technologies can assist this practice by monitoring speaking rate and providing feedback to the patient, but cannot provide an accurate, quantitative measurement of speaking rate. Moreover, most technologies are too complex and costly to be used for home practice. We developed an algorithm and a smartphone application that monitor a patient's speaking rate in real time and provide user-friendly feedback to both patient and therapist. Our speaking rate computation is performed by a phoneme counting algorithm which implements spectral transition measure extraction to estimate phoneme boundaries. The algorithm is implemented in real time in a mobile application that presents its results in a user-friendly interface. The application incorporates two modes: one provides the patient with visual feedback of his/her speech rate for self-practice and another provides the speech therapist with recordings, speech rate analysis and tools to manage the patient's practice. The algorithm's phoneme counting accuracy was validated on ten healthy subjects who read a paragraph at slow, normal and fast paces, and was compared to manual counting of speech experts. Test-retest and intra-counter reliability were assessed. Preliminary results indicate differences of -4% to 11% between automatic and human phoneme counting. Differences were largest for slow speech. The application can thus provide reliable, user-friendly, real-time feedback for speaking rate control practice.

  19. An efficient and robust algorithm for two dimensional time dependent incompressible Navier-Stokes equations: High Reynolds number flows

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1991-01-01

    An algorithm is presented for unsteady two-dimensional incompressible Navier-Stokes calculations. This algorithm is based on the fourth order partial differential equation for incompressible fluid flow which uses the streamfunction as the only dependent variable. The algorithm is second order accurate in both time and space. It uses a multigrid solver at each time step. It is extremely efficient with respect to the use of both CPU time and physical memory. It is extremely robust with respect to Reynolds number.

  20. Derivation and Testing of Computer Algorithms for Automatic Real-Time Determination of Space Vehicle Potentials in Various Plasma Environments

    DTIC Science & Technology

    1988-05-31

    [OCR-damaged DTIC record: only the report title (as above), the author, Stanley L. Spiegel, and the date, May 31, 1988, are recoverable from the scanned abstract.]

  1. A modular low-complexity ECG delineation algorithm for real-time embedded systems.

    PubMed

    Bote, Jose Manuel; Recas, Joaquin; Rincon, Francisco; Atienza, David; Hermida, Roman

    2017-02-17

    This work presents a new modular and low-complexity algorithm for the delineation of the different ECG waves (QRS, P and T peaks, onsets and ends). Involving a reduced number of operations per second and having a small memory footprint, this algorithm is intended to perform real-time delineation on resource-constrained embedded systems. The modular design allows the algorithm to automatically adjust the delineation quality at run time across a wide range of modes and sampling rates: from an ultra-low-power mode, used when no arrhythmia is detected, in which the ECG is sampled at low frequency, to a complete high-accuracy delineation mode, used in case of arrhythmia, in which the ECG is sampled at high frequency and all the ECG fiducial points are detected. The delineation algorithm has been adjusted using the QT database, providing very high sensitivity and positive predictivity, and validated with the MIT database. The errors in the delineation of all the fiducial points are below the tolerances given by the Common Standards for Electrocardiography (CSE) committee in the high-accuracy mode, except for the P-wave onset, for which the algorithm exceeds the agreed tolerance by only a fraction of the sample duration. The computational load for the ultra-low-power 8-MHz TI MSP430 series microcontroller ranges from 0.2 to 8.5% according to the mode used.

  2. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks

    PubMed Central

    Vestergaard, Christian L.; Génois, Mathieu

    2015-01-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
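
    The source of the speedup can be sketched for an SIS process: rather than testing every active contact in every snapshot, draw a single unit-rate exponential waiting "budget" and spend it against the summed event rate of successive snapshots; an event fires when the budget runs out. The sketch below follows that scheme only loosely (illustrative names; it fires at most one event per snapshot, unlike the full algorithm).

    ```python
    import random

    def temporal_gillespie_sis(contacts, beta, mu, dt, t_max, seed=1):
        """SIS contagion on a temporal network, temporal-Gillespie style.

        contacts : dict mapping snapshot index -> list of (u, v) edges
        active during that snapshot of duration dt.
        """
        rng = random.Random(seed)
        infected = {0}                          # seed the process at node 0
        tau = rng.expovariate(1.0)              # normalized waiting budget
        step = 0
        while step * dt < t_max and infected:
            events = []                         # (rate, node-to-flip) pairs
            for u, v in contacts.get(step, []):
                if (u in infected) != (v in infected):        # S-I contact
                    events.append((beta, v if u in infected else u))
            for i in infected:
                events.append((mu, i))          # recovery of node i
            total = sum(rate for rate, _ in events)
            if total * dt >= tau:               # budget exhausted: event fires
                r = rng.uniform(0.0, total)
                for rate, node in events:       # pick event proportional to rate
                    r -= rate
                    if r <= 0.0:
                        infected.symmetric_difference_update({node})  # flip state
                        break
                tau = rng.expovariate(1.0)      # draw the next budget
            else:
                tau -= total * dt               # spend budget, advance time
            step += 1
        return infected

    # Repeated snapshots of a 4-node line network.
    snaps = {k: [(0, 1), (1, 2), (2, 3)] for k in range(1000)}
    survivors = temporal_gillespie_sis(snaps, beta=0.5, mu=0.1, dt=0.1, t_max=100)
    ```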

  3. A linear-time algorithm for Gaussian and non-Gaussian trait evolution models.

    PubMed

    Ho, Lam si Tung; Ané, Cécile

    2014-05-01

    We developed a linear-time algorithm applicable to a large class of trait evolution models, for efficient likelihood calculations and parameter inference on very large trees. Our algorithm solves the traditional computational burden associated with two key terms, namely the determinant of the phylogenetic covariance matrix V and quadratic products involving the inverse of V. Applications include Gaussian models such as Brownian motion-derived models like Pagel's lambda, kappa, delta, and the early-burst model; Ornstein-Uhlenbeck models to account for natural selection with possibly varying selection parameters along the tree; as well as non-Gaussian models such as phylogenetic logistic regression, phylogenetic Poisson regression, and phylogenetic generalized linear mixed models. Outside of phylogenetic regression, our algorithm also applies to phylogenetic principal component analysis, phylogenetic discriminant analysis or phylogenetic prediction. The computational gain opens up new avenues for complex models or extensive resampling procedures on very large trees. We identify the class of models that our algorithm can handle as all models whose covariance matrix has a 3-point structure. We further show that this structure uniquely identifies a rooted tree whose branch lengths parametrize the trait covariance matrix, which acts as a similarity matrix. The new algorithm is implemented in the R package phylolm, including functions for phylogenetic linear regression and phylogenetic logistic regression.

  4. Rapid prototyping of update algorithm of discrete Fourier transform for real-time signal processing

    NASA Astrophysics Data System (ADS)

    Kakad, Yogendra P.; Sherlock, Barry G.; Chatapuram, Krishnan V.; Bishop, Stephen

    2001-10-01

    An algorithm developed in the companion paper updates an existing DFT to represent the new data series that results when a new signal point is received. Updating the DFT in this way uses less computation than directly evaluating the DFT with the FFT algorithm, reducing the computational order by a factor of log2 N. The algorithm works in the presence of a data window function and supports the rectangular, split-triangular, Hanning, Hamming, and Blackman windows. In this paper, a hardware implementation of this algorithm using FPGA technology is outlined. Unlike traditional fully customized VLSI circuits, FPGAs represent a technical breakthrough in the industry: an FPGA implements thousands of gates of logic in a single IC chip, and it can be programmed by users at their site in a few seconds or less, depending on the type of device used. The risk is low and the development time is short. These advantages have made FPGAs very popular for rapid prototyping of algorithms in digital communication, digital signal processing, and image processing. Our paper addresses the related issues of implementation using a hardware description language in the development of the design and the subsequent downloading onto the programmable hardware chip.
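
    The flavor of such an update is easiest to see for the rectangular window, where the classic sliding-DFT recurrence refreshes all N bins in O(N) work per incoming sample instead of an O(N log N) FFT. A minimal sketch follows; note the companion paper additionally handles the non-rectangular windows listed above, which this example does not.

    ```python
    import numpy as np

    def sliding_dft_update(X, x_old, x_new):
        """O(N) update of an N-point DFT when the window slides by one sample.

        X     : current DFT bins of the previous window (length N, complex).
        x_old : sample leaving the window; x_new : sample entering it.
        Each bin k obeys X_k <- (X_k - x_old + x_new) * exp(2j*pi*k/N).
        """
        N = len(X)
        twiddle = np.exp(2j * np.pi * np.arange(N) / N)
        return (X - x_old + x_new) * twiddle

    # Verify against a direct FFT on a sliding window of random data.
    rng = np.random.default_rng(0)
    sig = rng.standard_normal(64)
    N = 16
    X = np.fft.fft(sig[:N])
    for n in range(N, 40):
        X = sliding_dft_update(X, sig[n - N], sig[n])
        assert np.allclose(X, np.fft.fft(sig[n - N + 1 : n + 1]))
    ```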

  5. Performance of a wavelength-diversified FSO tracking algorithm for real-time battlefield communications

    NASA Astrophysics Data System (ADS)

    Al-Akkoumi, Mouhammad K.; Harris, Alan; Huck, Robert C.; Sluss, James J., Jr.; Giuma, Tayeb A.

    2008-02-01

    Free-space optical (FSO) communications links are envisioned as a viable option for the provision of temporary high-bandwidth communication links between moving platforms, especially for deployment in battlefield situations. For successful deployment in such real-time environments, fast and accurate alignment and tracking of the FSO equipment is essential. In this paper, a two-wavelength diversity scheme using 1.55 μm and 10 μm is investigated in conjunction with a previously described tracking algorithm to maintain line-of-sight connectivity in battlefield scenarios. An analytical model of a mobile FSO communications link is described. Following the analytical model, simulation results are presented for an FSO link between an unmanned aerial surveillance vehicle, the Global Hawk, and a mobile ground vehicle, an M1 Abrams Main Battle Tank. The scenario is analyzed under varying weather conditions to verify that continuous connectivity is available through the tracking algorithm. Simulation results are generated to describe the performance of the tracking algorithm with respect to both received optical power levels and variations in beam divergence. Implications of these power and divergence variations for future tracking-algorithm development are described.

  6. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks.

    PubMed

    Vestergaard, Christian L; Génois, Mathieu

    2015-10-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling.

  7. Real-time implementation of a multispectral mine target detection algorithm

    NASA Astrophysics Data System (ADS)

    Samson, Joseph W.; Witter, Lester J.; Kenton, Arthur C.; Holloway, John H., Jr.

    2003-09-01

    Spatial-spectral anomaly detection (the "RX Algorithm") has been exploited on the USMC's Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) and several associated technology base studies, and has been found to be a useful method for the automated detection of surface-emplaced antitank land mines in airborne multispectral imagery. RX is a complex image processing algorithm that involves the direct spatial convolution of a target/background mask template over each multispectral image, coupled with a spatially variant background spectral covariance matrix estimation and inversion. The RX throughput on the ATD was about 38X real time using a single Sun UltraSparc system. An effort to demonstrate RX in real time began in FY01. We now report the development and demonstration of a Field Programmable Gate Array (FPGA) solution that achieves a real-time implementation of the RX algorithm at video rates using COBRA ATD data. The approach uses an Annapolis Microsystems Firebird PMC card containing a Xilinx XCV2000E FPGA with over 2,500,000 logic gates and 18 MBytes of memory. A prototype system was configured using a Tek Microsystems VME board with dual PowerPC G4 processors and two PMC slots. The RX algorithm was translated from its C programming implementation into the VHDL language and synthesized into gates that were loaded into the FPGA. The VHDL/synthesizer approach allows key RX parameters to be quickly changed and a new implementation automatically generated. Reprogramming the FPGA is done rapidly and in-circuit. Implementation of the RX algorithm in a single FPGA is a major first step toward achieving real-time land mine detection.
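
    At its core the RX score is a Mahalanobis distance of each pixel spectrum from an estimated background. The sketch below uses a single global mean and covariance for brevity, whereas the COBRA implementation estimates them locally under a sliding target/background mask; the function name and the toy data are illustrative.

    ```python
    import numpy as np

    def rx_scores(cube):
        """RX anomaly score for each pixel of a multispectral image.

        cube : (rows, cols, bands) array.  Scores are the Mahalanobis
        distance of each pixel spectrum from the background mean; large
        values flag spectral anomalies.  (Global background statistics
        here; the COBRA work estimates them under a spatial mask.)
        """
        r, c, b = cube.shape
        pixels = cube.reshape(-1, b)
        mu = pixels.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
        centered = pixels - mu
        d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
        return d2.reshape(r, c)

    # Example: a 4-band image with one artificially anomalous pixel.
    img = np.random.default_rng(2).normal(size=(32, 32, 4))
    img[10, 20] += 6.0
    assert rx_scores(img).argmax() == 10 * 32 + 20
    ```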

  8. A Scheduling Algorithm for Minimizing Exclusive Window Durations in Time-Triggered Controller Area Network

    NASA Astrophysics Data System (ADS)

    Ryu, Minsoo

    Time-Triggered Controller Area Network (TTCAN) is widely accepted as a viable solution for real-time communication systems such as in-vehicle communications. However, although TTCAN has been designed to support both periodic and sporadic real-time messages, previous studies mostly focused on providing deterministic real-time guarantees for periodic messages while barely addressing the performance of sporadic messages. In this paper, we present an O(n^2) scheduling algorithm that minimizes the maximum duration of the exclusive windows occupied by periodic messages, thereby minimizing the worst-case scheduling delays experienced by sporadic messages.

  9. A genetic algorithm for dynamic inbound ordering and outbound dispatching problem with delivery time windows

    NASA Astrophysics Data System (ADS)

    Kim, Byung Soo; Lee, Woon-Seek; Koh, Shiegheun

    2012-07-01

    This article considers an inbound ordering and outbound dispatching problem for a single product in a third-party warehouse, where the demands are dynamic over a discrete and finite time horizon and, moreover, each demand has a time window in which it must be satisfied. Replenishing orders are shipped in containers and the freight cost is proportional to the number of containers used. The problem is classified into two cases, i.e., the non-split demand case and the split demand case, and a mathematical model for each case is presented. An in-depth analysis of the models shows that they are very complicated and that it is difficult to find optimal solutions as the problem size becomes large. Therefore, genetic algorithm (GA)-based heuristic approaches are designed to solve the problems in a reasonable time. Finally, to validate and evaluate the algorithms, some computational experiments are conducted.

  10. Iris unwrapping using the Bresenham circle algorithm for real-time iris recognition

    NASA Astrophysics Data System (ADS)

    Carothers, Matthew T.; Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.

    2015-02-01

    An efficient parallel architecture design for the iris unwrapping process in a real-time iris recognition system using the Bresenham Circle Algorithm is presented in this paper. Based on the characteristics of the model parameters, this algorithm was chosen over the widely used polar conversion technique as the iris unwrapping model. The architecture design is parallelized to increase the throughput of the system and is suitable for processing an input image of 320 × 240 pixels in real time using Field Programmable Gate Array (FPGA) technology. Quartus software is used to implement, verify, and analyze the design's performance using the VHSIC Hardware Description Language. The system's predicted processing time is faster than that of the modern iris unwrapping techniques in use today.
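
    The appeal of the Bresenham (midpoint) circle algorithm for an FPGA pipeline is that it enumerates the pixels of each sampling circle with integer-only arithmetic. A minimal sketch of the point generator follows; the center, radius, and usage are illustrative, not values from the paper.

    ```python
    def bresenham_circle(cx, cy, r):
        """Integer-only enumeration of the pixels on a circle of radius r.

        Returns the perimeter points; in iris unwrapping these index the
        pixels sampled along concentric circles between pupil and limbus.
        """
        points = []
        x, y, d = 0, r, 3 - 2 * r
        while x <= y:
            # Each computed octant point is mirrored into the other seven.
            for px, py in [(x, y), (y, x), (-x, y), (-y, x),
                           (x, -y), (y, -x), (-x, -y), (-y, -x)]:
                points.append((cx + px, cy + py))
            if d < 0:
                d += 4 * x + 6
            else:
                d += 4 * (x - y) + 10
                y -= 1
            x += 1
        return points

    # Unwrap one ring: read the circle's pixels out in angular order (sketch).
    ring = bresenham_circle(160, 120, 40)
    ```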

  11. Efficient Constraint Handling in Electromagnetism-Like Algorithm for Traveling Salesman Problem with Time Windows

    PubMed Central

    Yurtkuran, Alkın

    2014-01-01

    The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints associated with the corresponding time windows of customers are introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing it to that of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms. PMID:24723834

  12. Efficient constraint handling in electromagnetism-like algorithm for traveling salesman problem with time windows.

    PubMed

    Yurtkuran, Alkın; Emel, Erdal

    2014-01-01

    The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints associated with the corresponding time windows of customers are introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing it to that of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms.

  13. A fast density-based clustering algorithm for real-time Internet of Things stream.

    PubMed

    Amini, Amineh; Saboohi, Hadi; Wah, Teh Ying; Herawan, Tutut

    2014-01-01

    Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class in clustering data streams: they can detect arbitrarily shaped clusters, handle outliers, and do not need the number of clusters in advance. Therefore, density-based clustering is a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering within limited time remains a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has fast processing time, making it applicable to real-time IoT applications. Experimental results show that the proposed approach obtains high-quality results with low computation time on real and synthetic datasets.

  14. Time-step Considerations in Particle Simulation Algorithms for Coulomb Collisions in Plasmas

    SciTech Connect

    Cohen, B I; Dimits, A; Friedman, A; Caflisch, R

    2009-10-29

    The accuracy of first-order Euler and higher-order time-integration algorithms for grid-based Langevin-equation collision models is assessed in a specific relaxation test problem. We show that statistical noise errors can overshadow time-step errors and argue that statistical noise errors can be conflated with time-step effects. Using a higher-order integration scheme may not achieve any benefit in accuracy for examples of practical interest. We also investigate the collisional relaxation of an initial electron-ion relative drift and the collisional relaxation to a resistive steady state in which a quasi-steady current is driven by a constant applied electric field, as functions of the time step used to resolve the collision processes using binary and grid-based, test-particle Langevin-equation models. We compare results from two grid-based Langevin-equation collision algorithms to results from a binary collision algorithm for modeling electron-ion collisions. Some guidance is provided regarding how large a time step can be used, compared to the inverse of the characteristic collision frequency, for specific relaxation processes.

  15. A Real-Time Algorithm for the Approximation of Level-Set-Based Curve Evolution

    PubMed Central

    Shi, Yonggang; Karl, William Clem

    2010-01-01

    In this paper, we present a complete and practical algorithm for the approximation of level-set-based curve evolution suitable for real-time implementation. In particular, we propose a two-cycle algorithm to approximate level-set-based curve evolution without the need of solving partial differential equations (PDEs). Our algorithm is applicable to a broad class of evolution speeds that can be viewed as composed of a data-dependent term and a curve smoothness regularization term. We achieve curve evolution corresponding to such evolution speeds by separating the evolution process into two different cycles: one cycle for the data-dependent term and a second cycle for the smoothness regularization. The smoothing term is derived from a Gaussian filtering process. In both cycles, the evolution is realized through a simple element switching mechanism between two linked lists that implicitly represent the curve using an integer-valued level-set function. By careful construction, all the key evolution steps require only integer operations. A consequence is that we obtain significant computation speedups compared to exact PDE-based approaches while obtaining excellent agreement with these methods for problems of practical engineering interest. In particular, the resulting algorithm is fast enough for use in real-time video processing applications, which we demonstrate through several image segmentation and video tracking experiments. PMID:18390371

  16. The design and hardware implementation of a low-power real-time seizure detection algorithm.

    PubMed

    Raghunathan, Shriram; Gupta, Sumeet K; Ward, Matthew P; Worth, Robert M; Roy, Kaushik; Irazoqui, Pedro P

    2009-10-01

    Epilepsy affects more than 1% of the world's population. Responsive neurostimulation is emerging as an alternative therapy for the 30% of the epileptic patient population that does not benefit from pharmacological treatment. Efficient seizure detection algorithms will enable closed-loop epilepsy prostheses by stimulating the epileptogenic focus within an early onset window. Critically, this is expected to reduce neuronal desensitization over time and lead to longer-term device efficacy. This work presents a novel event-based seizure detection algorithm along with a low-power digital circuit implementation. Hippocampal depth-electrode recordings from six kainate-treated rats are used to validate the algorithm and hardware performance in this preliminary study. The design process illustrates crucial trade-offs in translating mathematical models into hardware implementations and validates statistical optimizations made with empirical data analyses on results obtained using a real-time functioning hardware prototype. Using quantitatively predicted thresholds from the depth-electrode recordings, the auto-updating algorithm performs with an average sensitivity and selectivity of 95.3 +/- 0.02% and 88.9 +/- 0.01% (mean +/- SE(alpha = 0.05)), respectively, on untrained data with a detection delay of 8.5 s [5.97, 11.04] from electrographic onset. The hardware implementation is shown feasible using CMOS circuits consuming under 350 nW of power from a 250 mV supply voltage from simulations on the MIT 180 nm SOI process.

  17. Time-accurate unstructured grid algorithms for the compressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Okong'o, Nora Anyango

    Unstructured grid algorithms for the solution of the finite volume form of the unsteady compressible Navier-Stokes equations have been developed. The algorithms employ triangular cells in two dimensions and tetrahedral cells in three dimensions. Cell-averaged values are stored at the centroid of each cell, in a cell-centered storage scheme. Inviscid flux computations are performed by applying a Riemann solver across each face, the values at the points on the faces being obtained by function reconstruction from the cell-averaged values. The viscous fluxes and heat transfer are obtained by application of Gauss' theorem. The first unstructured grid algorithm is a two-dimensional implicit algorithm for laminar flows. Tests using flow into a supersonic compression corner showed that preconditioning in the iterative linear solver dramatically reduced the CPU time. Computations were then performed for a NACA0012 airfoil pitching about the quarter-chord at a freestream Mach number M∞ = 0.2 and Reynolds numbers Re_c = 10^4 and 2 × 10^4 at a dimensionless pitching rate of 0.2. The results for Re_c = 10^4 are in excellent agreement with previous computations using an explicit unstructured Navier-Stokes algorithm. New results for Re_c = 2 × 10^4 indicate that the principal effect of increasing Reynolds number is to reduce the angle at which the primary recirculation region appears, and to cause it to form closer to the leading edge. This trend, confirmed by a grid refinement study, is consistent with previous results obtained at M∞ = 0.5. The second unstructured grid algorithm is a three-dimensional explicit algorithm for turbulent flows. Function reconstruction via a least squares method capable of second- or third-order accuracy was implemented. Tests on the nonlinear propagation of an acoustic wave showed improved accuracy using third-order schemes but a substantial CPU-time cost. However, the second-order least squares is more accurate than the previous second-order scheme

  18. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data, with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  19. Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae

    NASA Technical Reports Server (NTRS)

    Rosu, Grigore; Havelund, Klaus

    2001-01-01

    The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
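
    The generated algorithms share a simple shape: walk the finite trace from its last event to its first, keeping truth values of every subformula for the current and next positions only, so memory stays constant in the trace length. The sketch below hand-writes that dynamic program as an interpreter rather than generating code as the paper does; the formula encoding, event format, and finite-trace semantics chosen here are assumptions.

    ```python
    def monitor_ltl(formula, trace):
        """Check an LTL formula on a finite trace by backwards dynamic programming.

        formula : nested tuples, e.g. ('U', ('ap', 'req'), ('ap', 'ack')),
                  with operators 'ap', 'not', 'and', 'X', 'U'.
        trace   : list of sets of atomic propositions, one set per event.
        Memory is O(|formula|): only the 'now' and 'next' DP columns live.
        """
        subs = []                                # subformulas, children first
        def collect(f):
            for child in f[1:]:
                if isinstance(child, tuple):
                    collect(child)
            if f not in subs:
                subs.append(f)
        collect(formula)

        nxt = {f: False for f in subs}           # values just past the end
        for props in reversed(trace):            # one linear backwards pass
            now = {}
            for f in subs:
                op = f[0]
                if op == 'ap':
                    now[f] = f[1] in props
                elif op == 'not':
                    now[f] = not now[f[1]]
                elif op == 'and':
                    now[f] = now[f[1]] and now[f[2]]
                elif op == 'X':
                    now[f] = nxt[f[1]]
                elif op == 'U':                  # f1 U f2, finite-trace semantics
                    now[f] = now[f[2]] or (now[f[1]] and nxt[f])
            nxt = now
        return nxt[formula]

    # 'req U ack' holds on a trace where req persists until ack occurs.
    assert monitor_ltl(('U', ('ap', 'req'), ('ap', 'ack')),
                       [{'req'}, {'req'}, {'ack'}])
    ```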

  20. Multiple-Time-Series Clinical Data Processing for Classification With Merging Algorithm and Statistical Measures.

    PubMed

    Tseng, Yi-Ju; Ping, Xiao-Ou; Liang, Ja-Der; Yang, Pei-Ming; Huang, Guan-Tarn; Lai, Feipei

    2015-05-01

    A description of patient conditions should consist of the changes in and combination of clinical measures. Traditional data-processing methods and classification algorithms might cause clinical information to disappear and reduce prediction performance. To improve the accuracy of clinical-outcome prediction by using multiple measurements, a new multiple-time-series data-processing algorithm with period merging is proposed. Clinical data from 83 hepatocellular carcinoma (HCC) patients were used in this research. Their clinical reports from a defined period were merged using the proposed merging algorithm, and statistical measures were also calculated. After data processing, multiple measurements support vector machine (MMSVM) with radial basis function (RBF) kernels was used as a classification method to predict HCC recurrence. A multiple measurements random forest regression (MMRF) was also used as an additional evaluation/classification method. To evaluate the data-merging algorithm, the performance of prediction using processed multiple measurements was compared to prediction using single measurements. The results of recurrence prediction by MMSVM with RBF using multiple measurements and a period of 120 days (accuracy 0.771, balanced accuracy 0.603) were optimal, and their superiority to the results obtained using single measurements was statistically significant (accuracy 0.626, balanced accuracy 0.459, P < 0.01). In the cases of MMRF, the prediction results obtained after applying the proposed merging algorithm were also better than single-measurement results (P < 0.05). The results show that the performance of HCC-recurrence prediction was significantly improved when the proposed data-processing algorithm was used, and that multiple measurements can be of greater value than single measurements.

  1. Improving Computational Efficiency of Model Predictive Control Genetic Algorithms for Real-Time Decision Support

    NASA Astrophysics Data System (ADS)

    Minsker, B. S.; Zimmer, A. L.; Ostfeld, A.; Schmidt, A.

    2014-12-01

    Enabling real-time decision support, particularly under conditions of uncertainty, requires computationally efficient algorithms that can rapidly generate recommendations. In this paper, a suite of model predictive control (MPC) genetic algorithms are developed and tested offline to explore their value for reducing CSOs during real-time use in a deep-tunnel sewer system. MPC approaches include the micro-GA, the probability-based compact GA, and domain-specific GA methods that reduce the number of decision variable values analyzed within the sewer hydraulic model, thus reducing algorithm search space. Minimum fitness and constraint values achieved by all GA approaches, as well as the computational times required to reach them, are compared with those obtained using large population sizes and long convergence times. Optimization results for a subset of the Chicago combined sewer system indicate that genetic algorithm variations with coarse decision variable representation, eventually transitioning to the entire range of decision variable values, are most efficient at addressing the CSO control problem. Although diversity-enhancing micro-GAs evaluate a larger search space and exhibit shorter convergence times, these representations do not reach minimum fitness and constraint values. The domain-specific GAs prove to be the most efficient and are used to test CSO sensitivity to energy costs, CSO penalties, and pressurization constraint values. The results show that CSO volumes are highly dependent on the tunnel pressurization constraint, with reductions of 13% to 77% possible with less conservative operational strategies. Because current management practices may not account for varying costs at CSO locations and electricity rate changes in the summer and winter, the sensitivity of the results is evaluated for variable seasonal and diurnal CSO penalty costs and electricity-related system maintenance costs, as well as different sluice gate constraint levels. These findings indicate

  2. A multiple time stepping algorithm for efficient multiscale modeling of platelets flowing in blood plasma

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Zhang, Na; Deng, Yuefan; Bluestein, Danny

    2015-03-01

    We developed a multiple time-stepping (MTS) algorithm for multiscale modeling of the dynamics of platelets flowing in viscous blood plasma. This MTS algorithm improves considerably the computational efficiency without significant loss of accuracy. This study of the dynamic properties of flowing platelets employs a combination of the dissipative particle dynamics (DPD) and the coarse-grained molecular dynamics (CGMD) methods to describe the dynamic microstructures of deformable platelets in response to extracellular flow-induced stresses. The disparate spatial scales between the two methods are handled by a hybrid force field interface. However, the disparity in temporal scales between the DPD and CGMD that requires time stepping at microseconds and nanoseconds respectively, represents a computational challenge that may become prohibitive. Classical MTS algorithms manage to improve computing efficiency by multi-stepping within DPD or CGMD for up to one order of magnitude of scale differential. In order to handle 3-4 orders of magnitude disparity in the temporal scales between DPD and CGMD, we introduce a new MTS scheme hybridizing DPD and CGMD by utilizing four different time stepping sizes. We advance the fluid system at the largest time step, the fluid-platelet interface at a middle timestep size, and the nonbonded and bonded potentials of the platelet structural system at two smallest timestep sizes. Additionally, we introduce parameters to study the relationship of accuracy versus computational complexities. The numerical experiments demonstrated 3000x reduction in computing time over standard MTS methods for solving the multiscale model. This MTS algorithm establishes a computationally feasible approach for solving a particle-based system at multiple scales for performing efficient multiscale simulations.
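
    Structurally, such an integrator nests loops so that cheap, fast-varying forces are evaluated at every small step while expensive, slow-varying ones are applied as half-impulses around the faster inner loops. The r-RESPA-style skeleton below, with four illustrative tiers and a toy two-spring system, conveys the shape of the scheme but is not the authors' DPD-CGMD code.

    ```python
    import numpy as np

    def mts_integrate(x, v, force_tiers, dt_outer, subdiv, n_outer):
        """Multiple time stepping with nested half-kicks (r-RESPA skeleton).

        force_tiers : list of force functions, slowest-varying first.
        subdiv      : substeps of each tier per step of the tier above,
                      e.g. [1, 2, 2, 5] yields four nested timestep sizes.
        """
        def tier(level, dt, x, v):
            if level == len(force_tiers) - 1:         # innermost: kick-drift-kick
                v += 0.5 * dt * force_tiers[level](x)
                x += dt * v
                v += 0.5 * dt * force_tiers[level](x)
                return x, v
            v += 0.5 * dt * force_tiers[level](x)     # opening half-kick (slow force)
            for _ in range(subdiv[level + 1]):
                x, v = tier(level + 1, dt / subdiv[level + 1], x, v)
            v += 0.5 * dt * force_tiers[level](x)     # closing half-kick
            return x, v

        for _ in range(n_outer):
            x, v = tier(0, dt_outer, x, v)
        return x, v

    # Toy system: one particle under soft slow springs plus a stiff fast spring.
    forces = [lambda x: -0.01 * x, lambda x: -0.1 * x,
              lambda x: -1.0 * x, lambda x: -50.0 * x]
    x, v = mts_integrate(np.array([1.0]), np.array([0.0]),
                         forces, dt_outer=0.1, subdiv=[1, 2, 2, 5], n_outer=100)
    ```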

  3. A Multiple Time Stepping Algorithm for Efficient Multiscale Modeling of Platelets Flowing in Blood Plasma.

    PubMed

    Zhang, Peng; Zhang, Na; Deng, Yuefan; Bluestein, Danny

    2015-03-01

    We developed a multiple time-stepping (MTS) algorithm for multiscale modeling of the dynamics of platelets flowing in viscous blood plasma. This MTS algorithm improves considerably the computational efficiency without significant loss of accuracy. This study of the dynamic properties of flowing platelets employs a combination of the dissipative particle dynamics (DPD) and the coarse-grained molecular dynamics (CGMD) methods to describe the dynamic microstructures of deformable platelets in response to extracellular flow-induced stresses. The disparate spatial scales between the two methods are handled by a hybrid force field interface. However, the disparity in temporal scales between the DPD and CGMD that requires time stepping at microseconds and nanoseconds respectively, represents a computational challenge that may become prohibitive. Classical MTS algorithms manage to improve computing efficiency by multi-stepping within DPD or CGMD for up to one order of magnitude of scale differential. In order to handle 3-4 orders of magnitude disparity in the temporal scales between DPD and CGMD, we introduce a new MTS scheme hybridizing DPD and CGMD by utilizing four different time stepping sizes. We advance the fluid system at the largest time step, the fluid-platelet interface at a middle timestep size, and the nonbonded and bonded potentials of the platelet structural system at two smallest timestep sizes. Additionally, we introduce parameters to study the relationship of accuracy versus computational complexities. The numerical experiments demonstrated 3000x reduction in computing time over standard MTS methods for solving the multiscale model. This MTS algorithm establishes a computationally feasible approach for solving a particle-based system at multiple scales for performing efficient multiscale simulations.

  4. A Multiple Time Stepping Algorithm for Efficient Multiscale Modeling of Platelets Flowing in Blood Plasma

    PubMed Central

    Zhang, Peng; Zhang, Na; Deng, Yuefan; Bluestein, Danny

    2015-01-01

    We developed a multiple time-stepping (MTS) algorithm for multiscale modeling of the dynamics of platelets flowing in viscous blood plasma. This MTS algorithm improves considerably the computational efficiency without significant loss of accuracy. This study of the dynamic properties of flowing platelets employs a combination of the dissipative particle dynamics (DPD) and the coarse-grained molecular dynamics (CGMD) methods to describe the dynamic microstructures of deformable platelets in response to extracellular flow-induced stresses. The disparate spatial scales between the two methods are handled by a hybrid force field interface. However, the disparity in temporal scales between the DPD and CGMD that requires time stepping at microseconds and nanoseconds respectively, represents a computational challenge that may become prohibitive. Classical MTS algorithms manage to improve computing efficiency by multi-stepping within DPD or CGMD for up to one order of magnitude of scale differential. In order to handle 3–4 orders of magnitude disparity in the temporal scales between DPD and CGMD, we introduce a new MTS scheme hybridizing DPD and CGMD by utilizing four different time stepping sizes. We advance the fluid system at the largest time step, the fluid-platelet interface at a middle timestep size, and the nonbonded and bonded potentials of the platelet structural system at two smallest timestep sizes. Additionally, we introduce parameters to study the relationship of accuracy versus computational complexities. The numerical experiments demonstrated 3000x reduction in computing time over standard MTS methods for solving the multiscale model. This MTS algorithm establishes a computationally feasible approach for solving a particle-based system at multiple scales for performing efficient multiscale simulations. PMID:25641983

  5. A lightweight messaging-based distributed processing and workflow execution framework for real-time and big data analysis

    NASA Astrophysics Data System (ADS)

    Laban, Shaban; El-Desouky, Aly

    2014-05-01

    To achieve rapid, simple and reliable parallel processing of different types of tasks and big-data processing on any compute cluster, a lightweight messaging-based distributed application processing and workflow execution framework model is proposed. The framework is based on Apache ActiveMQ and the Simple (or Streaming) Text Oriented Message Protocol (STOMP). ActiveMQ, a popular and powerful open-source messaging and integration-patterns server with persistence and scheduler capabilities, acts as the message broker in the framework. STOMP provides an interoperable wire format that allows framework programs to interact with each other and with ActiveMQ easily. In order to use the message broker efficiently, a unified message and topic naming pattern is utilized. Only three Python programs and a simple library, used to unify and simplify the use of ActiveMQ and the STOMP protocol, are needed to use the framework. A watchdog program is used to monitor, remove, add, start and stop any machine and/or its different tasks when necessary. On every machine a single dedicated zookeeper program starts the different functions or tasks (stompShell instances) needed for executing the user's required workflow. The stompShell instances execute workflow jobs based on received messages. A well-defined, simple and flexible message structure, based on JavaScript Object Notation (JSON), is used to build any complex workflow system; JSON is also used for configuration and for communication between machines and programs. The framework is platform independent, and although it is built in Python, the actual workflow programs or jobs can be implemented in any programming language. The generic framework can be used in small national data centres for processing seismological and radionuclide data received from the International Data Centre (IDC) of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO).
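
    The messaging pattern can be illustrated with the third-party stomp.py client: a worker subscribes to a task queue on the ActiveMQ broker and executes JSON-described jobs. The queue name, JSON fields, and credentials below are illustrative, not those of the framework itself.

    ```python
    import json
    import stomp   # third-party STOMP client (pip install stomp.py, v8+ API)

    class WorkflowWorker(stomp.ConnectionListener):
        """Executes JSON-described jobs received from an ActiveMQ queue."""
        def on_message(self, frame):
            job = json.loads(frame.body)
            print(f"running task {job['task']} with args {job['args']}")

    conn = stomp.Connection([('localhost', 61613)])    # broker host, STOMP port
    conn.set_listener('worker', WorkflowWorker())
    conn.connect('admin', 'admin', wait=True)
    conn.subscribe(destination='/queue/machine1.tasks', id=1, ack='auto')

    # A coordinator enqueues a job as a JSON message:
    conn.send(destination='/queue/machine1.tasks',
              body=json.dumps({'task': 'process_waveform', 'args': ['station_A']}))
    ```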

  6. The Semi-implicit Time-stepping Algorithm in MH4D

    NASA Astrophysics Data System (ADS)

    Vadlamani, Srinath; Shumlak, Uri; Marklin, George; Meier, Eric; Lionello, Roberto

    2006-10-01

    The Plasma Science and Innovation Center (PSI Center) at the University of Washington is developing MHD codes to accurately model Emerging Concept (EC) devices. An examination of the semi-implicit time stepping algorithm implemented in the tetrahedral mesh MHD simulation code MH4D is presented. The time steps for standard explicit methods, which are constrained by the Courant-Friedrichs-Lewy (CFL) condition, are typically small for simulations of EC experiments due to the large Alfven speed. The CFL constraint is more severe with a tetrahedral mesh because of the irregular cell geometry. The semi-implicit algorithm [1] removes the fast-wave constraint, thus allowing for larger time steps. We will present the implementation of this algorithm, numerical results for test problems in simple geometry, and its effectiveness in simulations of complex geometry similar to the ZaP [2] experiment at the University of Washington. References: [1] Douglas S. Harned and D. D. Schnack, Semi-implicit method for long time scale magnetohydrodynamic computations in three dimensions, JCP, Volume 65, Issue 1, July 1986, Pages 57-70. [2] U. Shumlak, B. A. Nelson, R. P. Golingo, S. L. Jackson, E. A. Crawford, and D. J. Den Hartog, Sheared flow stabilization experiments in the ZaP flow Z-pinch, Phys. Plasmas 10, 1683 (2003).
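
    For context, a small sketch of the explicit CFL limit that the semi-implicit scheme relaxes, using the standard Alfven speed v_A = B / sqrt(mu0 * rho); the field, density and cell-size values are illustrative only:

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi          # vacuum permeability [H/m]

    def alfven_cfl_dt(B, rho, dx_min, courant=0.5):
        """Explicit CFL-limited time step for the fastest Alfven wave.

        B: magnetic field strength [T], rho: mass density [kg/m^3],
        dx_min: smallest cell size in the (tetrahedral) mesh [m].
        """
        v_alfven = B / np.sqrt(MU0 * rho)
        return courant * dx_min / v_alfven

    # Illustrative numbers only: strong field, low density, fine irregular cell.
    print(alfven_cfl_dt(B=0.5, rho=1e-7, dx_min=1e-3))  # -> dt ~ 1e-10 s
    ```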

  7. Continuous Glucose Monitoring: Real-Time Algorithms for Calibration, Filtering, and Alarms

    PubMed Central

    Bequette, B. Wayne

    2010-01-01

    Algorithms for real-time use in continuous glucose monitors are reviewed, including calibration, filtering of noisy signals, glucose predictions for hypoglycemic and hyperglycemic alarms, compensation for the time lag between capillary blood glucose and the sensor signal, and fault detection for sensor degradation and dropouts. A tutorial on Kalman filtering for real-time estimation, prediction, and lag compensation is presented and demonstrated via simulation examples. Only a limited number of fault detection methods for signal degradation and dropout have been published, making that an important area for future work. PMID:20307402
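
    A minimal sketch of the kind of Kalman filter discussed in the tutorial: a two-state (glucose level and rate of change) model used for smoothing and short-horizon prediction. The model matrices and noise variances are illustrative assumptions, not values from the review:

    ```python
    import numpy as np

    # Constant-velocity state model: x = [glucose, d(glucose)/dt].
    dt = 5.0                                   # minutes between CGM samples
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # we only measure glucose
    Q = np.diag([0.1, 0.01])                   # process noise (assumed)
    R = np.array([[4.0]])                      # sensor noise variance (assumed)

    def kalman_step(x, P, z):
        """One predict/update cycle; returns filtered state and covariance."""
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = np.array([100.0, 0.0]), np.eye(2)
    for z in [102.0, 98.0, 105.0, 110.0]:      # noisy CGM readings (mg/dL)
        x, P = kalman_step(x, P, z)
    # 30-minute-ahead prediction, e.g. for a hyperglycemia alarm:
    print(x[0] + x[1] * 30.0)
    ```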

  8. Statistical analysis of piloted simulation of real time trajectory optimization algorithms

    NASA Technical Reports Server (NTRS)

    Price, D. B.

    1982-01-01

    A simulation of time-optimal intercept algorithms for on-board computation of control commands is described. The effects of three different display modes and two different computation modes on the pilots' ability to intercept a moving target in minimum time were tested. Both computation modes employed singular perturbation theory to help simplify the two-point boundary value problem associated with trajectory optimization. Target intercept time was affected by both the display and computation modes chosen, but the display mode chosen was the only significant influence on the miss distance.

  9. Building a better leapfrog. [an algorithm for ensuring time symmetry in any integration scheme]

    NASA Technical Reports Server (NTRS)

    Hut, Piet; Makino, Jun; Mcmillan, Steve

    1995-01-01

    In stellar dynamical computer simulations, as well as other types of simulations using particles, time step size is often held constant in order to guarantee a high degree of energy conservation. In many applications, allowing the time step size to change in time can offer a great saving in computational cost, but variable-size time steps usually imply a substantial degradation in energy conservation. We present a 'meta-algorithm' for choosing time steps in such a way as to guarantee time symmetry in any integration scheme, thus allowing vastly improved energy conservation for orbital calculations with variable time steps. We apply the algorithm to the familiar leapfrog scheme, and generalize to higher order integration schemes, showing how the stability properties of the fixed-step leapfrog scheme can be extended to higher order, variable-step integrators such as the Hermite method. We illustrate the remarkable properties of these time-symmetric integrators for the case of a highly eccentric elliptical Kepler orbit and discuss applications to more complex problems.
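
    A sketch of the time-symmetrization idea for a Kepler orbit: the step size is taken as the average of a step-size function at the old and new states, enforced by fixed-point iteration. The step-size criterion and iteration count here are illustrative choices, not the paper's prescription:

    ```python
    import numpy as np

    def accel(x):
        """Keplerian acceleration for GM = 1."""
        r = np.linalg.norm(x)
        return -x / r**3

    def h(x, eta=0.01):
        """Candidate step size ~ local dynamical time (illustrative)."""
        return eta * np.linalg.norm(x)**1.5

    def symmetric_leapfrog_step(x, v, n_iter=5):
        """One leapfrog step with dt = (h(old) + h(new)) / 2, iterated."""
        dt = h(x)
        for _ in range(n_iter):            # fixed-point iteration on dt
            v_half = v + 0.5 * dt * accel(x)
            x_new = x + dt * v_half
            v_new = v_half + 0.5 * dt * accel(x_new)
            dt = 0.5 * (h(x) + h(x_new))   # symmetrized step-size choice
        return x_new, v_new

    # Highly eccentric orbit: the energy error stays bounded, not drifting.
    x, v = np.array([1.0, 0.0]), np.array([0.0, 0.3])
    for _ in range(10000):
        x, v = symmetric_leapfrog_step(x, v)
    print(0.5 * v @ v - 1 / np.linalg.norm(x))   # ~ initial energy, -0.955
    ```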

  10. Representation independent algorithms for molecular response calculations in time-dependent self-consistent field theories

    SciTech Connect

    Tretiak, Sergei

    2008-01-01

    Four different numerical algorithms suitable for a linear scaling implementation of time-dependent Hartree-Fock and Kohn-Sham self-consistent field theories are examined. We compare the performance of modified Lanczos, Arnoldi, Davidson, and Rayleigh quotient iterative procedures to solve the random-phase approximation (RPA) (non-Hermitian) and Tamm-Dancoff approximation (TDA) (Hermitian) eigenvalue equations in the molecular orbital-free framework. Semiempirical Hamiltonian models are used to numerically benchmark the algorithms for the computation of excited states of realistic molecular systems (conjugated polymers and carbon nanotubes). Convergence behavior and stability are tested with respect to numerical noise imposed to simulate linear scaling conditions. The results single out the most suitable procedures for linear scaling large-scale time-dependent perturbation theory calculations of electronic excitations.
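
    For the Hermitian (TDA) case, a bare-bones textbook Lanczos iteration of the kind compared above, driven purely by matrix-vector products (as a linear scaling setting requires); this is not the authors' modified procedure:

    ```python
    import numpy as np

    def lanczos_lowest(A_mv, n, k=40, seed=0):
        """Approximate the lowest eigenvalue of a Hermitian operator.

        A_mv: function returning A @ v, so A never needs to be formed densely.
        (No reorthogonalization; adequate for a short illustrative run.)
        """
        rng = np.random.default_rng(seed)
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        alphas, betas = [], []
        v_prev, beta = np.zeros(n), 0.0
        for _ in range(k):
            w = A_mv(v) - beta * v_prev
            alpha = v @ w
            w -= alpha * v
            beta = np.linalg.norm(w)
            alphas.append(alpha)
            betas.append(beta)
            if beta < 1e-12:
                break
            v_prev, v = v, w / beta
        # Eigenvalues of the small tridiagonal matrix approximate A's extremes.
        T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
        return np.linalg.eigvalsh(T)[0]

    # Usage with a random dense Hermitian test operator:
    n = 500
    M = np.random.default_rng(1).standard_normal((n, n))
    A = (M + M.T) / 2
    print(lanczos_lowest(lambda v: A @ v, n), np.linalg.eigvalsh(A)[0])
    ```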

  11. Representation independent algorithms for molecular response calculations in time-dependent self-consistent field theories

    NASA Astrophysics Data System (ADS)

    Tretiak, Sergei; Isborn, Christine M.; Niklasson, Anders M. N.; Challacombe, Matt

    2009-02-01

    Four different numerical algorithms suitable for a linear scaling implementation of time-dependent Hartree-Fock and Kohn-Sham self-consistent field theories are examined. We compare the performance of modified Lanczos, Arnoldi, Davidson, and Rayleigh quotient iterative procedures to solve the random-phase approximation (RPA) (non-Hermitian) and Tamm-Dancoff approximation (TDA) (Hermitian) eigenvalue equations in the molecular orbital-free framework. Semiempirical Hamiltonian models are used to numerically benchmark the algorithms for the computation of excited states of realistic molecular systems (conjugated polymers and carbon nanotubes). Convergence behavior and stability are tested with respect to numerical noise imposed to simulate linear scaling conditions. The results single out the most suitable procedures for linear scaling large-scale time-dependent perturbation theory calculations of electronic excitations.

  12. Demodulation Algorithms for OFDM Signals in Time- and Frequency-Scattering Channels

    NASA Astrophysics Data System (ADS)

    Bochkov, G. N.; Gorokhov, K. V.; Kolobkov, A. V.

    2016-06-01

    We consider a method based on the generalized maximum-likelihood rule for solving the problem of reception of signals with orthogonal frequency division multiplexing of their harmonic components (OFDM signals) in time- and frequency-scattering channels. Coherent and incoherent demodulators that effectively use the time scattering due to fast fading of the signal are developed. Using computer simulation, we performed a comparative analysis of the proposed algorithms and well-known signal-reception algorithms with equalizers. The proposed symbol-by-symbol detector with decision feedback and restriction of the number of searched variants is shown to have the best bit-error-rate performance. It is shown that under conditions of limited accuracy of estimating the communication-channel parameters, the incoherent OFDM-signal detectors with differential phase-shift keying can ensure better bit-error-rate performance than the coherent OFDM-signal detectors with absolute phase-shift keying.

  13. An integrated optimal control algorithm for discrete-time nonlinear stochastic system

    NASA Astrophysics Data System (ADS)

    Kek, Sie Long; Lay Teo, Kok; Mohd Ismail, A. A.

    2010-12-01

    Consider a discrete-time nonlinear system with random disturbances appearing in the real plant and the output channel, where the randomly perturbed output is measurable. An iterative procedure based on the linear quadratic Gaussian optimal control model is developed for solving the optimal control of this stochastic system. The optimal state estimate provided by Kalman filtering theory and the optimal control law obtained from the linear quadratic regulator problem are integrated into the dynamic integrated system optimisation and parameter estimation algorithm. Despite model-reality differences, the iterative solutions of the optimal control problem for the model converge to the solution of the original optimal control problem of the discrete-time nonlinear system. An illustrative example is solved using the proposed method, and the results show the effectiveness of the algorithm.

  14. New Algorithms for Computing the Time-to-Collision in Freeway Traffic Simulation Models

    PubMed Central

    Hou, Jia; List, George F.; Guo, Xiucheng

    2014-01-01

    Ways to estimate the time-to-collision are explored. In the context of traffic simulation models, classical lane-based notions of vehicle location are relaxed and new, fast, and efficient algorithms are examined. With trajectory conflicts being the main focus, computational procedures are explored which use a two-dimensional coordinate system to track the vehicle trajectories and assess conflicts. Vector-based kinematic variables are used to support the calculations. Algorithms based on boxes, circles, and ellipses are considered, and their performance is evaluated in terms of computational complexity and solution time. The results suggest that effective and efficient conflict analysis is achievable, and a combined computation process is found to be very effective. PMID:25628650
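
    A sketch of the circle-based variant mentioned above: each vehicle is approximated by a disc and the time-to-collision is the smallest positive root of |dp + dv*t| = r1 + r2, a standard kinematic derivation; the vehicle states in the example are invented:

    ```python
    import numpy as np

    def ttc_circles(p1, v1, r1, p2, v2, r2):
        """Time-to-collision of two discs moving at constant velocity.

        Solves |dp + dv*t| = r1 + r2 for the smallest positive t;
        returns np.inf if the discs never touch.
        """
        dp = np.asarray(p2, float) - np.asarray(p1, float)
        dv = np.asarray(v2, float) - np.asarray(v1, float)
        R = r1 + r2
        a = dv @ dv
        b = 2.0 * dp @ dv
        c = dp @ dp - R * R
        disc = b * b - 4.0 * a * c
        if a == 0.0 or disc < 0.0:
            return np.inf                      # parallel motion or no contact
        t = (-b - np.sqrt(disc)) / (2.0 * a)   # earlier of the two roots
        return t if t > 0.0 else np.inf

    # Two vehicles on converging trajectories (positions in m, speeds in m/s):
    print(ttc_circles([0, 0], [20, 0], 2.0, [100, 5], [-10, -1], 2.0))  # ~3.2 s
    ```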

  15. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on the Hidden Markov Model (HMM) and the Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed, characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e., the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the whole analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was added: it tests the model with K+1 states (where K is the number of states of the best model) when its likelihood is close to that of the K-state model. Finally, an evaluation of GAMM's performance as a break detection method in the field of climate time series homogenization is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.

  16. Motor Execution Affects Action Prediction

    ERIC Educational Resources Information Center

    Springer, Anne; Brandstadter, Simone; Liepelt, Roman; Birngruber, Teresa; Giese, Martin; Mechsner, Franz; Prinz, Wolfgang

    2011-01-01

    Previous studies provided evidence of the claim that the prediction of occluded action involves real-time simulation. We report two experiments that aimed to study how real-time simulation is affected by simultaneous action execution under conditions of full, partial or no overlap between observed and executed actions. This overlap was analysed by…

  17. The Time Series Technique for Aerosol Retrievals over Land from MODIS: Algorithm MAIAC

    NASA Technical Reports Server (NTRS)

    Lyapustin, Alexei; Wang, Yujie

    2008-01-01

    …the 2.1 μm channel (B7) for the purpose of aerosol retrieval. Obviously, the subsequent atmospheric correction will produce the same surface reflectance (SR) in the red and blue bands as predicted, i.e., an empirical function of the 2.1 μm reflectance. In other words, the spectral, spatial and temporal variability of surface reflectance in the blue and red bands appears borrowed from band B7. This may have certain implications for vegetation and global carbon analysis, because the chlorophyll-sensing bands B1 and B3 are effectively substituted, in terms of variability, by band B7, which is sensitive to plant liquid water. This chapter describes a recently developed generic aerosol-surface retrieval algorithm for MODIS. The Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm simultaneously retrieves AOT and the surface bi-directional reflection factor (BRF) using the time series of MODIS measurements.

  18. Algorithms and Heuristics for Time-Window-Constrained Traveling Salesman Problems.

    DTIC Science & Technology

    1985-09-01

    Thesis, September 1985, by Bock Jin Chun and Lee. Keywords: Penalty Cost, Traveling Salesman Problem, State-Space Relaxation. [Abstract not recoverable from the OCR-damaged report documentation page.]

  19. Algorithms for the determination of the time delays of the signal when using unequal detectors

    NASA Technical Reports Server (NTRS)

    Novak, B. L.

    1979-01-01

    In treating the recorded results from detectors at different locations in space, the analysis of the time delays of signals is crucial to locating the sources of detected radiation. Because the correlation method requires the manipulation of awkward matrices to evaluate its accuracy, a solution is outlined based on minimizing the sum of the squares of signal deviations, and the algorithms for evaluating the resulting error are presented.
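
    A toy version of the least-squares idea: for each candidate lag, an amplitude is fitted (the detectors are unequal) and the lag minimizing the residual sum of squares is returned. This is our illustrative reading of the approach, not the report's exact algorithm:

    ```python
    import numpy as np

    def delay_least_squares(x, y, max_lag):
        """Estimate the delay of y relative to x by least squares.

        For each candidate integer lag d, fits an amplitude a and scores the
        residual sum of squares of x[t] - a*y[t+d]; returns the best lag.
        """
        best = (np.inf, 0)
        n = len(x)
        for d in range(-max_lag, max_lag + 1):
            lo, hi = max(0, -d), min(n, n - d)
            xs, ys = x[lo:hi], y[lo + d:hi + d]
            a = (xs @ ys) / (ys @ ys)          # least-squares amplitude
            rss = np.sum((xs - a * ys) ** 2)
            if rss < best[0]:
                best = (rss, d)
        return best[1]

    # Synthetic test: y is x delayed by 7 samples, scaled, plus noise.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(1000)
    y = np.zeros(1000)
    y[7:] = 0.5 * x[:-7]
    y += 0.05 * rng.standard_normal(1000)
    print(delay_least_squares(x, y, max_lag=20))   # -> 7
    ```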

  20. Some Fractal Dimension Algorithms and Their Application to Time Series Associated with the Dst Geomagnetic Index

    NASA Astrophysics Data System (ADS)

    Cervantes, F.; Gonzalez, J.; Real, C.; Hoyos, L.

    2012-12-01

    Chaotic invariants like fractal dimensions are used to characterize non-linear time series. The fractal dimension is an important characteristic of fractals that contains information about their geometrical structure at multiple scales. In this work, four fractal dimension estimation algorithms are applied to non-linear time series: Higuchi's algorithm, Petrosian's algorithm, Katz's algorithm and the box-counting method. The analyzed time series are associated with a natural phenomenon: the Dst geomagnetic index, which monitors worldwide magnetic storms and is a global indicator of the state of the Earth's geomagnetic activity. The time series used in this work show self-similar behavior that depends on the time scale of the measurements. It is also observed that fractal dimensions may not be constant over all time scales.
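
    A compact sketch of one of the four estimators, Higuchi's method: curve lengths L(k) are computed at a range of delays k, and the fractal dimension is the slope of log L(k) versus log(1/k); kmax is a user choice, and white noise should give a dimension near 2:

    ```python
    import numpy as np

    def higuchi_fd(x, kmax=10):
        """Higuchi fractal dimension of a 1-D time series (illustrative)."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        ks = np.arange(1, kmax + 1)
        L = np.empty(kmax)
        for j, k in enumerate(ks):
            lengths = []
            for m in range(k):                      # k decimated sub-series
                n_i = (N - 1 - m) // k              # number of increments
                if n_i < 1:
                    continue
                idx = m + np.arange(n_i + 1) * k
                dist = np.abs(np.diff(x[idx])).sum()
                # normalization factor from Higuchi (1988)
                lengths.append(dist * (N - 1) / (n_i * k * k))
            L[j] = np.mean(lengths)
        # slope of log L(k) against log(1/k) estimates the dimension
        slope, _ = np.polyfit(np.log(1.0 / ks), np.log(L), 1)
        return slope

    rng = np.random.default_rng(0)
    print(higuchi_fd(rng.standard_normal(2000)))    # ~2 for white noise
    ```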

  1. Cubic time algorithms of amalgamating gene trees and building evolutionary scenarios

    PubMed Central

    2012-01-01

    Background A long recognized problem is the inference of the supertree S that amalgamates a given set {Gj} of trees Gj, with leaves in each Gj being assigned homologous elements. We build on an approach that finds the tree S by minimizing the total cost of mappings αj of individual gene trees Gj into S. Traditionally, this cost is defined basically as a sum of duplications and gaps in each αj. The classical problem is to minimize the total cost, where S runs over the set of all trees that contain an exhaustive non-redundant set of species from all input Gj. Results We suggest a reformulation of the classical NP-hard problem of building a supertree in terms of the global minimization of the same cost functional but only over species trees S that consist of clades belonging to a fixed set P (e.g., an exhaustive set of clades in all Gj). We developed a deterministic solving algorithm with a low-degree polynomial (typically cubic) time complexity with respect to the size of the input data. We define an extensive set of elementary evolutionary events and suggest an original definition of a mapping β of tree G into tree S. We introduce the cost functional c(G, S, f) and define the mapping β as the global minimum of this functional with respect to the variable f, in which sense it is a generalization of the classical mapping α. We suggest a reformulation of the classical NP-hard mapping (reconciliation) problem by introducing time slices into the species tree S and present a cubic time solving algorithm to compute the mapping β. We introduce two novel definitions of the evolutionary scenario based on mapping β or a random process of gene evolution along a species tree. Conclusions The developed algorithms are mathematically proved, which justifies the following statements. The supertree building algorithm finds exactly the global minimum of the total cost if only gene duplications and losses are allowed and the given sets of gene trees satisfy a certain condition. The mapping

  2. Using Hierarchical Time Series Clustering Algorithm and Wavelet Classifier for Biometric Voice Classification

    PubMed Central

    Fong, Simon

    2012-01-01

    Voice biometrics has a long history in biosecurity applications such as verification and identification based on characteristics of the human voice. Another application, voice classification, which plays an important role in grouping unlabelled voice samples, has not been widely studied. Lately, voice classification has been found useful in phone monitoring, classifying speakers' gender, ethnicity and emotion states, and so forth. In this paper, a collection of computational algorithms is proposed to support voice classification; the algorithms are a combination of hierarchical clustering, dynamic time warping, discrete wavelet transform, and decision trees. The proposed algorithms are relatively more transparent and interpretable than the existing ones, though many techniques such as Artificial Neural Networks, Support Vector Machines, and Hidden Markov Models (which inherently function like a black box) have been applied for voice verification and voice identification. Two datasets, one generated synthetically and the other collected empirically from a past voice recognition experiment, are used to verify and demonstrate the effectiveness of our proposed voice classification algorithm. PMID:22619492
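
    A minimal dynamic time warping (DTW) distance of the textbook dynamic-programming form, the alignment ingredient in the pipeline above (not the authors' implementation):

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Classic O(len(a)*len(b)) dynamic time warping distance."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                # best of match, insertion, deletion
                D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
        return D[n, m]

    # Two pitch contours with similar shape but different timing align closely:
    t = np.linspace(0, 1, 50)
    print(dtw_distance(np.sin(2 * np.pi * t), np.sin(2 * np.pi * t**1.3)))
    ```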

  3. Novel real-time volumetric tool segmentation algorithm for intraoperative microscope integrated OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Viehland, Christian; Keller, Brenton; Carrasco-Zevallos, Oscar; Cunefare, David; Shen, Liangbo; Toth, Cynthia; Farsiu, Sina; Izatt, Joseph A.

    2016-03-01

    Optical coherence tomography (OCT) allows for micron scale imaging of the human retina and cornea. Current generation research and commercial intrasurgical OCT prototypes are limited to live B-scan imaging. Our group has developed an intraoperative microscope integrated OCT system capable of live 4D imaging. With a heads-up display (HUD), 4D imaging allows for dynamic intrasurgical visualization of tool-tissue interaction and surgical maneuvers. Currently our system relies on manual tracking by an operator to correct for patient motion and motion caused by the surgeon, to track the surgical tool, and to select the correct B-scan to display on the HUD. Even when tracking only bulk motion, the operator sometimes lags behind and the surgical region of interest can drift out of the OCT field of view. To facilitate imaging, we report on the development of a fast volume-based tool segmentation algorithm. The algorithm is based on a previously reported volume rendering algorithm and can identify both the tool and the retinal surface. The algorithm requires 45 ms per volume for segmentation and can be used to actively place the B-scan across the tool-tissue interface. Alternatively, real-time tool segmentation can be used to allow the surgeon to use the surgical tool as an interactive B-scan pointer.

  4. Time quantified detection of fetal movements using a new fetal movement algorithm.

    PubMed

    Lowery, C L; Russell, W A; Baggot, P J; Wilson, J D; Walls, R C; Bentz, L S; Murphy, P

    1997-01-01

    The primary objective is to develop an automated ultrasound fetal movement detection system that will better characterize fetal movements; the secondary objective is to develop an improved method of quantifying the performance of fetal movement detectors. We recorded 20-minute segments of fetal movement on 101 patients using a UAMS-developed fetal movement detection algorithm (the Russell algorithm) and compared this to a Hewlett-Packard (HP) M-1350-A. Movements were recorded on a second-by-second basis by an expert examiner reviewing videotaped real-time ultrasound images. Videotape (86,592 seconds) was scored and compared with the electronic movement-detection systems. The Russell algorithm detected 95.53% of the discrete movements greater than 5 seconds, while the HP system (M-1350-A) detected only 86.08% of the discrete movements (p = 0.012). Both devices were less efficient at detecting short discrete movements, obtaining sensitivities of 57.39% and 35.22%, respectively. Neither system fully identifies fetal movement on a second-by-second basis. Improved methods of quantifying performance indicated that the Russell algorithm performed better than the HP on these patients.

  5. Reducing acquisition times in multidimensional NMR with a time-optimized Fourier encoding algorithm.

    PubMed

    Zhang, Zhiyong; Smith, Pieter E S; Frydman, Lucio

    2014-11-21

    Speeding up the acquisition of multidimensional nuclear magnetic resonance (NMR) spectra is an important topic in contemporary NMR, with central roles in high-throughput investigations and analyses of marginally stable samples. A variety of fast NMR techniques have been developed, including methods based on non-uniform sampling and Hadamard encoding, that overcome the long sampling times inherent to schemes based on fast-Fourier-transform (FFT) methods. Here, we explore the potential of an alternative fast acquisition method that leverages a priori knowledge, to tailor polychromatic pulses and customized time delays for an efficient Fourier encoding of the indirect domain of an NMR experiment. By porting the encoding of the indirect-domain to the excitation process, this strategy avoids potential artifacts associated with non-uniform sampling schemes and uses a minimum number of scans equal to the number of resonances present in the indirect dimension. An added convenience is afforded by the fact that a usual 2D FFT can be used to process the generated data. Acquisitions of 2D heteronuclear correlation NMR spectra on quinine and on the anti-inflammatory drug isobutyl propionic phenolic acid illustrate the new method's performance. This method can be readily automated to deal with complex samples such as those occurring in metabolomics, in in-cell as well as in in vivo NMR applications, where speed and temporal stability are often primary concerns.

  6. Reducing acquisition times in multidimensional NMR with a time-optimized Fourier encoding algorithm

    SciTech Connect

    Zhang, Zhiyong; Smith, Pieter E. S.; Frydman, Lucio

    2014-11-21

    Speeding up the acquisition of multidimensional nuclear magnetic resonance (NMR) spectra is an important topic in contemporary NMR, with central roles in high-throughput investigations and analyses of marginally stable samples. A variety of fast NMR techniques have been developed, including methods based on non-uniform sampling and Hadamard encoding, that overcome the long sampling times inherent to schemes based on fast-Fourier-transform (FFT) methods. Here, we explore the potential of an alternative fast acquisition method that leverages a priori knowledge, to tailor polychromatic pulses and customized time delays for an efficient Fourier encoding of the indirect domain of an NMR experiment. By porting the encoding of the indirect-domain to the excitation process, this strategy avoids potential artifacts associated with non-uniform sampling schemes and uses a minimum number of scans equal to the number of resonances present in the indirect dimension. An added convenience is afforded by the fact that a usual 2D FFT can be used to process the generated data. Acquisitions of 2D heteronuclear correlation NMR spectra on quinine and on the anti-inflammatory drug isobutyl propionic phenolic acid illustrate the new method's performance. This method can be readily automated to deal with complex samples such as those occurring in metabolomics, in in-cell as well as in in vivo NMR applications, where speed and temporal stability are often primary concerns.

  7. Development of a new time domain-based algorithm for train detection and axle counting

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Meli, E.; Pugi, L.

    2015-12-01

    This paper presents an innovative train detection algorithm, able to localise the train and, at the same time, estimate its speed, its crossing times at a fixed point of the track and its axle count. The proposed solution uses the same approach to evaluate all these quantities, starting from the knowledge of generic track inputs directly measured on the track (for example, the vertical forces on the sleepers, the rail deformation and the rail stress). More particularly, all the inputs are processed through cross-correlation operations to extract the required information in terms of speed, crossing time instants and axle count. This approach has the advantage of being simpler and less invasive than standard ones (it requires less equipment) and represents a more reliable and robust solution against numerical noise, because it exploits the whole shape of the input signal and not only the peak values. A suitable and accurate multibody model of a railway vehicle and flexible track has also been developed by the authors to test the algorithm when experimental data are not available and, in general, under any operating conditions (fundamental for verifying the algorithm's accuracy and robustness). The railway vehicle chosen as a benchmark is the Manchester Wagon, modelled in the Adams VI-Rail environment. The physical model of the flexible track has been implemented in the Matlab and Comsol Multiphysics environments. A simulation campaign has been performed to verify the performance and the robustness of the proposed algorithm, and the results are quite promising. The research has been carried out in cooperation with Ansaldo STS and ECM Spa.

  8. AN ALGORITHM FOR THE ESTIMATION OF GESTATIONAL AGE AT THE TIME OF FETAL DEATH

    PubMed Central

    Conway, Deborah L.; Hansen, Nellie I.; Dudley, Donald J.; Parker, Corette B.; Reddy, Uma M.; Silver, Robert M.; Bukowski, Radek; Pinar, Halit; Stoll, Barbara J.; Varner, Michael W.; Saade, George R.; Hogue, Carol; Willinger, Marian; Coustan, Donald; Koch, Matthew A.; Goldenberg, Robert L.

    2013-01-01

    Background Accurate assignment of gestational age at the time of fetal death is important for research and clinical practice. An algorithm to estimate gestational age (GA) at fetal death was developed and evaluated. Methods The algorithm developed by the Stillbirth Collaborative Research Network (SCRN) incorporated clinical and postmortem data. The SCRN conducted a population-based case-control study of women with stillbirths and live births from 2006 to 2008 in five geographic catchment areas. Rules were developed to estimate a due date, identify an interval during which death likely occurred, and estimate GA at the time of fetal death. The reliability of using fetal foot length to estimate GA at death was assessed. Results The due date estimated for the 620 singleton stillbirths studied was considered clinically reliable for 87%. Only 25.2% of stillbirths were documented alive within two days before diagnosis and 47.6% within one week of diagnosis. The algorithm-derived estimate of GA at the time of fetal death was 1 or more weeks earlier than the GA at delivery for 43.5% of stillbirths. GA estimated from fetal foot length agreed with GA by algorithm within two weeks for 75% of a subset of well-dated stillbirths. Conclusions Precise assignment of GA at death, defined as reliable dating criteria and a short interval (≤1 week) during which fetal death was known to have occurred, was possible in 46.6% of cases. Fetal foot length is a relatively accurate measure of GA at death and should be collected in all stillbirth cases. PMID:23374059

  9. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    PubMed

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. In order to deal with the weaknesses of the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the performance of optimization; in addition, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rössler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision with a certain amount of noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior.

  10. A universal framework for non-deteriorating time-domain numerical algorithms in Maxwell's electrodynamics

    NASA Astrophysics Data System (ADS)

    Fedoseyev, A.; Kansa, E. J.; Tsynkov, S.; Petropavlovskiy, S.; Osintcev, M.; Shumlak, U.; Henshaw, W. D.

    2016-10-01

    We present the implementation of the Lacuna method, which removes a key difficulty that currently hampers many existing methods for computing unsteady electromagnetic waves on unbounded regions: numerical accuracy and/or stability may deteriorate over long times due to the treatment of artificial outer boundaries. We describe a universal algorithm and software that correct this problem by employing the Huygens' principle and lacunae of Maxwell's equations. The algorithm provides a temporally uniform guaranteed error bound (no deterioration at all), and the software will enable robust electromagnetic simulations in a high-performance computing environment. The methodology applies to any geometry, any scheme, and any boundary condition. It eliminates the long-time deterioration regardless of its origin and how it manifests itself. In retrospect, the lacunae method was first proposed by V. Ryaben'kii and subsequently developed by S. Tsynkov. We have completed development of an innovative numerical methodology for high fidelity error-controlled modeling of a broad variety of electromagnetic and other wave phenomena. Proof-of-concept 3D computations have been conducted that convincingly demonstrate the feasibility and efficiency of the proposed approach. Our algorithms are being implemented as robust commercial software tools in a standalone module to be combined with existing numerical schemes in several widely used computational electromagnetic codes.

  11. Smart sensing to drive real-time loads scheduling algorithm in a domotic architecture

    NASA Astrophysics Data System (ADS)

    Santamaria, Amilcare Francesco; Raimondo, Pierfrancesco; De Rango, Floriano; Vaccaro, Andrea

    2014-05-01

    Nowadays, power consumption is a very important factor because of its associated costs and environmental sustainability problems. Automatic load control based on power consumption and usage cycle represents an optimal approach to cost restraint. The purpose of these systems is to modulate the electricity demand, avoiding unorganized operation of the loads and using intelligent techniques to manage them based on real-time scheduling algorithms. The goal is to coordinate a set of electrical loads to optimize energy costs and consumption based on the stipulated contract terms. The proposed algorithm uses two new main notions: priority-driven loads and smart scheduling loads. The priority-driven loads can be turned off (put on standby) according to a priority policy established by the user if consumption exceeds a defined threshold; on the contrary, smart scheduling loads are scheduled in such a way as not to interrupt their life cycle (LC), safeguarding the devices' functions and allowing the user to freely use the devices without the risk of exceeding the power threshold. The algorithm, using these two notions and taking into account user requirements, manages load activation and deactivation, allowing loads to complete their operation cycle without exceeding the consumption threshold, in an off-peak time range according to the electricity fare. This kind of logic is inspired by lean manufacturing, whose focus is to minimize any kind of power waste while optimizing the available resources.
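
    A toy sketch of the priority-driven policy described above: when total demand exceeds the contracted threshold, interruptible loads are shed in priority order, while loads mid-cycle (the 'smart scheduling' devices) are never interrupted. All device data are invented:

    ```python
    # Each load: (name, power_kW, priority, interruptible). Lower priority
    # numbers are shed first; non-interruptible loads model the smart
    # scheduling devices whose life cycle must not be broken.
    LOADS = [
        ('washing_machine', 2.0, 1, False),   # mid-cycle: must not stop
        ('water_heater',    1.5, 2, True),
        ('hvac',            2.5, 3, True),
        ('oven',            2.2, 1, True),
    ]
    THRESHOLD_KW = 6.0   # contracted power limit (illustrative)

    def shed_loads(loads, threshold):
        """Return the loads to put on standby to respect the threshold."""
        total = sum(p for _, p, _, _ in loads)
        standby = []
        # shed interruptible loads, lowest priority first, until under limit
        for name, power, _, interruptible in sorted(loads, key=lambda l: l[2]):
            if total <= threshold:
                break
            if interruptible:
                standby.append(name)
                total -= power
        return standby, total

    standby, total = shed_loads(LOADS, THRESHOLD_KW)
    print('standby:', standby, '| resulting load:', total, 'kW')
    ```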

  12. On the spectral stability of time integration algorithms for a class of constrained dynamics problems

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Crivelli, Luis; Geradin, Michel

    1993-01-01

    Incomplete field formulations have recently been the subject of intense research because of their potential in coupled analysis of independently modeled substructures, adaptive refinement, domain decomposition, and parallel processing. This paper discusses the design and analysis of time-integration algorithms for these formulations and emphasizes the treatment of their inter-subdomain constraint equations. These constraints are shown to introduce a destabilizing effect in the dynamic system that can be analyzed by investigating the behavior of the time-integration algorithm at infinite and zero frequencies. Three different approaches for constructing penalty-free unconditionally stable second-order accurate solution procedures for this class of hybrid formulations are presented, discussed and illustrated with numerical examples. The theoretical results presented in this paper also apply to a large family of nonlinear multibody dynamics formulations. Some of the algorithms outlined herein are important alternatives to the popular technique consisting of transforming differential/algebraic equations into ordinary differential equations via the introduction of a stabilization term that depends on arbitrary constants and that influences the computed solution.

  13. Unit Template Synchronous Reference Frame Theory Based Control Algorithm for DSTATCOM

    NASA Astrophysics Data System (ADS)

    Bangarraju, J.; Rajagopal, V.; Jayalaxmi, A.

    2014-04-01

    This article proposes new and simplified unit templates, instead of a standard phase-locked loop (PLL), for the synchronous reference frame theory (SRFT) control algorithm. The extraction of the synchronizing components (sinθ and cosθ) for the Park and inverse Park transformations using a standard PLL takes more execution time, which delays the generation of the reference source currents. The standard PLL not only takes more execution time but also increases the reactive power burden on the Distributed Static Compensator (DSTATCOM). This work proposes a unit-template-based SRFT control algorithm for a four-leg insulated gate bipolar transistor based voltage source converter for DSTATCOM in distribution systems, which reduces the execution time and the reactive power burden on the DSTATCOM. The proposed DSTATCOM suppresses harmonics and regulates the terminal voltage, along with neutral current compensation. The DSTATCOM in distribution systems with the proposed control algorithm is modeled and simulated in MATLAB using the Simulink and SimPowerSystems toolboxes.
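
    A sketch of the unit-template idea: the in-phase templates are the instantaneous phase voltages divided by the terminal voltage amplitude, so they can stand in for the PLL-derived synchronizing components. This follows the common unit-template formulation in the DSTATCOM literature; the waveforms are invented test data:

    ```python
    import numpy as np

    def unit_templates(va, vb, vc):
        """In-phase unit templates from three-phase PCC voltages.

        Amplitude from Vt = sqrt(2*(va^2 + vb^2 + vc^2)/3), which equals
        the peak phase voltage for a balanced system.
        """
        vt = np.sqrt(2.0 * (va**2 + vb**2 + vc**2) / 3.0)
        return va / vt, vb / vt, vc / vt      # each bounded in [-1, 1]

    # Balanced 50 Hz test waveforms (peak 325 V ~ 230 V rms), invented:
    t = np.linspace(0, 0.04, 400)
    w = 2 * np.pi * 50
    va = 325 * np.sin(w * t)
    vb = 325 * np.sin(w * t - 2 * np.pi / 3)
    vc = 325 * np.sin(w * t + 2 * np.pi / 3)

    ua, ub, uc = unit_templates(va, vb, vc)
    print(ua.max(), ua.min())   # ~ +1 and -1: in-phase synchronizing templates
    ```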

  14. Local algorithm for computing complex travel time based on the complex eikonal equation

    NASA Astrophysics Data System (ADS)

    Huang, Xingguo; Sun, Jianguo; Sun, Zhangqing

    2016-04-01

    The traditional algorithm for computing the complex travel time, e.g., the dynamic ray tracing method, is based on the paraxial ray approximation, which exploits the second-order Taylor expansion. Consequently, the computed results depend strongly on the width of the ray tube, and in regions with dramatic velocity variations it is difficult for the method to account for these variations. When solving the complex eikonal equation, the paraxial ray approximation can be avoided and no second-order Taylor expansion is required; however, this process is time consuming. In this case, we may replace the global computation of the whole model with a local computation by taking both sides of the ray as curved boundaries of the evanescent wave. For a given ray, the imaginary part of the complex travel time should be zero on the central ray. To satisfy this condition, the central ray should be taken as a curved boundary. We propose a nonuniform grid-based finite difference scheme to solve the curved boundary problem. In addition, we apply the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for obtaining the imaginary slowness used to compute the complex travel time. The numerical experiments show that the proposed method is accurate. We examine the effectiveness of the algorithm for the complex travel time by comparing the results with those from the dynamic ray tracing method and the Gauss-Newton Conjugate Gradient fast marching method.

  15. Multiobjective Vehicle Routing Problems With Simultaneous Delivery and Pickup and Time Windows: Formulation, Instances, and Algorithms.

    PubMed

    Wang, Jiahai; Zhou, Ying; Wang, Yong; Zhang, Jun; Chen, C L Philip; Zheng, Zibin

    2016-03-01

    This paper investigates a practical variant of the vehicle routing problem (VRP), called the VRP with simultaneous delivery and pickup and time windows (VRPSDPTW), in the logistics industry. VRPSDPTW is an important problem in closed-loop supply chain network optimization and exhibits multiobjective properties in real-world applications. In this paper, a general multiobjective VRPSDPTW (MO-VRPSDPTW) with five objectives is first defined, and then a set of MO-VRPSDPTW instances based on real-world data is introduced. These instances exhibit a more realistic multiobjective nature and represent more challenging MO-VRPSDPTW cases. Finally, two algorithms, multiobjective local search (MOLS) and a multiobjective memetic algorithm (MOMA), are designed, implemented and compared for solving MO-VRPSDPTW. The simulation results on the proposed real-world instances and traditional instances show that MOLS outperforms MOMA on most instances. However, the superiority of MOLS over MOMA is not as obvious on the real-world instances as on the traditional ones.

  16. Television and children's executive function.

    PubMed

    Lillard, Angeline S; Li, Hui; Boguszewski, Katie

    2015-01-01

    Children spend a lot of time watching television on its many platforms: directly, online, and via videos and DVDs. Many researchers are concerned that some types of television content appear to negatively influence children's executive function. Because (1) executive function predicts key developmental outcomes, (2) executive function appears to be influenced by some television content, and (3) American children watch large quantities of television (including the content of concern), the issues discussed here comprise a crucial public health issue. Further research is needed to reveal exactly what television content is implicated, what underlies television's effect on executive function, how long the effect lasts, and who is affected.

  17. Control of discrete time systems based on recurrent Super-Twisting-like algorithm.

    PubMed

    Salgado, I; Kamal, S; Bandyopadhyay, B; Chairez, I; Fridman, L

    2016-09-01

    Most of the research in sliding mode theory has been carried out in continuous time to solve estimation and control problems. However, in discrete time, the results on high order sliding modes have been less developed. In this paper, a discrete time super-twisting-like algorithm (DSTA) was proposed to solve the problems of control and state estimation. The stability proof was developed in terms of the discrete time Lyapunov approach and linear matrix inequality theory. The system trajectories were ultimately bounded inside a small region dependent on the sampling period. Simulation results tested the DSTA, which was applied as a controller for a Furuta pendulum and for a DC motor supplied by a DSTA signal differentiator.
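
    A rough discrete-time super-twisting-like recursion of the standard form (an explicit Euler discretization of the continuous super-twisting algorithm); the gains, sampling period and toy plant are illustrative, and this is not the paper's exact algorithm:

    ```python
    import numpy as np

    def dsta_controller(k1=1.5, k2=1.1, T=0.01):
        """Returns a stateful discrete super-twisting-like control law.

        u_k = -k1*sqrt(|s_k|)*sign(s_k) + v_k,  v_{k+1} = v_k - k2*T*sign(s_k)
        """
        v = 0.0
        def control(s):
            nonlocal v
            u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
            v -= k2 * T * np.sign(s)
            return u
        return control

    # Toy first-order plant with a bounded disturbance, driven toward s = 0:
    ctrl, T = dsta_controller(), 0.01
    s = 1.0
    for k in range(2000):
        d = 0.2 * np.sin(0.05 * k)           # matched disturbance
        s += T * (ctrl(s) + d)               # s_dot = u + d (Euler step)
    print(abs(s))   # ends in a small neighborhood of zero
    ```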

  18. Algorithms for computing the time-corrected instantaneous frequency (reassigned) spectrogram, with applications.

    PubMed

    Fulop, Sean A; Fitz, Kelly

    2006-01-01

    A modification of the spectrogram (log magnitude of the short-time Fourier transform) to more accurately show the instantaneous frequencies of signal components was first proposed in 1976 [Kodera et al., Phys. Earth Planet. Inter. 12, 142-150 (1976)], and has been considered or reinvented a few times since but never widely adopted. This paper presents a unified theoretical picture of this time-frequency analysis method, the time-corrected instantaneous frequency spectrogram, together with detailed implementable algorithms comparing three published techniques for its computation. The new representation is evaluated against the conventional spectrogram for its superior ability to track signal components. The lack of a uniform framework for either mathematics or implementation details which has characterized the disparate literature on the schemes has been remedied here. Fruitful application of the method is shown in the realms of speech phonation analysis, whale song pitch tracking, and additive sound modeling.
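
    For experimentation, recent versions of the librosa library include an implementation of the reassigned (time-corrected instantaneous frequency) spectrogram; a minimal usage sketch on a synthetic chirp (availability of librosa.reassigned_spectrogram is assumed for librosa >= 0.7):

    ```python
    import numpy as np
    import librosa   # pip install librosa

    sr = 22050
    t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
    # Synthetic linear chirp from 440 Hz to 880 Hz as a stand-in test signal.
    y = np.sin(2 * np.pi * (440 * t + 110 * t**2))

    # Each STFT bin is reassigned to its instantaneous frequency and
    # group-delay-corrected time rather than to the bin/frame center.
    freqs, times, mags = librosa.reassigned_spectrogram(y=y, sr=sr, n_fft=1024)

    # Sharpened component track: the strongest reassigned frequency per frame.
    strongest = np.argmax(mags, axis=0)
    track = freqs[strongest, np.arange(mags.shape[1])]
    print(track[10], track[-10])   # frequency rises across the track
    ```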

  19. A Time-Optimal On-the-Fly Parallel Algorithm for Model Checking of Weak LTL Properties

    NASA Astrophysics Data System (ADS)

    Barnat, Jiří; Brim, Luboš; Ročkai, Petr

    One of the most important open problems of parallel LTL model-checking is to design an on-the-fly scalable parallel algorithm with linear time complexity. Such an algorithm would give the optimality we have in sequential LTL model-checking. In this paper we give a partial solution to the problem. We propose an algorithm that has the required properties for a very rich subset of LTL properties, namely those expressible by weak Büchi automata.

  20. Time controlled descent guidance algorithm for simulation of advanced ATC systems

    NASA Technical Reports Server (NTRS)

    Lee, H. Q.; Erzberger, H.

    1983-01-01

    Concepts and computer algorithms for generating time controlled four dimensional descent trajectories are described. The algorithms were implemented in the air traffic control simulator and used by experienced controllers in studies of advanced air traffic flow management procedures. A time controlled descent trajectory comprises a vector function of time, including position, altitude, and heading, that starts at the initial position of the aircraft and ends at touchdown. The trajectory provides a four dimensional reference path which will cause an aircraft tracking it to touch down at a predetermined time with a minimum of fuel consumption. The problem of constructing such trajectories is divided into three subproblems involving synthesis of horizontal, vertical, and speed profiles. The horizontal profile is constructed as a sequence of turns and straight lines passing through a specified set of waypoints. The vertical profile consists of a sequence of level flight and constant descent angle segments defined by altitude waypoints. The speed profile is synthesized as a sequence of constant Mach number, constant indicated airspeed, and acceleration/deceleration legs. It is generated by integrating point mass differential equations of motion, which include the thrust and drag models of the aircraft.

  1. Preliminary test results of a flight management algorithm for fuel conservative descents in a time based metered traffic environment. [flight tests of an algorithm to minimize fuel consumption of aircraft based on flight time]

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1979-01-01

    A flight management algorithm designed to improve the accuracy of delivering the airplane fuel efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B-737 airplane to make an idle-thrust, clean-configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm and the results of the flight tests are discussed.

  2. Executive summary

    NASA Technical Reports Server (NTRS)

    Ayon, Juan A.

    1992-01-01

    The Astrotech 21 Optical Systems Technology Workshop was held in Pasadena, California on March 6-8, 1991. The purpose of the workshop was to examine the state of Optical Systems Technology at the National Aeronautics and Space Administration (NASA), and in industry and academia, in view of the potential Astrophysics mission set currently being considered for the late 1990's through the first quarter of the 21st century. The principal result of the workshop is this publication, which contains an assessment of the current state of the technology, and specific technology advances in six critical areas of optics, all necessary for the mission set. The workshop was divided into six panels, each of about a dozen experts in specific fields, representing NASA, industry, and academia. In addition, each panel contained expertise that spanned the spectrum from x-ray to submillimeter wavelengths. The six technology panels and their chairs were: (1) Wavefront Sensing, Control, and Pointing, Thomas Pitts, Itek Optical Systems, A Division of Litton; (2) Fabrication, Roger Angel, Steward Observatory, University of Arizona; (3) Materials and Structures, Theodore Saito, Lawrence Livermore National Laboratory; (4) Optical Testing, James Wyant, WYKO Corporation; (5) Optical Systems Integrated Modeling, Robert R. Shannon, Optical Sciences Center, University of Arizona; and (6) Advanced Optical Instruments Technology, Michael Shao, Jet Propulsion Laboratory, California Institute of Technology. This Executive Summary contains the principal recommendations of each panel.

  3. Decomposing time series data by a non-negative matrix factorization algorithm with temporally constrained coefficients.

    PubMed

    Cheung, Vincent C K; Devarajan, Karthik; Severini, Giacomo; Turolla, Andrea; Bonato, Paolo

    2015-08-01

    The non-negative matrix factorization (NMF) algorithm decomposes a data matrix into a set of non-negative basis vectors, each scaled by a coefficient. In its original formulation, the NMF assumes the data samples and dimensions to be independently distributed, making it a less-than-ideal algorithm for the analysis of time series data with temporal correlations. Here, we seek to derive an NMF that accounts for temporal dependencies in the data by explicitly incorporating a very simple temporal constraint for the coefficients into the NMF update rules. We applied the modified algorithm to two multi-dimensional electromyographic data sets collected from the human upper limb to identify muscle synergies. We found that, because it reduced the number of free parameters in the model, our modified NMF made it possible to use the Akaike Information Criterion to objectively identify a model order (i.e., the number of muscle synergies composing the data) that is more functionally interpretable and closer to the numbers previously determined using ad hoc measures.
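
    A sketch of multiplicative-update NMF with one simple way of temporally constraining the coefficients: a smoothness penalty lambda * sum_t ||h_t - h_{t-1}||^2 folded into the update for H. The penalty form is our illustrative choice, not necessarily the constraint used in the paper:

    ```python
    import numpy as np

    def nmf_temporal(V, k, lam=0.1, n_iter=500, seed=0):
        """NMF V ~ W @ H with a temporal smoothness penalty on H.

        Minimizes ||V - WH||_F^2 + lam * sum_t ||h_t - h_{t-1}||^2 via
        multiplicative updates (penalty gradient split into +/- parts).
        """
        rng = np.random.default_rng(seed)
        n, T = V.shape
        W = rng.random((n, k)) + 1e-3
        H = rng.random((k, T)) + 1e-3
        deg = np.full(T, 2.0)          # number of temporal neighbors
        deg[0] = deg[-1] = 1.0
        eps = 1e-9
        for _ in range(n_iter):
            W *= (V @ H.T) / (W @ (H @ H.T) + eps)
            Hn = np.zeros_like(H)      # sum of neighboring coefficients
            Hn[:, 1:] += H[:, :-1]
            Hn[:, :-1] += H[:, 1:]
            H *= (W.T @ V + 2 * lam * Hn) / (W.T @ W @ H + 2 * lam * deg * H + eps)
        return W, H

    # Synthetic "EMG": 8 channels, 200 samples, 3 smooth underlying synergies.
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 200)
    C = np.abs(np.stack([np.sin(3 * t), np.cos(2 * t), t]))   # coefficients
    S = rng.random((8, 3))                                    # synergies
    V = S @ C + 0.01 * rng.random((8, 200))
    W, H = nmf_temporal(V, k=3)
    print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))      # small residual
    ```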

  4. Application of the Trend Filtering Algorithm for Photometric Time Series Data

    NASA Astrophysics Data System (ADS)

    Gopalan, Giri; Plavchan, Peter; van Eyken, Julian; Ciardi, David; von Braun, Kaspar; Kane, Stephen R.

    2016-08-01

    Detecting transient light curves (e.g., transiting planets) requires high-precision data, and thus it is important to effectively filter systematic trends affecting ground-based wide-field surveys. We apply an implementation of the Trend Filtering Algorithm (TFA) to the 2MASS calibration catalog and selected Palomar Transient Factory (PTF) photometric time series data. TFA is successful at reducing the overall dispersion of light curves; however, it may over-filter intrinsic variables and increase "instantaneous" dispersion when a template set is not judiciously chosen. In an attempt to rectify these issues we modify the original TFA from the literature by including measurement uncertainties in its computation, including ancillary data correlated with noise, and algorithmically selecting a template set using clustering algorithms as suggested by various authors. This approach may be particularly useful for appropriately accounting for variable photometric precision surveys and/or combined data sets. In summary, our contributions are to provide a MATLAB software implementation of TFA and a number of modifications tested on synthetics and real data, to summarize the performance of TFA and various modifications on real ground-based data sets (2MASS and PTF), and to assess the efficacy of TFA and modifications using synthetic light curve tests consisting of transiting and sinusoidal variables. While the transiting variables test indicates that these modifications confer no advantage to transit detection, the sinusoidal variables test indicates potential improvements in detection accuracy.
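
    At its core, TFA removes systematics by least-squares fitting a set of template light curves to the target and subtracting the fit; a minimal unweighted sketch on synthetic data (none of the paper's modifications, such as uncertainty weighting or template clustering, are included):

    ```python
    import numpy as np

    def tfa_filter(y, templates):
        """Subtract the least-squares combination of template light curves.

        y: (n,) target light curve; templates: (n, m) matrix whose columns
        are light curves of comparison stars sharing the systematics.
        Returns the filtered residual light curve.
        """
        coeffs, *_ = np.linalg.lstsq(templates, y, rcond=None)
        return y - templates @ coeffs

    rng = np.random.default_rng(0)
    n, m = 1000, 20
    systematic = np.cumsum(rng.standard_normal(n)) * 0.01     # shared trend
    templates = systematic[:, None] * rng.uniform(0.5, 1.5, m) \
                + 0.005 * rng.standard_normal((n, m))
    y = systematic + 0.005 * rng.standard_normal(n)           # trend + noise
    print(y.std(), tfa_filter(y, templates).std())            # dispersion drops
    ```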

  5. Evaluation of the Massachusetts Expanded Learning Time (ELT) Initiative. Year Five Final Report: 2010-2011. Executive Summary

    ERIC Educational Resources Information Center

    Checkoway, Amy; Gamse, Beth; Velez, Melissa; Caven, Meghan; de la Cruz, Rodolfo; Donoghue, Nathaniel; Kliorys, Kristina; Linkow, Tamara; Luck, Rachel; Sahni, Sarah; Woodford, Michelle

    2012-01-01

    The Massachusetts Expanded Learning Time (ELT) initiative was established in 2005 with planning grants that allowed a limited number of schools to explore a redesign of their respective schedules and add time to their day or year. Participating schools are required to expand learning time by at least 300 hours per academic year to improve student…

  6. Development of novel algorithm and real-time monitoring ambulatory system using Bluetooth module for fall detection in the elderly.

    PubMed

    Hwang, J Y; Kang, J M; Jang, Y W; Kim, H

    2004-01-01

    A novel algorithm and real-time ambulatory monitoring system for fall detection in elderly people are described. Our system comprises an accelerometer, a tilt sensor and a gyroscope; for real-time monitoring, we used Bluetooth. The accelerometer measures kinetic force, while the tilt sensor and gyroscope estimate body posture. We also suggest an algorithm for fall detection using signals obtained from the system attached to the chest. To evaluate our system and algorithm, we experimented on three people aged over 26 years. The experiment with four cases (forward fall, backward fall, side fall and sit-stand) was repeated ten times, and an experiment in daily life activity was performed once by each subject. These experiments showed that our system and algorithm could distinguish between falls and daily life activities, with a fall detection accuracy of 96.7%. Our system is especially adapted for long-term, real-time ambulatory monitoring of elderly people in emergency situations.

  7. The development and concurrent validity of a real-time algorithm for temporal gait analysis using inertial measurement units.

    PubMed

    Allseits, E; Lučarević, J; Gailey, R; Agrawal, V; Gaunaurd, I; Bennett, C

    2017-04-11

    The use of inertial measurement units (IMUs) for gait analysis has emerged as a tool for clinical applications. Shank gyroscope signals have been utilized to identify heel-strike and toe-off, which serve as the foundation for calculating temporal parameters of gait such as single and double limb support time. Recent publications have shown that toe-off occurs later than predicted by the dual minima method (DMM), which has been adopted as an IMU-based gait event detection algorithm. In this study, a real-time algorithm, Noise-Zero Crossing (NZC), was developed to accurately compute temporal gait parameters. Our objective was to determine the concurrent validity of temporal gait parameters derived from the NZC algorithm against parameters measured by an instrumented walkway. The accuracy and precision of temporal gait parameters derived using NZC were compared to those derived using the DMM. The results from Bland-Altman analysis showed that the NZC algorithm had excellent agreement with the instrumented walkway for identifying the temporal gait parameters of Gait Cycle Time (GCT), Single Limb Support (SLS) time, and Double Limb Support (DLS) time. By utilizing the moment of zero shank angular velocity to identify toe-off, the NZC algorithm performed better than the DMM algorithm in measuring SLS and DLS times. Utilizing the NZC algorithm's gait event detection preserves DLS time, which has significant clinical implications for pathologic gait assessment.
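
    A sketch of the zero-crossing idea underlying the NZC approach: gait events are tied to the moments where the noise-thresholded shank angular velocity crosses zero; the synthetic signal and sign conventions are illustrative only:

    ```python
    import numpy as np

    def zero_crossings(signal, noise_floor=0.05):
        """Indices of negative-to-positive and positive-to-negative zero
        crossings, ignoring samples inside the noise floor."""
        s = np.where(np.abs(signal) < noise_floor, 0.0, signal)
        sign = np.sign(s)
        # propagate the last nonzero sign through the zeroed (noisy) samples
        for i in range(1, len(sign)):
            if sign[i] == 0:
                sign[i] = sign[i - 1]
        change = np.diff(sign)
        return np.where(change > 0)[0], np.where(change < 0)[0]

    # Synthetic "shank angular velocity": one swing peak per gait cycle.
    t = np.linspace(0, 4, 400)                   # ~4 gait cycles
    rng = np.random.default_rng(0)
    gyro = np.sin(2 * np.pi * t) + 0.02 * rng.standard_normal(400)
    up, down = zero_crossings(gyro)
    # timestamps of candidate gait events, from which single/double limb
    # support times follow as differences between successive event times
    print(t[up], t[down])
    ```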

  8. A real-time pressure estimation algorithm for closed-loop combustion control

    NASA Astrophysics Data System (ADS)

    Al-Durra, Ahmed; Canova, Marcello; Yurkovich, Stephen

    2013-07-01

    The cylinder pressure is arguably the most important variable characterizing the combustion process in internal combustion engines. In light of the recent advances in combustion technologies and in engine control, the use of cylinder pressure is now frequently considered as a feedback signal for closed-loop combustion control algorithms. In order to generate an accurate pressure trace for real-time combustion control and diagnostics, the output of the in-cylinder pressure transducer must be conditioned with signal processing methods to mitigate the well-known issues of offset and noise. While several techniques have been proposed for processing the cylinder pressure signal with limited computational burden, most of the available methods still require one to apply low-pass filters or moving average windows in order to mitigate the noise. This ultimately limits the opportunity of exploiting the in-cylinder pressure feedback for cycle-by-cycle control of the combustion process. To this end, this paper presents an estimation algorithm that extracts the pressure signal from the in-cylinder sensor in real time, allowing for estimating the 50% burn rate location and IMEP on a cycle-by-cycle basis. The proposed approach relies on a model-based estimation algorithm whose starting point is a crank-angle based engine combustion model that predicts the in-cylinder pressure from the definition of a burn rate function. Linear parameter varying (LPV) techniques are then used to expand the region of estimation to cover the engine operating map, as well as allowing for real-time cylinder pressure estimation during transients. The estimator is tested on experimental data collected on an engine dynamometer as well as on a high-fidelity engine simulator. The results obtained show the effectiveness of the estimator in reconstructing the cylinder pressure on a crank-angle basis and in rejecting measurement noise and modeling errors, with considerably low computational effort.

  9. From analytical solutions of solute transport equations to multidimensional time-domain random walk (TDRW) algorithms

    NASA Astrophysics Data System (ADS)

    Bodin, Jacques

    2015-03-01

    In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
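
    For the 1-D advection-dispersion case, the first-passage time across a homogeneous cell is inverse-Gaussian (Wald) distributed with mean L/v and shape L²/(2D), a standard result that makes the TDRW step a single random draw; the sketch below chains such draws across a piecewise-homogeneous medium. Cell parameters are illustrative.

    ```python
    # 1-D TDRW sketch: sample per-cell first-passage times from the Wald
    # (inverse-Gaussian) distribution and sum them across cells.
    import numpy as np

    rng = np.random.default_rng(0)

    def travel_time(L, v, D, n):
        mean = L / v               # advective travel time
        lam = L**2 / (2.0 * D)     # Wald shape parameter
        return rng.wald(mean, lam, size=n)

    # Piecewise-homogeneous medium: (length, velocity, dispersion) per cell.
    cells = [(1.0, 0.5, 0.01), (2.0, 0.3, 0.05)]
    t_total = sum(travel_time(L, v, D, 10000) for L, v, D in cells)
    print(t_total.mean(), t_total.std())
    ```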

  10. Novel algorithm implementations in DARC: the Durham AO real-time controller

    NASA Astrophysics Data System (ADS)

    Basden, Alastair; Bitenc, Urban; Jenkins, David

    2016-07-01

    The Durham AO Real-time Controller has been used on-sky with the CANARY AO demonstrator instrument since 2010, and is also used to provide control for several AO test-benches, including DRAGON. Over this period, many new real-time algorithms have been developed, implemented and demonstrated, leading to performance improvements for CANARY. Additionally, the computational performance of this real-time system has continued to improve. Here, we provide details about recent updates and changes made to DARC, and the relevance of these updates, including new algorithms, to forthcoming AO systems. We present the computational performance of DARC when used on different hardware platforms, including hardware accelerators, and determine the relevance and potential for ELT scale systems. Recent updates to DARC have included algorithms to handle elongated laser guide star images, including correlation wavefront sensing, with options to automatically update references during AO loop operation. Additionally, sub-aperture masking options have been developed to increase signal to noise ratio when operating with non-symmetrical wavefront sensor images. The development of end-user tools has progressed with new options for configuration and control of the system. New wavefront sensor camera models and DM models have been integrated with the system, increasing the number of possible hardware configurations available, and a fully open-source AO system is now a reality, including drivers necessary for commercial cameras and DMs. The computational performance of DARC makes it suitable for ELT scale systems when implemented on suitable hardware. We present tests made on different hardware platforms, along with the strategies taken to optimise DARC for these systems.

  11. A Discussion of the Discrete Fourier Transform Execution on a Typical Desktop PC

    NASA Technical Reports Server (NTRS)

    White, Michael J.

    2006-01-01

    This paper will discuss and compare the execution times of three examples of the Discrete Fourier Transform (DFT). The first two examples will demonstrate the direct implementation of the algorithm. In the first example, the Fourier coefficients are generated at the execution of the DFT. In the second example, the coefficients are generated prior to execution and the DFT coefficients are indexed at execution. The last example will demonstrate the Cooley-Tukey algorithm, better known as the Fast Fourier Transform. All examples were written in C and executed on a PC using a Pentium 4 running at 1.7 GHz. As a function of N, the total complex data size, the direct-implementation DFT executes, as expected, at order N² and the FFT executes at order N log₂ N. At N=16K, there is an increase in processing time beyond what is expected. This is not caused by the implementation but is a consequence of the effect that machine architecture and memory hierarchy have on implementation. This paper will include a brief overview of digital signal processing, along with a discussion of contemporary work with discrete Fourier processing.
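
    A quick way to reproduce the qualitative scaling described here is to time a direct O(N²) DFT (with coefficients precomputed, as in the paper's second example) against a library FFT; absolute times and the N=16K memory-hierarchy effect depend on the machine.

    ```python
    # Timing sketch: direct O(N^2) DFT with a precomputed coefficient matrix
    # versus numpy's O(N log N) FFT.
    import numpy as np
    import time

    def direct_dft(x):
        N = x.size
        n = np.arange(N)
        W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # coefficient matrix
        return W @ x

    for N in (256, 1024, 2048):
        x = np.random.randn(N) + 1j * np.random.randn(N)
        t0 = time.perf_counter(); direct_dft(x); t1 = time.perf_counter()
        np.fft.fft(x); t2 = time.perf_counter()
        print(f"N={N:5d}  direct {t1-t0:.4f}s  fft {t2-t1:.6f}s")
    ```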

  12. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics — Monte Carlo Canonical Propagation Algorithm

    PubMed Central

    Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît

    2016-01-01

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
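
    A minimal sketch of the hybrid MD-MC idea: propagate a short trajectory under an inexpensive Hamiltonian, then Metropolis-accept the endpoint against the reference Hamiltonian so that sampling remains consistent with its Boltzmann distribution. The one-dimensional potentials and all parameters below are toy stand-ins, not the DHMTS scheme itself.

    ```python
    # Toy hybrid MD-MC step: leapfrog under a cheap Hamiltonian, Metropolis
    # acceptance with the expensive reference Hamiltonian.
    import numpy as np

    rng = np.random.default_rng(1)
    beta = 1.0
    U_ref = lambda x: 0.5 * x**2 + 0.1 * x**4   # "expensive" reference
    # The cheap surrogate is harmonic: force = -x.

    def hybrid_step(x, dt=0.1, nsteps=10):
        p = rng.normal()                        # fresh momentum
        xn, pn = x, p
        for _ in range(nsteps):                 # leapfrog under the cheap H
            pn -= 0.5 * dt * xn
            xn += dt * pn
            pn -= 0.5 * dt * xn
        # Accept/reject with the *reference* Hamiltonian, treating the
        # cheap-propagation error as external work.
        dH = (U_ref(xn) + 0.5 * pn**2) - (U_ref(x) + 0.5 * p**2)
        return xn if rng.random() < np.exp(-beta * dH) else x

    x, xs = 0.0, []
    for _ in range(5000):
        x = hybrid_step(x)
        xs.append(x)
    print(np.mean(xs), np.var(xs))
    ```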

  13. Optimal sensor placement for time-domain identification using a wavelet-based genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mahdavi, Seyed Hossein; Razak, Hashim Abdul

    2016-06-01

    This paper presents a wavelet-based genetic algorithm strategy for optimal sensor placement (OSP) effective for time-domain structural identification. Initially, the GA-based fitness evaluation is significantly improved by using adaptive wavelet functions. Later, a multi-species decimal GA coding system is modified to be suitable for an efficient search around the local optima. In this regard, a local mutation operation is introduced along with regeneration and reintroduction operators. It is concluded that different characteristics of the applied force influence the features of structural responses, and therefore the accuracy of time-domain structural identification is directly affected. Thus, a reliable OSP strategy prior to the time-domain identification is achieved by methods that minimize the distance between the simulated responses of the entire system and of the condensed system, taking the force effects into account. Numerical and experimental verification demonstrates the considerably high computational performance of the proposed OSP strategy, in terms of both computational cost and identification accuracy. It is deduced that the robustness of the proposed OSP algorithm lies in the precise and fast fitness evaluation at larger sampling rates, which results in the optimum evaluation of the GA-based exploration and exploitation phases towards the global optimum solution.

  14. An algorithm for the localization of multiple interfering sperm whales using multi-sensor time difference of arrival.

    PubMed

    Baggenstoss, Paul M

    2011-07-01

    In this paper an algorithm is described for the localization of individual sperm whales in situations where several nearby animals are simultaneously vocalizing. The algorithm operates on time-difference of arrival (TDOA) measurements observed at sensor pairs and assumes no prior knowledge of the TDOA-whale associations. In other words, it solves the problem of associating TDOAs to whales. The algorithm is able to resolve association disputes where a given TDOA measurement may fit more than one position estimate, and it can handle spurious TDOAs. The algorithm also provides estimates of the Cramer-Rao lower bound for the position estimates. The algorithm was tested with real data using TDOA estimates obtained by cross-correlating click-trains. The click-trains were generated by a separate algorithm that operated independently on each sensor to produce click-trains corresponding to a given whale and to reject click-trains from reflected propagation paths.

  15. Diagnosis of Time of Increased Probability of strong earthquakes in different regions of the world: algorithm CN

    NASA Astrophysics Data System (ADS)

    Keilis-Borok, V. I.; Rotwain, I. M.

    An algorithm for intermediate-term earthquake prediction is suggested that allows diagnosis of the times of increased probability of strong earthquakes (TIPs). TIPs are declared for a time period of one year and an area with linear dimensions of a few hundred kilometers, and can be extended in time. The algorithm is based on the following traits of the earthquake flow: the level of seismic activity; its temporal variation; the clustering of earthquakes in space and time; their concentration in space; and their long-range interaction. The algorithm is normalized so that it can be applied in various regions without readaptation. TIPs diagnosed by the algorithm precede ~80% of strong earthquakes and take on average ~24% of the total time.

  16. Optimizing the decomposition of soil moisture time-series data using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Kulkarni, C.; Mengshoel, O. J.; Basak, A.; Schmidt, K. M.

    2015-12-01

    The task of determining near-surface volumetric water content (VWC), using commonly available dielectric sensors (based upon capacitance or frequency domain technology), is made challenging by the presence of "noise" such as temperature-driven diurnal variations in the recorded data. We analyzed a post-wildfire rainfall and runoff monitoring dataset for hazard studies in Southern California. VWC was measured with EC-5 sensors manufactured by Decagon Devices. Many traditional signal smoothing techniques exist, such as moving averages, splines, and Loess smoothing. Unfortunately, when applied to our post-wildfire dataset, these techniques diminish maxima, introduce time shifts, and blur signal details. A promising seasonal-trend decomposition procedure based on Loess (STL) decomposes a VWC time series into trend, seasonality, and remainder components. Unfortunately, STL with its default parameters produces results similar to the previously mentioned smoothing methods. We propose a novel method to optimize seasonal decomposition using STL with genetic algorithms. This method successfully reduces "noise", including diurnal variations, while preserving maxima, minima, and signal detail. Better decomposition results for the post-wildfire VWC dataset were achieved by optimizing STL's control parameters using genetic algorithms. The genetic algorithms minimize an additive objective function with three weighted terms: (i) root mean squared error (RMSE) of a straight line relative to the STL trend line; (ii) range of the STL remainder; and (iii) variance of the STL remainder. Our optimized STL method, combining trend and remainder, provides an improved representation of signal details by preserving maxima and minima as compared to the traditional smoothing techniques for the post-wildfire rainfall and runoff monitoring data. This method identifies short- and long-term VWC seasonality and provides trend and remainder data suitable for forecasting VWC in response to precipitation.
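
    A sketch of the tuning loop this describes, assuming statsmodels' STL implementation: the objective combines the three weighted terms named above, and a plain random search stands in for the genetic algorithm. The synthetic series, weights, and parameter ranges are illustrative.

    ```python
    # STL parameter tuning with the weighted three-term objective; a random
    # search is used here as a simple stand-in for a genetic algorithm.
    import numpy as np
    from statsmodels.tsa.seasonal import STL

    rng = np.random.default_rng(2)
    t = np.arange(10 * 48)  # 10 days of half-hourly samples (period = 48)
    vwc = 0.3 + 0.02 * np.sin(2 * np.pi * t / 48) \
          + 0.005 * rng.standard_normal(t.size)

    def objective(res, w=(1.0, 1.0, 1.0)):
        trend = res.trend
        # (i) RMSE of a straight line fitted to the STL trend,
        line = np.polyval(np.polyfit(t, trend, 1), t)
        rmse = np.sqrt(np.mean((trend - line) ** 2))
        # (ii) range and (iii) variance of the STL remainder.
        return w[0] * rmse + w[1] * np.ptp(res.resid) + w[2] * np.var(res.resid)

    best = (np.inf, None)
    for _ in range(20):                         # GA stand-in: random search
        seasonal = int(2 * rng.integers(3, 30) + 1)  # odd window, >= 7
        res = STL(vwc, period=48, seasonal=seasonal).fit()
        score = objective(res)
        if score < best[0]:
            best = (score, seasonal)
    print("best seasonal window:", best[1])
    ```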

  17. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. Over the last few years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine the qualities of the human observer's perception with those of the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy to use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  18. A new comparison of hyperspectral anomaly detection algorithms for real-time applications

    NASA Astrophysics Data System (ADS)

    Díaz, María.; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    Due to the high spectral resolution that remotely sensed hyperspectral images provide, there has been an increasing interest in anomaly detection. The aim of anomaly detection is to single out pixels whose spectral signature differs significantly from the background spectra. Basically, anomaly detectors mark pixels with a certain score, considering as anomalies those whose scores are higher than a threshold. Receiver Operating Characteristic (ROC) curves have been widely used as an assessment measure in order to compare the performance of different algorithms. ROC curves are graphical plots which illustrate the trade-off between false positive and true positive rates. However, they are of limited use for deeper comparisons because they discard relevant factors required in real-time applications, such as run times, costs of misclassification, and the ability to mark anomalies with high scores. This last factor is fundamental in anomaly detection in order to distinguish anomalies easily from the background without any posterior processing. An extensive set of simulations has been made using different anomaly detection algorithms, comparing their performances and efficiencies using several extra metrics in order to complement ROC curve analysis. Results support our proposal and demonstrate that ROC curves do not provide a good visualization of detection performance by themselves. Moreover, a figure of merit has been proposed in this paper which encompasses in a single global metric all the measures yielded by the proposed additional metrics. This figure, named Detection Efficiency (DE), takes into account several crucial types of performance assessment that ROC curves do not consider. Results demonstrate that algorithms with the best detection performances according to ROC curves do not have the highest DE values. Consequently, the recommendation of using extra measures to properly evaluate performances has been supported and justified by
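
    Comparisons of this kind start from detectors and ROC curves; as a concrete baseline, the sketch below runs the classical global RX (Reed-Xiaoli) detector on a synthetic cube and computes a ROC curve manually. The paper's Detection Efficiency figure of merit is not reproduced; the cube and ground truth are synthetic.

    ```python
    # Global RX anomaly detector (Mahalanobis distance to the background)
    # plus a manual ROC/AUC computation on synthetic data.
    import numpy as np

    rng = np.random.default_rng(3)
    H, W, B = 64, 64, 20
    cube = rng.normal(size=(H, W, B))
    cube[30:33, 30:33, :] += 3.0                  # implanted anomalies
    truth = np.zeros((H, W), bool)
    truth[30:33, 30:33] = True

    X = cube.reshape(-1, B)
    mu = X.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, Sinv, d)  # RX score per pixel

    # Manual ROC: sweep thresholds over scores in decreasing order.
    order = np.argsort(-scores)
    tp = np.cumsum(truth.ravel()[order])
    fp = np.cumsum(~truth.ravel()[order])
    tpr, fpr = tp / truth.sum(), fp / (~truth).sum()
    print(f"AUC = {np.trapz(tpr, fpr):.3f}")
    ```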

  19. Continuous time boolean modeling for biological signaling: application of Gillespie algorithm

    PubMed Central

    2012-01-01

    Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. Background: There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing various chemical species concentrations by real numbers, mainly based on differential equations and chemical kinetics formalism; (2) qualitative modeling, representing chemical species concentrations or activities by a finite set of discrete values. Both approaches answer particular (and often different) biological questions. The qualitative modeling approach permits a simple and less detailed description of the biological system, and efficiently describes stable-state identification, but remains inconvenient for describing the transient kinetics leading to these states. In this context, time is represented by discrete steps. Quantitative modeling, on the other hand, can describe more accurately the dynamical behavior of biological processes as it follows the evolution of concentrations or activities of chemical species as a function of time, but requires a substantial amount of parameter information that is difficult to find in the literature. Results: Here, we propose a modeling framework based on a qualitative approach that is intrinsically continuous in time. The algorithm presented in this article fills the gap between qualitative and quantitative modeling. It is based on a continuous-time Markov process applied on a Boolean state space. In order to describe the temporal evolution of the biological process we wish to model, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. Mathematically, this approach can be translated into a set of
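
    A minimal sketch of the underlying mechanics, applying Gillespie's stochastic simulation algorithm to an asynchronously updated two-node Boolean network with state-dependent transition rates; the rate rules below are toy examples, not the paper's transition language.

    ```python
    # Continuous-time Boolean simulation via the Gillespie algorithm: draw an
    # exponential waiting time from the total rate, then flip one node chosen
    # in proportion to its rate.
    import numpy as np

    rng = np.random.default_rng(4)

    def rates(state):
        a, b = state
        # Toy rules: A switches fast when both nodes are off; B tends to
        # follow A. All rate values are illustrative.
        return np.array([
            2.0 if (not a and not b) else 0.1,   # flip rate of node A
            1.5 if (b != a) else 0.1,            # flip rate of node B
        ])

    state, t, t_end = [False, False], 0.0, 10.0
    trace = [(t, tuple(state))]
    while t < t_end:
        r = rates(state)
        total = r.sum()
        t += rng.exponential(1.0 / total)        # waiting time to next event
        node = rng.choice(len(state), p=r / total)
        state[node] = not state[node]            # asynchronous Boolean update
        trace.append((t, tuple(state)))
    print(trace[:5])
    ```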

  20. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    1992-01-01

    The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and, therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development.
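
    For orientation, the sketch below implements plain intra-line DPCM with a previous-pixel predictor and a uniform quantizer; the paper's non-adaptive predictor, non-uniform quantizer, and multilevel Huffman stages are not reproduced.

    ```python
    # Minimal DPCM codec: predict from the previous reconstructed pixel,
    # quantize the prediction error, and track the decoder's state in the
    # encoder so errors do not accumulate.
    import numpy as np

    def dpcm_encode(line, step=8):
        pred, codes = 0, []
        for pix in line:
            e = int(pix) - pred                  # prediction error
            q = int(round(e / step))             # uniform quantization
            codes.append(q)
            pred = int(np.clip(pred + q * step, 0, 255))
        return codes

    def dpcm_decode(codes, step=8):
        pred, out = 0, []
        for q in codes:
            pred = int(np.clip(pred + q * step, 0, 255))
            out.append(pred)
        return np.array(out, dtype=np.uint8)

    line = (128 + 60 * np.sin(np.linspace(0, 4, 64))).astype(np.uint8)
    rec = dpcm_decode(dpcm_encode(line))
    print("max abs error:", int(np.abs(rec.astype(int) - line.astype(int)).max()))
    ```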

  1. Improved Data Preprocessing Algorithm for Time-Domain Induced Polarization Method with Digital Notch Filter

    NASA Astrophysics Data System (ADS)

    Ge, Shuang-Chao; Deng, Ming; Chen, Kai; Li, Bin; Li, Yuan

    2016-12-01

    Time-domain induced polarization (TDIP) measurement is seriously affected by power line interference and other field noise. Moreover, existing TDIP instruments generally output only the apparent chargeability, without providing complete secondary field information. To increase the robustness of the TDIP method against interference and obtain more detailed secondary field information, an improved data-processing algorithm is proposed here. This method includes an efficient digital notch filter which can effectively eliminate all the main components of the power line interference. A hardware model of this filter was constructed, and VHSIC Hardware Description Language code for it was generated using Digital Signal Processor Builder. In addition, a time-location method was proposed to extract secondary field information in case of unexpected data loss or failure of the synchronization technologies. Finally, the validity and accuracy of the method and the notch filter were verified using the Cole-Cole model implemented in Simulink. Moreover, indoor and field tests confirmed the effectiveness of the algorithm in fieldwork.
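
    A software analogue of such a notch stage can be sketched with SciPy's IIR notch design, cascading notches at the mains fundamental and harmonics; the 50 Hz mains frequency, sampling rate, and decay model below are assumptions.

    ```python
    # Cascaded IIR notch filtering of a synthetic TDIP-style decay corrupted
    # by power-line interference.
    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 1000.0                      # sampling rate, Hz (illustrative)
    t = np.arange(0, 2, 1 / fs)
    decay = np.exp(-t / 0.8)         # stand-in for a secondary-field decay
    noisy = decay + 0.2 * np.sin(2 * np.pi * 50 * t) \
                  + 0.1 * np.sin(2 * np.pi * 150 * t)

    clean = noisy
    for f0 in (50.0, 150.0, 250.0):  # mains fundamental and odd harmonics
        b, a = iirnotch(f0, Q=30.0, fs=fs)
        clean = filtfilt(b, a, clean)  # zero-phase filtering
    print("residual RMS vs decay:", np.sqrt(np.mean((clean - decay) ** 2)))
    ```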

  2. A fast algorithm to compute precise type-2 centroids for real-time control applications.

    PubMed

    Chakraborty, Sumantra; Konar, Amit; Ralescu, Anca; Pal, Nikhil R

    2015-02-01

    An interval type-2 fuzzy set (IT2 FS) is characterized by its upper and lower membership functions containing all possible embedded fuzzy sets, which together are referred to as the footprint of uncertainty (FOU). The FOU results in a span of uncertainty measured in the defuzzified space and is determined by the positional difference of the centroids of all the embedded fuzzy sets taken together. This paper provides a closed-form formula to evaluate the span of uncertainty of an IT2 FS. The closed-form formula offers a precise measurement of the degree of uncertainty in an IT2 FS with a runtime complexity less than that of the classical iterative Karnik-Mendel algorithm and other formulations employing the iterative Newton-Raphson algorithm. This paper also demonstrates a real-time control application using the proposed closed-form formula of centroids, with lower root mean square error and computational overhead than the existing methods. Computer simulations for this real-time control application indicate that parallel realization of the IT2 defuzzification outperforms its competitors with respect to maximum overshoot, even at high sampling rates. Furthermore, in the presence of measurement noise in the system (plant) states, the proposed IT2 FS based scheme outperforms its type-1 counterpart with respect to peak overshoot and root mean square error in the plant response.
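
    The paper's closed-form formula is not reproduced here, but the iterative Karnik-Mendel procedure it is benchmarked against is standard and short; the sketch below computes the centroid interval (and hence the span of uncertainty) for an illustrative IT2 FS.

    ```python
    # Iterative Karnik-Mendel computation of the left/right centroid
    # endpoints of an interval type-2 fuzzy set.
    import numpy as np

    def km_bounds(x, lmf, umf, tol=1e-9):
        def endpoint(left):
            theta = 0.5 * (lmf + umf)
            c = np.sum(x * theta) / np.sum(theta)
            while True:
                # Left endpoint: upper MF below the switch point, lower above.
                # Right endpoint: the opposite assignment.
                theta = np.where(x <= c, umf if left else lmf,
                                         lmf if left else umf)
                c_new = np.sum(x * theta) / np.sum(theta)
                if abs(c_new - c) < tol:
                    return c_new
                c = c_new
        return endpoint(True), endpoint(False)

    x = np.linspace(0, 10, 101)
    umf = np.exp(-0.5 * ((x - 5) / 2.0) ** 2)   # upper membership (example)
    lmf = 0.6 * umf                              # lower membership (example)
    cl, cr = km_bounds(x, lmf, umf)
    print(f"centroid interval: [{cl:.3f}, {cr:.3f}], span = {cr - cl:.3f}")
    ```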

  3. Numerical Analysis and Improved Algorithms for Lyapunov-Exponent Calculation of Discrete-Time Chaotic Systems

    NASA Astrophysics Data System (ADS)

    He, Jianbin; Yu, Simin; Cai, Jianping

    2016-12-01

    The Lyapunov exponent is an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, according to the eigenvalue method, more accurate calculations of the Lyapunov exponents can be obtained by increasing the number of iterations, and the limits exist. However, due to the finite precision of computers and other reasons, the results may overflow, be unrecognizable, or be inaccurate, which can be stated as follows: (1) the number of iterations cannot be too large, otherwise the simulation result will appear as an error message of NaN or Inf; (2) if the error message of NaN or Inf does not appear, then with increasing iterations all Lyapunov exponents get close to the largest Lyapunov exponent, which leads to inaccurate calculation results; (3) from the viewpoint of numerical calculation, if the number of iterations is too small, then the results are also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms, via QR orthogonal decomposition and SVD orthogonal decomposition approaches, so as to solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
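
    The QR-based remedy is the standard one: re-orthogonalize the propagated tangent frame at every step and accumulate the logarithms of the diagonal of R, which prevents both overflow and the alignment of all directions with the dominant exponent. The sketch below applies it to the Hénon map, whose exponents are approximately 0.42 and -1.62.

    ```python
    # QR-based Lyapunov exponents for the Henon map.
    import numpy as np

    a, b = 1.4, 0.3
    jac = lambda x, y: np.array([[-2 * a * x, 1.0], [b, 0.0]])

    x, y = 0.1, 0.1
    Q = np.eye(2)
    lyap = np.zeros(2)
    n = 20000
    for _ in range(n):
        J = jac(x, y)
        x, y = 1 - a * x * x + y, b * x   # map update (uses the old x for y)
        Q, R = np.linalg.qr(J @ Q)        # re-orthogonalize the tangent frame
        lyap += np.log(np.abs(np.diag(R)))
    print(lyap / n)   # approximately (0.42, -1.62)
    ```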

  4. X-LUNA: Extending Free/Open Source Real Time Executive for On-Board Space Applications

    NASA Astrophysics Data System (ADS)

    Braga, P.; Henriques, L.; Zulianello, M.

    2008-08-01

    In this paper we present xLuna, a system based on the RTEMS [1] Real-Time Operating System that is able to run, on demand, a GNU/Linux Operating System [2] as RTEMS' lowest-priority task. Linux runs in user mode and in a different memory partition. This allows running hard real-time tasks and Linux applications on the same system, sharing the hardware resources while keeping a safe isolation and the real-time characteristics of RTEMS. Communication between both systems is possible through a loosely coupled mechanism based on message queues. Currently only the SPARC LEON2 processor with Memory Management Unit (MMU) is supported. The advantage of having two isolated systems is that non-critical components are quickly developed or simply ported, reducing time-to-market and budget.

  5. Development of a Near-Real Time Hail Damage Swath Identification Algorithm for Vegetation

    NASA Technical Reports Server (NTRS)

    Bell, Jordan R.; Molthan, Andrew L.; Schultz, Lori A.; McGrath, Kevin M.; Burks, Jason E.

    2015-01-01

    The Midwest is home to one of the world's largest agricultural growing regions. Between late May and early September, with irrigation and seasonal rainfall, these crops are able to reach their full maturity. Using moderate to high resolution remote sensors, the monitoring of the vegetation can be achieved using the red and near-infrared wavelengths. These wavelengths allow for the calculation of vegetation indices, such as the Normalized Difference Vegetation Index (NDVI). The vegetation growth and greenness in this region evolve uniformly as the growing season progresses. However, one of the biggest threats to Midwest vegetation during this time period is thunderstorms that bring large hail and damaging winds. Hail and wind damage to crops can be very expensive to crop growers, and damage can be spread over long swaths associated with the tracks of the damaging storms. Damage to the vegetation can be apparent in remotely sensed imagery: changes appear slowly over time as slightly damaged crops wilt, or more readily if the storms strip material from the crops or destroy them completely. Previous work on identifying these hail damage swaths used manual interpretation of moderate and higher resolution satellite imagery. With the development of an automated and near-real-time hail swath damage identification algorithm, detection can be improved and more damage indicators can be created in a faster and more efficient way. The automated detection of hail damage swaths will examine short-term, large changes in the vegetation by differencing near-real-time eight-day NDVI composites and comparing them to post-storm imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard Terra and Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard Suomi NPP. In addition, land surface temperatures from these instruments will be examined as

  6. Wireless acoustic modules for real-time data fusion using asynchronous sniper localization algorithms

    NASA Astrophysics Data System (ADS)

    Hengy, S.; De Mezzo, S.; Duffner, P.; Naz, P.

    2012-11-01

    The presence of snipers in modern conflicts leads to high insecurity for soldiers. In order to improve soldiers' protection against this threat, the French-German Research Institute of Saint-Louis (ISL) has been conducting studies in the domain of acoustic localization of shots. Mobile antennas mounted on the soldier's helmet were initially used for real-time detection, classification and localization of sniper shots. This approach showed good performance in land scenarios, and also in urban scenarios if the array was in the shot corridor, meaning that the microphones first detect the direct wave and then the reflections of the Mach and muzzle waves (15% error in the estimated shooter-to-array distance). Fusing data sent by multiple sensor nodes distributed on the field revealed some of the limitations of the technologies implemented in ISL's demonstrators. Among others, the determination of the arrays' orientation was not accurate enough, thereby degrading the performance of data fusion. Some new solutions have been developed in the past year in order to obtain better performance for data fusion. Asynchronous localization algorithms have been developed and post-processed on data measured in both free-field and urban environments with acoustic modules on the line of sight of the shooter. These results are presented in the first part of the paper. The impact of GPS position estimation error is also discussed in the article in order to evaluate the possible use of those algorithms for real-time processing using mobile acoustic nodes. In the frame of ISL's transverse project IMOTEP (IMprovement Of optical and acoustical TEchnologies for the Protection), demonstrators are being developed that will allow real-time asynchronous localization of sniper shots. An embedded detection and classification algorithm is implemented on wireless acoustic modules that send the relevant information to a central PC. Data fusion is then processed and the

  7. Relation between the extended time-delayed feedback control algorithm and the method of harmonic oscillators.

    PubMed

    Pyragas, Viktoras; Pyragas, Kestutis

    2015-08-01

    In a recent paper [Phys. Rev. E 91, 012920 (2015)] Olyaei and Wu have proposed a new chaos control method in which a target periodic orbit is approximated by a system of harmonic oscillators. We consider an application of such a controller to single-input single-output systems in the limit of an infinite number of oscillators. By evaluating the transfer function in this limit, we show that this controller transforms into the known extended time-delayed feedback controller. This finding gives rise to an approximate finite-dimensional theory of the extended time-delayed feedback control algorithm, which provides a simple method for estimating the leading Floquet exponents of controlled orbits. Numerical demonstrations are presented for the chaotic Rössler, Duffing, and Lorenz systems as well as the normal form of the Hopf bifurcation.

  8. Relation between the extended time-delayed feedback control algorithm and the method of harmonic oscillators

    NASA Astrophysics Data System (ADS)

    Pyragas, Viktoras; Pyragas, Kestutis

    2015-08-01

    In a recent paper [Phys. Rev. E 91, 012920 (2015), 10.1103/PhysRevE.91.012920] Olyaei and Wu have proposed a new chaos control method in which a target periodic orbit is approximated by a system of harmonic oscillators. We consider an application of such a controller to single-input single-output systems in the limit of an infinite number of oscillators. By evaluating the transfer function in this limit, we show that this controller transforms into the known extended time-delayed feedback controller. This finding gives rise to an approximate finite-dimensional theory of the extended time-delayed feedback control algorithm, which provides a simple method for estimating the leading Floquet exponents of controlled orbits. Numerical demonstrations are presented for the chaotic Rössler, Duffing, and Lorenz systems as well as the normal form of the Hopf bifurcation.
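
    For reference, the extended time-delayed feedback law can be written in the recursive form u(t) = K[y(t-τ) - y(t)] + R·u(t-τ), which is equivalent to the usual geometric sum over past delays. The sketch below applies it to the Rössler system with an Euler integrator; the gains, delay, and coupling through the y-equation are illustrative choices, not the paper's settings.

    ```python
    # Extended time-delayed feedback (ETDF) in recursive form, applied to
    # the Rossler system. Ring buffers store y and u one delay in the past.
    import numpy as np

    K, R, tau, dt = 0.2, 0.5, 5.88, 0.01   # tau ~ period of a target orbit
    nd = int(round(tau / dt))
    ybuf, ubuf = np.zeros(nd), np.zeros(nd)

    def rossler(s, u):
        x, y, z = s
        return np.array([-y - z, x + 0.2 * y + u, 0.2 + z * (x - 5.7)])

    s = np.array([1.0, 1.0, 0.0])
    u = 0.0
    for k in range(60000):
        i = k % nd
        u = K * (ybuf[i] - s[1]) + R * ubuf[i]  # buffers hold t - tau values
        ybuf[i], ubuf[i] = s[1], u              # overwrite with time-t values
        s = s + dt * rossler(s, u)              # Euler step, control on y
    print("final control amplitude:", abs(u))
    ```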

  9. Statistical-Mechanical Analysis of LMS Algorithm for Time-Varying Unknown System

    NASA Astrophysics Data System (ADS)

    Ishibushi, Norihiro; Kajikawa, Yoshinobu; Miyoshi, Seiji

    2017-02-01

    We analyze the behaviors of the least-mean-square algorithm for a time-varying unknown system using a statistical-mechanical method. Cross-correlations between the elements of a primary path and those of an adaptive filter and autocorrelations of the elements of the adaptive filter are treated as macroscopic variables. We obtain simultaneous differential equations that describe the dynamical behaviors of the macroscopic variables under conditions in which the tapped delay line is sufficiently long. We analytically show the existence of an optimal step size. This result is supporting evidence of Widrow et al.'s pioneering work that clarified the trade-off between the noise misadjustment and the lag misadjustment. Furthermore, we obtain the exact solution of the optimal step size in the case of a white reference signal. The derived theory includes the behaviors for a time-constant unknown system as a special case.
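
    The noise-versus-lag trade-off analyzed here can be reproduced empirically: the sketch below runs LMS against a random-walk-drifting unknown system at several step sizes, where an interior step size gives the lowest steady-state error. The drift rate, noise level, and filter length are illustrative.

    ```python
    # LMS tracking of a time-varying (random-walk) unknown system: small mu
    # lags the drift, large mu amplifies noise; an optimum lies in between.
    import numpy as np

    rng = np.random.default_rng(5)
    N, taps = 20000, 8

    def run(mu):
        w_true = rng.normal(size=taps)
        w = np.zeros(taps)
        xbuf = np.zeros(taps)
        err2, cnt = 0.0, 0
        for n in range(N):
            w_true += 1e-4 * rng.normal(size=taps)   # slow random-walk drift
            xbuf = np.roll(xbuf, 1)
            xbuf[0] = rng.normal()                   # white reference input
            d = w_true @ xbuf + 0.05 * rng.normal()  # noisy desired signal
            e = d - w @ xbuf
            w += mu * e * xbuf                       # LMS update
            if n >= N // 2:                          # steady-state half only
                err2 += e * e
                cnt += 1
        return err2 / cnt

    for mu in (0.002, 0.02, 0.2):
        print(f"mu={mu:5.3f}  steady-state MSE={run(mu):.4f}")
    ```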

  10. A Real-Time and Closed-Loop Control Algorithm for Cascaded Multilevel Inverter Based on Artificial Neural Network

    PubMed Central

    Wang, Libing; Mao, Chengxiong; Wang, Dan; Lu, Jiming; Zhang, Junfeng; Chen, Xun

    2014-01-01

    In order to control the cascaded H-bridges (CHB) converter with staircase modulation strategy in a real-time manner, a real-time and closed-loop control algorithm based on an artificial neural network (ANN) for the three-phase CHB converter is proposed in this paper. It requires little computation time and memory and consists of two steps. In the first step, the hierarchical particle swarm optimizer with time-varying acceleration coefficient (HPSO-TVAC) algorithm is employed to minimize the total harmonic distortion (THD) and generate the optimal switching angles offline. In the second step, part of the optimal switching angles are used to train an ANN, and the well-designed ANN can generate optimal switching angles in a real-time manner. Compared with the previous real-time algorithm, the proposed algorithm is suitable for a wider range of modulation index and results in a smaller THD and a lower calculation time. Furthermore, the well-designed ANN is embedded into a closed-loop control algorithm for the CHB converter with variable direct voltage (DC) sources. Simulation results demonstrate that the proposed closed-loop control algorithm is able to quickly stabilize the load voltage and minimize the line current's THD (<5%) when subjected to DC source or load disturbances. In the real design stage, a switching angle pulse generation scheme is proposed and experimental results verify its correctness. PMID:24772025

  11. A real-time and closed-loop control algorithm for cascaded multilevel inverter based on artificial neural network.

    PubMed

    Wang, Libing; Mao, Chengxiong; Wang, Dan; Lu, Jiming; Zhang, Junfeng; Chen, Xun

    2014-01-01

    In order to control the cascaded H-bridges (CHB) converter with staircase modulation strategy in a real-time manner, a real-time and closed-loop control algorithm based on an artificial neural network (ANN) for the three-phase CHB converter is proposed in this paper. It requires little computation time and memory and consists of two steps. In the first step, the hierarchical particle swarm optimizer with time-varying acceleration coefficient (HPSO-TVAC) algorithm is employed to minimize the total harmonic distortion (THD) and generate the optimal switching angles offline. In the second step, part of the optimal switching angles are used to train an ANN, and the well-designed ANN can generate optimal switching angles in a real-time manner. Compared with the previous real-time algorithm, the proposed algorithm is suitable for a wider range of modulation index and results in a smaller THD and a lower calculation time. Furthermore, the well-designed ANN is embedded into a closed-loop control algorithm for the CHB converter with variable direct voltage (DC) sources. Simulation results demonstrate that the proposed closed-loop control algorithm is able to quickly stabilize the load voltage and minimize the line current's THD (<5%) when subjected to DC source or load disturbances. In the real design stage, a switching angle pulse generation scheme is proposed and experimental results verify its correctness.
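
    For context, the offline stage of such schemes minimizes THD over the switching angles of the staircase waveform, whose odd-harmonic amplitudes are proportional to (1/n)·Σᵢ cos(n·θᵢ); the sketch below evaluates that objective for an illustrative five-cell angle set (not the HPSO-TVAC optimum, and without the ANN stage; triplen-harmonic handling for three-phase systems is also omitted).

    ```python
    # Staircase-modulation THD objective for a cascaded H-bridge with one
    # switching angle per cell.
    import numpy as np

    def harmonics(theta, nmax=49):
        ns = np.arange(1, nmax + 1, 2)             # odd harmonics only
        V = np.array([np.sum(np.cos(n * theta)) / n for n in ns])
        return ns, V

    def thd(theta):
        ns, V = harmonics(theta)
        return np.sqrt(np.sum(V[1:] ** 2)) / abs(V[0])

    theta = np.radians([12.0, 28.0, 47.0, 62.0, 75.0])  # illustrative angles
    ns, V = harmonics(theta)
    print(f"fundamental (relative) = {V[0]:.3f}, THD = {100 * thd(theta):.2f}%")
    ```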

  12. Solving Energy-Aware Real-Time Tasks Scheduling Problem with Shuffled Frog Leaping Algorithm on Heterogeneous Platforms

    PubMed Central

    Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M.K.

    2015-01-01

    Reducing energy consumption is becoming very important in order to extend battery life and lower overall operational costs for heterogeneous real-time multiprocessor systems. In this paper, we first formulate this as a combinatorial optimization problem. Then, a successful meta-heuristic, called the Shuffled Frog Leaping Algorithm (SFLA), is proposed to reduce the energy consumption. Precocity remission and local optimal avoidance techniques are proposed to avoid precocity and improve the solution quality. Convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm, and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic is 20 and 200 times shorter than that of ACO and GA, respectively, for finding the optimal solution. PMID:26110406

  13. Solving Energy-Aware Real-Time Tasks Scheduling Problem with Shuffled Frog Leaping Algorithm on Heterogeneous Platforms.

    PubMed

    Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M K

    2015-06-11

    Reducing energy consumption is becoming very important in order to extend battery life and lower overall operational costs for heterogeneous real-time multiprocessor systems. In this paper, we first formulate this as a combinatorial optimization problem. Then, a successful meta-heuristic, called the Shuffled Frog Leaping Algorithm (SFLA), is proposed to reduce the energy consumption. Precocity remission and local optimal avoidance techniques are proposed to avoid precocity and improve the solution quality. Convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm, and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic is 20 and 200 times shorter than that of ACO and GA, respectively, for finding the optimal solution.
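
    A minimal sketch of the SFLA mechanics on a toy continuous objective: frogs are sorted, dealt into memeplexes, and the worst frog in each memeplex leaps toward the local best or is reseeded. The task-to-processor encoding, the energy model, and the paper's precocity-remission refinements are all omitted.

    ```python
    # Bare-bones shuffled frog leaping on a toy objective (sum of squares
    # standing in for an energy model).
    import numpy as np

    rng = np.random.default_rng(6)
    energy = lambda x: np.sum(x ** 2)
    dim, frogs, memeplexes, iters = 10, 30, 3, 200

    pop = rng.uniform(-5, 5, size=(frogs, dim))
    for _ in range(iters):
        order = np.argsort([energy(f) for f in pop])   # best first
        pop = pop[order]
        for m in range(memeplexes):
            idx = np.arange(m, frogs, memeplexes)      # cyclic partition
            worst, best = idx[-1], idx[0]              # within this memeplex
            step = rng.random() * (pop[best] - pop[worst])
            trial = pop[worst] + step                  # leap toward local best
            if energy(trial) < energy(pop[worst]):
                pop[worst] = trial
            else:                                      # otherwise reseed frog
                pop[worst] = rng.uniform(-5, 5, dim)
    print("best energy:", min(energy(f) for f in pop))
    ```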

  14. A Note on Inconsistent Axioms in Rushby's Systematic Formal Verification for Fault-Tolerant Time-Triggered Algorithms

    NASA Technical Reports Server (NTRS)

    Pike, Lee

    2005-01-01

    I describe some inconsistencies in John Rushby's axiomatization of time-triggered algorithms that he presents in these transactions and that he formally specifies and verifies in a mechanical theorem-prover. I also present corrections for these inconsistencies.

  15. Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks

    DOE PAGES

    Rudinger, Kenneth; Gamble, John King; Bach, Eric; ...

    2013-07-01

    Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.

  16. Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks

    SciTech Connect

    Rudinger, Kenneth; Gamble, John King; Bach, Eric; Friesen, Mark; Joynt, Robert; Coppersmith, S. N.

    2013-07-01

    Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.

  17. Ultrasonic bone localization algorithm based on time-series cumulative kurtosis.

    PubMed

    Robles, Guillermo; Fresno, José Manuel; Giannetti, Romano

    2017-01-01

    The design and optimization of protective equipment and devices such as exoskeletons and prosthetics could be enhanced by the ability to accurately measure the positions of the bones during movement. Existing technologies allow fairly precise measurement of motion, mainly by using calibrated video cameras and skin-mounted markers, but fail to measure the bone position directly. Alternative approaches, such as fluoroscopy, are too invasive and not usable over extended periods of time, whether for cost or radiation exposure. One approach to the problem is to combine skin-glued markers with ultrasound technology in order to obtain the bone position by measuring, at the same time, the marker coordinates in 3D space and the depth of the echo from the bone. Given the complex structure of the bones and tissues, the echoes from the ultrasound transducer show a similarly complex structure. To reach good accuracy in determining the depth of the bones, the ability to measure the time-of-flight (TOF) of the pulse with a high level of confidence is of paramount importance. In this paper, the performance of several methods for determining the TOF of the ultrasound pulse has been evaluated when applied to the problem of measuring bone depth. Experiments have been made using both simple setups used for calibration purposes and real human tissues to test the performance of the algorithms. The results show that the method used to process the data to evaluate the time-of-flight of the echo signal can significantly affect the value of the depth measurement, especially when the verticality of the sensor with respect to the surface causing the main echo cannot be guaranteed. Finally, after testing several methods and processing algorithms for both accuracy and repeatability, the proposed cumulative kurtosis algorithm was found to be the most appropriate in the case of measuring bone depths in vivo with ultrasound sensors at
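
    A rough sketch of a cumulative-kurtosis pick: the kurtosis of all samples seen so far rises sharply once the echo arrives, and the pick is placed at the steepest rise. The synthetic signal, the window start, and the simple picking rule below are assumptions rather than the paper's exact estimator.

    ```python
    # Cumulative-kurtosis onset picking on a synthetic ultrasound trace.
    import numpy as np
    from scipy.stats import kurtosis

    fs = 1e6                                   # 1 MHz sampling (illustrative)
    t = np.arange(0, 200e-6, 1 / fs)
    rng = np.random.default_rng(7)
    sig = 0.05 * rng.standard_normal(t.size)   # pre-echo noise
    i0 = int(80e-6 * fs)                       # true echo onset at 80 us
    tail = t[: t.size - i0]
    sig[i0:] += np.exp(-tail / 20e-6) * np.sin(2 * np.pi * 1e5 * tail)

    # Kurtosis of the samples seen so far; skip the first 50 for stability.
    ck = np.array([kurtosis(sig[: i + 1]) for i in range(50, sig.size)])
    pick = 50 + int(np.argmax(np.diff(ck))) + 1   # steepest kurtosis rise
    print(f"true onset {i0 / fs * 1e6:.1f} us, picked {pick / fs * 1e6:.1f} us")
    ```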

  18. Application of Genetic Algorithm to Predict Optimal Sowing Region and Timing for Kentucky Bluegrass in China

    PubMed Central

    Peng, Tingting; Jiang, Bo; Guo, Jiangfeng; Lu, Hongfei; Du, Liqun

    2015-01-01

    Temperature is a predominant environmental factor affecting grass germination and distribution. Various thermal-germination models for the prediction of grass seed germination have been reported, in which the relationship between temperature and germination was defined with kernel functions, such as quadratic or quintic functions. However, their prediction accuracies warrant further improvements. The purpose of this study is to evaluate the relative prediction accuracies of genetic algorithm (GA) models, which are automatically parameterized with observed germination data. The seeds of five P. pratensis (Kentucky bluegrass, KB) cultivars were germinated under 36 day/night temperature regimes ranging from 5/5 to 40/40°C with 5°C increments. Results showed that optimal germination percentages of all five tested KB cultivars were observed under a fluctuating temperature regime of 20/25°C. Meanwhile, the constant temperature regimes (e.g., 5/5, 10/10, 15/15°C, etc.) suppressed the germination of all five cultivars. Furthermore, the back propagation artificial neural network (BP-ANN) algorithm was integrated to optimize temperature-germination response models from these observed germination data. It was found that integration of GA-BP-ANN (back propagation aided genetic algorithm artificial neural network) significantly reduced the Root Mean Square Error (RMSE) values from 0.21~0.23 to 0.02~0.09. In an effort to provide a more reliable prediction of optimum sowing time for the tested KB cultivars in various regions of the country, the optimized GA-BP-ANN models were applied to map spatial and temporal germination percentages of bluegrass cultivars in China. Our results demonstrate that the GA-BP-ANN model is a convenient and reliable option for constructing thermal-germination response models since it automates model parameterization and has excellent prediction accuracy. PMID:26154163

  19. AN ALGORITHM FOR RADIATION MAGNETOHYDRODYNAMICS BASED ON SOLVING THE TIME-DEPENDENT TRANSFER EQUATION

    SciTech Connect

    Jiang, Yan-Fei; Stone, James M.; Davis, Shane W.

    2014-07-01

    We describe a new algorithm for solving the coupled frequency-integrated transfer equation and the equations of magnetohydrodynamics in the regime where the light-crossing time is only marginally shorter than dynamical timescales. The transfer equation is solved in the mixed frame, including velocity-dependent source terms accurate to O(v/c). An operator-split approach is used to compute the specific intensity along discrete rays, with upwind monotonic interpolation used along each ray to update the transport terms, and implicit methods used to compute the scattering and absorption source terms. Conservative differencing is used for the transport terms, which ensures the specific intensity (as well as energy and momentum) is conserved along each ray to round-off error. The use of implicit methods for the source terms ensures the method is stable even if the source terms are very stiff. To couple the solution of the transfer equation to the MHD algorithms in the ATHENA code, we perform direct quadrature of the specific intensity over angles to compute the energy and momentum source terms. We present the results of a variety of tests of the method, such as calculating the structure of a non-LTE atmosphere, an advective diffusion test, linear wave convergence tests, and the well-known shadow test. We use new semi-analytic solutions for radiation-modified shocks to demonstrate the ability of our algorithm to capture the effects of an anisotropic radiation field accurately. Since the method uses explicit differencing of the spatial operators, it shows excellent weak scaling on parallel computers.

  20. Speech Planning Happens before Speech Execution: Online Reaction Time Methods in the Study of Apraxia of Speech

    ERIC Educational Resources Information Center

    Maas, Edwin; Mailend, Marja-Liisa

    2012-01-01

    Purpose: The purpose of this article is to present an argument for the use of online reaction time (RT) methods to the study of apraxia of speech (AOS) and to review the existing small literature in this area and the contributions it has made to our fundamental understanding of speech planning (deficits) in AOS. Method: Following a brief…

  1. LateBiclustering: Efficient Heuristic Algorithm for Time-Lagged Bicluster Identification.

    PubMed

    Gonçalves, Joana P; Madeira, Sara C

    2014-01-01

    Identifying patterns in temporal data is key to uncover meaningful relationships in diverse domains, from stock trading to social interactions. Also of great interest are clinical and biological applications, namely monitoring patient response to treatment or characterizing activity at the molecular level. In biology, researchers seek to gain insight into gene functions and dynamics of biological processes, as well as potential perturbations of these leading to disease, through the study of patterns emerging from gene expression time series. Clustering can group genes exhibiting similar expression profiles, but focuses on global patterns denoting rather broad, unspecific responses. Biclustering reveals local patterns, which more naturally capture the intricate collaboration between biological players, particularly under a temporal setting. Despite the general biclustering formulation being NP-hard, considering specific properties of time series has led to efficient solutions for the discovery of temporally aligned patterns. Notably, the identification of biclusters with time-lagged patterns, suggestive of transcriptional cascades, remains a challenge due to the combinatorial explosion of delayed occurrences. Herein, we propose LateBiclustering, a sensible heuristic algorithm enabling a polynomial rather than exponential time solution for the problem. We show that it identifies meaningful time-lagged biclusters relevant to the response of Saccharomyces cerevisiae to heat stress.

  2. The development of a near-real time hail damage swath identification algorithm for vegetation

    NASA Astrophysics Data System (ADS)

    Bell, Jordan R.

    The central United States is primarily covered in agricultural lands with a growing season that peaks during the same time as the region's climatological maximum for severe weather. These severe thunderstorms can bring large hail that can cause extensive areas of crop damage, which can be difficult to survey from the ground. Satellite remote sensing can help with the identification of these damaged areas. This study examined three techniques for identifying damage using satellite imagery that could be used in the development of a near-real time algorithm formulated for the detection of damage to agriculture caused by hail. The three techniques: a short term Normalized Difference Vegetation Index (NDVI) change product, a modified Vegetation Health Index (mVHI) that incorporates both NDVI and land surface temperature (LST), and a feature detection technique based on NDVI and LST anomalies were tested on a single training case and five case studies. Skill scores were computed for each of the techniques during the training case and each case study. Among the best-performing case studies, the probability of detection (POD) for the techniques ranged from 0.527 - 0.742. Greater skill was noted for environments that occurred later in the growing season over areas where the land cover was consistently one or two types of uniform vegetation. The techniques struggled in environments where the land cover was not able to provide uniform vegetation, resulting in POD of 0.067 - 0.223. The feature detection technique was selected to be used for the near-real-time algorithm, based on the consistent performance throughout the entire growing season.
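
    The NDVI change component common to these techniques is straightforward to sketch: compute NDVI = (NIR - Red) / (NIR + Red) for pre- and post-storm composites, difference them, and flag large drops. The reflectance values and the -0.3 threshold below are illustrative.

    ```python
    # Short-term NDVI differencing for damage-swath flagging on synthetic
    # pre/post-storm reflectance composites.
    import numpy as np

    rng = np.random.default_rng(8)
    red_pre = rng.uniform(0.05, 0.1, (100, 100))
    nir_pre = rng.uniform(0.4, 0.5, (100, 100))
    red_post, nir_post = red_pre.copy(), nir_pre.copy()
    nir_post[40:45, 10:90] = 0.15            # synthetic swath: NIR collapse

    ndvi = lambda nir, red: (nir - red) / (nir + red)
    delta = ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)
    damage = delta < -0.3                    # large short-term NDVI drop
    print("flagged pixels:", int(damage.sum()))
    ```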

  3. Decomposition Algorithm for Global Reachability on a Time-Varying Graph

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2010-01-01

    A decomposition algorithm has been developed for global reachability analysis on a space-time grid. By exploiting the upper block-triangular structure, the planning problem is decomposed into smaller subproblems, which is much more scalable than the original approach. Recent studies have proposed the use of a hot-air (Montgolfier) balloon for possible exploration of Titan and Venus because these bodies have thick haze or cloud layers that limit the science return from an orbiter, and the atmospheres would provide enough buoyancy for balloons. One of the important questions that needs to be addressed is what surface locations the balloon can reach from an initial location, and how long it would take. This is referred to as the global reachability problem, where the paths from starting locations to all possible target locations must be computed. The balloon could be driven with its own actuation, but its actuation capability is fairly limited. It would be more efficient to take advantage of the wind field and ride the wind, which is much stronger than what the actuator could produce. It is possible to pose the path planning problem as a graph search problem on a directed graph by discretizing the space-time world and the vehicle actuation. The decomposition algorithm provides reachability analysis of a time-varying graph. Because the balloon only moves in the positive direction in time, the adjacency matrix of the graph can be represented with an upper block-triangular matrix, and this upper block-triangular structure can be exploited to decompose a large graph search problem. The new approach consumes a much smaller amount of memory, which also helps speed up the overall computation when the computing resource has a limited physical memory compared to the problem size.
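
    A sketch of the decomposition idea under its stated structure: since edges only point forward in time, reachability can be propagated one time slice at a time through the per-step blocks instead of searching the whole space-time graph at once. The random wind-field blocks below are placeholders.

    ```python
    # Layer-by-layer reachability on a time-varying graph whose adjacency
    # matrix is upper block-triangular in time.
    import numpy as np

    rng = np.random.default_rng(9)
    T, cells = 6, 50                          # time steps x spatial cells
    # blocks[t][i, j] is True if the balloon can drift i -> j during step t.
    blocks = [rng.random((cells, cells)) < 0.05 for _ in range(T)]

    reach = np.zeros(cells, dtype=bool)
    reach[0] = True                           # start location
    first_time = np.full(cells, -1)
    first_time[0] = 0
    for t, B in enumerate(blocks, start=1):
        # Drift along this step's edges, or loiter in place.
        reach = (reach.astype(int) @ B.astype(int) > 0) | reach
        newly = reach & (first_time < 0)
        first_time[newly] = t                 # earliest arrival time
    print("reachable cells:", int(reach.sum()))
    print("first-arrival times (first 10 cells):", first_time[:10])
    ```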

  4. An asymptotic-preserving Lagrangian algorithm for the time-dependent anisotropic heat transport equation

    SciTech Connect

    Chacon, Luis; del-Castillo-Negrete, Diego; Hauck, Cory D.

    2014-09-01

    We propose a Lagrangian numerical algorithm for a time-dependent, anisotropic temperature transport equation in magnetized plasmas in the large guide field regime. The approach is based on an analytical integral formal solution of the parallel (i.e., along the magnetic field) transport equation with sources, and it is able to accommodate both local and non-local parallel heat flux closures. The numerical implementation is based on an operator-split formulation, with two straightforward steps: a perpendicular transport step (including sources), and a Lagrangian (field-line integral) parallel transport step. Algorithmically, the first step is amenable to the use of modern iterative methods, while the second step has a fixed cost per degree of freedom (and is therefore scalable). Accuracy-wise, the approach is free from the numerical pollution introduced by the discrete parallel transport term when the perpendicular-to-parallel transport coefficient ratio χ⊥/χ∥ becomes arbitrarily small, and is shown to capture the correct limiting solution when ε = χ⊥L∥²/(χ∥L⊥²) → 0 (with L∥ and L⊥ the parallel and perpendicular diffusion length scales, respectively). Therefore, the approach is asymptotic-preserving. We demonstrate the capabilities of the scheme with several numerical experiments with varying magnetic field complexity in two dimensions, including the case of transport across a magnetic island.

  5. Robustness of continuous-time adaptive control algorithms in the presence of unmodeled dynamics

    NASA Technical Reports Server (NTRS)

    Rohrs, C. E.; Valavani, L.; Athans, M.; Stein, G.

    1985-01-01

    This paper examines the robustness properties of existing adaptive control algorithms to unmodeled plant high-frequency dynamics and unmeasurable output disturbances. It is demonstrated that there exist two infinite-gain operators in the nonlinear dynamic system which determines the time-evolution of output and parameter errors. The pragmatic implication of the existence of such infinite-gain operators is that: (1) sinusoidal reference inputs at specific frequencies and/or (2) sinusoidal output disturbances at any frequency (including dc) can cause the loop gain to increase without bound, thereby exciting the unmodeled high-frequency dynamics and yielding an unstable control system. Hence, it is concluded that existing adaptive control algorithms, as they are presented in the literature referenced in this paper, cannot be used with confidence in practical designs where the plant contains unmodeled dynamics, because instability is likely to result. Further understanding is required to ascertain how the currently implemented adaptive systems differ from the theoretical systems studied here and how further theoretical development can improve the robustness of adaptive controllers.

  6. A new time dependent density functional algorithm for large systems and plasmons in metal clusters

    SciTech Connect

    Baseggio, Oscar; Fronzoni, Giovanna; Stener, Mauro

    2015-07-14

    A new algorithm to solve the Time Dependent Density Functional Theory (TDDFT) equations in the space of the density fitting auxiliary basis set has been developed and implemented. The method extracts the spectrum from the imaginary part of the polarizability at any given photon energy, avoiding the bottleneck of Davidson diagonalization. The original idea which made the present scheme very efficient consists in the simplification of the double sum over occupied-virtual pairs in the definition of the dielectric susceptibility, allowing an easy calculation of such matrix as a linear combination of constant matrices with photon energy dependent coefficients. The method has been applied to very different systems in nature and size (from H2 to [Au147]-). In all cases, the maximum deviations found for the excitation energies with respect to the Amsterdam density functional code are below 0.2 eV. The new algorithm has the merit not only to calculate the spectrum at whichever photon energy but also to allow a deep analysis of the results, in terms of transition contribution maps, Jacob plasmon scaling factor, and induced density analysis, which have all been implemented.

  7. Refinement and evaluation of helicopter real-time self-adaptive active vibration controller algorithms

    NASA Technical Reports Server (NTRS)

    Davis, M. W.

    1984-01-01

    A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that gives the capability to readily define many different configurations by selecting one of three different controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA) which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.

  8. Generalized Framework and Algorithms for Illustrative Visualization of Time-Varying Data on Unstructured Meshes

    SciTech Connect

    Alexander S. Rattner; Donna Post Guillen; Alark Joshi

    2012-12-01

    Photo- and physically-realistic techniques are often insufficient for visualization of simulation results, especially for 3D and time-varying datasets. Substantial research efforts have been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. While these efforts have yielded valuable visualization results, a great deal of work has been reproduced in studies as individual research groups often develop purpose-built platforms. Additionally, interoperability between illustrative visualization software is limited due to specialized processing and rendering architectures employed in different studies. In this investigation, a generalized framework for illustrative visualization is proposed, and implemented in marmotViz, a ParaView plugin, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Detailed descriptions of the region-of-interest identification and feature-tracking algorithms incorporated into this tool are provided. Additionally, implementations of multiple illustrative effect algorithms are presented to demonstrate the use and flexibility of this framework. By providing a framework and useful underlying functionality, the marmotViz tool can act as a springboard for future research in the field of illustrative visualization.

  9. A new time dependent density functional algorithm for large systems and plasmons in metal clusters

    NASA Astrophysics Data System (ADS)

    Baseggio, Oscar; Fronzoni, Giovanna; Stener, Mauro

    2015-07-01

    A new algorithm to solve the Time Dependent Density Functional Theory (TDDFT) equations in the space of the density fitting auxiliary basis set has been developed and implemented. The method extracts the spectrum from the imaginary part of the polarizability at any given photon energy, avoiding the bottleneck of Davidson diagonalization. The original idea which made the present scheme very efficient consists in the simplification of the double sum over occupied-virtual pairs in the definition of the dielectric susceptibility, allowing an easy calculation of such matrix as a linear combination of constant matrices with photon energy dependent coefficients. The method has been applied to very different systems in nature and size (from H2 to [Au147]-). In all cases, the maximum deviations found for the excitation energies with respect to the Amsterdam density functional code are below 0.2 eV. The new algorithm has the merit not only to calculate the spectrum at whichever photon energy but also to allow a deep analysis of the results, in terms of transition contribution maps, Jacob plasmon scaling factor, and induced density analysis, which have all been implemented.

  10. Framework and algorithms for illustrative visualizations of time-varying flows on unstructured meshes

    DOE PAGES

    Rattner, Alexander S.; Guillen, Donna Post; Joshi, Alark; ...

    2016-03-17

    Photo- and physically realistic techniques are often insufficient for visualization of fluid flow simulations, especially for 3D and time-varying studies. Substantial research effort has been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. However, a great deal of work has been reproduced in this field, as many research groups have developed specialized visualization software. Additionally, interoperability between illustrative visualization software is limited due to diverse processing and rendering architectures employed in different studies. In this investigation, a framework for illustrative visualization is proposed, and implemented in MarmotViz, a ParaView plug-in, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Region-of-interest identification and feature-tracking algorithms incorporated into this tool are described. Implementations of multiple illustrative effect algorithms are also presented to demonstrate the use and flexibility of this framework. Here, by providing an integrated framework for illustrative visualization of CFD data, MarmotViz can serve as a valuable asset for the interpretation of simulations of ever-growing scale.

  11. Framework and algorithms for illustrative visualizations of time-varying flows on unstructured meshes

    SciTech Connect

    Rattner, Alexander S.; Guillen, Donna Post; Joshi, Alark; Garimella, Srinivas

    2016-03-17

    Photo- and physically realistic techniques are often insufficient for visualization of fluid flow simulations, especially for 3D and time-varying studies. Substantial research effort has been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. However, a great deal of work has been reproduced in this field, as many research groups have developed specialized visualization software. Additionally, interoperability between illustrative visualization software is limited due to diverse processing and rendering architectures employed in different studies. In this investigation, a framework for illustrative visualization is proposed, and implemented in MarmotViz, a ParaView plug-in, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Region-of-interest identification and feature-tracking algorithms incorporated into this tool are described. Implementations of multiple illustrative effect algorithms are also presented to demonstrate the use and flexibility of this framework. Here, by providing an integrated framework for illustrative visualization of CFD data, MarmotViz can serve as a valuable asset for the interpretation of simulations of ever-growing scale.

  12. Development of a Detection Algorithm for Use with Reflectance-Based, Real-Time Chemical Sensing

    PubMed Central

    Malanoski, Anthony P.; Johnson, Brandy J.; Erickson, Jeffrey S.; Stenger, David A.

    2016-01-01

    Here, we describe our efforts focused on development of an algorithm for identification of detection events in a real-time sensing application relying on reporting of color values using commercially available color sensing chips. The effort focuses on the identification of event occurrence, rather than target identification, and utilizes approaches suitable to onboard device incorporation to facilitate portable and autonomous use. The described algorithm first excludes electronic noise generated by the sensor system and determines response thresholds. This automatic adjustment provides the potential for use with device variations as well as accommodating differing indicator behaviors. Multiple signal channels (RGB) as well as multiple indicator array elements are combined for reporting of an event with a minimum of false responses. While the method reported was developed for use with paper-supported porphyrin and metalloporphyrin indicators, it should be equally applicable to other colorimetric indicators. Depending on device configurations, receiver operating characteristic (ROC) sensitivities of 1 could be obtained with specificities of 0.87 (threshold 160 ppb, ethanol). PMID:27854335
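
    As an illustration of the noise-exclusion and thresholding steps described above, the sketch below flags an event when several RGB channel/element pairs exceed automatically derived thresholds. The baseline length, threshold multiplier and vote count are illustrative assumptions, not the published parameters.

    ```python
    import numpy as np

    def detect_events(rgb, baseline_len=50, k=4.0, min_votes=2):
        """Flag detection events in colour-sensor readings of shape
        (time, indicator_elements, 3). The first baseline_len samples are
        assumed indicator-quiet and set per-channel thresholds at
        mean + k*std, mimicking automatic noise-based threshold
        adjustment; an event is declared when at least min_votes
        element/channel pairs exceed their thresholds."""
        base = rgb[:baseline_len]
        mu, sigma = base.mean(axis=0), base.std(axis=0)
        exceeds = np.abs(rgb - mu) > (k * sigma + 1e-9)
        votes = exceeds.reshape(rgb.shape[0], -1).sum(axis=1)
        return votes >= min_votes

    rng = np.random.default_rng(0)
    rgb = rng.normal(100.0, 1.0, size=(200, 4, 3))
    rgb[120:, 2, :] += 25.0                          # indicator 2 changes colour
    print(np.flatnonzero(detect_events(rgb))[:3])    # first flagged samples ~120
    ```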

  13. A generic probability based algorithm to derive regional patterns of crops in time and space

    NASA Astrophysics Data System (ADS)

    Wattenbach, Martin; Oijen, Marcel v.; Leip, Adrian; Hutchings, Nick; Balkovic, Juraj; Smith, Pete

    2013-04-01

    Croplands are not only the key to human food supply; they also change the biophysical and biogeochemical properties of the land surface, leading to changes in the water cycle and energy partitioning, influencing soil erosion and substantially contributing to the amount of greenhouse gases entering the atmosphere. The effects of croplands on the environment depend on the type of crop and the associated management, both of which are related to the site conditions, economic boundary settings and the preferences of individual farmers. However, at a given point in time the pattern of crops in a landscape is determined not only by environmental and socioeconomic conditions but also by compatibility with the crops grown in previous years on the current field and its surrounding cropping area. Crop compatibility is driven by factors like pests and diseases, crop-driven changes in soil structure and the timing of cultivation steps. Given these effects of crops on the biogeochemical cycle and their interdependence with the mentioned boundary conditions, there is a demand in the regional and global modelling community to account for these regional patterns. Here we present a Bayesian crop distribution generator algorithm that is used to calculate the combined and conditional probability for a crop to appear in time and space using sparse and disparate information. The input information to define the most probable crop per year and grid cell is based on combined probabilities derived from a crop transition matrix representing good agricultural practice, crop-specific soil suitability derived from the European soil database and statistical information about harvested area from the Eurostat database. The reported Eurostat crop area also provides the target proportion to be matched by the algorithm on the level of administrative units (Nomenclature des Unités Territoriales Statistiques - NUTS). The algorithm is applied for the EU27 to derive regional spatial and
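
    A toy sketch of the combined-probability idea follows: the probability of a crop in a grid cell is taken proportional to the rotation-compatibility entry for last year's crop times the cell's soil suitability, and a crop is then sampled per cell. The crop list, both matrices, and the omission of the Eurostat area-matching step are all simplifying assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    crops = ["wheat", "maize", "rapeseed"]

    # Hypothetical inputs: a rotation (transition) matrix encoding good
    # agricultural practice, and per-cell soil suitability for each crop.
    transition = np.array([[0.2, 0.5, 0.3],    # after wheat ...
                           [0.6, 0.1, 0.3],    # after maize ...
                           [0.7, 0.2, 0.1]])   # after rapeseed ...
    suitability = rng.uniform(0.2, 1.0, size=(100, 3))   # 100 grid cells

    def next_year(prev):
        """Sample one crop per grid cell from the combined probability of
        rotation compatibility (given last year's crop) and soil
        suitability; matching Eurostat area proportions is omitted."""
        p = transition[prev] * suitability       # combined probability
        p /= p.sum(axis=1, keepdims=True)        # normalise per cell
        cum = p.cumsum(axis=1)
        return (rng.random((len(prev), 1)) < cum).argmax(axis=1)

    prev = rng.integers(0, 3, size=100)
    counts = np.bincount(next_year(prev), minlength=3)
    print(dict(zip(crops, counts)))              # crop counts for the new year
    ```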

  14. Real-time dynamics simulation of the Cassini spacecraft using DARTS. Part 1: Functional capabilities and the spatial algebra algorithm

    NASA Technical Reports Server (NTRS)

    Jain, A.; Man, G. K.

    1993-01-01

    This paper describes the Dynamics Algorithms for Real-Time Simulation (DARTS) real-time hardware-in-the-loop dynamics simulator for the National Aeronautics and Space Administration's Cassini spacecraft. The spacecraft model consists of a central flexible body with a number of articulated rigid-body appendages. The demanding performance requirements from the spacecraft control system require the use of a high fidelity simulator for control system design and testing. The DARTS algorithm provides a new algorithmic and hardware approach to the solution of this hardware-in-the-loop simulation problem. It is based upon the efficient spatial algebra dynamics for flexible multibody systems. A parallel and vectorized version of this algorithm is implemented on a low-cost, multiprocessor computer to meet the simulation timing requirements.

  15. Algorithm Diversity for Resilient Systems

    DTIC Science & Technology

    2016-06-27

    …changes to a program's state during execution. Specifically, the project aims to develop techniques to introduce algorithm-level diversity, in contrast to existing work on execution-level diversity. Algorithm-level diversity can introduce larger differences between variants than execution-level

  16. Time-domain filtered-x-Newton narrowband algorithms for active isolation of frequency-fluctuating vibration

    NASA Astrophysics Data System (ADS)

    Li, Yan; He, Lin; Shuai, Chang-geng; Wang, Fei

    2016-04-01

    A time-domain filtered-x Newton narrowband algorithm (the Fx-Newton algorithm) is proposed to address three major problems in active isolation of machinery vibration: multiple narrowband components, MIMO coupling, and amplitude and frequency fluctuations. In this algorithm, narrowband components are extracted by narrowband-pass filters (NBPF) and independently controlled by multiple controllers, and fast convergence of the control algorithm is achieved by inverse secondary-path filtering of the extracted sinusoidal reference signal and its orthogonal component using L×L 2nd-order filters in the time domain. Controller adaptation and control signal generation are also implemented in the time domain to ensure good real-time performance. The phase shift caused by the narrowband filter is compensated online to improve the robustness of the control system to frequency fluctuations. A double-reference Fx-Newton algorithm is also proposed to control double sinusoids in the same frequency band, under the precondition of acquiring two independent reference signals. Experiments are conducted with an MIMO single-deck vibration isolation system on which a 200 kW ship diesel generator is mounted, and the algorithms are tested under vibration alternately excited by the diesel generator and inertial shakers. The results of control over sinusoidal vibration excited by inertial shakers suggest that the Fx-Newton algorithm with NBPF has a much faster convergence rate and better attenuation effect than the Fx-LMS algorithm. For swept, frequency-jumping, double, double frequency-swept and double frequency-jumping sinusoidal vibration, and multiple high-level harmonics in broadband vibration excited by the diesel generator, the proposed algorithms also demonstrate large vibration suppression at a fast convergence rate, and good robustness to vibration with frequency fluctuations.
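
    The filtered-x family of updates the paper builds on can be pictured with the simpler single-tone filtered-x LMS below. The secondary path is reduced to a pure gain and all values are illustrative, so this shows the general mechanism rather than the proposed Fx-Newton update.

    ```python
    import numpy as np

    def fxlms_tone(d, f0, fs, sec_gain=0.5, mu=0.05):
        """Single-tone filtered-x LMS canceller: two adaptive weights act
        on a sine/cosine reference pair, with the reference pre-filtered
        through the secondary-path model (here just a gain)."""
        t = np.arange(len(d))
        ref = np.stack([np.sin(2 * np.pi * f0 * t / fs),
                        np.cos(2 * np.pi * f0 * t / fs)])
        fx = sec_gain * ref                  # filtered-x reference
        w = np.zeros(2)
        e = np.empty(len(d))
        for k in range(len(d)):
            y = w @ ref[:, k]                # control signal
            e[k] = d[k] - sec_gain * y       # residual after secondary path
            w += 2 * mu * e[k] * fx[:, k]    # LMS weight update
        return e

    fs = 1000.0
    t = np.arange(4000)
    d = np.sin(2 * np.pi * 50 * t / fs + 0.7)            # 50 Hz disturbance
    print(np.abs(fxlms_tone(d, 50.0, fs)[-500:]).max())  # residual near zero
    ```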

  17. A new real time tsunami detection algorithm for bottom pressure measurements in open ocean: characterization and benchmarks

    NASA Astrophysics Data System (ADS)

    Embriaco, D.; Chierici, F.; Pignagnoli, L.

    2009-04-01

    In the last decades the use of the Bottom Pressure Recorder (BPR) in a deep ocean environment for tsunami detection has had a relevant development. A key role for an early warning system based on BPRs is played by the tsunami detection algorithms running in real time on the BPR itself or at the installation site. We present a new algorithm for tsunami detection that is based on real-time pressure data analysis, consisting of tide removal, spike removal, low-pass filtering and linear prediction: the output is then matched against a given pressure threshold, allowing the detection of anomalous events. Different configurations of the algorithm, consisting for instance in real-time band-pass filtering of the pressure signal in place of linear prediction, are also tested for comparison. The algorithm is designed to be used in an autonomous early warning system, with a finite set of input parameters that can be reconfigured in real time. A realistic benchmark scheme is developed in order to characterize the algorithm's features with particular regard to false alarm probability, sensitivity to the amplitude and wavelength of the tsunami, and detection earliness. The algorithm's behaviour in real operation is numerically estimated by performing statistical simulations in which a large number of synthetic tsunami waves with various amplitudes, periods, shapes and phases are generated and superimposed on time series of real pressure data recorded in different environmental conditions and locations.
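
    A toy version of such a filtering cascade is sketched below; the window lengths and the threshold are illustrative stand-ins rather than the paper's calibrated values, and the linear-prediction variant is omitted for brevity.

    ```python
    import numpy as np
    from scipy.signal import medfilt

    def moving_avg(x, w):
        return np.convolve(x, np.ones(w) / w, mode="same")

    def detect(pressure, tide_win=600, lp_win=8, spike_k=6.0, thr=0.5):
        """Toy detection cascade: tide removal, spike removal, low-pass
        filtering, and a threshold match on the residual."""
        x = pressure - moving_avg(pressure, tide_win)    # remove tide
        med = medfilt(x, 5)
        mad = np.median(np.abs(x - med)) + 1e-12
        spikes = np.abs(x - med) > spike_k * mad
        x[spikes] = med[spikes]                          # remove spikes
        x = moving_avg(x, lp_win)                        # low-pass
        return np.flatnonzero(np.abs(x) > thr)           # threshold match

    rng = np.random.default_rng(1)
    p = 2.0 * np.sin(np.arange(6000) / 900.0) + 0.05 * rng.standard_normal(6000)
    p[4000:4120] += np.hanning(120) * 1.2    # synthetic tsunami-like pulse
    print(detect(p)[:1])                     # first alarm index near 4000
    ```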

  18. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.

  19. A New MANET Wormhole Detection Algorithm Based on Traversal Time and Hop Count Analysis

    PubMed Central

    Karlsson, Jonny; Dooley, Laurence S.; Pulkkis, Göran

    2011-01-01

    As demand increases for ubiquitous network facilities, infrastructure-less and self-configuring systems like Mobile Ad hoc Networks (MANET) are gaining popularity. MANET routing security however, is one of the most significant challenges to wide scale adoption, with wormhole attacks being an especially severe MANET routing threat. This is because wormholes are able to disrupt a major component of network traffic, while concomitantly being extremely difficult to detect. This paper introduces a new wormhole detection paradigm based upon Traversal Time and Hop Count Analysis (TTHCA), which in comparison to existing algorithms, consistently affords superior detection performance, allied with low false positive rates for all wormhole variants. Simulation results confirm that the TTHCA model exhibits robust wormhole route detection in various network scenarios, while incurring only a small network overhead. This feature makes TTHCA an attractive choice for MANET environments which generally comprise devices, such as wireless sensors, which possess a limited processing capability. PMID:22247657
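
    The core plausibility check behind a traversal-time-and-hop-count approach reduces to a few lines; the per-hop delay bound below is an assumed number for illustration, not a value from the paper.

    ```python
    def wormhole_suspect(rtt_s, hop_count, max_hop_time_s=2e-3):
        """TTHCA-style check (simplified): the round-trip time of a route
        request/reply, divided by twice the reported hop count, estimates
        the per-hop traversal time. A wormhole tunnel stretches the real
        propagation distance while hiding hops, inflating this estimate
        beyond any plausible single-hop delay."""
        per_hop = rtt_s / (2 * hop_count)
        return per_hop > max_hop_time_s

    # a 3-hop route whose RTT implies 25 ms per hop is flagged
    print(wormhole_suspect(rtt_s=0.150, hop_count=3))  # True
    ```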

  20. A New MANET wormhole detection algorithm based on traversal time and hop count analysis.

    PubMed

    Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran

    2011-01-01

    As demand increases for ubiquitous network facilities, infrastructure-less and self-configuring systems like Mobile Ad hoc Networks (MANET) are gaining popularity. MANET routing security however, is one of the most significant challenges to wide scale adoption, with wormhole attacks being an especially severe MANET routing threat. This is because wormholes are able to disrupt a major component of network traffic, while concomitantly being extremely difficult to detect. This paper introduces a new wormhole detection paradigm based upon Traversal Time and Hop Count Analysis (TTHCA), which in comparison to existing algorithms, consistently affords superior detection performance, allied with low false positive rates for all wormhole variants. Simulation results confirm that the TTHCA model exhibits robust wormhole route detection in various network scenarios, while incurring only a small network overhead. This feature makes TTHCA an attractive choice for MANET environments which generally comprise devices, such as wireless sensors, which possess a limited processing capability.

  1. Program for the analysis of time series. [by means of fast Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Brown, T. J.; Brown, C. G.; Hardin, J. C.

    1974-01-01

    A digital computer program for the Fourier analysis of discrete time data is described. The program was designed to handle multiple channels of digitized data on general purpose computer systems. It is written, primarily, in a version of FORTRAN 2 currently in use on CDC 6000 series computers. Some small portions are written in CDC COMPASS, an assembler level code. However, functional descriptions of these portions are provided so that the program may be adapted for use on any facility possessing a FORTRAN compiler and random-access capability. Properly formatted digital data are windowed and analyzed by means of a fast Fourier transform algorithm to generate the following functions: (1) auto and/or cross power spectra, (2) autocorrelations and/or cross correlations, (3) Fourier coefficients, (4) coherence functions, (5) transfer functions, and (6) histograms.
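
    A compact modern stand-in for several of the listed outputs (auto and cross power spectra and the coherence function), using windowed, segment-averaged FFTs in Python rather than the program's FORTRAN:

    ```python
    import numpy as np

    def spectra(x, y, fs, nwin=256):
        """Segment-averaged (Welch-style) estimates of auto/cross power
        spectra and the coherence function from windowed FFTs."""
        win = np.hanning(nwin)
        segs = len(x) // nwin
        Sxx = np.zeros(nwin // 2 + 1)
        Syy = np.zeros(nwin // 2 + 1)
        Sxy = np.zeros(nwin // 2 + 1, dtype=complex)
        for s in range(segs):
            X = np.fft.rfft(win * x[s * nwin:(s + 1) * nwin])
            Y = np.fft.rfft(win * y[s * nwin:(s + 1) * nwin])
            Sxx += (X * X.conj()).real
            Syy += (Y * Y.conj()).real
            Sxy += X.conj() * Y
        coh = np.abs(Sxy) ** 2 / (Sxx * Syy + 1e-30)   # coherence function
        return np.fft.rfftfreq(nwin, d=1 / fs), Sxx / segs, Syy / segs, coh

    rng = np.random.default_rng(0)
    fs = 1000.0
    t = np.arange(8192) / fs
    x = np.sin(2 * np.pi * 60 * t) + 0.5 * rng.standard_normal(len(t))
    y = np.roll(x, 5)                    # delayed copy of x
    f, Sxx, Syy, coh = spectra(x, y, fs)
    i = np.argmax(Sxx)
    print(f[i], coh[i])                  # ~60 Hz, coherence near 1
    ```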

  2. Implementation of a Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.
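
    As a rough illustration of the kind of onboard convex program described (minimum total thrust subject to a commanded net force and per-thruster limits), here it is posed as a linear program and solved with SciPy as a stand-in for a flight-code interior-point solver; the thruster geometry is hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical layout: columns of B are unit force directions of six
    # fixed thrusters; u >= 0 are thrust magnitudes bounded by u_max.
    B = np.array([[1., -1., 0., 0., 0., 0.],
                  [0., 0., 1., -1., 0., 0.],
                  [0., 0., 0., 0., 1., -1.]])
    u_max = 10.0
    f_cmd = np.array([3.0, -2.0, 0.5])       # commanded net force

    # Fuel-optimal allocation: minimise total thrust subject to producing
    # the commanded force within actuator limits.
    res = linprog(c=np.ones(6), A_eq=B, b_eq=f_cmd,
                  bounds=[(0.0, u_max)] * 6)
    print(res.status, res.x)                 # status 0 -> optimum found
    ```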

  3. A time-accurate algorithm for chemical non-equilibrium viscous flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, J.-S.; Chen, K.-H.; Choi, Y.

    1992-01-01

    A time-accurate, coupled solution procedure is described for the chemical nonequilibrium Navier-Stokes equations over a wide range of Mach numbers. This method employs the strong conservation form of the governing equations, but uses primitive variables as unknowns. Real gas properties and equilibrium chemistry are considered. Numerical tests include steady convergent-divergent nozzle flows with air dissociation/recombination chemistry, dump combustor flows with n-pentane-air chemistry, nonreacting flow in a model double annular combustor, and nonreacting unsteady driven cavity flows. Numerical results for both the steady and unsteady flows demonstrate the efficiency and robustness of the present algorithm for Mach numbers ranging from the incompressible limit to supersonic speeds.

  4. A real-time focused SAR algorithm on the Jetson TK1 board

    NASA Astrophysics Data System (ADS)

    Radecki, K.; Samczynski, P.; Kulpa, K.; Drozdowicz, J.

    2016-10-01

    In this paper the authors present a solution based on a small and lightweight computing platform equipped with a graphics processing unit (GPU), which makes it possible to perform a fully focused SAR algorithm in real time. The presented system is dedicated to airborne SAR applications, including small SAR systems for medium-sized unmanned aerial vehicle (UAV) platforms. The proposed solution also reduces the need for a storage system. In the paper, real SAR results obtained using a Frequency Modulated Continuous Wave (FMCW) radar demonstrator operating at a 35 GHz carrier frequency with 1 GHz bandwidth are presented. An airborne platform was used as the radar carrier. The presented SAR radar demonstrator was developed by the Warsaw University of Technology in cooperation with the Air Force Institute of Technology, Warsaw, Poland.

  5. Stabilization and PID tuning algorithms for second-order unstable processes with time-delays.

    PubMed

    Seer, Qiu Han; Nandong, Jobrun

    2017-03-01

    Open-loop unstable systems with time-delays are often encountered in the process industry and are often more difficult to control than stable processes. In this paper, the stabilization by PID controller of second-order unstable processes, which can be represented as second-order deadtime with an unstable pole (SODUP) and second-order deadtime with two unstable poles (SODTUP), is performed via the necessary and sufficient criteria of Routh-Hurwitz stability analysis. The stability analysis provides improved understanding of the existence of a stabilizing range for each PID parameter. Three simple PID tuning algorithms are proposed to provide the desired closed-loop performance and robustness within the stable regions of controller parameters obtained via the stability analysis. The proposed PID controllers show improved performance over those derived via some existing methods.
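
    A numerical stand-in for the stabilizing-range idea is sketched below for proportional control of a SODUP plant, using a first-order Pade approximation of the deadtime and a closed-loop root check; the plant numbers are illustrative, and the brute-force scan replaces the paper's analytical Routh-Hurwitz criteria.

    ```python
    import numpy as np

    def stable_kp_range(K=1.0, tau_u=5.0, tau=2.0, theta=0.5,
                        kp_grid=np.linspace(0.1, 20, 400)):
        """Scan proportional gains for the SODUP plant
        K*exp(-theta*s) / ((tau_u*s - 1)*(tau*s + 1)), with the delay
        replaced by its first-order Pade approximant, checking that all
        closed-loop poles lie in the left half-plane."""
        num = np.array([-K * theta / 2.0, K])              # K*(1 - theta/2 s)
        den = np.polymul(np.polymul([tau_u, -1.0], [tau, 1.0]),
                         [theta / 2.0, 1.0])               # (1 + theta/2 s)
        stable = []
        for kp in kp_grid:
            char = np.polyadd(den, kp * num)               # 1 + Kp*G = 0
            if np.all(np.roots(char).real < 0):
                stable.append(kp)
        return (min(stable), max(stable)) if stable else None

    print(stable_kp_range())   # roughly (1, 6.2) for these plant numbers
    ```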

  6. Comparison and calibration of a real-time virtual stenting algorithm using Finite Element Analysis and Genetic Algorithms.

    PubMed

    Spranger, K; Capelli, C; Bosi, G M; Schievano, S; Ventikos, Y

    2015-08-15

    In this paper, we perform a comparative analysis between two computational methods for virtual stent deployment: a novel fast virtual stenting method, which is based on a spring-mass model, is compared with detailed finite element analysis in a sequence of in silico experiments. Given the results of the initial comparison, we present a way to optimise the fast method by calibrating a set of parameters with the help of a genetic algorithm, which utilises the outcomes of the finite element analysis as a learning reference. As a result of the calibration phase, we were able to substantially reduce the force measure discrepancy between the two methods and validate the fast stenting method by assessing the differences in the final device configurations.
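
    The calibration loop can be pictured with a minimal real-coded genetic algorithm in which a stand-in function plays the role of the finite element reference; the GA operators and all numbers are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def reference_score(x):
        """Stand-in for the FEA reference: fitness peaks at a 'true'
        parameter vector (the real objective would compare spring-mass
        deployment results against finite element outcomes)."""
        return -np.sum((x - np.array([0.7, 0.2, 0.5])) ** 2)

    def calibrate(fitness, dim=3, pop=40, gens=80, sigma=0.1):
        """Minimal real-coded GA: tournament selection, blend crossover,
        decaying Gaussian mutation."""
        P = rng.random((pop, dim))
        for _ in range(gens):
            f = np.array([fitness(x) for x in P])
            idx = rng.integers(0, pop, (pop, 2))           # tournaments
            winners = np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])
            parents = P[winners]
            alpha = rng.random((pop, 1))                   # blend crossover
            children = alpha * parents + (1 - alpha) * parents[rng.permutation(pop)]
            P = np.clip(children + rng.normal(0, sigma, children.shape), 0, 1)
            sigma *= 0.95                                  # anneal mutation
        return P[np.argmax([fitness(x) for x in P])]

    print(calibrate(reference_score).round(2))   # approaches [0.7, 0.2, 0.5]
    ```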

  7. A Global R&D Program on Liquid Ar Time Projection Chambers Under Execution at the University of Bern

    NASA Astrophysics Data System (ADS)

    Badhrees, I.; Ereditato, A.; Janos, S.; Kreslo, I.; Messina, M.; Haug, S.; Rossi, B.; Rohr, C. Rudolf von; Weber, M.; Zeller, M.

    A comprehensive R&D program on LAr Time Projection Chambers (LAr TPC) is presently being carried out at the University of Bern. Many aspects of this technology are under investigation: HV, purity, calibration, readout, etc. Furthermore, multi-photon interaction of UV-laser beams with LAr has successfully been measured. Possible applications of the LAr TPC technology in the field of homeland security are also being studied. In this paper, the main aspects of the program will be reviewed and the achievements underlined. Emphasis will be given to the largest device in Bern, i.e. the 5 m long ARGONTUBE TPC, meant to prove the feasibility of very long drifts in view of future large scale applications of the technique.

  8. Computer-mediated communication and time pressure induce higher cardiovascular responses in the preparatory and execution phases of cooperative tasks.

    PubMed

    Costa Ferrer, Raquel; Serrano Rosa, Miguel Ángel; Zornoza Abad, Ana; Salvador Fernández-Montejo, Alicia

    2010-11-01

    The cardiovascular (CV) response to social challenge and stress is associated with the etiology of cardiovascular diseases. New ways of communication, time pressure and different types of information are common in our society. In this study, the cardiovascular response to two different tasks (open vs. closed information) was examined employing different communication channels (computer-mediated vs. face-to-face) and different pace control (self vs. external). Our results indicate that there was a higher CV response in the computer-mediated condition, on the closed information task and in the externally paced condition. The role of these factors should be considered when studying the consequences of social stress and their underlying mechanisms.

  9. The effect of on/off indicator design on state confusion, preference, and response time performance, executive summary

    NASA Technical Reports Server (NTRS)

    Donner, Kimberly A.; Holden, Kritina L.; Manahan, Meera K.

    1991-01-01

    Investigated are five designs of software-based ON/OFF indicators in a hypothetical Space Station Power System monitoring task. The hardware equivalent of the indicators used in the present study is the traditional indicator light that illuminates an ON label or an OFF label. Coding methods used to represent the active state were reverse video, color, frame, check, or reverse video with check. Display background color was also varied. Subjects made judgments concerning the state of indicators that resulted in very low error rates and high percentages of agreement across indicator designs. Response time measures for each of the five indicator designs did not differ significantly, although subjects reported that color was the best communicator. The impact of these results on indicator design is discussed.

  10. A new finite element formulation for computational fluid dynamics. IX - Fourier analysis of space-time Galerkin/least-squares algorithms

    NASA Technical Reports Server (NTRS)

    Shakib, Farzin; Hughes, Thomas J. R.

    1991-01-01

    A Fourier stability and accuracy analysis of the space-time Galerkin/least-squares method as applied to a time-dependent advective-diffusive model problem is presented. Two time discretizations are studied: a constant-in-time approximation and a linear-in-time approximation. Corresponding space-time predictor multi-corrector algorithms are also derived and studied. The behavior of the space-time algorithms is compared to algorithms based on semidiscrete formulations.

  11. A new real time tsunami detection algorithm for bottom pressure measurements in open ocean: characterization and benchmarks

    NASA Astrophysics Data System (ADS)

    Pignagnoli, Luca; Chierici, Francesco; Embriaco, Davide

    2010-05-01

    In the last decades the use of the Bottom Pressure Recorder (BPR) in a deep ocean environment for tsunami detection has had a relevant development. A key role for an early warning system based on BPRs is played by the tsunami detection algorithms running in real time on the BPR itself or on land. We present a new algorithm for tsunami detection based on real-time pressure data analysis by a filtering cascade. This procedure consists of tide removal, spike removal, low-pass filtering and linear prediction or band-pass filtering; the filtered output is then matched against a given pressure threshold. Once the threshold is exceeded, a tsunami signal is detected. The main characteristics of the algorithm are its site-specific adaptability and its flexibility, which greatly enhance the detection reliability. In particular, it was shown that removing the predicted tide strongly reduces the dynamic range of the pressure time series, allowing the detection of small tsunami signals. The algorithm can also be applied to the data acquired by a tide gauge. The algorithm is particularly designed and optimized to be used in an autonomous early warning system. A statistical method for algorithm evaluation has been developed in order to characterize the algorithm's features with particular regard to false alarm probability, detection probability and detection earliness. Different configurations of the algorithm are tested for comparison using both synthetic and real pressure data sets recorded in different environmental conditions and locations. The algorithm was installed onboard the GEOSTAR abyssal station, deployed at 3264 m depth in the Gulf of Cadiz, and successfully operated for 1 year, from August 2007 to August 2008.

  12. Time-based and event-based prospective memory in autism spectrum disorder: the roles of executive function and theory of mind, and time-estimation.

    PubMed

    Williams, David; Boucher, Jill; Lind, Sophie; Jarrold, Christopher

    2013-07-01

    Prospective memory (remembering to carry out an action in the future) has been studied relatively little in ASD. We explored time-based (carry out an action at a pre-specified time) and event-based (carry out an action upon the occurrence of a pre-specified event) prospective memory, as well as possible cognitive correlates, among 21 intellectually high-functioning children with ASD, and 21 age- and IQ-matched neurotypical comparison children. We found impaired time-based, but undiminished event-based, prospective memory among children with ASD. In the ASD group, time-based prospective memory performance was associated significantly with diminished theory of mind, but not with diminished cognitive flexibility. There was no evidence that time-estimation ability contributed to time-based prospective memory impairment in ASD.

  13. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    PubMed Central

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, including fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and performance was evaluated by investigating the ground-truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high accuracy of recognition to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate was improved to 92% for the AM, and the average ITR decreased to 31.27 bits/min due to strict recognition constraints. PMID:27579033
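
    The time-shift correlation series itself is straightforward to compute; in the sketch below a synthetic P300-like template stands in for real EEG, and the shift range and sampling rate are illustrative.

    ```python
    import numpy as np

    def time_shift_correlation(epoch, template, max_shift=25):
        """Correlation of an EEG epoch with a P300 template over a range
        of time shifts; one value per shift, forming the series the paper
        feeds to an ANN as input nodes."""
        return np.array([np.corrcoef(epoch, np.roll(template, s))[0, 1]
                         for s in range(-max_shift, max_shift + 1)])

    fs = 250                                     # Hz (illustrative)
    t = np.arange(int(0.8 * fs)) / fs            # 800 ms epoch
    template = np.exp(-((t - 0.30) ** 2) / (2 * 0.05 ** 2))   # P300-like bump
    rng = np.random.default_rng(0)
    epoch = np.roll(template, 10) + 0.3 * rng.standard_normal(len(t))
    series = time_shift_correlation(epoch, template)
    print(series.argmax() - 25)                  # ~10: recovered latency shift
    ```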

  14. Policy iteration adaptive dynamic programming algorithm for discrete-time nonlinear systems.

    PubMed

    Liu, Derong; Wei, Qinglai

    2014-03-01

    This paper is concerned with a new discrete-time policy iteration adaptive dynamic programming (ADP) method for solving the infinite horizon optimal control problem of nonlinear systems. The idea is to use an iterative ADP technique to obtain the iterative control law, which optimizes the iterative performance index function. The main contribution of this paper is to analyze the convergence and stability properties of policy iteration method for discrete-time nonlinear systems for the first time. It shows that the iterative performance index function is nonincreasingly convergent to the optimal solution of the Hamilton-Jacobi-Bellman equation. It is also proven that any of the iterative control laws can stabilize the nonlinear systems. Neural networks are used to approximate the performance index function and compute the optimal control law, respectively, for facilitating the implementation of the iterative ADP algorithm, where the convergence of the weight matrices is analyzed. Finally, the numerical results and analysis are presented to illustrate the performance of the developed method.
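
    The tabular analogue of the policy iteration loop analyzed in the paper (exact evaluation followed by greedy improvement, here on a small random discrete MDP rather than a nonlinear system with neural-network approximators) looks like this:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    nS, nA, gamma = 6, 2, 0.9
    P = rng.dirichlet(np.ones(nS), size=(nA, nS))   # P[a, s, s'] transitions
    R = rng.random((nS, nA))                        # R[s, a] rewards

    pi = np.zeros(nS, dtype=int)
    for _ in range(50):
        # policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly
        P_pi = P[pi, np.arange(nS)]
        V = np.linalg.solve(np.eye(nS) - gamma * P_pi, R[np.arange(nS), pi])
        # policy improvement: greedy one-step lookahead
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        new_pi = Q.argmax(axis=1)
        if np.array_equal(new_pi, pi):
            break                                   # policy has converged
        pi = new_pi
    print(pi, V.round(3))
    ```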

  15. Polynomial-time quantum algorithm for the simulation of chemical dynamics.

    PubMed

    Kassal, Ivan; Jordan, Stephen P; Love, Peter J; Mohseni, Masoud; Aspuru-Guzik, Alán

    2008-12-02

    The computational cost of exact methods for quantum simulation using classical computers grows exponentially with system size. As a consequence, these techniques can be applied only to small systems. By contrast, we demonstrate that quantum computers could exactly simulate chemical reactions in polynomial time. Our algorithm uses the split-operator approach and explicitly simulates all electron-nuclear and interelectronic interactions in quadratic time. Surprisingly, this treatment is not only more accurate than the Born-Oppenheimer approximation but faster and more efficient as well, for all reactions with more than about four atoms. This is the case even though the entire electronic wave function is propagated on a grid with appropriately short time steps. Although the preparation and measurement of arbitrary states on a quantum computer is inefficient, here we demonstrate how to prepare states of chemical interest efficiently. We also show how to efficiently obtain chemically relevant observables, such as state-to-state transition probabilities and thermal reaction rates. Quantum computers using these techniques could outperform current classical computers with 100 qubits.
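
    The split-operator structure at the heart of the approach can be demonstrated classically on a 1D grid, alternating a potential step (diagonal in position) with a kinetic step (diagonal in momentum via FFT); the quantum algorithm realizes the same structure with quantum Fourier transforms. The grid size and harmonic potential are illustrative.

    ```python
    import numpy as np

    # 1D split-operator propagation with hbar = m = 1 (Strang splitting).
    n, L, dt, steps = 512, 40.0, 0.01, 500
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    V = 0.5 * x ** 2                                     # harmonic potential

    psi = np.exp(-(x - 3.0) ** 2).astype(complex)        # displaced wavepacket
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))   # normalise

    expV = np.exp(-0.5j * V * dt)        # half potential step
    expK = np.exp(-0.5j * k ** 2 * dt)   # full kinetic step
    for _ in range(steps):
        psi = expV * psi
        psi = np.fft.ifft(expK * np.fft.fft(psi))        # kinetic in k-space
        psi = expV * psi
    print(np.sum(np.abs(psi) ** 2) * (L / n))            # norm stays ~1.0
    ```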

  16. Robust evaluation of time series classification algorithms for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Harvey, Dustin Y.; Worden, Keith; Todd, Michael D.

    2014-03-01

    Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and mechanical infrastructure through analysis of structural response measurements. The supervised learning methodology for data-driven SHM involves computation of low-dimensional, damage-sensitive features from raw measurement data that are then used in conjunction with machine learning algorithms to detect, classify, and quantify damage states. However, these systems often suffer from performance degradation in real-world applications due to varying operational and environmental conditions. Probabilistic approaches to robust SHM system design suffer from incomplete knowledge of all conditions a system will experience over its lifetime. Info-gap decision theory enables nonprobabilistic evaluation of the robustness of competing models and systems in a variety of decision making applications. Previous work employed info-gap models to handle feature uncertainty when selecting various components of a supervised learning system, namely features from a pre-selected family and classifiers. In this work, the info-gap framework is extended to robust feature design and classifier selection for general time series classification through an efficient, interval arithmetic implementation of an info-gap data model. Experimental results are presented for a damage type classification problem on a ball bearing in a rotating machine. The info-gap framework in conjunction with an evolutionary feature design system allows for fully automated design of a time series classifier to meet performance requirements under maximum allowable uncertainty.

  17. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation.

    PubMed

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A high efficient time-shift correlation algorithm was proposed to deal with the peak time uncertainty of P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, including fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot to evaluate the performance from investigating the ground truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high accuracy of recognition to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate was improved to 92% for the AM, and the average ITR decreased to 31.27 bits/min. due to strict recognition constraints.

  18. A videofluoroscopy-based tracking algorithm for quantifying the time course of human intervertebral displacements.

    PubMed

    Balkovec, Christian; Veldhuis, Jim H; Baird, John W; Brodland, G Wayne; McGill, Stuart M

    2017-03-15

    The motions of individual intervertebral joints can affect spine motion, injury risk, deterioration, pain, treatment strategies, and clinical outcomes. Since standard kinematic methods do not provide precise time-course details about individual vertebrae and intervertebral motions, information that could be useful for scientific advancement and clinical assessment, we developed an iterative template matching algorithm to obtain this data from videofluoroscopy images. To assess the bias of our approach, vertebrae in an intact porcine spine were tracked and compared to the motions of high-contrast markers. To estimate precision under clinical conditions, motions of three human cervical spines were tracked independently ten times and vertebral and intervertebral motions associated with individual trials were compared to corresponding averages. Both tests produced errors in intervertebral angular and shear displacements no greater than 0.4° and 0.055 mm, respectively. When applied to two patient cases, aberrant intervertebral motions in the cervical spine were typically found to correlate with patient-specific anatomical features such as disc height loss and osteophytes. The case studies suggest that intervertebral kinematic time-course data could have value in clinical assessments, lead to broader understanding of how specific anatomical features influence joint motions, and in due course inform clinical treatments.

  19. Time-Based and Event-Based Prospective Memory in Autism Spectrum Disorder: The Roles of Executive Function and Theory of Mind, and Time-Estimation

    ERIC Educational Resources Information Center

    Williams, David; Boucher, Jill; Lind, Sophie; Jarrold, Christopher

    2013-01-01

    Prospective memory (remembering to carry out an action in the future) has been studied relatively little in ASD. We explored time-based (carry out an action at a pre-specified time) and event-based (carry out an action upon the occurrence of a pre-specified event) prospective memory, as well as possible cognitive correlates, among 21…

  20. An online algorithm for least-square spectral analysis: Applied to time-frequency analysis of heart rate.

    PubMed

    Zhang, Zhe; Leong, Philip H W

    2015-08-01

    We propose a novel online algorithm for computing least-squares-based periodograms, otherwise known as the Lomb-Scargle periodogram. Our spectral analysis technique has been shown to be superior to traditional discrete Fourier transform (DFT)-based methods, and we introduce an algorithm which has O(N) time complexity per input sample. The technique is suitable for real-time embedded implementations, and its utility is demonstrated through an application to high-resolution time-frequency domain analysis of heart rate variability (HRV).
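
    For reference, the quantity being computed is the classic batch Lomb-Scargle periodogram, sketched below; the paper's contribution is an online reformulation with bounded per-sample cost, which this sketch does not reproduce.

    ```python
    import numpy as np

    def lomb_scargle(t, y, freqs):
        """Classic Lomb-Scargle periodogram for unevenly sampled data,
        such as beat-to-beat RR intervals."""
        y = y - y.mean()
        P = np.empty(len(freqs))
        for i, f in enumerate(freqs):
            w = 2 * np.pi * f
            tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                             np.sum(np.cos(2 * w * t))) / (2 * w)
            c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
            P[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
        return P

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 60, 300))      # irregular sample times (s)
    y = np.sin(2 * np.pi * 0.25 * t) + 0.5 * rng.standard_normal(300)
    freqs = np.linspace(0.01, 0.5, 200)
    print(freqs[np.argmax(lomb_scargle(t, y, freqs))])   # ~0.25 Hz
    ```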

  1. Use of NTRIP for optimizing the decoding algorithm for real-time data streams.

    PubMed

    He, Zhanke; Tang, Wenda; Yang, Xuhai; Wang, Liming; Liu, Jihua

    2014-10-10

    As a network transmission protocol, Networked Transport of RTCM via Internet Protocol (NTRIP) is widely used in GPS and Global Orbiting Navigational Satellite System (GLONASS) Augmentation systems, such as Continuous Operational Reference System (CORS), Wide Area Augmentation System (WAAS) and Satellite Based Augmentation Systems (SBAS). With the deployment of the BeiDou Navigation Satellite System (BDS) to serve the Asia-Pacific region, there are increasing needs for ground monitoring of the BeiDou Navigation Satellite system and the development of high-precision real-time BeiDou products. This paper aims to optimize the decoding algorithm of NTRIP Client data streams and the user authentication strategies of the NTRIP Caster based on NTRIP. The proposed method greatly enhances the handling efficiency and significantly reduces the data transmission delay compared with the Federal Agency for Cartography and Geodesy (BKG) NTRIP. Meanwhile, a transcoding method is proposed to facilitate the data transformation from the BINary EXchange (BINEX) format to the RTCM format. The transformation scheme thus solves the problem of handling real-time data streams from Trimble receivers in the BeiDou Navigation Satellite System indigenously developed by China.

  2. Improved radar data processing algorithms for quantitative rainfall estimation in real time.

    PubMed

    Krämer, S; Verworn, H R

    2009-01-01

    This paper describes a new methodology to process C-band radar data for direct use as rainfall input to hydrologic and hydrodynamic models and in real-time control of urban drainage systems. In contrast to the adjustment of radar data with the help of rain gauges, the new approach accounts for the microphysical properties of current rainfall. In a first step radar data are corrected for attenuation. This phenomenon has been identified as the main cause for the general underestimation of radar rainfall. Systematic variation of the attenuation coefficients within predefined bounds allows robust reflectivity profiling. Secondly, event-specific R-Z relations are applied to the corrected radar reflectivity data in order to generate quantitatively reliable radar rainfall estimates. The results of the methodology are validated by a network of 37 rain gauges located in the Emscher and Lippe river basins. Finally, the relevance of the correction methodology for radar rainfall forecasts is demonstrated. The results clearly show that the new methodology significantly improves radar rainfall estimation and rainfall forecasts. The algorithms are applicable in real time.
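
    The final step of such a processing chain, converting corrected reflectivity to rain rate through an R-Z power law, is compact; the Marshall-Palmer coefficients used below are a common default, whereas the paper derives event-specific values.

    ```python
    import numpy as np

    def rain_rate(dbz, a=200.0, b=1.6):
        """Convert radar reflectivity (dBZ) to rain rate (mm/h) via the
        power-law relation Z = a * R**b; a=200, b=1.6 are the classic
        Marshall-Palmer values, used here only as a stand-in for the
        event-specific coefficients the methodology estimates."""
        z = 10.0 ** (dbz / 10.0)          # dBZ -> linear Z (mm^6/m^3)
        return (z / a) ** (1.0 / b)

    print(rain_rate(np.array([20.0, 35.0, 50.0])).round(2))  # light..heavy rain
    ```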

  3. Use of NTRIP for Optimizing the Decoding Algorithm for Real-Time Data Streams

    PubMed Central

    He, Zhanke; Tang, Wenda; Yang, Xuhai; Wang, Liming; Liu, Jihua

    2014-01-01

    As a network transmission protocol, Networked Transport of RTCM via Internet Protocol (NTRIP) is widely used in GPS and Global Orbiting Navigational Satellite System (GLONASS) Augmentation systems, such as Continuous Operational Reference System (CORS), Wide Area Augmentation System (WAAS) and Satellite Based Augmentation Systems (SBAS). With the deployment of BeiDou Navigation Satellite system (BDS) to serve the Asia-Pacific region, there are increasing needs for ground monitoring of the BeiDou Navigation Satellite system and the development of the high-precision real-time BeiDou products. This paper aims to optimize the decoding algorithm of NTRIP Client data streams and the user authentication strategies of the NTRIP Caster based on NTRIP. The proposed method greatly enhances the handling efficiency and significantly reduces the data transmission delay compared with the Federal Agency for Cartography and Geodesy (BKG) NTRIP. Meanwhile, a transcoding method is proposed to facilitate the data transformation from the BINary EXchange (BINEX) format to the RTCM format. The transformation scheme thus solves the problem of handling real-time data streams from Trimble receivers in the BeiDou Navigation Satellite System indigenously developed by China. PMID:25310474

  4. Real-Time Simulation for Verification and Validation of Diagnostic and Prognostic Algorithms

    NASA Technical Reports Server (NTRS)

    Aguilar, Robet; Luu, Chuong; Santi, Louis M.; Sowers, T. Shane

    2005-01-01

    To verify that a health management system (HMS) performs as expected, a virtual system simulation capability, including interaction with the associated platform or vehicle, very likely will need to be developed. The rationale for developing this capability is discussed and includes the limited capability to seed faults into the actual target system due to the risk of potential damage to high value hardware. The capability envisioned would accurately reproduce the propagation of a fault or failure as observed by sensors located at strategic locations on and around the target system and would also accurately reproduce the control system and vehicle response. In this way, HMS operation can be exercised over a broad range of conditions to verify that it meets requirements for accurate, timely response to actual faults with adequate margin against false and missed detections. An overview is also presented of a real-time rocket propulsion health management system laboratory which is available for future rocket engine programs. The health management elements and approaches of this lab are directly applicable for future space systems. In this paper the various components are discussed and the general fault detection, diagnosis, isolation and the response (FDIR) concept is presented. Additionally, the complexities of V&V (Verification and Validation) for advanced algorithms and the simulation capabilities required to meet the changing state-of-the-art in HMS are discussed.

  5. A polynomial-time algorithm for the matching of crossing contact-map patterns.

    PubMed

    Gramm, Jens

    2004-01-01

    Contact maps are a model to capture the core information in the structure of biological molecules, e.g., proteins. A contact map consists of an ordered set S of elements (representing a protein's sequence of amino acids), and a set A of element pairs of S, called arcs (representing amino acids which are closely neighbored in the structure). Given two contact maps (S, A) and (Sp, Ap) with |A| ≥ |Ap|, the CONTACT MAP PATTERN MATCHING (CMPM) problem asks whether the "pattern" (Sp, Ap) "occurs" in (S, A), i.e., informally stated, whether there is a subset of |Ap| arcs in A whose arc structure coincides with Ap. CMPM captures the biological question of finding structural motifs in protein structures. In general, CMPM is NP-hard. In this paper, we show that CMPM is solvable in O(|A|⁶|Ap|²) time when the pattern is {precedes, crosses}-structured, i.e., when each two arcs in the pattern are disjoint or crossing. Our algorithm extends to other closely related models. In particular, it answers an open question raised by Vialette that, rephrased in terms of contact maps, asked whether CMPM for {precedes, crosses}-structured patterns is NP-hard or solvable in polynomial time. Our result stands in sharp contrast to the NP-hardness of closely related problems. We provide experimental results which show that contact maps derived from real protein structures can be processed efficiently.
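
    The problem statement can be pinned down with an exponential brute-force checker (clearly not the paper's polynomial algorithm); arcs are assumed to have pairwise distinct endpoints.

    ```python
    from itertools import combinations

    def relation(a, b):
        """Order relation between arcs (i, j), i < j, with distinct
        endpoints: 'precedes', 'crosses' or 'nests'."""
        (i, j), (k, l) = sorted([a, b])
        if j < k:
            return "precedes"
        if k < j < l:
            return "crosses"
        return "nests"

    def occurs(pattern, arcs):
        """Does some |pattern|-sized subset of arcs reproduce all of the
        pattern's pairwise arc relations? Exponential in |pattern|; the
        paper's point is a polynomial algorithm when every pattern pair
        is 'precedes' or 'crosses'."""
        pat = sorted(pattern)
        want = [relation(x, y) for x, y in combinations(pat, 2)]
        return any(
            [relation(x, y) for x, y in combinations(sub, 2)] == want
            for sub in combinations(sorted(arcs), len(pat))
        )

    A = [(1, 5), (2, 7), (4, 9), (10, 12)]
    print(occurs([(1, 3), (2, 4)], A))   # a crossing pair occurs -> True
    ```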

  6. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

    In recent years, with the development of positioning algorithms and FPGAs, fast and accurate real-time camera positioning implemented on an FPGA has become feasible. Through in-depth study of embedded hardware and dual-camera positioning systems, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marker points in space. The completed work includes: (1) using a CMOS sensor, driven by FPGA hardware, to extract the pixels of three target points, with visible-light LEDs serving as the target points of the instrument; (2) filtering the image prior to extraction of the feature-point coordinates, using median filtering, to avoid artifacts introduced by the physical properties of the platform; (3) extracting the marker-point coordinates with an FPGA hardware circuit, where a new iterative threshold-selection method segments the image, the binary segmented image is then labeled, and the coordinates of the feature points are computed by the center-of-gravity method; (4) applying direct linear transformation (DLT) and the epipolar constraint method to the three-dimensional reconstruction of space coordinates from the planar-array CMOS system, using an SOPC system-on-a-chip whose dual-core computing system performs matching and coordinate operations separately, thereby increasing processing speed.

  7. A fast forward algorithm for real-time geosteering of azimuthal gamma-ray logging.

    PubMed

    Qin, Zhen; Pan, Heping; Wang, Zhonghao; Wang, Bintao; Huang, Ke; Liu, Shaohua; Li, Gang; Amara Konaté, Ahmed; Fang, Sinan

    2017-05-01

    Geosteering is an effective method to increase the reservoir drilling rate in horizontal wells. Based on the features of an azimuthal gamma-ray logging tool and the spatial location of strata, a fast forward calculation method for azimuthal gamma-ray logging is derived using the natural gamma-ray distribution equation in the formation. The response characteristics of azimuthal gamma-ray logging while drilling in layered formation models with different thicknesses and positions are simulated and summarized using this method. The results indicate that the method is computationally fast, and when the tool nears a boundary, it can be used to identify the boundary and determine the distance from the logging tool to the boundary in time. Additionally, a simple method based on offset-well information is proposed to determine the formation parameters of the algorithm in the field. Therefore, the forward method can be used for geosteering in the field. A field example validates that the forward method can be used to determine the distance from the azimuthal gamma-ray logging tool to the boundary for geosteering in real time.

  8. Design requirements and development of an airborne descent path definition algorithm for time navigation

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS program as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, the functional and data architectures, and the system input requirements, and recommend the addition of a broad path revision (reinitialization) capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering function are described.

  9. Execution Time of Symmetric Eigensolvers

    DTIC Science & Technology

    1997-01-01

    A recursive halving operation is a distributed sum in which each of the pc processors in the row starts with k values and ends up with kpc sums. Updating... a real matrix. Technical Report 1574, Oak Ridge National Laboratory, 1954. [84] Gene H. Golub and Charles F. Van Loan. Matrix Computations. The Johns...

  10. An optimized compression algorithm for real-time ECG data transmission in wireless network of medical information systems.

    PubMed

    Cho, Gyoun-Yon; Lee, Seo-Joon; Lee, Tae-Ro

    2015-01-01

    Recent medical information systems are striving towards real-time monitoring models to care for patients anytime and anywhere through ECG signals. However, there are several limitations, such as data distortion and limited bandwidth, in wireless communications. In order to overcome such limitations, this research focuses on compression. Little research has been done on specialized compression algorithms for ECG data transmission in real-time monitoring wireless networks, and existing algorithms are not well suited to ECG signals. Therefore, this paper presents an improved algorithm, EDLZW, for efficient ECG data transmission. Results showed that EDLZW achieved a compression ratio of 8.66, roughly four times better than other compression methods in wide use today.
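
    The abstract leaves EDLZW's ECG-specific modifications unspecified; for background, here is a minimal sketch of the plain LZW dictionary coder that such schemes extend:

        # Sketch of plain LZW coding over a byte stream, the family of
        # dictionary coders EDLZW builds on (ECG-specific tweaks not shown).
        def lzw_encode(data: bytes):
            table = {bytes([i]): i for i in range(256)}   # single-byte codes
            w, out = b"", []
            for c in data:
                wc = w + bytes([c])
                if wc in table:
                    w = wc                      # extend the current phrase
                else:
                    out.append(table[w])        # emit code for longest match
                    table[wc] = len(table)      # register the new phrase
                    w = bytes([c])
            if w:
                out.append(table[w])
            return out

        samples = bytes([100, 101, 100, 101, 100, 101, 102] * 40)
        codes = lzw_encode(samples)
        print(len(samples), "bytes ->", len(codes), "codes")  # repetitive ECG-like data compresses well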

  11. So much to do, so little time. To accomplish the mandatory initiatives of ARRA, healthcare organizations will require significant and thoughtful planning, prioritization and execution.

    PubMed

    Klein, Kimberly

    2010-01-01

    The American Recovery and Reinvestment Act of 2009 (ARRA) has set forth legislation for the healthcare community to achieve adoption of electronic health records (EHR), as well as form data standards, health information exchanges (HIE) and compliance with more stringent security and privacy controls under the HITECH Act. While the Office of the National Coordinator for Health Information Technology (ONCHIT) works on the definition of both "meaningful use" and "certification" of information technology systems, providers in particular must move forward with their IT initiatives to achieve the basic requirements for Medicare and Medicaid incentives starting in 2011, and avoid penalties that will reduce reimbursement beginning in 2015. In addition, providers, payors, government and non-government stakeholders will all have to balance the implementation of EHRs, working with HIEs, at the same time that they must upgrade their systems to be in compliance with ICD-10 and HIPAA 5010 code sets. Compliance deadlines for EHRs and HIEs begin in 2011, while compliance with the ICD-10 diagnosis and procedure code sets is required by October 2013 and with the HIPAA 5010 transaction sets, with one exception, by January 1, 2012. In order to accomplish these strategic and mandatory initiatives successfully and simultaneously, healthcare organizations will require significant and thoughtful planning, prioritization and execution.

  12. Thermosphere-ionosphere-mesosphere energetics and dynamics (TIMED). The TIMED mission and science program report of the science definition team. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A Science Definition Team was established in December 1990 by the Space Physics Division, NASA, to develop a satellite program to conduct research on the energetics, dynamics, and chemistry of the mesosphere and lower thermosphere/ionosphere. This two-volume publication describes the TIMED (Thermosphere-Ionosphere-Mesosphere, Energetics and Dynamics) mission and associated science program. The report outlines the scientific objectives of the mission, the program requirements, and the approach towards meeting these requirements.

  13. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
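
    A toy version of the shift-and-mask idea described above (a sketch under my own simplifications, not the paper's subalgorithms): search for a right shift and a bit mask under which all keys map to distinct values, so a later membership test needs no collision handling.

        # Toy shift/mask search: find (shift, mask) giving every key a
        # unique hashed value (names and parameters are illustrative).
        def find_shift_mask(keys, max_shift=16, mask_bits=8):
            masks = [((1 << mask_bits) - 1) << m for m in range(max_shift)]
            for s in range(max_shift):
                for mask in masks:              # rotate the isolating mask
                    hashed = {(k >> s) & mask for k in keys}
                    if len(hashed) == len(keys):   # all keys unique -> done
                        return s, mask
            return None

        keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
        print(find_shift_mask(keys))   # e.g. (0, 255) if low bytes already differ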

  14. Novel Algorithms Enabling Rapid, Real-Time Earthquake Monitoring and Tsunami Early Warning Worldwide

    NASA Astrophysics Data System (ADS)

    Lomax, A.; Michelini, A.

    2012-12-01

    We have recently introduced new methods to rapidly determine the tsunami potential and magnitude of large earthquakes (e.g., Lomax and Michelini, 2009ab, 2011, 2012). To validate these methods we have implemented them, along with other new algorithms, within the Early-est earthquake monitor at INGV-Rome (http://early-est.rm.ingv.it, http://early-est.alomax.net). Early-est is a lightweight software package for real-time earthquake monitoring (including phase picking, phase association and event detection, location, magnitude determination, first-motion mechanism determination, ...), and for tsunami early warning based on discriminants for earthquake tsunami potential. In a simulation using archived broadband seismograms for the devastating M9, 2011 Tohoku earthquake and tsunami, Early-est determines: the epicenter within 3 min after the event origin time, discriminants showing very high tsunami potential within 5-7 min, and magnitude Mwpd(RT) 9.0-9.2 and a correct shallow-thrusting mechanism within 8 min. Real-time monitoring with Early-est gives similar results for most large earthquakes using currently available, real-time seismogram data. Here we summarize some of the key algorithms within Early-est that enable rapid, real-time earthquake monitoring and tsunami early warning worldwide: >>> FilterPicker - a general purpose, broad-band, phase detector and picker (http://alomax.net/FilterPicker); >>> Robust, simultaneous association and location using a probabilistic, global-search; >>> Period-duration discriminants TdT0 and TdT50Ex for tsunami potential available within 5 min; >>> Mwpd(RT) magnitude for very large earthquakes available within 10 min; >>> Waveform P polarities determined on broad-band displacement traces, focal mechanisms obtained with the HASH program (Hardebeck and Shearer, 2002); >>> SeisGramWeb - a portable-device ready seismogram viewer using web-services in a browser (http://alomax.net/webtools/sgweb/info.html). References (see also: http

  15. Executive Functioning in Schizophrenia

    PubMed Central

    Orellana, Gricel; Slachevsky, Andrea

    2013-01-01

    The executive function (EF) is a set of abilities which allows us to invoke voluntary control of our behavioral responses. These functions enable human beings to develop and carry out plans, make up analogies, obey social rules, solve problems, adapt to unexpected circumstances, do many tasks simultaneously, and locate episodes in time and place. EF includes divided and sustained attention, working memory (WM), set-shifting, flexibility, planning, and the regulation of goal-directed behavior, and can be defined as a brain function underlying the human faculty to act or think not only in reaction to external events but also in relation to internal goals and states. EF is mostly associated with the dorsolateral prefrontal cortex (PFC). Besides EF, the PFC is involved in self-regulation of behavior, i.e., the ability to regulate behavior according to internal goals and constraints, particularly in less structured situations. Self-regulation of behavior is subtended by the ventral medial/orbital PFC. Impairment of EF is one of the most commonly observed deficits in schizophrenia through the various disease stages. Impairments in tasks measuring conceptualization, planning, cognitive flexibility, verbal fluency, the ability to solve complex problems, and WM occur in schizophrenia. The disorders detected by executive tests are consistent with evidence from functional neuroimaging, which has shown PFC dysfunction in patients while performing these kinds of tasks. Patients with schizophrenia also exhibit deficits in odor identification, decision-making, and self-regulation of behavior, suggesting dysfunction of the orbital PFC. However, impairment on executive tests is mainly explained by dysfunction of prefronto-striato-thalamic, prefronto-parietal, and prefronto-temporal neural networks. Disorders of EF may be considered central to schizophrenia, and it has been suggested that negative symptoms may be explained by this executive dysfunction. PMID:23805107

  16. What executives should remember.

    PubMed

    Drucker, Peter F

    2006-02-01

    In more than 30 essays for Harvard Business Review, Peter Drucker (1909-2005) urged readers to take on the hard work of thinking--always combined, he insisted, with decisive action. He closely analyzed the phenomenon of knowledge work--the growing call for employees who use their minds rather than their hands--and explained how it challenged the conventional wisdom about the way organizations should be run. He was intrigued by employees who knew more about certain subjects than their bosses or colleagues but who still had to cooperate with others in a large organization. As the business world matured in the second half of the twentieth century, executives came to think that they knew how to run companies--and Drucker took it upon himself to poke holes in their assumptions, lest organizations become stale. But he did so sympathetically, operating from the premise that his readers were intelligent, hardworking people of goodwill. Well suited to HBR's format of practical, idea-based essays for executives, his clear-eyed, humanistic writing enriched the magazine time and again. This article is a compilation of the savviest management advice Drucker offered HBR readers over the years--in short, his greatest hits. It revisits the following insightful, influential contributions: "The Theory of the Business" (September-October 1994), "Managing for Business Effectiveness" (May-June 1963), "What Business Can Learn from Nonprofits" (July-August 1989), "The New Society of Organizations" (September-October 1992), "The Information Executives Truly Need" (January-February 1995), "Managing Oneself" (March-April 1999 republished January 2005), "They're Not Employees, They're People" (February 2002), "What Makes an Effective Executive" (June 2004).

  17. APL simulation of Grover's algorithm

    NASA Astrophysics Data System (ADS)

    Lipovaca, Samir

    2012-02-01

    Grover's algorithm is a fast quantum search algorithm. Classically, to solve the search problem for a search space of size N we need approximately N operations. Grover's algorithm offers a quadratic speedup. Since present quantum computers are not robust enough for writing and executing code, we experiment with Grover's algorithm by simulating it in the APL programming language, which is especially suited to this task. For example, to compute the Walsh-Hadamard transformation matrix for N quantum states via a tensor product of N Hadamard matrices, we need only iterate a single line of code N-1 times. An initial study indicates that the quantum mechanical amplitude of the solution is almost independent of the search space size and rapidly reaches 0.999, with slight variations at higher decimal places.
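
    The reported near-independence of the solution amplitude can be checked against the standard closed form for Grover iteration, sin((2k+1)theta) with sin(theta) = 1/sqrt(N); a quick check in Python (not the APL simulation itself):

        import math

        # Closed-form amplitude of the marked state after k Grover
        # iterations: a_k = sin((2k + 1) * theta), sin(theta) = 1/sqrt(N).
        def grover_amplitude(N, k):
            theta = math.asin(1.0 / math.sqrt(N))
            return math.sin((2 * k + 1) * theta)

        for N in (2**8, 2**12, 2**16):
            k = int(math.pi / 4 * math.sqrt(N))       # ~optimal iteration count
            print(N, k, round(grover_amplitude(N, k), 6))   # near 1 in each case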

  18. Tuning of an optimal fuzzy PID controller with stochastic algorithms for networked control systems with random time delay.

    PubMed

    Pan, Indranil; Das, Saptarshi; Gupta, Amitava

    2011-01-01

    An optimal PID and an optimal fuzzy PID have been tuned by minimizing the Integral of Time multiplied Absolute Error (ITAE) and the squared controller output for a networked control system (NCS). The tuning is attempted for a higher-order, time-delay system using stochastic algorithms, viz. the Genetic Algorithm (GA) and two variants of Particle Swarm Optimization (PSO), and the closed-loop performances are compared. The paper shows that random variation in network delay can be handled more efficiently with fuzzy logic based PID controllers than with conventional PID controllers.
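
    A minimal numerical evaluation of the ITAE criterion named above, ITAE = integral of t*|e(t)| dt (the toy error signal is illustrative):

        import numpy as np

        # ITAE tuning objective approximated by the trapezoidal rule.
        def itae(t, e):
            f = t * np.abs(e)
            return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

        t = np.linspace(0.0, 10.0, 1001)
        e = np.exp(-t) * np.cos(3.0 * t)    # a toy closed-loop error signal
        print(round(itae(t, e), 4))         # the optimizer drives this down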

  19. Fault detection using dynamic time warping (DTW) algorithm and discriminant analysis for swine wastewater treatment.

    PubMed

    Jun, B H

    2011-01-15

    This paper proposes a diagnosis system using dynamic time warping (DTW) and discriminant analysis with oxidation-reduction potential (ORP) and dissolved oxygen (DO) values for swine wastewater treatment. A full-scale sequencing batch reactor (SBR) with an effective volume of 20 m(3) was auto-controlled, and the reaction phase was performed as a sub-cycle operation consisting of a repeated short cycle of anoxic-aerobic steps. Using ORP and DO profiles, the SBR status was divided into four categories of normal and abnormal cases: influent disturbance, aeration controller fault, instrument trouble and inadequate raw wastewater feeding. Through the DTW process, difference values (D) were determined and classified into seven cases. In spite of misclassification at high loading rates, the ORP profile provided good diagnosis results; the DO profiles, however, produced five misclassifications that indicated different statuses. After the DTW process, several statistical values, including the maximum, minimum, average, standard deviation and three quartile values, were extracted and used to establish the discriminant function. The discriminant analysis classifies the seven cases with accuracies of 100% and 92.7% for the ORP and DO profiles, respectively. Consequently, the study showed that ORP profiles are more efficient than DO profiles as diagnosis parameters in DTW-based diagnosis algorithms and discriminant analysis.
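
    A minimal sketch of the classic DTW recurrence behind the difference values D (illustrative profiles; not the paper's full diagnosis pipeline):

        import numpy as np

        # Classic dynamic time warping distance between two profiles,
        # e.g. an observed ORP profile and a reference "normal" profile.
        def dtw(x, y):
            n, m = len(x), len(y)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(x[i - 1] - y[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        normal = [0.0, 0.2, 0.6, 0.9, 1.0]
        observed = [0.0, 0.1, 0.2, 0.7, 0.9, 1.0]   # same shape, shifted timing
        print(dtw(normal, observed))                # small D -> matches "normal"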

  20. A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm.

    PubMed

    Dethier, Julie; Nuyujukian, Paul; Eliasmith, Chris; Stewart, Terry; Elassaad, Shauki A; Shenoy, Krishna V; Boahen, Kwabena

    2011-01-01

    Motor prostheses aim to restore function to disabled patients. Despite compelling proof of concept systems, barriers to clinical translation remain. One challenge is to develop a low-power, fully-implantable system that dissipates only minimal power so as not to damage tissue. To this end, we implemented a Kalman-filter based decoder via a spiking neural network (SNN) and tested it in brain-machine interface (BMI) experiments with a rhesus monkey. The Kalman filter was trained to predict the arm's velocity and mapped on to the SNN using the Neural Engineering Framework (NEF). A 2,000-neuron embedded Matlab SNN implementation runs in real-time and its closed-loop performance is quite comparable to that of the standard Kalman filter. The success of this closed-loop decoder holds promise for hardware SNN implementations of statistical signal processing algorithms on neuromorphic chips, which may offer power savings necessary to overcome a major obstacle to the successful clinical translation of neural motor prostheses.
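
    A minimal sketch of the linear Kalman predict/update step that the SNN was trained to approximate (the NEF mapping itself is not shown; the matrices below are illustrative, not the decoder's):

        import numpy as np

        # One Kalman filter step: predict the state, then correct it
        # with the measurement z.
        def kalman_step(x, P, z, A, H, Q, R):
            x = A @ x                              # predict state
            P = A @ P @ A.T + Q                    # predict covariance
            S = H @ P @ H.T + R                    # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            x = x + K @ (z - H @ x)                # correct with measurement
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        A = np.array([[1.0, 0.1], [0.0, 1.0]])     # position/velocity, dt = 0.1
        H = np.array([[1.0, 0.0]])                 # observe position only
        Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])
        x, P = np.zeros(2), np.eye(2)
        for z in (0.10, 0.21, 0.29, 0.42):         # noisy position observations
            x, P = kalman_step(x, P, np.array([z]), A, H, Q, R)
        print(x)                                   # estimated position & velocity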

  1. Handling time-expensive global optimization problems through the surrogate-enhanced evolutionary annealing-simplex algorithm

    NASA Astrophysics Data System (ADS)

    Tsoukalas, Ioannis; Kossieris, Panagiotis; Efstratiadis, Andreas; Makropoulos, Christos

    2015-04-01

    In water resources optimization problems, the calculation of the objective function usually presumes to first run a simulation model and then evaluate its outputs. In several cases, however, long simulation times may pose significant barriers to the optimization procedure. Often, to obtain a solution within a reasonable time, the user has to substantially restrict the allowable number of function evaluations, thus terminating the search much earlier than required by the problem's complexity. A promising novel strategy to address these shortcomings is the use of surrogate modelling techniques within global optimization algorithms. Here we introduce the Surrogate-Enhanced Evolutionary Annealing-Simplex (SE-EAS) algorithm that couples the strengths of surrogate modelling with the effectiveness and efficiency of the EAS method. The algorithm combines three different optimization approaches (evolutionary search, simulated annealing and the downhill simplex search scheme), in which key decisions are partially guided by numerical approximations of the objective function. The performance of the proposed algorithm is benchmarked against other surrogate-assisted algorithms, in both theoretical and practical applications (i.e. test functions and hydrological calibration problems, respectively), within a limited budget of trials (from 100 to 1000). Results reveal the significant potential of using SE-EAS in challenging optimization problems, involving time-consuming simulations.

  2. An Optimal Static Scheduling Algorithm for Hard Real-Time Systems Specified in a Prototyping Language

    DTIC Science & Technology

    1989-12-01

    The Computer Aided Prototyping System (CAPS) and the Prototype System Description Language (PSDL) are tools that have been designed to aid in rapid prototyping. Within the framework of CAPS the Execution...

  3. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solve each local sub-problem through very fast linear network programming algorithms, and (c) the substantial

  4. Time and wavelength domain algorithms for chemical analysis by laser radar

    NASA Technical Reports Server (NTRS)

    Rosen, David L.; Gillespie, James B.

    1992-01-01

    Laser-induced fluorescence (LIF) is a promising technique for laser radar applications. Laser radar using LIF has already been applied to algae blooms and oil slicks. Laser radar using LIF has great potential for remote chemical analysis because LIF spectra are extremely sensitive to chemical composition. However, most samples in the real world contain mixtures of fluorescing components, not merely individual components. Multicomponent analysis of laser radar returns from mixtures is often difficult because LIF spectra from solids and liquids are very broad and devoid of line structure. Therefore, algorithms for interpreting LIF spectra from laser radar returns must be able to analyze spectra that overlap in multicomponent systems. This paper analyzes the possibility of using factor analysis-rank annihilation (FARA) to analyze emission-time matrices (ETM) from laser radar returns instead of excitation-emission matrices (EEM). The authors here define ETM as matrices where the rows (or columns) are emission spectra at fixed times and the columns (or rows) are temporal profiles for fixed emission wavelengths. Laser radar usually uses pulsed lasers for ranging purposes, which are suitable for measuring temporal profiles. Laser radar targets are hard instead of diffuse; that is, a definite surface emits the fluorescence instead of an extended volume. A hard target would not broaden the temporal profiles as would a diffuse target. Both fluorescence lifetimes and emission spectra are sensitive to chemical composition. Therefore, temporal profiles can be used instead of excitation spectra in FARA analysis of laser radar returns. The resulting laser radar returns would be ETM instead of EEM.

  5. Performance evaluation of gratings applied by genetic algorithm for the real-time optical interconnection

    NASA Astrophysics Data System (ADS)

    Yoon, Jin-Seon; Kim, Nam; Suh, HoHyung; Jeon, Seok Hee

    2000-03-01

    In this paper, gratings for optical interconnection are designed using a genetic algorithm (GA) as a robust and efficient scheme. The real-time optical interconnection system architecture is composed of an LC-SLM, a CCD array detector, an IBM-PC, a He-Ne laser, and a Fourier-transform lens. A pixelated binary phase grating is displayed on the LC-SLM and can freely interconnect incoming beams to desired output spots in real time. To adapt the GA to finding near-globally-optimal solutions, a chromosome is coded as a binary string of length 32 X 32, the stochastic tournament method is used to decrease the stochastic sampling error, and single-point crossover with a 16 X 16 block size is applied. The influence of several parameters on the grating design is analyzed. First, regarding the probability of crossover: the grating designed with a crossover probability of 0.75 has a high diffraction efficiency of 74.7[%] and a uniformity of 1.73 X 10-1, where the probability of mutation is 0.001 and the population size is 300. Second, regarding the probability of mutation: the grating designed with a mutation probability of 0.001 has a high efficiency of 74.4[%] and a uniformity of 1.61 X 10-1, where the probability of crossover is 1.0 and the population size is 300. Third, regarding the population size: the grating designed with a population size of 300 at generation 400 has above 74[%] diffraction efficiency, where the probability of mutation is 0.001 and the probability of crossover is 1.0.
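
    A sketch of the GA operators named above, applied to flattened binary chromosomes (simplified to single-point crossover on a short string; the paper uses 32 X 32 chromosomes with 16 X 16 block crossover, and the fitness here is a stand-in):

        import random

        def tournament_select(pop, fitness, k=2):
            return max(random.sample(pop, k), key=fitness)   # best of a random pair

        def single_point_crossover(a, b, p_cross=0.75):
            if random.random() > p_cross:
                return a, b                          # no crossover this time
            cut = random.randrange(1, len(a))        # one cut point
            return a[:cut] + b[cut:], b[:cut] + a[cut:]

        def mutate(chrom, p_mut=0.001):
            return [bit ^ (random.random() < p_mut) for bit in chrom]

        pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(8)]
        fitness = lambda c: sum(c)                   # stand-in for diffraction efficiency
        a, b = tournament_select(pop, fitness), tournament_select(pop, fitness)
        child1, child2 = single_point_crossover(a, b)
        print(mutate(child1))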

  6. Molecular Evolution of the HIV-1 Thai Epidemic between the Time of RV144 Immunogen Selection to the Execution of the Vaccine Efficacy Trial

    PubMed Central

    Tovanabutra, Sodsai; Rerks-Ngarm, Supachai; Nitayaphan, Sorachai; Eamsila, Chirapa; Kunasol, Prayura; Khamboonruang, Chirasak; Thongcharoen, Prasert; Namwat, Chawetsan; Premsri, Nakorn; Benenson, Michael; Morgan, Patricia; Bose, Meera; Sanders-Buell, Eric; Paris, Robert; Robb, Merlin L.; Birx, Deborah L.; De Souza, Mark S.; McCutchan, Francine E.; Michael, Nelson L.; Kim, Jerome H.

    2013-01-01

    The RV144 HIV-1 vaccine trial (Thailand, 2003 to 2009), using immunogens genetically matched to the regional epidemic, demonstrated the first evidence of efficacy for an HIV-1 vaccine. Here we studied the molecular evolution of the HIV-1 epidemic from the time of immunogen selection to the execution of the efficacy trial. We studied HIV-1 genetic diversity among 390 volunteers who were deferred from enrollment in RV144 due to preexisting HIV-1 infection using a multiregion hybridization assay, full-genome sequencing, and phylogenetic analyses. The subtype distribution was 91.7% CRF01_AE, 3.5% subtype B, 4.3% B/CRF01_AE recombinants, and 0.5% dual infections. CRF01_AE strains were 31% more diverse than the ones from the 1990s Thai epidemic. Sixty-nine percent of subtype B strains clustered with the cosmopolitan Western B strains. Ninety-three percent of B/CRF01_AE recombinants were unique; recombination breakpoint analysis showed that these strains were highly embedded within the larger network that integrates recombinants from East/Southeast Asia. Compared to Thai sequences from the early 1990s, the distance to the RV144 immunogens increased 52% to 68% for CRF01_AE Env immunogens and 12% to 29% for subtype B immunogens. Forty-three percent to 48% of CRF01_AE sequences differed from the sequence of the vaccine insert in Env variable region 2 positions 169 and 181, which were implicated in vaccine sieve effects in RV144. In conclusion, compared to the molecular picture at the early stages of vaccine development, our results show an overall increase in the genetic complexity of viruses in the Thai epidemic and in the distance to vaccine immunogens, which should be considered at the time of the analysis of the trial results. PMID:23576510

  7. Development of a Near Real-Time Hail Damage Swath Identification Algorithm for Vegetation

    NASA Technical Reports Server (NTRS)

    Bell, Jordan R.; Molthan, Andrew L.; Schultz, Kori A.; McGrath, Kevin M.; Burks, Jason E.

    2015-01-01

    Every year in the Midwest and Great Plains, widespread greenness forms in conjunction with the latter part of the spring-summer growing season. This prevalent greenness forms as the crops in the region's highly concentrated agricultural areas reach maturity before the fall harvest. This time of year also coincides with an enhanced hail frequency for the Great Plains (Cintineo et al. 2012). These severe thunderstorms can bring damaging winds and large hail that can damage surface vegetation. The spatial extent of the damage can cover a relatively small concentrated area or be a vast swath that is visible from space. These large areas of damage have been well documented over the years. In the late 1960s, aerial photography was used to evaluate crop damage caused by hail. As satellite remote sensing technology has evolved, identification of these hail damage streaks has increased. Satellites have made it possible to view these streaks in additional spectral bands. Parker et al. (2005) documented two streaks in South Dakota using the Moderate Resolution Imaging Spectroradiometer (MODIS), noting the potential impact of the streaks on surface temperature and the associated surface fluxes. Gallo et al. (2012) examined the correlation between radar signatures and ground observations from storms that produced a hail damage swath in central Iowa, also using MODIS. Finally, Molthan et al. (2013) identified hail damage streaks in MODIS, Landsat-7, and SPOT observations of different resolutions for the development of potential near-real-time applications. The manual analysis of hail damage streaks in satellite imagery is both tedious and time consuming, and may be inconsistent from event to event. This study focuses on the development of an objective and automatic algorithm to detect these areas of damage in a more efficient and timely manner. This study utilizes the

  8. Definitions of non-stationary vibration power for time-frequency analysis and computational algorithms based upon harmonic wavelet transform

    NASA Astrophysics Data System (ADS)

    Heo, YongHwa; Kim, Kwang-joon

    2015-02-01

    While the vibration power for a set of harmonic force and velocity signals is well defined and known, it is not as popular yet for a set of stationary random force and velocity processes, although it can be found in some literatures. In this paper, the definition of the vibration power for a set of non-stationary random force and velocity signals will be derived for the purpose of a time-frequency analysis based on the definitions of the vibration power for the harmonic and stationary random signals. The non-stationary vibration power, defined as the short-time average of the product of the force and velocity over a given frequency range of interest, can be calculated by three methods: the Wigner-Ville distribution, the short-time Fourier transform, and the harmonic wavelet transform. The latter method is selected in this paper because band-pass filtering can be done without phase distortions, and the frequency ranges can be chosen very flexibly for the time-frequency analysis. Three algorithms for the time-frequency analysis of the non-stationary vibration power using the harmonic wavelet transform are discussed. The first is an algorithm for computation according to the full definition, while the others are approximate. Noting that the force and velocity decomposed into frequency ranges of interest by the harmonic wavelet transform are constructed with coefficients and basis functions, for the second algorithm, it is suggested to prepare a table of time integrals of the product of the basis functions in advance, which are independent of the signals under analysis. How to prepare and utilize the integral table are presented. The third algorithm is based on an evolutionary spectrum. Applications of the algorithms to the time-frequency analysis of the vibration power transmitted from an excitation source to a receiver structure in a simple mechanical system consisting of a cantilever beam and a reaction wheel are presented for illustration.
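
    A minimal formalization of the definition above (the precise symbols are my own, following the description): for frequency band k of the harmonic wavelet decomposition, the non-stationary vibration power around time t_0 is the short-time average

        P_k(t_0) = \frac{1}{T} \int_{t_0}^{t_0 + T} f_k(t)\, v_k(t)\, \mathrm{d}t

    where f_k and v_k are the force and velocity reconstructed from the band-k wavelet coefficients and T is the averaging window.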

  9. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial-time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  10. A survey of the baseline correction algorithms for real-time spectroscopy processing

    NASA Astrophysics Data System (ADS)

    Liu, Yuanjie; Yu, Yude

    2016-11-01

    In spectroscopy data analysis, such as Raman spectra, X-ray diffraction, fluorescence, etc., baseline drift is a ubiquitous issue. In high-speed testing, which generates huge volumes of data, automatic baseline correction is very important for efficient data processing. We survey algorithms from the classical Shirley background to state-of-the-art methods to present a summary of this specific field. Both the advantages and defects of each algorithm are scrutinized. To compare the algorithms with each other, experiments are also carried out under an SVM gap-gain criterion to show performance quantitatively. Finally, a ranking table of these methods is built, and suggestions for the practical choice of adequate algorithms are provided in this paper.
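
    The survey compares many correctors; as one simple representative of the family (a sketch, not any specific method from the paper), iterative polynomial fitting clips points above the current fit so that peaks do not pull the baseline upward:

        import numpy as np

        def poly_baseline(y, order=3, n_iter=20):
            """Iterative polynomial baseline estimation."""
            x = np.arange(len(y))
            base = y.astype(float).copy()
            for _ in range(n_iter):
                coeffs = np.polyfit(x, base, order)
                fit = np.polyval(coeffs, x)
                base = np.minimum(base, fit)   # clip peaks above the fit
            return fit

        # Toy spectrum: linear drift plus one Gaussian peak.
        y = 0.001 * np.arange(500) + np.exp(-((np.arange(500) - 250.0) / 8.0) ** 2)
        corrected = y - poly_baseline(y)       # drift removed, peak preserved
        print(round(float(corrected.max()), 3))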

  11. An evaluation of biosurveillance grid--dynamic algorithm distribution across multiple computer nodes.

    PubMed

    Tsai, Ming-Chi; Tsui, Fu-Chiang; Wagner, Michael M

    2007-10-11

    Performing fast data analysis to detect disease outbreaks plays a critical role in real-time biosurveillance. In this paper, we describe and evaluate an Algorithm Distribution Manager Service (ADMS) based on grid technologies, which dynamically partitions and distributes detection algorithms across multiple computers. We compared the execution time needed to perform the analysis on a single computer and on a grid network (3 computing nodes) with and without dynamic algorithm distribution. We found that algorithms with long runtimes completed approximately three times earlier in the distributed environment than on a single computer, while short-runtime algorithms performed worse in the distributed environment. A dynamic algorithm distribution approach also performed better than a static distribution approach. This pilot study shows a great potential to reduce lengthy analysis times through dynamic algorithm partitioning and parallel processing, and provides the opportunity to distribute algorithms from a client to remote computers in a grid network.

  12. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    SciTech Connect

    Lee, Youngrok

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into it. Finite mixture models can represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; this impossibility of decomposition is a barrier to overcome in estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, data that are not completely unlabeled but carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.

  13. A real-time algorithm for integrating differential satellite and inertial navigation information during helicopter approach. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hoang, TY

    1994-01-01

    A real-time, high-rate precision navigation Kalman filter algorithm is developed and analyzed. This Navigation algorithm blends various navigation data collected during the terminal-area approach of an instrumented helicopter. The navigation data include helicopter position and velocity from a global positioning system in differential mode (DGPS) as well as helicopter velocity and attitude from an inertial navigation system (INS). The goal of the Navigation algorithm is to increase the DGPS accuracy while producing navigational data at the 64 Hertz INS update rate. It is important to note that while the data were processed post-flight, the Navigation algorithm was designed for real-time analysis. The design of the Navigation algorithm resulted in a nine-state Kalman filter whose state vector contains position, velocity, and velocity-bias components. The filter updates positional readings with DGPS position, INS velocity, and velocity-bias information. In addition, the filter incorporates a sporadic data rejection scheme. This relatively simple model met and exceeded the ten-meter absolute positional requirement. The Navigation algorithm results were compared with truth data derived from a laser tracker. The helicopter flight profile included terminal glideslope angles of 3, 6, and 9 degrees. Two flight segments extracted from each terminal approach were used to evaluate the Navigation algorithm: the first recorded small dynamic maneuvers in the lateral plane, while the second recorded motion in the vertical plane. The longitudinal, lateral, and vertical averaged positional accuracies for all three glideslope approaches are as follows (mean plus or minus two standard deviations, in meters): longitudinal (-0.03 plus or minus 1.41), lateral (-1.29 plus or minus 2.36), and vertical (-0.76 plus or minus 2.05).

  14. Autonomous Time-Frequency Cropping and Feature-Extraction Algorithms for Classification of LPI Radar Modulations

    DTIC Science & Technology

    2006-06-01

    LOW PROBABILITY OF INTERCEPT (LPI) SIGNAL MODULATIONS: In this chapter nine LPI radar modulations are described: FMCW, Frank, P1, P2, P3, P4, T1(n), T2(n). Although not a LPI... Autonomous Time-Frequency Cropping and Feature-Extraction Algorithms for Classification of LPI Radar Modulations, thesis by Eric R. Zilberman, June 2006.

  15. A hybrid meta-heuristic algorithm for the vehicle routing problem with stochastic travel times considering the driver's satisfaction

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges

    2012-05-01

    A vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. The main objectives of the vehicle routing problem are to minimize the traveled distance, total traveling time, number of vehicles and the cost of transportation. Reducing these variables decreases the total cost and increases the driver's satisfaction level. On the other hand, this satisfaction, which decreases as the service time increases, is considered an important logistic problem for a company. Stochastic travel times, governed by a probability variable, lead to variation in service time, yet they are ignored in classical routing problems. This paper investigates the problem of increasing service time by using a stochastic time for each tour, such that the total traveling time of the vehicles is limited to a specific bound with a defined probability. Since exact solution of the vehicle routing problem, which belongs to the category of NP-hard problems, is not practical at large scale, a hybrid algorithm based on simulated annealing with genetic operators is proposed to obtain an efficient solution with reasonable computational cost and time. Finally, for some small cases, the results of the proposed algorithm were compared with results obtained by the Lingo 8 software. The obtained results indicate the efficiency of the proposed hybrid simulated annealing algorithm.
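
    At the core of such a hybrid lies the standard Metropolis acceptance rule of simulated annealing; a minimal sketch (the cost model and cooling schedule are illustrative, not the paper's):

        import math, random

        # Metropolis rule: worse routes are accepted with a probability
        # that shrinks as the system cools.
        def accept(delta_cost, temperature):
            if delta_cost <= 0:
                return True                              # always keep improvements
            return random.random() < math.exp(-delta_cost / temperature)

        T, cooling = 100.0, 0.95
        cost = 1000.0
        for step in range(200):
            candidate = cost + random.uniform(-20, 10)   # stand-in for a neighbor route
            if accept(candidate - cost, T):
                cost = candidate
            T *= cooling                                 # geometric cooling schedule
        print(round(cost, 1))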

  16. Wavelet power spectrum-based autofocusing algorithm for time delayed and integration charge coupled device space camera.

    PubMed

    Tao, Shuping; Jin, Guang; Zhang, Xuyan; Qu, Hongsong; An, Yuan

    2012-07-20

    A novel autofocusing algorithm using the directional wavelet power spectrum is proposed for time delayed and integration charge coupled device (TDI CCD) space cameras, which overcomes the difficulty of focus measurement under real-time changes of the imaged scene. Using the multiresolution and band-pass characteristics of the wavelet transform to improve on the power spectrum based on the fast Fourier transform (FFT), the wavelet power spectrum is less sensitive to scene variance. Moreover, the new focus measure can effectively eliminate the impact of image-motion mismatch through its directional selection. We test the proposed method's performance on synthetic images as well as in a real ground experiment on a TDI CCD prototype camera, and compare it with the focus measure based on the existing FFT spectrum. The simulation results show that the new focus measure can effectively express the defocused states of real remote sensing images. Its error ratio is only 0.112, while that of the prevalent FFT-spectrum algorithm is as high as 0.4. Compared with the FFT-based method, the proposed algorithm performs with high reliability in the real imaging experiments, where it reduces the instability from 0.600 to 0.161. The two experimental results demonstrate that the proposed algorithm has good monotonicity, high sensitivity, and accuracy. The new algorithm can satisfy the autofocusing requirements of TDI CCD space cameras.

  17. Experimental assessment of static and dynamic algorithms for gene regulation inference from time series expression data.

    PubMed

    Lopes, Miguel; Bontempi, Gianluca

    2013-01-01

    Accurate inference of causal gene regulatory networks from gene expression data is an open bioinformatics challenge. Gene interactions are dynamical processes, and consequently we can expect that the effect of any regulation action occurs after a certain temporal lag. However, such a lag is unknown a priori, and temporal aspects require specific inference algorithms. In this paper we aim to assess the impact of taking temporal aspects into consideration on the final accuracy of the inference procedure. In particular, we compare the accuracy of static algorithms, where no dynamic aspect is considered, to that of fixed-lag and adaptive-lag algorithms in three inference tasks from microarray expression data. Experimental results show that network inference algorithms that take dynamics into account perform consistently better than static ones, once the considered lags are properly chosen. However, no individual algorithm stands out in all three inference tasks, and the challenging nature of network inference tasks is evidenced, as a large number of the assessed algorithms do not perform better than random.

  18. An algorithm on distributed mining association rules

    NASA Astrophysics Data System (ADS)

    Xu, Fan

    2005-12-01

    With the rapid development of the Internet/Intranet, distributed databases have become a broadly used environment in various areas, and mining association rules in distributed databases is a critical task. The algorithms for distributed mining of association rules can be divided into two classes: DD algorithms and CD algorithms. A DD algorithm focuses on optimizing the data partition so as to enhance efficiency. A CD algorithm, on the other hand, considers a setting where the data is arbitrarily partitioned horizontally among the parties to begin with, and focuses on parallelizing the communication. A DD algorithm is not always applicable, however: by the time the data is generated, it is often already partitioned, and in many cases it cannot be gathered and repartitioned for reasons of security and secrecy, transmission cost, or sheer efficiency. A CD algorithm may be a more appealing solution for systems which are naturally distributed over large expanses, such as stock exchange and credit card systems. The FDM algorithm enhances the CD algorithm. However, the CD and FDM algorithms are both based on net structures and execute on non-shareable resources. In practical applications, distributed databases are often star-structured. This paper proposes an algorithm based on star-structured networks, which are more practical in application, have lower maintenance costs, and are easier to construct. In addition, the algorithm provides high efficiency in communication and good extension in parallel computation.

  19. A real-time plane-wave decomposition algorithm for characterizing perforated liners damping at multiple mode frequencies.

    PubMed

    Zhao, Dan

    2011-03-01

    Perforated liners with a narrow frequency range are widely used as acoustic dampers to stabilize combustion systems. When the frequency of unstable modes present in the combustion system is within the effective frequency range, the liners can efficiently dissipate acoustic waves. The fraction of the incident waves being absorbed (known as power absorption coefficient) is generally used to characterize the liners damping. To estimate it, plane waves either side of the liners need to be decomposed and characterized. For this, a real-time algorithm is developed. Emphasis is being placed on its ability to online decompose plane waves at multiple mode frequencies. The performance of the algorithm is evaluated first in a numerical model with two unstable modes. It is then experimentally implemented in an acoustically driven pipe system with a lined section attached. The acoustic damping of perforated liners is continuously characterized in real-time. Comparison is then made between the results from the algorithm and those from the short-time fast Fourier transform (FFT)-based techniques, which are typically used in industry. It was found that the real-time algorithm allows faster tracking of the liners damping, even when the forcing frequency was suddenly changed.

  20. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    NASA Astrophysics Data System (ADS)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its hand motion tracking capability. The evaluation considered the position accuracy of the tracking trajectory in the x, y and z directions in camera space and the time difference between image acquisition and image display. Three parameters were analyzed for their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking, but for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.

  1. Parametric Timing Analysis

    SciTech Connect

    Vivancos, E; Healy, C; Mueller, F; Whalley, D

    2001-05-09

    Embedded systems often have real-time constraints. Traditional timing analysis statically determines the maximum execution time of a task or a program in a real-time system. These systems typically depend on the worst-case execution time of tasks in order to make static scheduling decisions so that tasks can meet their deadlines. Static determination of worst-case execution times imposes numerous restrictions on real-time programs, which include that the maximum number of iterations of each loop must be known statically. These restrictions can significantly limit the class of programs that would be suitable for a real-time embedded system. This paper describes work-in-progress that uses static timing analysis to aid in making dynamic scheduling decisions. For instance, different algorithms with varying levels of accuracy may be selected based on the algorithm's predicted worst-case execution time and the time allotted for the task. We represent the worst-case execution time of a function or a loop as a formula, where the unknown values affecting the execution time are parameterized. This parametric timing analysis produces formulas that can then be quickly evaluated at run-time so dynamic scheduling decisions can be made with little overhead. Benefits of this work include expanding the class of applications that can be used in a real-time system, improving the accuracy of dynamic scheduling decisions, and more effective utilization of system resources.
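
    A sketch of the idea, with invented constants: the timing analyzer emits a WCET formula parameterized by the loop bound n, and the scheduler evaluates it at run time to pick among algorithm variants that fit the time budget.

        # Parametric WCET formulas for two variants of the same task
        # (constants are illustrative, not from the paper).
        wcet_cycles = {
            "fast_rough":    lambda n: 150 + 40 * n,        # linear-time variant
            "slow_accurate": lambda n: 300 + 25 * n * n,    # quadratic variant
        }

        def choose_algorithm(n, budget_cycles):
            fitting = {name: f(n) for name, f in wcet_cycles.items()
                       if f(n) <= budget_cycles}
            # Prefer the most accurate variant whose worst case still fits,
            # proxied here by the largest fitting WCET.
            return max(fitting, key=fitting.get) if fitting else None

        print(choose_algorithm(10, 5000))   # slow_accurate fits: 300 + 2500 = 2800
        print(choose_algorithm(40, 5000))   # only fast_rough fits: 150 + 1600 = 1750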

  2. Single- and double-difference algorithms for position and time-delay calibration of transducer-elements in a sparse array.

    PubMed

    Li, Yue; Sharp, Ian; Hedley, Mark; Ho, Phil; Guo, Y Jay

    2007-06-01

    A method for the calibration of the position and time delay of transducer elements in a large, sparse array used for underwater, high-resolution, ultrasound imaging has been described in a previous work. This algorithm is based on the direct algorithm used in the global positioning system (GPS), but the wave propagation speed is treated as one of the to-be-calibrated parameters. In this article, the performance of two other commonly used GPS algorithms, namely the single-difference algorithm and the double-difference algorithm, is evaluated. The calibration of the propagation speed also is integrated into these two algorithms. Furthermore, a novel, least-squares method is proposed to calibrate the time delay associated with each transducer element for these two algorithms. The performances of these algorithms are theoretically analyzed and evaluated using numerical analysis and simulation study. The performance of the direct algorithm, the single-difference algorithm, and the double-difference algorithm is compared. It was found that the single-difference algorithm has the best performance among the three algorithms for the current application, and it is capable of calibrating the position and time delay of transducer elements to an accuracy of one-tenth of a wavelength.

  3. `Inter-Arrival Time' Inspired Algorithm and its Application in Clustering and Molecular Phylogeny

    NASA Astrophysics Data System (ADS)

    Kolekar, Pandurang S.; Kale, Mohan M.; Kulkarni-Kale, Urmila

    2010-10-01

    Bioinformatics, being a multidisciplinary field, involves applications of various methods from allied areas of science for data mining using computational approaches. Clustering and molecular phylogeny form one of the key areas in bioinformatics, helping in the study of the classification and evolution of organisms. Molecular phylogeny algorithms can be divided into distance-based and character-based methods. Most of these methods depend on pre-alignment of the sequences and become computationally intensive as the data grow, and hence demand alternative efficient approaches. The `inter-arrival time distribution' (IATD) is a popular concept in the theory of stochastic system modeling, but its potential in molecular data analysis has not been fully explored. The present study reports an application of IATDs in bioinformatics for clustering and molecular phylogeny. The proposed method computes the IATDs of the nucleotides in genomic sequences. A distance function based on statistical parameters of the IATDs is proposed, and the distance matrix thus obtained is used for clustering and molecular phylogeny. The method is applied to a dataset of 3' non-coding region (NCR) sequences of Dengue virus type 3 (DENV-3), subtype III, reported in 2008. The phylogram thus obtained revealed the geographical distribution of the DENV-3 isolates. Sri Lankan DENV-3 isolates were further observed to cluster in two sub-clades corresponding to pre- and post-emergence groups of Dengue hemorrhagic fever. These results are consistent with those reported earlier, which were obtained using pre-aligned sequence data as input. These findings encourage applications of the IATD-based method in molecular phylogenetic analysis in particular and data mining in general.
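
    A minimal sketch of the IATD idea (the paper's exact distance function is not reproduced; the summary statistics and distance below are illustrative): collect the gaps between successive occurrences of each nucleotide, then compare sequences by the statistics of those gaps, with no alignment needed.

        from statistics import mean, stdev

        def iatd_stats(seq):
            """Mean and std of inter-arrival gaps for each nucleotide."""
            stats = {}
            for base in "ACGT":
                pos = [i for i, b in enumerate(seq) if b == base]
                gaps = [q - p for p, q in zip(pos, pos[1:])]
                stats[base] = (mean(gaps), stdev(gaps)) if len(gaps) > 1 else (0.0, 0.0)
            return stats

        def iatd_distance(s1, s2):
            a, b = iatd_stats(s1), iatd_stats(s2)
            return sum(abs(a[k][0] - b[k][0]) + abs(a[k][1] - b[k][1]) for k in "ACGT")

        x = "ACGTACGTAACCGGTTACGT" * 3
        y = "AAACCCGGGTTT" * 5
        print(round(iatd_distance(x, y), 3))   # larger = more dissimilar IATDs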

  4. Coaching to vision versus coaching to improvement needs: a preliminary investigation on the differential impacts of fostering positive and negative emotion during real time executive coaching sessions

    PubMed Central

    Howard, Anita R.

    2015-01-01

    Drawing on intentional change theory (ICT; Boyatzis, 2006), this study examined the differential impact of inducing coaching recipients’ vision/positive emotion versus improvement needs/negative emotion during real time executive coaching sessions. A core aim of the study was to empirically test two central ICT propositions on the effects of using the coached person’s Positive Emotional Attractor (vision/PEA) versus Negative Emotional Attractor (improvement needs/NEA) as the anchoring framework of a onetime, one-on-one coaching session on appraisal of 360° feedback and discussion of possible change goals. Eighteen coaching recipients were randomly assigned to two coaching conditions, the coaching to vision/PEA condition and the coaching to improvement needs/NEA condition. Two main hypotheses were tested. Hypothesis1 predicted that participants in the vision/PEA condition would show higher levels of expressed positive emotion during appraisal of 360° feedback results and discussion of change goals than recipients in the improvement needs/NEA condition. Hypothesis2 predicted that vision/PEA participants would show lower levels of stress immediately after the coaching session than improvement needs/NEA participants. Findings showed that coaching to vision/the PEA fostered significantly lower levels of expressed negative emotion and anger during appraisal of 360° feedback results as compared to coaching to improvements needs/the NEA. Vision-focused coaching also fostered significantly greater exploration of personal passions and future desires, and more positive engagement during 360° feedback appraisal. No significant differences between the two conditions were found in emotional processing during discussion of change goals or levels of stress immediately after the coaching session. Current findings suggest that vision/PEA arousal versus improvement needs/NEA arousal impact the coaching process in quite different ways; that the coach’s initial framing of the

  5. Coaching to vision versus coaching to improvement needs: a preliminary investigation on the differential impacts of fostering positive and negative emotion during real time executive coaching sessions.

    PubMed

    Howard, Anita R

    2015-01-01

    Drawing on intentional change theory (ICT; Boyatzis, 2006), this study examined the differential impact of inducing coaching recipients' vision/positive emotion versus improvement needs/negative emotion during real-time executive coaching sessions. A core aim of the study was to empirically test two central ICT propositions on the effects of using the coached person's Positive Emotional Attractor (vision/PEA) versus Negative Emotional Attractor (improvement needs/NEA) as the anchoring framework of a one-time, one-on-one coaching session on appraisal of 360° feedback and discussion of possible change goals. Eighteen coaching recipients were randomly assigned to two coaching conditions, the coaching to vision/PEA condition and the coaching to improvement needs/NEA condition. Two main hypotheses were tested. Hypothesis 1 predicted that participants in the vision/PEA condition would show higher levels of expressed positive emotion during appraisal of 360° feedback results and discussion of change goals than recipients in the improvement needs/NEA condition. Hypothesis 2 predicted that vision/PEA participants would show lower levels of stress immediately after the coaching session than improvement needs/NEA participants. Findings showed that coaching to vision/the PEA fostered significantly lower levels of expressed negative emotion and anger during appraisal of 360° feedback results as compared to coaching to improvement needs/the NEA. Vision-focused coaching also fostered significantly greater exploration of personal passions and future desires, and more positive engagement during 360° feedback appraisal. No significant differences between the two conditions were found in emotional processing during discussion of change goals or levels of stress immediately after the coaching session. Current findings suggest that vision/PEA arousal versus improvement needs/NEA arousal impact the coaching process in quite different ways; that the coach's initial framing of the

  6. Hardness Measures for Maze Problems Parameterized by Obstacle Ratio and Performance Analysis of Real-Time Search Algorithms

    NASA Astrophysics Data System (ADS)

    Mizusawa, Masataka; Kurihara, Masahito

    Although the maze (or gridworld) is one of the most widely used benchmark problems for real-time search algorithms, it is not sufficiently clear how the density of randomly positioned obstacles affects the structure of the state spaces and the performance of the algorithms. In particular, recent studies of so-called phase transition phenomena, which can cause dramatic changes in performance over a relatively small parameter range, suggest that performance should be evaluated parametrically, with the parameter range wide enough to cover potential transition areas. In this paper, we present two measures for characterizing the hardness of randomly generated mazes parameterized by obstacle ratio and relate them to the performance of real-time search algorithms. The first measure is the entropy calculated from the probability that a solution exists. The second is a measure based on the total initial heuristic error between the actual cost and its heuristic estimate. We show that maze problems are the most complicated in both measures when the obstacle ratio is around 41%. We then solve the parameterized maze problems with the well-known real-time search algorithms RTA*, LRTA*, and MARTA* to relate their performance to the proposed measures. Evaluating the number of steps required by the three algorithms for a single problem-solving run, and the number required for convergence of the learning process in LRTA*, we show that they all peak when the obstacle ratio is around 41%. These results support the relevance of the proposed measures. We also discuss the performance of the algorithms in terms of other statistical measures to obtain a deeper quantitative understanding of their behavior.
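
    A hedged sketch of the first hardness measure described above: estimate the probability q(p) that a randomly generated maze with obstacle ratio p is solvable, and report the entropy H(q) = -q*log2(q) - (1-q)*log2(1-q), which peaks where solvability is most uncertain. The grid size and sample count below are arbitrary choices, not values from the paper.

```python
import random, math
from collections import deque

def solvable(n, p, rng):
    """Breadth-first search from corner to corner of a random n x n maze."""
    grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    grid[0][0] = grid[n - 1][n - 1] = False       # keep start and goal open
    seen, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        if (x, y) == (n - 1, n - 1):
            return True
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n and not grid[nx][ny] \
               and (nx, ny) not in seen:
                seen.add((nx, ny))
                queue.append((nx, ny))
    return False

def entropy(q):
    return 0.0 if q in (0.0, 1.0) else -q * math.log2(q) - (1 - q) * math.log2(1 - q)

rng = random.Random(1)
for p in (0.2, 0.3, 0.41, 0.5):
    q = sum(solvable(30, p, rng) for _ in range(200)) / 200
    print(f"obstacle ratio {p:.2f}: P(solvable)={q:.2f}, entropy={entropy(q):.2f}")
```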

  7. Novel algorithm and MATLAB-based program for automated power law analysis of single particle, time-dependent mean-square displacement

    NASA Astrophysics Data System (ADS)

    Umansky, Moti; Weihs, Daphne

    2012-08-01

    should also be backwards compatible. Symbolic Math Toolbox (5.5) is required. The Curve Fitting Toolbox (3.0) is recommended. Computer: Tested on Windows only, yet should work on any computer running MATLAB. In Windows 7, the program should be run as administrator; otherwise it may not be able to save outputs and temporary outputs to all locations. Operating system: Any supporting MATLAB (MathWorks Inc.) v7.11 / 2010b or higher. Supplementary material: Sample output files (approx. 30 MBytes) are available. Classification: 12 External routines: Several MATLAB subfunctions (m-files), freely available on the web, were used as part of, and included in, this code: count, NaN suite, parseArgs, roundsd, subaxis, wcov, wmean, and the executable pdfTK.exe. Nature of problem: In many physical and biophysical areas employing single-particle tracking, knowing the time-dependent power laws governing the time-averaged mean-square displacement (MSD) of a single particle is crucial. Those power laws determine the mode of motion and hint at the underlying mechanisms driving motion. Accurate determination of the power laws that describe each trajectory allows categorization into groups for further analysis of single trajectories or ensemble analysis, e.g. ensemble- and time-averaged MSD. Solution method: The algorithm in the provided program automatically analyzes and fits time-dependent power laws to single-particle trajectories, then groups particles according to user-defined cutoffs. It accepts time-dependent trajectories of several particles; each trajectory is run through the program, its time-averaged MSD is calculated, and power laws are determined in regions where the MSD is linear on a log-log scale. Our algorithm searches for high-curvature points in experimental data, here the time-dependent MSD; those serve as anchor points for determining the ranges of the power-law fits. Power-law scaling is then accurately determined and error estimations of the
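
    The published program is in MATLAB; the following is a minimal numpy sketch of the core computation only: the time-averaged MSD of one 2-D trajectory and a single power-law fit MSD(t) ~ t^alpha by linear regression on a log-log scale. The full algorithm instead segments the curve at high-curvature anchor points and fits a power law per segment.

```python
import numpy as np

def time_averaged_msd(xy, max_lag):
    """xy: (N, 2) positions sampled at uniform time steps."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = xy[lag:] - xy[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return msd

rng = np.random.default_rng(0)
xy = np.cumsum(rng.normal(size=(10_000, 2)), axis=0)   # Brownian test track
lags = np.arange(1, 101)
msd = time_averaged_msd(xy, 100)
alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)    # slope = exponent
print(f"fitted exponent alpha = {alpha:.2f} (about 1 expected for Brownian motion)")
```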

  8. Local flow management/profile descent algorithm. Fuel-efficient, time-controlled profiles for the NASA TSRV airplane

    NASA Technical Reports Server (NTRS)

    Groce, J. L.; Izumi, K. H.; Markham, C. H.; Schwab, R. W.; Thompson, J. L.

    1986-01-01

    The Local Flow Management/Profile Descent (LFM/PD) algorithm designed for the NASA Transport System Research Vehicle program is described. The algorithm provides fuel-efficient altitude and airspeed profiles consistent with ATC restrictions in a time-based metering environment over a fixed ground track. The model design constraints include accommodation of both published profile descent procedures and unpublished profile descents, incorporation of fuel efficiency as a flight profile criterion, operation within the performance capabilities of the Boeing 737-100 airplane with JT8D-7 engines, and conformity to standard air traffic navigation and control procedures. Holding and path stretching capabilities are included for long delay situations.

  9. Applying the Ramer-Douglas-Peucker algorithm to compress and characterize time-series and spatial fields of precipitation

    NASA Astrophysics Data System (ADS)

    Ehret, Uwe; Neuper, Malte

    2014-05-01

    Well known in image processing and computer graphics, the Ramer-Douglas-Peucker (RDP) algorithm (Ramer, 1972; Douglas and Peucker, 1973) is a procedure to approximate a polygon (lines or areas) by a subset of its nodes. Typically it is used to represent a polygonal feature at a larger scale, e.g. when zooming out of an image. The algorithm is simple but effective: starting from the simplest possible approximation of the original polygon (for a line, its start and end point), the simplified polygon is built by successively adding the node of the original polygon farthest from the simplified polygon. This is repeated until a chosen agreement between the original and the simplified polygon is reached. Compared to other smoothing and compression algorithms like moving-average filters or block aggregation, the RDP algorithm has the advantages that i) the simplified polygon is built from the original points, i.e. extreme values are preserved, and ii) the variability of the original polygon is preserved in a scale-independent manner, i.e. the simplified polygon is high-resolution where necessary and low-resolution where possible. Applying the RDP algorithm to time series of precipitation or 2-d spatial fields of radar rainfall often reveals a large degree of compressibility while losing almost no information. In general, this is the case for any auto-correlated polygon, such as discharge time series. While the RDP algorithm is thus interesting as a very efficient tool for compression, it can also be used to characterize time series or spatial fields with respect to their temporal or spatial structure by relating, over successive steps of simplification, the compression achieved to the information lost. We present and discuss the characteristics of RDP-based compression and characterization with various examples, both observed (rainfall and discharge time series, 2-d radar rainfall fields) and artificial (random noise fields, random fields with known
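
    A straightforward implementation of the RDP simplification described above, for a 2-D polyline; epsilon is the maximum allowed deviation of the original points from the simplified line.

```python
import math

def _point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    if (ax, ay) == (bx, by):
        return math.hypot(px - ax, py - ay)
    num = abs((px - ax) * (by - ay) - (py - ay) * (bx - ax))
    return num / math.hypot(bx - ax, by - ay)

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: keep the farthest point, recurse on halves."""
    if len(points) < 3:
        return list(points)
    dists = [_point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= epsilon:
        return [points[0], points[-1]]      # all inner points close enough
    left = rdp(points[:i + 1], epsilon)
    right = rdp(points[i:], epsilon)
    return left[:-1] + right                # drop the duplicated pivot

series = [(t, math.sin(t / 5.0)) for t in range(100)]
print(len(rdp(series, 0.05)), "of", len(series), "points kept")
```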

  10. Snow cover detection algorithm using dynamic time warping method and reflectances of MODIS solar spectrum channels

    NASA Astrophysics Data System (ADS)

    Lee, Kyeong-sang; Choi, Sungwon; Seo, Minji; Lee, Chang suk; Seong, Noh-hun; Han, Kyung-Soo

    2016-10-01

    Snow cover is the biggest single component of the cryosphere. Snow covers approximately 50% of the ground in the Northern Hemisphere during the winter season and is one of the climate factors that affect Earth's energy budget, because it has higher reflectance than other land types. Snow cover also plays an important role in hydrological modeling and water resource management; accurate detection of snow cover is therefore an essential element of regional water resource management. Snow cover detection using satellite-based data has advantages such as wide spatial coverage and periodic time-series observations. In snow cover detection using satellite data, the discrimination of snow and cloud is very important: misclassified cloud and snow pixels lead directly to errors in satellite-based surface products. However, classification of snow and cloud is difficult because they have similar optical characteristics and both are composed of water or ice. Cloud and snow nevertheless have different reflectances in the 1.5-1.7 μm wavelength range, because cloud has smaller grain size and lower moisture content than snow, so cloud and snow show different reflectance patterns as a function of wavelength. Therefore, in this study, we develop an algorithm for classifying snow cover and cloud in satellite-based data using the Dynamic Time Warping (DTW) method, a pattern-analysis technique commonly used in applications such as speech and fingerprint recognition, together with a reflectance spectral library of snow and cloud. The reflectance spectral library is constructed in advance using MOD21km (MODIS Level 1 swath 1 km) data over six channels: 3 (0.466 μm), 4 (0.554 μm), 1 (0.647 μm), 2 (0.857 μm), 26 (1.382 μm) and 6 (1.629 μm). We validate our result using MODIS RGB images and the MOD10 L2 swath (MODIS swath snow cover product), with PA (Producer's Accuracy), UA (User's Accuracy) and CI (Comparison Index) as validation criteria
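
    A hedged sketch of the classification idea: compute the DTW distance between an observed six-channel reflectance pattern and library patterns for snow and cloud, then assign the nearer class. The library values below are placeholders, not the MODIS spectral library used in the study.

```python
def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

# Placeholder reflectance patterns over the six channels listed above.
snow_ref  = [0.85, 0.83, 0.80, 0.72, 0.10, 0.08]   # dark at 1.38/1.63 um
cloud_ref = [0.75, 0.74, 0.73, 0.70, 0.45, 0.40]   # still bright there
observed  = [0.82, 0.80, 0.78, 0.70, 0.12, 0.09]

label = "snow" if dtw(observed, snow_ref) < dtw(observed, cloud_ref) else "cloud"
print(label)
```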

  11. A combined Event-Driven/Time-Driven molecular dynamics algorithm for the simulation of shock waves in rarefied gases

    SciTech Connect

    Valentini, Paolo; Schwartzentruber, Thomas E.

    2009-12-10

    A novel combined Event-Driven/Time-Driven (ED/TD) algorithm to speed up the Molecular Dynamics simulation of rarefied gases using realistic spherically symmetric soft potentials is presented. Due to the low-density regime, the proposed method correctly identifies the time that must elapse before the next interaction occurs, similarly to Event-Driven Molecular Dynamics. However, each interaction is treated using Time-Driven Molecular Dynamics, thereby integrating Newton's Second Law using the sufficiently small time step needed to correctly resolve the atomic motion. Although infrequent, many-body interactions are also accounted for with a small approximation. The combined ED/TD method is shown to correctly reproduce translational relaxation in argon, described using the Lennard-Jones potential. For densities between ρ = 10⁻⁴ kg/m³ and ρ = 10⁻¹ kg/m³, comparisons with kinetic theory, Direct Simulation Monte Carlo, and pure Time-Driven Molecular Dynamics demonstrate that the ED/TD algorithm correctly reproduces the proper collision rates and the evolution toward thermal equilibrium. Finally, the combined ED/TD algorithm is applied to the simulation of a Mach 9 shock wave in rarefied argon. Density and temperature profiles as well as molecular velocity distributions accurately match DSMC results, and the shock thickness is within the experimental uncertainty. For the problems considered, the ED/TD algorithm ranged from several hundred to several thousand times faster than conventional Time-Driven MD. Moreover, the force calculation to integrate the molecular trajectories is found to contribute a negligible amount to the overall ED/TD simulation time. Therefore, this method could pave the way for the application of much more refined and expensive interatomic potentials, either classical or first-principles, to Molecular Dynamics simulations of shock waves in rarefied gases, involving vibrational nonequilibrium and chemical reactivity.

  12. Time-domain analysis of planar microstrip devices using a generalized Yee-algorithm based on unstructured grids

    NASA Technical Reports Server (NTRS)

    Gedney, Stephen D.; Lansing, Faiza

    1993-01-01

    The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has the significant advantage over the traditional Yee-algorithm that it is based on unstructured and irregular grids; structures that contain curved conductors or complex three-dimensional geometries can therefore be modeled more accurately, and much more conveniently, using standard automatic grid generation techniques. The generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high-performance computers in a highly efficient manner.
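
    For orientation, a minimal 1-D example of the classical structured-grid Yee scheme that the generalized algorithm extends to unstructured dual grids: E and H live on staggered points and are updated in a leapfrog time march (free-space units, additive Gaussian source, fields pinned to zero at the ends, i.e. reflecting boundaries). This is the textbook special case, not the paper's unstructured formulation.

```python
import numpy as np

n_cells, n_steps = 400, 1000
ez = np.zeros(n_cells)        # electric field on integer nodes
hy = np.zeros(n_cells - 1)    # magnetic field on half-integer nodes
courant = 0.5                 # dt normalized by dx (stability requires <= 1)

for step in range(n_steps):
    hy += courant * np.diff(ez)                   # Faraday-law update
    ez[1:-1] += courant * np.diff(hy)             # Ampere-law update
    ez[100] += np.exp(-((step - 60) / 20.0) ** 2) # Gaussian pulse source
print("peak |Ez| after propagation:", np.max(np.abs(ez)))
```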

  13. Preliminary Development and Evaluation of Lightning Jump Algorithms for the Real-Time Detection of Severe Weather

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Previous studies have demonstrated that rapid increases in total lightning activity (intracloud + cloud-to-ground) are often observed tens of minutes in advance of the occurrence of severe weather at the ground. These rapid increases in lightning activity have been termed "lightning jumps." Herein, we document a positive correlation between lightning jumps and the manifestation of severe weather in thunderstorms occurring across the Tennessee Valley and Washington, D.C. A total of 107 thunderstorms were examined in this study, with 69 of the 107 thunderstorms falling into the category of non-severe and 38 into the category of severe. From the dataset of 69 isolated non-severe thunderstorms, an average peak 1-minute flash rate of 10 flashes/min was determined. A variety of severe thunderstorm types were examined for this study, including an MCS, an MCV, tornadic outer rainbands of tropical remnants, supercells, and pulse severe thunderstorms. Of the 107 thunderstorms, 85 (47 non-severe, 38 severe) from the Tennessee Valley and Washington, D.C. were used to test six lightning jump algorithm configurations (Gatlin, Gatlin 45, 2σ, 3σ, Threshold 10, and Threshold 8). Performance metrics for each algorithm were then calculated, yielding encouraging results from the limited sample of 85 thunderstorms. The 2σ lightning jump algorithm had a high probability of detection (POD; 87%), a modest false alarm rate (FAR; 33%), and a solid Heidke Skill Score (HSS; 0.75). A second and more simplistic lightning jump algorithm, the Threshold 8 algorithm, also shows promise, with a POD of 81% and a FAR of 41%. Average lead times to severe weather occurrence for these two algorithms were 23 minutes and 20 minutes, respectively. The overall goal of this study is to advance the development of an operationally applicable jump algorithm that can be used with either total lightning observations made from the ground, or in the near future from space using the
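
    A small sketch of the verification metrics quoted above, computed from a standard 2x2 contingency table (hits a, false alarms b, misses c, correct nulls d); the counts below are illustrative only, not the study's data.

```python
def pod(a, c):
    """Probability of detection: hits over all observed events."""
    return a / (a + c)

def far(a, b):
    """False alarm ratio: false alarms over all warnings issued."""
    return b / (a + b)

def hss(a, b, c, d):
    """Heidke Skill Score: skill relative to random chance."""
    num = 2 * (a * d - b * c)
    den = (a + c) * (c + d) + (a + b) * (b + d)
    return num / den

a, b, c, d = 33, 16, 5, 31     # hypothetical warning/event counts
print(f"POD={pod(a, c):.2f}  FAR={far(a, b):.2f}  HSS={hss(a, b, c, d):.2f}")
```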

  14. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    NASA Astrophysics Data System (ADS)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. This paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for segmentation of the background that is, however, computationally intensive and impossible to run on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performance is also the result of using state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard-cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
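
    The OpenCV GMM referred to above is exposed in the Python API as the MOG2 background subtractor; the following is a minimal software reference against which a hardware implementation could be checked (the input file name is a placeholder).

```python
import cv2

cap = cv2.VideoCapture("traffic_1080p.mp4")           # hypothetical input file
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                          detectShadows=True)
n_frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog2.apply(frame)    # per-pixel GMM update and classification
    n_frames += 1
cap.release()
print("processed", n_frames, "frames")
```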

  15. A real-time photo-realistic rendering algorithm of ocean color based on bio-optical model

    NASA Astrophysics Data System (ADS)

    Ma, Chunyong; Xu, Shu; Wang, Hongsong; Tian, Fenglin; Chen, Ge

    2016-12-01

    A real-time photo-realistic rendering algorithm for ocean color is introduced in this paper, which considers the impact of an ocean bio-optical model. The bio-optical model mainly involves phytoplankton, colored dissolved organic material (CDOM), and inorganic suspended particles, which make different contributions to the absorption and scattering of light. We decompose the emergent light of the ocean surface into light reflected from the sun and the sky and subsurface scattered light. We establish an ocean surface transmission model based on the ocean bidirectional reflectance distribution function (BRDF) and the Fresnel law, and this model's outputs serve as the incident-light parameters for subsurface scattering. Using an ocean subsurface scattering algorithm combined with the bio-optical model, we compute the emergent scattered radiance in different directions. We then blend the reflected sunlight and sky light to implement real-time ocean color rendering on the graphics processing unit (GPU). Finally, we use two kinds of radiance reflectance, calculated by the Hydrolight radiative transfer model and by our algorithm, to validate the physical realism of our method; the results show that our algorithm can achieve real-time, highly realistic ocean color scenes.
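
    A hedged sketch of one ingredient of such a renderer: Schlick's approximation to the Fresnel reflectance, used here to weight how much of the emergent light is surface reflection versus subsurface scattering. This simple scalar blend is a stand-in for the paper's BRDF-based model, and all numeric values are illustrative.

```python
def fresnel_schlick(cos_theta, n1=1.0, n2=1.34):
    """Approximate Fresnel reflectance for an air-seawater interface."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def ocean_radiance(cos_view, sky, sun, subsurface):
    """Blend reflected sky/sun light with transmitted subsurface scattering."""
    f = fresnel_schlick(cos_view)
    return f * (sky + sun) + (1.0 - f) * subsurface

# Grazing views reflect more sky; near-nadir views show more water color.
print(ocean_radiance(cos_view=0.1, sky=0.3, sun=0.2, subsurface=0.05))
print(ocean_radiance(cos_view=0.9, sky=0.3, sun=0.2, subsurface=0.05))
```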

  16. Development of a Real-Time Pulse Processing Algorithm for TES-Based X-Ray Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Tan, Hui; Hennig, Wolfgang; Warburton, William K.; Doriese, W. Bertrand; Kilbourne, Caroline A.

    2011-01-01

    We report here a real-time pulse processing algorithm for superconducting transition-edge sensor (TES) based x-ray microcalorimeters. TES-based microcalorimeters offer ultra-high energy resolution, but the small volume of each pixel requires that large arrays of identical microcalorimeter pixels be built to achieve sufficient detection efficiency. That in turn requires that as much pulse processing as possible be performed at the front end of the readout electronics to avoid transferring large amounts of data to a host computer for post-processing. Therefore, a real-time pulse processing algorithm that not only can be implemented in the readout electronics but also achieves satisfactory energy resolution is desired. We have developed an algorithm that can be easily implemented in hardware. We then tested the algorithm offline using several data sets acquired with an 8×8 Goddard TES x-ray calorimeter array and a 2×16 NIST time-division SQUID multiplexer. We obtained an average energy resolution of close to 3.0 eV at 6 keV for the multiplexed pixels while preserving over 99% of the events in the data sets.

  17. Delay Analysis of Max-Weight Queue Algorithm for Time-Varying Wireless Ad hoc Networks—Control Theoretical Approach

    NASA Astrophysics Data System (ADS)

    Chen, Junting; Lau, Vincent K. N.

    2013-01-01

    Max-weight queue (MWQ) control is a widely used cross-layer control policy that achieves queue stability and reasonable delay performance. In most of the existing literature, it is assumed that the optimal MWQ policy can be obtained instantaneously at every time slot. However, this assumption may be unrealistic in time-varying wireless systems, especially when there is no closed-form MWQ solution and iterative algorithms have to be applied to obtain the optimal solution. This paper investigates the convergence behavior and the queue delay performance of conventional MWQ iterations in which the channel state information (CSI) and queue state information (QSI) change on a timescale similar to that of the algorithm iterations. Our results are established by studying the stochastic stability of an equivalent virtual stochastic dynamic system (VSDS), and an extended Foster-Lyapunov criterion is applied for the stability analysis. We derive a closed-form delay bound for the wireless network in terms of the CSI fading rate and the sensitivity of the MWQ policy to CSI and QSI. Based on the equivalent VSDS, we propose a novel MWQ iterative algorithm with compensation to improve the tracking performance. We demonstrate that, under some mild conditions, the proposed modified MWQ algorithm converges to the optimal MWQ control despite the time-varying CSI and QSI.
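
    A minimal sketch of the max-weight decision itself (not the paper's iterative tracking scheme): in each time slot, serve the link whose queue-length times rate product is largest given the current CSI. The arrival and rate models below are invented for illustration.

```python
import random

def max_weight_schedule(queues, rates):
    """Return the index of the link maximizing q_i * r_i."""
    return max(range(len(queues)), key=lambda i: queues[i] * rates[i])

random.seed(0)
queues = [0] * 4
for slot in range(1000):
    rates = [random.randint(1, 5) for _ in queues]     # time-varying CSI
    served = max_weight_schedule(queues, rates)
    queues[served] = max(0, queues[served] - rates[served])
    for i in range(len(queues)):                       # Bernoulli arrivals
        queues[i] += random.random() < 0.5
print("final queue lengths:", queues)
```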

  18. Multiprocessor execution of functional programs

    SciTech Connect

    Goldberg, B.

    1988-10-01

    Functional languages have recently gained attention as vehicles for programming in a concise and elegant manner. In addition, it has been suggested that functional programming provides a natural methodology for programming multiprocessor computers. This paper describes research performed to demonstrate that multiprocessor execution of functional programs on current multiprocessors is feasible and results in a significant reduction in their execution times. Two implementations of the functional language ALFL were built on commercially available multiprocessors. Alfalfa is an implementation on the Intel iPSC hypercube multiprocessor, and Buckwheat is an implementation on the Encore Multimax shared-memory multiprocessor. Each implementation includes a compiler that performs automatic decomposition of ALFL programs and a run-time system that supports their execution. The compiler is responsible for detecting the inherent parallelism in a program and decomposing the program into a collection of tasks, called serial combinators, that can be executed in parallel. The abstract machine model supported by Alfalfa and Buckwheat is called heterogeneous graph reduction, which is a hybrid of graph reduction and conventional stack-oriented execution. This model supports parallelism, lazy evaluation, and higher-order functions while at the same time making efficient use of the processors in the system. The Alfalfa and Buckwheat runtime systems support dynamic load balancing, interprocessor communication (if required), and storage management. A large number of experiments were performed on Alfalfa and Buckwheat for a variety of programs. The results of these experiments, as well as the conclusions drawn from them, are presented.

  19. Discrete-Time Nonzero-Sum Games for Multiplayer Using Policy-Iteration-Based Adaptive Dynamic Programming Algorithms.

    PubMed

    Zhang, Huaguang; Jiang, He; Luo, Chaomin; Xiao, Geyang

    2016-10-03

    In this paper, we investigate nonzero-sum games for a class of discrete-time (DT) nonlinear systems using a novel policy iteration (PI) adaptive dynamic programming (ADP) method. The main idea of our proposed PI scheme is to utilize the iterative ADP algorithm to obtain iterative control policies that not only ensure that the system achieves stability but also minimize the performance index function for each player. This paper integrates game theory, optimal control theory, and reinforcement learning techniques to formulate and handle DT nonzero-sum games for multiple players. First, we design three actor-critic algorithms, one offline and two online, for the PI scheme. Subsequently, neural networks are employed to implement these algorithms, and the corresponding stability analysis is provided via Lyapunov theory. Finally, a numerical simulation example is presented to demonstrate the effectiveness of the proposed approach.

  20. An iterated greedy algorithm for the single-machine total weighted tardiness problem with sequence-dependent setup times

    NASA Astrophysics Data System (ADS)

    Deng, Guanlong; Gu, Xingsheng

    2014-03-01

    This article presents an enhanced iterated greedy (EIG) algorithm that searches both insert and swap neighbourhoods for the single-machine total weighted tardiness problem with sequence-dependent setup times. Novel elimination rules and speed-ups are proposed for the swap move, reducing its computational expense enough to make employment of the swap neighbourhood worthwhile. Moreover, a perturbation operator is newly designed as a substitute for the existing destruction and construction procedures, to prevent the search from being attracted to local optima. To validate the proposed algorithm, computational experiments are conducted on a benchmark set from the literature. The results show that the EIG outperforms the existing state-of-the-art algorithms for the considered problem.
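
    A hedged skeleton of a basic iterated greedy loop for this problem; the EIG's swap-neighbourhood speed-ups, elimination rules, and acceptance criterion are omitted, and the tiny instance data are invented. Jobs have processing times p, due dates d, weights w, and a setup-time matrix s[i][j] incurred when job j directly follows job i.

```python
import random

def twt(seq, p, d, w, s):
    """Total weighted tardiness with sequence-dependent setup times."""
    t, total, prev = 0, 0, None
    for j in seq:
        t += (s[prev][j] if prev is not None else 0) + p[j]
        total += w[j] * max(0, t - d[j])
        prev = j
    return total

def greedy_insert(partial, job, p, d, w, s):
    """Insert `job` at the position minimizing total weighted tardiness."""
    return min((partial[:k] + [job] + partial[k:]
                for k in range(len(partial) + 1)),
               key=lambda q: twt(q, p, d, w, s))

def iterated_greedy(jobs, p, d, w, s, n_destroy=2, iters=200, seed=1):
    rng = random.Random(seed)
    seq = list(jobs)
    best = seq[:]
    for _ in range(iters):
        removed = rng.sample(seq, n_destroy)            # destruction
        partial = [j for j in seq if j not in removed]
        for j in removed:                               # construction
            partial = greedy_insert(partial, j, p, d, w, s)
        if twt(partial, p, d, w, s) <= twt(best, p, d, w, s):
            best = partial[:]
        seq = partial
    return best

p = [4, 2, 6, 3]; d = [5, 9, 14, 7]; w = [3, 1, 2, 2]
s = [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]]
best = iterated_greedy(range(4), p, d, w, s)
print(best, twt(best, p, d, w, s))
```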

  1. A new method of real-time signal extraction for diffuse reflection laser ranging based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Peng; Zhang, Yan; Qian, Weiping

    2015-10-01

    Diffuse reflection laser ranging is one of the feasible ways to realize high-precision measurement of space debris. However, the weak echo of diffuse reflection results in a poor signal-to-noise ratio, so it is difficult to realize real-time signal extraction for diffuse reflection laser ranging when the echo signal photons are swamped by a large number of noise photons. The Genetic Algorithm, inspired by the process of natural selection, is a heuristic search algorithm known for its adaptive optimization and global search ability. To the best of our knowledge, this paper is the first to propose a method of real-time signal extraction for diffuse reflection laser ranging based on the Genetic Algorithm. Extraction results are regarded as individuals in the population, and short-term linear-fitting degree and data-correlation level are used as selection criteria in the search for an optimal solution. A fine search in the real-time data part quickly yields suitable new data during real-time signal extraction, and a coarse search over both historical and real-time data follows the fine search. The co-evolution of both parts increases the search accuracy on real-time data as well as the precision of the historical data. Simulation experiments show that our method has good signal extraction capability in poor signal-to-noise-ratio circumstances, especially for data with high correlation.
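
    A generic GA skeleton (tournament selection, one-point crossover, bit mutation) of the kind such methods build on; the paper's actual fitness combines short-term linear-fitting degree and data-correlation level, for which the toy one-max fitness below is only a stand-in.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, gens=100,
                      p_cross=0.9, p_mut=0.01, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_cross:                  # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            # Flip each bit independently with probability p_mut.
            nxt += [[b ^ (rng.random() < p_mut) for b in c] for c in (p1, p2)]
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = genetic_algorithm(sum, n_bits=30)     # toy "one-max" fitness
print(sum(best), "of 30 bits set")
```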

  2. The effects of initial conditions and control time on optimal actuator placement via a max-min Genetic Algorithm

    SciTech Connect

    Redmond, J.; Parker, G.

    1993-07-01

    This paper examines the role of the control objective and the control time in determining fuel-optimal actuator placement for structural vibration suppression. A general theory is developed that can be easily extended to include alternative performance metrics such as energy and time-optimal control. The performance metric defines a convex admissible control set which leads to a max-min optimization problem expressing optimal location as a function of initial conditions and control time. A solution procedure based on a nested Genetic Algorithm is presented and applied to an example problem. Results indicate that the optimal locations vary widely as a function of control time and initial conditions.

  3. Building Student Momentum from High School into College. Ready or Not: It's Time to Rethink the 12th Grade. Executive Summary

    ERIC Educational Resources Information Center

    Barnett, Elisabeth

    2016-01-01

    This executive summary describes a paper that is part of a series intended to encourage the nation's secondary and postsecondary systems to take joint responsibility for substantially increasing the number of young people who are prepared for college and career success. In this report, author Elisabeth Barnett of the Community College Research…

  4. Drifting from Slow to "D'oh!": Working Memory Capacity and Mind Wandering Predict Extreme Reaction Times and Executive Control Errors

    ERIC Educational Resources Information Center

    McVay, Jennifer C.; Kane, Michael J.

    2012-01-01

    A combined experimental, individual-differences, and thought-sampling study tested the predictions of executive attention (e.g., Engle & Kane, 2004) and coordinative binding (e.g., Oberauer, Suss, Wilhelm, & Sander, 2007) theories of working memory capacity (WMC). We assessed 288 subjects' WMC and their performance and mind-wandering rates…

  5. The Relation between Executive Functioning, Reaction Time, Naming Speed, and Single Word Reading in Children with Typical Development and Language Impairments

    ERIC Educational Resources Information Center

    Messer, David; Henry, Lucy A.; Nash, Gilly

    2016-01-01

    Background: Few investigations have examined the relationship between a comprehensive range of executive functioning (EF) abilities and reading. Aims: Our investigation identified components of EF that independently predicted single word reading, and determined whether their predictive role remained when additional variables were included in the…

  6. The Future College Executive.

    ERIC Educational Resources Information Center

    Boston Coll., Chestnut Hill, MA.

    This conference report examines various problems facing university administrators and discusses the future role of the executive in American colleges and universities. Conference papers concern the future college executive; efficiency, accountability and the college executive; administrative concerns; and the rights of college administrators. (MJM)

  7. Chief executives. Staying afloat.

    PubMed

    Spurgeon, P; Clark, J; Smith, C

    2001-09-27

    A study of chief executives identified the ability to prioritize, clear vision, resilience, and willingness to take decisions as key factors in success. Some wanted more active involvement from the regional director. Chief executives could be subject to 360-degree assessment. More work is needed to establish the compatibility of chief executives and chairs.

  8. Platform for Real-Time Simulation of Dynamic Systems and Hardware-in-the-Loop for Control Algorithms

    PubMed Central

    de Souza, Isaac D. T.; Silva, Sergio N.; Teles, Rafael M.; Fernandes, Marcelo A. C.

    2014-01-01

    The development of new embedded algorithms for automation and control of industrial equipment usually requires the use of real-time testing. However, the equipment required is often expensive, which means that such tests are often not viable. The objective of this work was therefore to develop an embedded platform for the distributed real-time simulation of dynamic systems. This platform, called the Real-Time Simulator for Dynamic Systems (RTSDS), could be applied in both industrial and academic environments. In industrial applications, the RTSDS could be used to optimize embedded control algorithms. In the academic sphere, it could be used to support research into new embedded solutions for automation and control and could also be used as a tool to assist in undergraduate and postgraduate teaching related to the development of projects concerning on-board control systems. PMID:25320906

  9. [Fractal dimension and histogram method: algorithm and some preliminary results of noise-like time series analysis].

    PubMed

    Pancheliuga, V A; Pancheliuga, M S

    2013-01-01

    In the present work, a methodological background for the histogram method of time series analysis is developed. The connection between the shapes of smoothed histograms constructed from short segments of time series of fluctuations and the fractal dimension of the segments is studied. It is shown that the fractal dimension possesses all the main properties of the histogram method. Based on this, a further development of the fractal-dimension determination algorithm is proposed. This algorithm allows more precise determination of the fractal dimension by using the "all possible combinations" method. Application of the method to noise-like time series yields results that previously could be obtained only by means of the histogram method based on expert human comparison of histogram shapes.

  10. Platform for real-time simulation of dynamic systems and hardware-in-the-loop for control algorithms.

    PubMed

    de Souza, Isaac D T; Silva, Sergio N; Teles, Rafael M; Fernandes, Marcelo A C

    2014-10-15

    The development of new embedded algorithms for automation and control of industrial equipment usually requires the use of real-time testing. However, the equipment required is often expensive, which means that such tests are often not viable. The objective of this work was therefore to develop an embedded platform for the distributed real-time simulation of dynamic systems. This platform, called the Real-Time Simulator for Dynamic Systems (RTSDS), could be applied in both industrial and academic environments. In industrial applications, the RTSDS could be used to optimize embedded control algorithms. In the academic sphere, it could be used to support research into new embedded solutions for automation and control and could also be used as a tool to assist in undergraduate and postgraduate teaching related to the development of projects concerning on-board control systems.

  11. A platform for testing and comparing of real-time decision-support algorithms in mobile environments.

    PubMed

    Khitrov, Maxim Y; Rutishauser, Matthew; Montgomery, Kevin; Reisner, Andrew T; Reifman, Jaques

    2009-01-01

    The unavailability of a flexible system for real-time testing of decision-support algorithms in a pre-hospital clinical setting has limited their use. In this study, we describe a plug-and-play platform for real-time testing of decision-support algorithms during the transport of trauma casualties en route to a hospital. The platform integrates a standard-of-care vital-signs monitor, which collects numeric and waveform physiologic time-series data, with a rugged ultramobile personal computer. The computer time-stamps and stores data received from the monitor, and performs analysis on the collected data in real time. Prior to field deployment, we assessed the performance of each component of the platform by using an emulator to simulate a number of possible fault scenarios that could be encountered in the field. Initial testing with the emulator allowed us to identify and fix software inconsistencies and showed that the platform can support a quick development cycle for real-time decision-support algorithms.

  12. Time-critical multirate scheduling using contemporary real-time operating system services

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.

    1983-01-01

    Although real-time operating systems provide many of the task control services necessary to process time-critical applications (i.e., applications with fixed, invariant deadlines), it may still be necessary to provide a scheduling algorithm at a level above the operating system in order to coordinate a set of synchronized, time-critical tasks executing at different cyclic rates. This paper examines the scheduling requirements for such applications and develops scheduling algorithms using services provided by contemporary real-time operating systems.
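
    A hedged sketch of the kind of higher-level coordination described: a cyclic-executive dispatch table for synchronized tasks running at different cyclic rates, built from the gcd (minor frame) and lcm (major frame, or hyperperiod) of the task periods. The task set is invented for illustration.

```python
from math import gcd
from functools import reduce

tasks = {"guidance": 40, "control": 20, "telemetry": 80}   # periods in ms

def hyperperiod(periods):
    """Least common multiple of all task periods."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

minor = reduce(gcd, tasks.values())          # minor frame = gcd of periods
major = hyperperiod(tasks.values())          # major frame = lcm of periods
for t in range(0, major, minor):
    due = [name for name, period in tasks.items() if t % period == 0]
    print(f"t={t:3d} ms: run {', '.join(due)}")
```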

  13. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    NASA Astrophysics Data System (ADS)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical in the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron resolution surface profiling. Optimizations in the control and sensory system allow for data points to be collected at up to an approximate 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is through a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time

  14. ALGORITHMS AND PROGRAMS FOR STRONG GRAVITATIONAL LENSING IN KERR SPACE-TIME INCLUDING POLARIZATION

    SciTech Connect

    Chen, Bin; Maddumage, Prasad; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie

    2015-05-15

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.

  15. Algorithms and Programs for Strong Gravitational Lensing In Kerr Space-time Including Polarization

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie; Maddumage, Prasad

    2015-05-01

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.

  16. Fast algorithm for automatically computing Strahler stream order

    USGS Publications Warehouse

    Lanfear, Kenneth J.

    1990-01-01

    An efficient algorithm was developed to determine Strahler stream order for segments of stream networks represented in a Geographic Information System (GIS). The algorithm correctly assigns Strahler stream order in topologically complex situations such as braided streams and multiple drainage outlets. Execution time varies nearly linearly with the number of stream segments in the network. This technique is expected to be particularly useful for studying the topology of dense stream networks derived from digital elevation model data.
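
    A compact sketch (not the GIS implementation described above) of Strahler ordering on a stream network stored as a tree, where each segment lists its upstream tributaries; an iterative post-order traversal keeps the running time linear in the number of segments.

```python
def strahler(tree, outlet):
    """tree: dict mapping segment -> list of upstream child segments."""
    order, stack = {}, [(outlet, False)]
    while stack:
        seg, visited = stack.pop()
        children = tree.get(seg, [])
        if not visited and children:
            stack.append((seg, True))
            stack.extend((c, False) for c in children)
        elif not children:
            order[seg] = 1                    # headwater segment
        else:
            top = max(order[c] for c in children)
            ties = sum(order[c] == top for c in children)
            # Order rises only when two equal-order streams meet.
            order[seg] = top + 1 if ties > 1 else top
    return order

# Two first-order streams join (order 2), then meet another first-order
# stream (order stays 2 at the outlet).
net = {"out": ["a", "b"], "a": ["a1", "a2"], "b": []}
print(strahler(net, "out"))
```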

  17. Tabled Execution in Scheme

    SciTech Connect

    Willcock, J J; Lumsdaine, A; Quinlan, D J

    2008-08-19

    Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both to improve the efficiency of programs and to make tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
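
    A hedged Python analogue of the idea (the paper's implementation is in Scheme with continuations): memoize results, but also track the set of currently active calls so that a re-entrant call with the same argument yields a base value instead of looping forever. A full tabling engine would also revisit such tentative answers; this sketch is enough for monotone queries like reachability.

```python
def tabled(base_value):
    def wrap(f):
        results, active = {}, set()
        def g(x):
            if x in results:
                return results[x]
            if x in active:            # re-entrant call: cut the loop
                return base_value
            active.add(x)
            results[x] = f(x)
            active.discard(x)
            return results[x]
        return g
    return wrap

@tabled(base_value=False)
def reaches_goal(node):
    """Reachability in a graph with a cycle: a <-> b, b -> goal."""
    graph = {"a": ["b"], "b": ["a", "goal"], "goal": []}
    return node == "goal" or any(reaches_goal(n) for n in graph[node])

print(reaches_goal("a"))   # True, despite the a <-> b cycle
```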

  18. Ground-based time-guidance algorithm for control of airplanes in a time-metered air traffic control environment: A piloted simulation study

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Imbert, N.

    1986-01-01

    The rapidly increasing costs of flight operations and the requirement for increased fuel conservation have made it necessary to develop more efficient ways to operate airplanes and to control arrival and departure traffic in the terminal area. One concept for controlling arrival traffic through time metering has been jointly studied and evaluated by NASA and ONERA/CERT in piloted simulation tests. From time errors measured at checkpoints, a time-guidance algorithm computed airspeed and heading commands, issued by air traffic control for the pilot to follow, that would cause the airplane to cross a metering fix at a preassigned time. In these tests the simulated airplane crossed the metering fix with a mean time error of 1.0 sec and a standard deviation of 16.7 sec when the time-metering algorithm was used. With mismodeled winds representing the unknowns in wind-aloft forecasts and modeling form, the mean time error at the metering fix increased while the standard deviation remained approximately the same. The subject pilots reported that the airspeed and heading commands computed by the guidance concept were easy to follow and did not increase their workload above normal levels.

  19. SNSMIL, a real-time single molecule identification and localization algorithm for super-resolution fluorescence microscopy

    PubMed Central

    Tang, Yunqing; Dai, Luru; Zhang, Xiaoming; Li, Junbai; Hendriks, Johnny; Fan, Xiaoming; Gruteser, Nadine; Meisenberg, Annika; Baumann, Arnd; Katranidis, Alexandros; Gensch, Thomas

    2015-01-01

    Single-molecule-localization-based super-resolution fluorescence microscopy offers significantly higher spatial resolution than predicted by Abbe's resolution limit for far-field optical microscopy. Such super-resolution images are reconstructed from wide-field or total internal reflection single-molecule fluorescence recordings. Discrimination between the emission of single fluorescent molecules and background noise fluctuations remains a great challenge in current data analysis. Here we present a real-time, robust single-molecule identification and localization algorithm, SNSMIL (Shot Noise based Single Molecule Identification and Localization). This algorithm is based on the intrinsic nature of noise, i.e., its Poisson or shot-noise characteristics, and a new identification criterion, QSNSMIL, is defined. SNSMIL improves the identification accuracy of single fluorescent molecules in experimental or simulated datasets with high and inhomogeneous background. The implementation of SNSMIL relies on a graphics processing unit (GPU), making real-time analysis feasible, as shown for real experimental and simulated datasets. PMID:26098742

  20. A Memetic Algorithm for the Location-Based Continuously Operating Reference Stations Placement Problem in Network Real-Time Kinematic.

    PubMed

    Tang, Maolin

    2015-10-01

    Network real-time kinematic (NRTK) is a technology that can provide centimeter-level-accuracy positioning services in real time, enabled by a network of continuously operating reference stations (CORS). The location-oriented CORS placement problem is an important problem in the design of an NRTK, as it directly affects not only the installation and operational cost of the NRTK but also the quality of the positioning services it provides. This paper presents a memetic algorithm (MA) for the location-oriented CORS placement problem, which hybridizes the powerful explorative search capacity of a genetic algorithm with the efficient and effective exploitative search capacity of a local optimization. Experimental results show that the MA performs better than existing approaches. We also conduct an empirical study of the scalability of the MA, the effectiveness of the hybridization technique, and the selection of the crossover operator in the MA.

  1. A real-time guidance algorithm for aerospace plane optimal ascent to low earth orbit

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    1989-01-01

    Problems of onboard trajectory optimization and synthesis of suitable guidance laws for ascent to low Earth orbit of an air-breathing, single-stage-to-orbit vehicle are addressed. A multimode propulsion system is assumed which incorporates turbojet, ramjet, scramjet, and rocket engines. An algorithm for generating fuel-optimal climb profiles is presented. This algorithm results from the application of the minimum principle to a low-order dynamic model that includes angle-of-attack effects and the normal component of thrust. Maximum dynamic pressure and maximum aerodynamic heating rate constraints are considered. Switching conditions are derived which, under appropriate assumptions, govern optimal transition from one propulsion mode to another. A nonlinear transformation technique is employed to derive a feedback controller for tracking the computed trajectory. Numerical results illustrate the nature of the resulting fuel-optimal climb paths.

  2. A Real-Time Position-Locating Algorithm for CCD-Based Sunspot Tracking

    NASA Technical Reports Server (NTRS)

    Taylor, Jaime R.

    1996-01-01

    NASA Marshall Space Flight Center's (MSFC) EXperimental Vector Magnetograph (EXVM) polarimeter measures the sun's vector magnetic field. These measurements are taken to improve understanding of the sun's magnetic field in the hope of better predicting solar flares. Part of the procedure for the EXVM requires image motion stabilization over a period of a few minutes. A high-speed tracker can be used to reduce image motion produced by wind loading on the EXVM, fluctuations in the atmosphere, and other vibrations. The tracker consists of two elements, an image motion detector and a control system. The image motion detector determines the image movement from one frame to the next and sends an error signal to the control system. The ground-based goal of reducing image motion due to atmospheric fluctuations requires error determination at a rate of at least 100 Hz; a rate of 1 kHz would be desirable to ensure that higher-rate image motion is reduced and to increase control system stability. Two algorithms typically used for tracking are presented and examined for their applicability to tracking sunspots, specifically their accuracy when only one column and one row of CCD pixels are used. Two techniques are used to examine this accuracy. One involves moving a sunspot image a known distance with computer software, then applying the particular algorithm to see how accurately it determines this movement. The second involves using a rate table to control the object motion, then applying the algorithms to see how accurately each determines the actual motion. Results from these two techniques are presented.
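
    A hedged sketch of the row/column tracking idea: estimate the image shift from a single row of CCD pixels by locating the peak of its cross-correlation with the corresponding row of a reference frame. The synthetic sunspot profile below is invented for illustration.

```python
import numpy as np

def row_shift(reference_row, current_row):
    """Integer shift that best aligns current_row to reference_row."""
    ref = reference_row - reference_row.mean()
    cur = current_row - current_row.mean()
    corr = np.correlate(cur, ref, mode="full")
    return np.argmax(corr) - (len(ref) - 1)

x = np.arange(512)
sunspot = -np.exp(-((x - 250) / 12.0) ** 2)   # dark spot on a flat profile
shifted = np.roll(sunspot, 7)                 # simulated image motion
print(row_shift(sunspot, shifted))            # -> 7
```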

  3. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    SciTech Connect

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-12-31

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as 'acceptable' or 'suspect'. Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  4. Generation of a supervised classification algorithm for time-series variable stars with an application to the LINEAR dataset

    NASA Astrophysics Data System (ADS)

    Johnston, K. B.; Oluseyi, H. M.

    2017-04-01

    With the advent of digital astronomy, new benefits and new problems have been presented to the modern day astronomer. While data can be captured in a more efficient and accurate manner using digital means, the efficiency of data retrieval has led to an overload of scientific data for processing and storage. This paper will focus on the construction and application of a supervised pattern classification algorithm for the identification of variable stars. Given the reduction of a survey of stars into a standard feature space, the problem of using prior patterns to identify new observed patterns can be reduced to time-tested classification methodologies and algorithms. Such supervised methods, so called because the user trains the algorithms prior to application using patterns with known classes or labels, provide a means to probabilistically determine the estimated class type of new observations. This paper will demonstrate the construction and application of a supervised classification algorithm on variable star data. The classifier is applied to a set of 192,744 LINEAR data points. Of the original samples, 34,451 unique stars were classified with high confidence (high level of probability of being the true class).
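
    A hedged illustration of the supervised workflow described: given per-star feature vectors with known class labels, train a probabilistic classifier and keep only new stars classified above a confidence cutoff. The synthetic features, the random-forest choice, and the 0.9 cutoff are assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for a labeled training survey: 2 classes, 3 features
# (e.g. period, amplitude, color would be typical variable-star features).
X_train = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(2, 1, (200, 3))])
y_train = np.array([0] * 200 + [1] * 200)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

X_new = rng.normal(1, 1.5, (1000, 3))          # unlabeled "survey" stars
proba = clf.predict_proba(X_new)
confident = proba.max(axis=1) >= 0.9           # high-confidence subset
print(f"{confident.sum()} of {len(X_new)} stars classified with confidence")
```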

  5. A new auroral boundary determination algorithm based on observations from TIMED/GUVI and DMSP/SSUSI

    NASA Astrophysics Data System (ADS)

    Ding, Guang-Xing; He, Fei; Zhang, Xiao-Xin; Chen, Bo

    2017-02-01

    An automatic auroral boundary determination algorithm is proposed in this study based on the partial auroral oval images from the Global Ultraviolet Imager (GUVI) aboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics satellite and the Special Sensor Ultraviolet Spectrographic Imager (SSUSI) aboard the Defense Meteorological Satellite Program (DMSP F16). This algorithm, based on fuzzy local information C-means clustering segmentation, can be used to extract the auroral oval poleward and equatorward boundaries from merged images with filled gaps from both GUVI and SSUSI. Both extracted poleward and equatorward boundary locations are used to fit the global shape of the auroral oval with an off-center quasi-elliptical fitting technique. Comparison of the extracted auroral oval boundaries with those identified from the DMSP SSJ observations demonstrates that this new algorithm can reliably be used to construct the global configuration of auroral ovals under different geomagnetic activities at different local times. The statistical errors of magnetic latitudes of the fitted auroral oval boundaries were generally less than 3° at the 2-sigma level and indicate that the fitted boundaries agree better with the b2e and b5e boundaries than with the b1e and b6 boundaries. This algorithm provides a useful tool for extracting the global shape and position of the auroral oval from partial auroral images.
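
    The boundary-fitting stage can be illustrated with an ordinary least-squares conic fit, of which an off-center ellipse is a special case. This sketch covers only that stage (not the fuzzy C-means segmentation) and runs on synthetic boundary points.

```python
import numpy as np

def fit_quasi_ellipse(x, y):
    """Least-squares fit of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1;
    an off-center ellipse is a special case of this form."""
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coef

# Noisy points on an off-center ellipse standing in for extracted boundaries.
t = np.linspace(0.0, 2.0 * np.pi, 200)
x = 3.0 + 2.0 * np.cos(t) + 0.05 * np.random.randn(t.size)
y = -1.0 + 1.2 * np.sin(t) + 0.05 * np.random.randn(t.size)
print(fit_quasi_ellipse(x, y))            # conic coefficients a, b, c, d, e
```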

  6. Passive Fourier-transform infrared spectroscopy of chemical plumes: an algorithm for quantitative interpretation and real-time background removal

    NASA Astrophysics Data System (ADS)

    Polak, Mark L.; Hall, Jeffrey L.; Herr, Kenneth C.

    1995-08-01

    We present a ratioing algorithm for quantitative analysis of the passive Fourier-transform infrared spectrum of a chemical plume. We show that the transmission of a near-field plume is given by tau_plume = (L_obsd - L_bb-plume) / (L_bkgd - L_bb-plume), where tau_plume is the frequency-dependent transmission of the plume, L_obsd is the spectral radiance of the scene that contains the plume, L_bkgd is the spectral radiance of the same scene without the plume, and L_bb-plume is the spectral radiance of a blackbody at the plume temperature. The algorithm simultaneously achieves background removal, elimination of the spectrometer internal signature, and quantification of the plume spectral transmission. It has applications to both real-time processing for plume visualization and quantitative measurement of plume column densities. The plume temperature, which determines L_bb-plume and is not always precisely known, can have a profound effect on the quantitative interpretation of the algorithm and is discussed in detail. Finally, we provide an illustrative example of the use of the algorithm on a trichloroethylene and acetone plume.
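
    The ratioing formula transcribes directly into code; the sketch below adds a Planck helper for the blackbody term and a synthetic scene to verify that the ratio recovers the plume transmission. Units, temperatures and the toy transmission profile are illustrative assumptions.

```python
import numpy as np

def planck_radiance(nu_cm, T):
    """Blackbody spectral radiance at wavenumber nu_cm (cm^-1) and
    temperature T (K), per unit wavenumber."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    nu = nu_cm * 100.0                              # cm^-1 -> m^-1
    B = 2.0 * h * c**2 * nu**3 / (np.exp(h * c * nu / (k * T)) - 1.0)
    return B * 100.0                                # per m^-1 -> per cm^-1

def plume_transmission(L_obsd, L_bkgd, nu_cm, T_plume):
    L_bb = planck_radiance(nu_cm, T_plume)
    return (L_obsd - L_bb) / (L_bkgd - L_bb)        # the ratioing algorithm

# Synthetic check: build an observed scene from a known transmission profile.
nu = np.linspace(800.0, 1200.0, 5)                  # wavenumbers (cm^-1)
tau_true = np.array([0.7, 0.9, 1.0, 0.9, 0.7])
L_bkgd = planck_radiance(nu, 300.0)                 # scene without the plume
L_bb = planck_radiance(nu, 280.0)                   # plume-temperature blackbody
L_obsd = tau_true * L_bkgd + (1.0 - tau_true) * L_bb
print(plume_transmission(L_obsd, L_bkgd, nu, 280.0))   # recovers tau_true
```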

  7. Real-Time and Off-Line Performance of the Virtual Seismologist Earthquake Early Warning Algorithm in California and Switzerland

    NASA Astrophysics Data System (ADS)

    Cua, G. B.; Fischer, M.; Heaton, T. H.; Wiemer, S.; Giardini, D.

    2008-12-01

    The Virtual Seismologist (VS) method is a regional network-based approach to earthquake early warning that estimates earthquake magnitude and location based on the available envelopes of ground motion amplitudes from the seismic network monitoring a given region, predefined prior information, and appropriate attenuation relationships. Bayes' theorem allows for the introduction of prior information (possibilities include network topology or station health status, regional hazard maps, earthquake forecasts, the Gutenberg-Richter magnitude-frequency relationship) into the source estimation process. Peak ground motion amplitudes (PGA and PGV) are then predicted throughout the region of interest using the estimated magnitude and location and the appropriate attenuation relationships. Implementation of the VS algorithm in California and Switzerland is funded by the Seismic Early Warning for Europe (SAFER) project. The VS algorithm is one of three early warning algorithms whose real-time performance on California datasets is being evaluated as part of the California Integrated Seismic Network (CISN) early warning effort funded by the United States Geological Survey (USGS). Real-time operation of the VS codes at the Southern California Seismic Network (SCSN) began in July 2008, and will be extended to Northern California in the following months. In Switzerland, the VS codes have been run on offline waveform data from over 125 earthquakes recorded by the Swiss Digital Seismic Network (SDSN) and the Swiss Strong Motion Network (SSMN). We discuss the performance of the VS codes on these datasets in terms of available warning time and accuracy of magnitude and location estimates.

  8. Symbolic Execution Enhanced System Testing

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Pasareanu, Corina S.; Raman, Vishwanath

    2012-01-01

    We describe a testing technique that uses information computed by symbolic execution of a program unit to guide the generation of inputs to the system containing the unit, in such a way that the unit's, and hence the system's, coverage is increased. The symbolic execution computes unit constraints at run-time, along program paths obtained by system simulations. We use machine learning techniques (treatment learning and function fitting) to approximate the system input constraints that will lead to the satisfaction of the unit constraints. Execution of system input predictions either uncovers new code regions in the unit under analysis or provides information that can be used to improve the approximation. We have implemented the technique and we have demonstrated its effectiveness on several examples, including one from the aerospace domain.

  9. An efficient algorithm based on splitting for the time integration of the Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Blanes, Sergio; Casas, Fernando; Murua, Ander

    2015-12-01

    We present a practical algorithm based on symplectic splitting methods intended for the numerical integration in time of the Schrödinger equation when the Hamiltonian operator is either time-independent or changes slowly with time. In the latter case, the evolution operator can be effectively approximated in a step-by-step manner: first divide the time integration interval into sufficiently short subintervals, and then successively solve a Schrödinger equation with a different time-independent Hamiltonian operator in each of these subintervals. When discretized in space, the Schrödinger equation with the time-independent Hamiltonian operator obtained for each time subinterval can be recast as a classical linear autonomous Hamiltonian system corresponding to a system of coupled harmonic oscillators. The particular structure of this linear system allows us to construct a set of highly efficient schemes optimized for different precision requirements and time intervals. Sharp local error bounds are obtained for the solution of the linear autonomous Hamiltonian system considered in each time subinterval. Our schemes can be considered, in this setting, as polynomial approximations to the matrix exponential in a similar way as methods based on Chebyshev and Taylor polynomials. The theoretical analysis, supported by numerical experiments performed for different time-independent Hamiltonians, indicates that the new methods are more efficient than schemes based on Chebyshev polynomials for all tolerances and time interval lengths. The algorithm we present automatically selects, for each time subinterval, the most efficient splitting scheme (among several new optimized splitting methods) for a prescribed error tolerance and given estimates of the upper and lower bounds of the eigenvalues of the discretized version of the Hamiltonian operator.
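
    The paper's schemes are optimized symplectic splittings; the sketch below shows only the generic splitting idea, a plain Strang (split-operator) step for a time-independent Hamiltonian on a periodic grid, with hbar = m = 1 assumed and a harmonic potential chosen for illustration.

```python
import numpy as np

# Strang splitting for i*psi_t = (T + V) psi, with T diagonal in Fourier space.
N, L, dt = 256, 20.0, 0.01
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2                                  # harmonic potential (example)
psi = np.exp(-(x - 1.0) ** 2).astype(complex)   # initial wave packet
psi /= np.linalg.norm(psi)

def strang_step(psi):
    psi = np.exp(-0.5j * dt * V) * psi          # half kick (position space)
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))  # drift
    return np.exp(-0.5j * dt * V) * psi         # half kick

for _ in range(1000):
    psi = strang_step(psi)
print(np.linalg.norm(psi))                      # unitary: norm stays ~1
```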

  10. Scalable asynchronous execution of cellular automata

    NASA Astrophysics Data System (ADS)

    Folino, Gianluigi; Giordano, Andrea; Mastroianni, Carlo

    2016-10-01

    The performance and scalability of cellular automata, when executed on parallel/distributed machines, are limited by the necessity of synchronizing all the nodes at each time step, i.e., a node can execute only after the execution of the previous step at all the other nodes. However, these synchronization requirements can be relaxed: a node can execute one step after synchronizing only with the adjacent nodes. In this fashion, different nodes can execute different time steps. This can be notably advantageous in many novel and increasingly popular applications of cellular automata, such as smart city applications, simulation of natural phenomena, etc., in which the execution times can be different and variable, due to the heterogeneity of machines and/or data and/or executed functions. Indeed, a longer execution time at a node does not slow down the execution at all the other nodes but only at the neighboring nodes. This is particularly advantageous when the nodes that act as bottlenecks vary during the application execution. The goal of the paper is to analyze the benefits that can be achieved with the described asynchronous implementation of cellular automata, when compared to the classical all-to-all synchronization pattern. The performance and scalability have been evaluated through a Petri net model, as this model is very useful to represent the synchronization barrier among nodes. We examined the usual case in which the territory is partitioned into a number of regions, and the computation associated with a region is assigned to a computing node. We considered both the cases of mono-dimensional and two-dimensional partitioning. The results show that the advantage obtained through the asynchronous execution, when compared to the all-to-all synchronous approach, is notable, and it can be as large as 90% in terms of speedup.
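
    The timing benefit can be reproduced with a small recurrence model (a simplification, not the paper's Petri net analysis): a region may start step s as soon as it and its adjacent regions have finished step s-1, whereas the synchronous version waits for the slowest region at every step. All step durations below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
R, S = 16, 200                                  # regions, time steps
cost = rng.exponential(1.0, size=(S, R))        # duration of step s at region r

# All-to-all synchronous: every step waits for the slowest region.
t_sync = cost.max(axis=1).sum()

# Neighbour-only synchronization (1-D partitioning): region r starts step s
# once regions r-1, r, r+1 have finished step s-1.
finish = np.zeros(R)
for s in range(S):
    ready = np.empty(R)
    for r in range(R):
        lo, hi = max(r - 1, 0), min(r + 2, R)
        ready[r] = finish[lo:hi].max()
    finish = ready + cost[s]
t_async = finish.max()
print(f"synchronous {t_sync:.1f}, asynchronous {t_async:.1f}")
```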

  11. A Framework and Algorithms for Multivariate Time Series Analytics (MTSA): Learning, Monitoring, and Recommendation

    ERIC Educational Resources Information Center

    Ngan, Chun-Kit

    2013-01-01

    Making decisions over multivariate time series is an important topic which has gained significant interest in the past decade. A time series is a sequence of data points which are measured and ordered over uniform time intervals. A multivariate time series is a set of multiple, related time series in a particular domain in which domain experts…

  12. The Senior Executive Service

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A major innovation of the Civil Service Reform Act of 1978 was the creation of a Senior Executive Service (SES). The purpose of the SES is both simple and bold: to attract executives of the highest quality into Federal service and to retain them by providing outstanding opportunities for career growth and reward. The SES is intended to: provide greater authority in managing executive resources; attract and retain highly competent executives, and assign them where they will effectively accomplish their missions and best use their talents; provide for systematic development of executives; hold executives accountable for individual and organizational performance; reward outstanding performers and remove poor performers; and provide for an executive merit system free of inappropriate personnel practices and arbitrary actions. This Handbook summarizes the key features of the SES at NASA. It is intended as a special welcome to new appointees and also as a general reference document. It contains an overview of SES management at NASA, including the Executive Resources Board and the Performance Review Board, which are mandated by law to carry out key SES functions. In addition, assistance is provided by a Senior Executive Committee in certain reviews and decisions and by Executive Position Managers in day-to-day administration and oversight.

  13. Conceptualization and Operationalization of Executive Function

    ERIC Educational Resources Information Center

    Baggetta, Peter; Alexander, Patricia A.

    2016-01-01

    Executive function comprises different behavioral and cognitive elements and is considered to play a significant role in learning and academic achievement. Educational researchers frequently study the construct. However, because of its functional complexity, the research on executive function can at times be both confusing and…

  14. Compound faults detection of rolling element bearing based on the generalized demodulation algorithm under time-varying rotational speed

    NASA Astrophysics Data System (ADS)

    Zhao, Dezun; Li, Jianyong; Cheng, Weidong; Wen, Weigang

    2016-09-01

    Multi-fault detection of the rolling element bearing under time-varying rotational speed presents a challenging issue due to its complexity, disproportion and interaction. Computed order analysis (COA) is one of the most effective approaches to remove the influences of speed fluctuation and detect all the features of multiple faults. However, many interference components in the envelope order spectrum may lead to false diagnosis results; in addition, the deficiencies of computational accuracy and efficiency cannot be neglected. To address these issues, a novel method for compound fault detection of rolling element bearings based on the generalized demodulation (GD) algorithm is proposed in this paper. The main idea of the proposed method is to exploit the unique property of the generalized demodulation algorithm of transforming an instantaneous frequency trajectory of interest of the compound-fault bearing signal into a line parallel to the time axis, after which the FFT algorithm can be applied directly to the transformed signal. This method does not need the angular resampling algorithm that is the key step of computed order analysis, and is hence free of the associated computational error and inefficiency. On the other hand, it acts only on the instantaneous fault characteristic frequency trends in the envelope signal of the multi-fault bearing, which carry rich fault information, and is hence free from interference by irrelevant components. Both simulated and experimental faulty bearing signal analyses validate that the proposed method is effective and reliable for compound fault detection in rolling element bearings under variable rotational speed conditions. A comprehensive comparison with computed order analysis further shows that the proposed method produces more accurate results in less computation time.
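
    A toy version of the generalized demodulation step, assuming the instantaneous fault-frequency trend f(t) is already known (a linear trend here): multiplying by the conjugate of the integrated excess phase maps the trajectory onto a constant line at f0, so the FFT applies directly, with no angular resampling.

```python
import numpy as np

fs, T = 2000, 4.0
t = np.arange(0.0, T, 1.0 / fs)
f_t = 30.0 + 10.0 * t                           # assumed fault-frequency trend (Hz)
phase = 2.0 * np.pi * np.cumsum(f_t) / fs       # numerical integral of f(t)
x = np.cos(phase) + 0.2 * np.random.randn(t.size)   # simulated envelope component

f0 = 30.0                                       # target constant frequency
x_demod = x * np.exp(-1j * (phase - 2.0 * np.pi * f0 * t))

spec = np.abs(np.fft.fft(x_demod))
freqs = np.fft.fftfreq(t.size, 1.0 / fs)
print(abs(freqs[spec.argmax()]))                # ~f0: line now parallel to time axis
```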

  15. Real-time image-processing algorithm for markerless tumour tracking using X-ray fluoroscopic imaging

    PubMed Central

    2014-01-01

    Objective: To ensure accuracy in respiratory-gating treatment, X-ray fluoroscopic imaging is used to detect tumour position in real time. Detection accuracy is strongly dependent on image quality, particularly on positional differences between the patient and the treatment couch. We developed a new algorithm to improve the quality of images obtained in X-ray fluoroscopic imaging and report the preliminary results. Methods: Two oblique X-ray fluoroscopic images were acquired using a dynamic flat panel detector (DFPD) for two patients with lung cancer. A weighting factor was applied to each column of the DFPD image, because most anatomical structures, as well as the treatment couch and port cover edge, are aligned in the superior–inferior direction when the patient lies on the treatment couch. The weighting factors for the respective columns were varied until the standard deviation of the pixel values within the image region was minimized. Once the weighting factors were calculated, the quality of the DFPD image was improved by applying the factors to multiframe images. Results: Applying the image-processing algorithm produced a substantial improvement in image quality, and the image contrast was increased. The treatment couch and irradiation port edge, which are not related to the patient's position, were removed. The average image-processing time was 1.1 ms, showing that this fast image processing can be applied to real-time tumour-tracking systems. Conclusion: These findings indicate that this image-processing algorithm improves image quality in patients with lung cancer and successfully removes objects not related to the patient. Advances in knowledge: Our image-processing algorithm might be useful in improving gated-treatment accuracy. PMID:24661056
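
    The paper varies the column weights until the standard deviation of pixel values in the region is minimized; the sketch below uses a simpler closed-form equalization of column means, which has the same qualitative effect of flattening column-aligned structures (couch edge, port cover) across synthetic frames.

```python
import numpy as np

def column_weights(frame):
    """One weighting factor per column, scaling every column mean to the
    global mean so column-aligned structures flatten out."""
    col_mean = frame.mean(axis=0)
    return col_mean.mean() / np.maximum(col_mean, 1e-9)

def apply_weights(frames, w):
    return frames * w[np.newaxis, np.newaxis, :]    # reuse on multiframe images

rng = np.random.default_rng(0)
frames = rng.uniform(100.0, 200.0, size=(30, 64, 64))   # synthetic DFPD frames
frames[:, :, 20:24] *= 0.3                              # dark "couch edge" columns
w = column_weights(frames[0])                           # computed once ...
cleaned = apply_weights(frames, w)                      # ... applied to all frames
print(cleaned[0].mean(axis=0)[18:26].round(1))          # edge columns now match
```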

  16. Comparison of Fault Detection Algorithms for Real-time Diagnosis in Large-Scale System. Appendix E

    NASA Technical Reports Server (NTRS)

    Kirubarajan, Thiagalingam; Malepati, Venkat; Deb, Somnath; Ying, Jie

    2001-01-01

    In this paper, we present a review of different real-time capable algorithms to detect and isolate component failures in large-scale systems in the presence of inaccurate test results. A sequence of imperfect test results (as a row vector of 1's and 0's) is available to the algorithms. In this case, the problem is to recover the uncorrupted test result vector and match it to one of the rows in the test dictionary, which in turn will isolate the faults. In order to recover the uncorrupted test result vector, one needs the accuracy of each test, that is, its detection and false alarm probabilities. In this problem, their true values are not known and, therefore, have to be estimated online. Other major aspects of this problem are its large-scale nature and the real-time capability requirement. Test dictionaries of sizes up to 1000 x 1000 are to be handled; that is, results from 1000 tests measuring the state of 1000 components are available. However, at any time, only 10-20% of the test results are available. The objective then becomes real-time fault diagnosis using incomplete and inaccurate test results with online estimation of test accuracies. It should also be noted that the test accuracies can vary with time; one needs a mechanism to update them after processing each test result vector. Using Qualtech's TEAMS-RT (system simulation and real-time diagnosis tool), we test the performance of 1) TEAMS-RT's built-in diagnosis algorithm, 2) Hamming distance based diagnosis, 3) Maximum Likelihood based diagnosis, and 4) Hidden Markov Model based diagnosis.
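
    The Hamming distance variant (method 2) is easy to sketch: match the observed, partially available and imperfect test result vector to the nearest row of the test dictionary. The dictionary contents, 15% observation rate and 5% flip rate below are synthetic stand-ins for the paper's setting.

```python
import numpy as np

def diagnose(dictionary, results, observed):
    """Return the dictionary row (fault) with minimum Hamming distance to the
    observed test results, comparing only on the tests that reported."""
    diffs = dictionary[:, observed] != results[observed]
    distances = diffs.sum(axis=1)
    return int(np.argmin(distances)), int(distances.min())

rng = np.random.default_rng(0)
D = rng.integers(0, 2, size=(1000, 1000))       # test dictionary (faults x tests)
true_fault = 42
observed = rng.random(1000) < 0.15              # only ~15% of tests report
r = D[true_fault].copy()
flip = rng.random(1000) < 0.05                  # imperfect tests: 5% corrupted
r[flip] ^= 1
print(diagnose(D, r, observed))                 # typically (42, small distance)
```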

  17. A Time Integration Algorithm Based on the State Transition Matrix for Structures with Time Varying and Nonlinear Properties

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2003-01-01

    A variable order method of integrating the structural dynamics equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time-variant and nonlinear systems of equations. When the time variation of the system can be modeled exactly by a polynomial, it produces nearly exact solutions for a wide range of time step sizes. Solutions of a model nonlinear dynamic response exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with solutions obtained by established methods.
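
    The core step of a state-transition-matrix integrator is advancing the state with a matrix exponential, which is exact for constant coefficients at any step size; the sketch below shows only that step on an undamped oscillator, not the paper's variable-order polynomial handling of time-varying and nonlinear terms.

```python
import numpy as np
from scipy.linalg import expm

def stm_step(A, x, dt):
    """Advance x' = A x by one step using the state transition matrix."""
    return expm(A * dt) @ x

w, dt = 2.0, 0.05                      # oscillator frequency, step size
A = np.array([[0.0, 1.0],              # x'' + w^2 x = 0 in first-order form
              [-w * w, 0.0]])
x = np.array([1.0, 0.0])               # initial displacement and velocity
for _ in range(200):
    x = stm_step(A, x, dt)             # exact for constant A, any dt
print(x)
```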

  18. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals

    PubMed Central

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.

    2016-01-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors’ previous Letter three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp–p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV and 11.6 μV (mean), 7.8 μV and 8.9 μV (median), and 9.8 μV and 9.3 μV (standard deviation) per heartbeat. PMID:27382478
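
    A minimal sketch of the interpolation idea: linear interpolation between the non-uniformly spaced isoelectric points, subtracted from the signal. The Letter's algorithm goes further by segmenting each interval with different piecewise linear equations; the signal, drift and isoelectric instants here are synthetic.

```python
import numpy as np

def baseline_estimate(t, iso_t, iso_v):
    """Piecewise-linear baseline through non-uniform isoelectric points."""
    return np.interp(t, iso_t, iso_v)

fs = 360.0
t = np.arange(0.0, 10.0, 1.0 / fs)
ecg = np.sin(2.0 * np.pi * 1.2 * t)                 # stand-in for ECG content
drift = 0.5 * np.sin(2.0 * np.pi * 0.1 * t)         # baseline wander
iso_t = np.arange(0.1, 10.0, 0.33)                  # ~3 isoelectric points/beat
iso_v = np.interp(iso_t, t, drift)                  # sampled baseline values
corrected = (ecg + drift) - baseline_estimate(t, iso_t, iso_v)
print(float(np.abs(corrected - ecg).max()))         # residual wander, ~1e-3
```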

  19. An Algorithm for Real-Time Optimal Photocurrent Estimation Including Transient Detection for Resource-Constrained Imaging Applications

    NASA Astrophysics Data System (ADS)

    Zemcov, Michael; Crill, Brendan; Ryan, Matthew; Staniszewski, Zak

    2016-06-01

    Mega-pixel charge-integrating detectors are common in near-IR imaging applications. Optimal signal-to-noise ratio estimates of the photocurrents, which are particularly important in the low-signal regime, are produced by fitting linear models to sequential reads of the charge on the detector. Algorithms that solve this problem have a long history, but can be computationally intensive. Furthermore, the cosmic ray background is appreciable for these detectors in Earth orbit, particularly above the Earth’s magnetic poles and the South Atlantic Anomaly, and on-board reduction routines must be capable of flagging affected pixels. In this paper, we present an algorithm that generates optimal photocurrent estimates and flags random transient charge generation from cosmic rays, and is specifically designed to fit on a computationally restricted platform. We take as a case study the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx), a NASA Small Explorer astrophysics experiment concept, and show that the algorithm can easily fit in the resource-constrained environment of such a restricted platform. Detailed simulations of the input astrophysical signals and detector array performance are used to characterize the fitting routines in the presence of complex noise properties and charge transients. We use data from both the Hubble Space Telescope Wide Field Camera-3 and the Wide-field Infrared Survey Explorer to develop an empirical understanding of the susceptibility of near-IR detectors in low Earth orbit and to build a model for realistic cosmic ray energy spectra and rates. We show that our algorithm generates an unbiased estimate of the true photocurrent that is identical to that from a standard line fitting package, and characterize the rate, energy, and timing of both detected and undetected transient events. This algorithm has significant potential for imaging with charge-integrating detectors in astrophysics, earth science, and remote
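
    A toy up-the-ramp estimator in the same spirit: a least-squares slope over the sequential reads gives the photocurrent, and an outlier jump between consecutive reads flags a transient. The MAD threshold and the "drop reads after the first hit" policy are simplifying assumptions, not the SPHEREx flight algorithm.

```python
import numpy as np

def fit_ramp(reads, dt=1.0, k=5.0):
    """Photocurrent estimate (slope) with cosmic-ray transient flagging."""
    diffs = np.diff(reads)
    med = np.median(diffs)
    mad = np.median(np.abs(diffs - med)) + 1e-12
    hit = np.abs(diffs - med) > k * 1.4826 * mad    # jump between two reads
    good = np.ones(reads.size, dtype=bool)
    if hit.any():
        good[np.argmax(hit) + 1:] = False           # keep reads before the hit
    t = np.arange(reads.size) * dt
    slope = np.polyfit(t[good], reads[good], 1)[0]
    return slope, bool(hit.any())

reads = 3.0 * np.arange(20) + np.random.randn(20)   # 3 e-/s ramp + read noise
reads[12:] += 50.0                                   # injected cosmic-ray jump
print(fit_ramp(reads))                               # (~3.0, True)
```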

  1. Optimized Schwarz algorithms for solving time-harmonic Maxwell's equations discretized by a hybridizable discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    He, Yu-Xuan; Li, Liang; Lanteri, Stéphane; Huang, Ting-Zhu

    2016-03-01

    This work is concerned with the development of numerical methods for the simulation of time-harmonic electromagnetic wave propagation problems. A hybridizable discontinuous Galerkin (HDG) method is adopted for the discretization of the two-dimensional time-harmonic Maxwell's equations on a triangular mesh. A distinguishing feature of the present work is that this discretization method is employed at the subdomain level in the framework of a Schwarz-type domain decomposition method (DDM). We show that the HDG method naturally couples with a Schwarz method relying on optimized transmission conditions. The presented numerical results show the effectiveness of the optimized DDM-HDG method.

  2. Bayesian Algorithm Implementation in a Real Time Exposure Assessment Model on Benzene with Calculation of Associated Cancer Risks

    PubMed Central

    Sarigiannis, Dimosthenis A.; Karakitsios, Spyros P.; Gotti, Alberto; Papaloukas, Costas L.; Kassomenos, Pavlos A.; Pilidis, Georgios A.

    2009-01-01

    The objective of the current study was the development of a reliable modeling platform to calculate in real time the personal exposure and the associated health risk for filling station employees, evaluating current environmental parameters (traffic, meteorological conditions and the amount of fuel traded) determined by an appropriate sensor network. A set of Artificial Neural Networks (ANNs) was developed to predict the benzene exposure pattern for the filling station employees. Furthermore, a Physiology Based Pharmaco-Kinetic (PBPK) risk assessment model was developed in order to calculate the lifetime probability distribution of leukemia for the employees, fed by data obtained from the ANN model. A Bayesian algorithm was employed at crucial points in both model subcompartments. The application was evaluated in two filling stations (one urban and one rural). Among several algorithms available for the development of the ANN exposure model, Bayesian regularization provided the best results and appears to be a promising technique for predicting the exposure pattern of this occupational population group. In assessing the estimated leukemia risk, with the aim of providing a distribution curve based on the exposure levels and the different susceptibility of the population, the Bayesian algorithm was a prerequisite of the Monte Carlo approach, which is integrated into the PBPK-based risk model. In conclusion, the modeling system described herein is capable of exploiting the information collected by the environmental sensors in order to estimate in real time the personal exposure and the resulting health risk for employees of gasoline filling stations. PMID:22399936

  3. On Gamma Ray Instrument On-Board Data Processing Real-Time Computational Algorithm for Cosmic Ray Rejection

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Hunter, Stanley D.; Hanu, Andrei R.; Sheets, Teresa B.

    2016-01-01

    Richard O. Duda and Peter E. Hart of Stanford Research Institute described in [1] the recurring problem in computer image processing of detecting straight lines in digitized images. The problem is to detect the presence of groups of collinear or almost collinear figure points. It is clear that the problem can be solved to any desired degree of accuracy by testing the lines formed by all pairs of points. However, the computation required for an image of n = N x M points is approximately proportional to n^2, i.e., O(n^2), becoming prohibitive for large images or when the data processing cadence is measured in milliseconds. Rosenfeld in [2] described an ingenious method due to Hough [3] for replacing the original problem of finding collinear points by the mathematically equivalent problem of finding concurrent lines. This method involves transforming each of the figure points into a straight line in a parameter space. Hough chose to use the familiar slope-intercept parameters, and thus his parameter space was the two-dimensional slope-intercept plane. A parallel Hough transform running on multi-core processors was elaborated in [4]. There are many other proposed methods for solving similar problems, such as the sampling-up-the-ramp algorithm (SUTR) [5] and algorithms involving artificial swarm intelligence techniques [6]. However, all state-of-the-art algorithms lack real-time performance: they are too slow for large images that require a processing cadence of a few dozen milliseconds (50 ms). This problem arises in spaceflight applications such as near real-time analysis of gamma ray measurements contaminated by an overwhelming number of cosmic ray (CR) traces. Future spaceflight instruments such as the Advanced Energetic Pair Telescope (AdEPT) [7-9] for cosmic gamma ray surveys employ large detector readout planes that register multitudes of cosmic ray interference events alongside sparse science gamma ray event trace projections. The AdEPT science of interest is in the
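
    The Hough idea fits in a few lines, here with the rho-theta (normal form) parameterization rather than Hough's original slope-intercept plane: each figure point votes along a sinusoid in parameter space, and collinear points pile up in a single accumulator bin. Grid sizes and the synthetic track are arbitrary choices for the example.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=256, rho_max=400.0):
    """Accumulate votes for lines rho = x*cos(theta) + y*sin(theta)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1           # one vote per theta bin
    return acc, thetas

# 50 collinear points (a straight "track"): the accumulator peak finds it.
xs = np.arange(50, dtype=float)
pts = np.c_[xs, 2.0 * xs + 10.0]                    # points on y = 2x + 10
acc, thetas = hough_lines(pts)
i, j = np.unravel_index(acc.argmax(), acc.shape)
print(acc.max(), np.degrees(thetas[j]))             # ~50 votes at the track angle
```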

  4. A Search and Exploration of Multi-Exoplanet Systems Via Transit Timing Variation (TTV) Algorithms for the K2 Mission

    NASA Astrophysics Data System (ADS)

    Dholakia, Shishir; Dholakia, Shashank; Cody, Ann Marie

    2017-01-01

    We use the K2 mission to search for and analyze multi-planet systems with the goal of performing a scalable search for multi-planet systems using the transit timing variation (TTV) method. We developed an algorithm in Python to perform a search for synodic TTVs from multi-planet systems. The algorithm analyzes images taken by the K2 mission, creates light curves, and searches for TTVs on the order of a few minutes for every star in the images. We detected 4 potential TTV signals of which 3 are possible new discoveries. One of the systems has known multiple transiting planets and exhibits TTVs consistent with theoretical and previously published TTVs from n-body simulations. Another exoplanet system exhibits possible TTVs consistent with at least two giant planets. Our results demonstrate that a search for TTVs with the K2 mission is possible, though difficult.

  5. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

    This article deals with the application of Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to the method using the data covariance matrix when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters intervening in the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR), the STFD method gives better performance, while for high SNR and the same SPR both methods give similar performance.

  6. Multi-objective teaching-learning-based optimization algorithm for reducing carbon emissions and operation time in turning operations

    NASA Astrophysics Data System (ADS)

    Lin, Wenwen; Yu, D. Y.; Wang, S.; Zhang, Chaoyong; Zhang, Sanqiang; Tian, Huiyu; Luo, Min; Liu, Shengqiang

    2015-07-01

    In addition to energy consumption, the use of cutting fluids, deposition of worn tools and certain other manufacturing activities can have environmental impacts. All these activities cause carbon emission directly or indirectly; therefore, carbon emission can be used as an environmental criterion for machining systems. In this article, a direct method is proposed to quantify the carbon emissions in turning operations. To determine the coefficients in the quantitative method, real experimental data were obtained and analysed in MATLAB. Moreover, a multi-objective teaching-learning-based optimization algorithm is proposed, and two objectives to minimize carbon emissions and operation time are considered simultaneously. Cutting parameters were optimized by the proposed algorithm. Finally, the analytic hierarchy process was used to determine the optimal solution, which was found to be more environmentally friendly than the cutting parameters determined by the design of experiments method.

  7. Multi-heuristic dynamic task allocation using genetic algorithms in a heterogeneous distributed system.

    PubMed

    Page, Andrew J; Keane, Thomas M; Naughton, Thomas J

    2010-07-01

    We present a multi-heuristic evolutionary task allocation algorithm to dynamically map tasks to processors in a heterogeneous distributed system. It utilizes a genetic algorithm, combined with eight common heuristics, in an effort to minimize the total execution time. It operates on batches of unmapped tasks and can preemptively remap tasks to processors. The algorithm has been implemented on a Java distributed system and evaluated with a set of six problems from the areas of bioinformatics, biomedical engineering, computer science and cryptography. Experiments using up to 150 heterogeneous processors show that the algorithm achieves better efficiency than other state-of-the-art heuristic algorithms.
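
    A stripped-down version of the approach: a genetic algorithm evolving task-to-processor mappings to minimize the makespan on heterogeneous processors. Population size, operators and the random cost matrix are illustrative; the published algorithm additionally combines the GA with eight scheduling heuristics.

```python
import numpy as np

rng = np.random.default_rng(0)
T, P, POP, GEN = 40, 6, 60, 200
cost = rng.uniform(1.0, 10.0, size=(T, P))      # task x processor run times

def makespan(m):
    """Total execution time of mapping m (m[i] = processor of task i)."""
    w = cost[np.arange(T), m]
    return np.bincount(m, weights=w, minlength=P).max()

pop = rng.integers(0, P, size=(POP, T))
for _ in range(GEN):
    fit = np.array([makespan(m) for m in pop])
    pop = pop[np.argsort(fit)]                  # elitist: best mappings first
    for i in range(POP // 2, POP):              # breed over the worst half
        a = pop[rng.integers(POP // 4)]
        b = pop[rng.integers(POP // 4)]
        cut = rng.integers(1, T)
        child = np.concatenate([a[:cut], b[cut:]])      # one-point crossover
        mut = rng.random(T) < 0.05                      # mutation
        child[mut] = rng.integers(0, P, size=int(mut.sum()))
        pop[i] = child
print(min(makespan(m) for m in pop))            # best makespan found
```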

  8. Online Planning Algorithms for POMDPs

    PubMed Central

    Ross, Stéphane; Pineau, Joelle; Paquet, Sébastien; Chaib-draa, Brahim

    2009-01-01

    Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems due to their complexity. Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently. PMID:19777080

  9. Real-time MRI-guided hyperthermia treatment using a fast adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Stakhursky, Vadim L.; Arabe, Omar; Cheng, Kung-Shan; MacFall, James; Maccarini, Paolo; Craciunescu, Oana; Dewhirst, Mark; Stauffer, Paul; Das, Shiva K.

    2009-04-01

    Magnetic resonance (MR) imaging is promising for monitoring and guiding hyperthermia treatments. The goal of this work is to investigate the stability of an algorithm for online MR thermal image guided steering and focusing of heat into the target volume. The control platform comprised a four-antenna mini-annular phased array (MAPA) applicator operating at 140 MHz (used for extremity sarcoma heating) and a GE Signa Excite 1.5 T MR system, both of which were driven by a control workstation. MR proton resonance frequency shift images acquired during heating were used to iteratively update a model of the heated object, starting with an initial finite element computed model estimate. At each iterative step, the current model was used to compute a focusing vector, which was then used to drive the next iteration, until convergence. Perturbation of the driving vector was used to prevent the process from stalling away from the desired focus. Experimental validation of the performance of the automatic treatment platform was conducted with two cylindrical phantom studies, one homogeneous and one muscle equivalent with tumor tissue (conductivity 50% higher) inserted, with initial focal spots being intentionally rotated 90° and 50° away from the desired focus, mimicking initial setup errors in applicator rotation. The integrated MR-HT treatment platform steered the focus of heating into the desired target volume in two quite different phantom tissue loads which model expected patient treatment configurations. For the homogeneous phantom test where the target was intentionally offset by 90° rotation of the applicator, convergence to the proper phase focus in the target occurred after 16 iterations of the algorithm. For the more realistic test with a muscle equivalent phantom with tumor inserted with 50° applicator displacement, only two iterations were necessary to steer the focus into the tumor target. Convergence improved the heating efficacy (the ratio of integral

  10. Expected Utility Distributions for Flexible, Contingent Execution

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Washington, Richard

    2000-01-01

    This paper presents a method for using expected utility distributions in the execution of flexible, contingent plans. A utility distribution maps the possible start times of an action to the expected utility of the plan suffix starting with that action. The contingent plan encodes a tree of possible courses of action and includes flexible temporal constraints and resource constraints. When execution reaches a branch point, the eligible option with the highest expected utility at that point in time is selected. The utility distributions make this selection sensitive to the runtime context, yet still efficient. Our approach uses predictions of action duration uncertainty as well as expectations of resource usage and availability to determine when an action can execute and with what probability. Execution windows and probabilities inevitably change as execution proceeds, but such changes do not invalidate the cached utility distributions, thus, dynamic updating of utility information is minimized.
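
    At a branch point the rule reduces to a lookup: evaluate each eligible option's utility distribution at the current time and take the argmax. The option names and utility tables below are hypothetical, purely to show the mechanics.

```python
import numpy as np

# Hypothetical utility distributions: possible start time (s) -> expected
# utility of the plan suffix beginning with that action.
options = {
    "drive_to_rock": {0: 8.0, 60: 7.0, 120: 2.0},
    "image_horizon": {0: 5.0, 60: 5.0, 120: 5.0},
}

def pick_option(options, now):
    def utility(table):
        times = sorted(table)
        return np.interp(now, times, [table[t] for t in times])
    return max(options, key=lambda name: utility(options[name]))

print(pick_option(options, now=0))     # drive_to_rock: utility 8.0 beats 5.0
print(pick_option(options, now=90))    # image_horizon: driving decayed to 4.5
```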

  11. Signal detection on spontaneous reports of adverse events following immunisation: a comparison of the performance of a disproportionality-based algorithm and a time-to-onset-based algorithm

    PubMed Central

    van Holle, Lionel; Bauchau, Vincent

    2014-01-01

    Purpose Disproportionality methods measure how unexpected the observed number of adverse events is. Time-to-onset (TTO) methods measure how unexpected the TTO distribution of a vaccine-event pair is compared with what is expected from other vaccines and events. Our purpose is to compare the performance associated with each method. Methods For the disproportionality algorithms, we defined 336 combinations of stratification factors (sex, age, region and year) and threshold values of the multi-item gamma Poisson shrinker (MGPS). For the TTO algorithms, we defined 18 combinations of significance level and time windows. We used spontaneous reports of adverse events recorded for eight vaccines. The vaccine product labels were used as proxies for true safety signals. Algorithms were ranked according to their positive predictive value (PPV) for each vaccine separately; a median rank was attributed to each algorithm across vaccines. Results The algorithm with the highest median rank was based on TTO with a significance level of 0.01 and a time window of 60 days after immunisation. It had an overall PPV 2.5 times higher than that of the highest-ranked MGPS algorithm (16th rank overall), which was fully stratified and had a threshold value of 0.8. A TTO algorithm with roughly the same sensitivity as the highest-ranked MGPS had better specificity but a longer time-to-detection. Conclusions Within the scope of this study, the majority of the TTO algorithms presented a higher PPV than any MGPS algorithm. Considering the complementarity of TTO and disproportionality methods, a signal detection strategy combining them merits further investigation. PMID:24038719
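
    The abstract does not spell out the TTO test statistic, so the sketch below is an assumption of one plausible reading: a two-sample Kolmogorov-Smirnov comparison of a vaccine-event pair's TTO distribution against the pooled background of other pairs, at the best-ranked configuration's 0.01 level and 60-day window.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
background_tto = rng.exponential(30.0, size=5000)   # days; all other pairs
pair_tto = rng.exponential(5.0, size=40)            # suspiciously early onsets

window = 60.0                                       # days post-immunisation
bg = background_tto[background_tto <= window]
pr = pair_tto[pair_tto <= window]
stat, p = stats.ks_2samp(pr, bg)
print(p < 0.01)                                     # True -> flag as a signal
```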

  12. Case study of isosurface extraction algorithm performance

    SciTech Connect

    Sutton, P M; Hansen, C D; Shen, H; Schikore, D

    1999-12-14

    Isosurface extraction is an important and useful visualization method. Over the past ten years, the field has seen numerous isosurface techniques published leaving the user in a quandary about which one should be used. Some papers have published complexity analysis of the techniques yet empirical evidence comparing different methods is lacking. This case study presents a comparative study of several representative isosurface extraction algorithms. It reports and analyzes empirical measurements of execution times and memory behavior for each algorithm. The results show that asymptotically optimal techniques may not be the best choice when implemented on modern computer architectures.

  13. Optimal seismic design of reinforced concrete structures under time-history earthquake loads using an intelligent hybrid algorithm

    NASA Astrophysics Data System (ADS)

    Gharehbaghi, Sadjad; Khatibinia, Mohsen

    2015-03-01

    A reliable seismic-resistant design of structures is achieved in accordance with the seismic design codes by designing structures under seven or more pairs of earthquake records. Based on the recommendations of seismic design codes, the average time-history responses (ATHR) of structure is required. This paper focuses on the optimal seismic design of reinforced concrete (RC) structures against ten earthquake records using a hybrid of particle swarm optimization algorithm and an intelligent regression model (IRM). In order to reduce the computational time of optimization procedure due to the computational efforts of time-history analyses, IRM is proposed to accurately predict ATHR of structures. The proposed IRM consists of the combination of the subtractive algorithm (SA), K-means clustering approach and wavelet weighted least squares support vector machine (WWLS-SVM). To predict ATHR of structures, first, the input-output samples of structures are classified by SA and K-means clustering approach. Then, WWLS-SVM is trained with few samples and high accuracy for each cluster. 9- and 18-storey RC frames are designed optimally to illustrate the effectiveness and practicality of the proposed IRM. The numerical results demonstrate the efficiency and computational advantages of IRM for optimal design of structures subjected to time-history earthquake loads.

  14. Two executives, one career.

    PubMed

    Cunningham, Cynthia R; Murray, Shelley S

    2005-02-01

    For six years, Cynthia Cunningham and Shelley Murray shared an executive job at Fleet Bank. One desk, one chair, one computer, one telephone, and one voice-mail account. To their clients and colleagues, they were effectively one person, though one person with the strengths and ideas of two, seamlessly handing projects back and forth. Although their department was dissolved after the bank merged with Bank of America, the two continue to consider themselves a package: they have one resume, and they are seeking their next opportunity together. Their choice to share a job was not only a quality-of-life decision but one intended to keep their careers on course: "Taking two separate part-time jobs would have thrown us completely off track," they write in this first-person account. "We're both ambitious people, and neither of us wanted just a job. We wanted careers." In this article, the two highly motivated women reveal their determination to manage the demands of both family and career. Flextime, telecommuting, and compressed workweeks are just some of the options open to executives seeking greater work/life balance, and the job share, as described by Cunningham and Murray, could well be the next solution for those wishing to avoid major trade-offs between their personal and professional lives. Cunningham and Murray describe in vivid detail how they structured their unusual arrangement, how they sold themselves to management, and the hurdles they faced along the way. Theirs is a win-win story, for the company and for them.

  15. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data

  16. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze