Science.gov

Sample records for algorithm execution time

  1. Execution time supports for adaptive scientific algorithms on distributed memory machines

    NASA Technical Reports Server (NTRS)

    Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey

    1990-01-01

    Optimizations are considered that are required for efficient execution of code segments that consist of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives an appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communication patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.
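
    To make the inspector/executor pattern concrete, a minimal single-process sketch follows; the function names are hypothetical, not the actual PARTI API, and the two "processors" are simulated as halves of one array:

    ```python
    import numpy as np

    # Sketch of the inspector/executor pattern behind PARTI-style
    # execution-time primitives (hypothetical names, not the real API).

    def inspector(global_indices, owner_of, my_rank):
        """Derive a communication schedule at runtime: which off-processor
        elements this loop will touch, grouped by owning processor."""
        schedule = {}
        for g in global_indices:
            owner = owner_of(g)
            if owner != my_rank:
                schedule.setdefault(owner, []).append(g)
        return schedule

    def executor_gather(schedule, fetch_from):
        """Executor phase: perform the gathers the schedule calls for."""
        ghost = {}
        for owner, idxs in schedule.items():
            # one message per owning processor, not one per element
            ghost.update(zip(idxs, fetch_from(owner, idxs)))
        return ghost

    # Toy usage: 2 "processors", block distribution of a 10-element array.
    data = np.arange(10.0)
    owner_of = lambda g: 0 if g < 5 else 1
    fetch = lambda owner, idxs: data[list(idxs)]       # stands in for send/recv
    sched = inspector([1, 7, 9], owner_of, my_rank=0)  # touches 7, 9 remotely
    print(executor_gather(sched, fetch))               # ghost values for 7 and 9
    ```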

  2. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  3. Computer architecture for efficient algorithmic executions in real-time systems: new technology for avionics systems and advanced space vehicles

    SciTech Connect

    Carroll, C.C.; Youngblood, J.N.; Saha, A.

    1987-12-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  4. The relation of scalability and execution time

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1995-01-01

    Scalability has been used extensively as a de facto performance criterion for evaluating parallel algorithms and architectures. However, for many, scalability has theoretical interest only since it does not reveal execution time. In this paper, the relation between scalability and execution time is carefully studied. Results show that the isospeed scalability well characterizes the variation of execution time: smaller scalability leads to larger execution time, the same scalability leads to the same execution time, etc. Three algorithms from scientific computing are implemented on an Intel Paragon and an IBM SP2 parallel computer. Experimental and theoretical results show that scalability is an important, distinct metric for parallel and distributed systems, and may be as important as execution time in a scalable parallel and distributed environment.
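
    For reference, a common formulation of isospeed scalability fixes the average unit speed W/(pT) and asks how much extra work W' the larger machine needs to sustain it; a small sketch with illustrative numbers:

    ```python
    def isospeed_scalability(p, W, T, p2, W2, T2):
        """Isospeed scalability: psi(p, p') = (p' * W) / (p * W'), where W' is
        the work needed on p' processors to keep the average unit speed
        W/(p*T) unchanged. psi == 1 is ideal; smaller psi means more extra
        work per processor is needed as the system scales."""
        speed1 = W / (p * T)      # achieved average speed per processor
        speed2 = W2 / (p2 * T2)
        assert abs(speed1 - speed2) / speed1 < 0.05, "runs are not isospeed"
        return (p2 * W) / (p * W2)

    # Toy example: doubling processors needs 2.5x work to hold speed constant.
    print(isospeed_scalability(p=4, W=1e9, T=10.0, p2=8, W2=2.5e9, T2=12.5))
    # 0.8
    ```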

  5. Resource Selection Using Execution and Queue Wait Time Predictions

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Wong, Parkson; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We developed techniques, based on instance-based learning, to predict application execution times with an average error of 33% of the average run time. We also developed techniques to predict queue wait times that combine a simulation of scheduling algorithms with these execution time predictions. We implemented these techniques for the NAS Origin cluster.

  6. Resource Selection Using Execution and Queue Wait Time Predictions

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Wong, Parkson; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Computational grids provide users with many possible places to execute their applications. We wish to help users select where to run their applications by providing predictions of the execution times of applications on space-shared parallel computers and predictions of when scheduling systems for such parallel computers will start applications. Our predictions are based on instance-based learning techniques and simulations of scheduling algorithms. We find that our execution time prediction techniques have an average error of 37 percent of the execution times for trace data recorded from SGI Origins at NASA Ames Research Center and that this error is 67 percent lower than the error of user estimates. We also find that the error when predicting how long applications will wait in scheduling queues is 95 percent of mean queue wait times when using our execution time predictions, and that this is 57 percent lower than if we use user execution time estimates.
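
    A hedged sketch of the instance-based approach: estimate a new job's run time from historical jobs that match it on a template of attributes (the template below is illustrative, not the paper's exact choice):

    ```python
    from statistics import mean

    # Instance-based execution-time prediction: a new job's run time is
    # estimated as the mean run time of matching historical jobs.
    TEMPLATE = ("user", "queue", "cpus")   # illustrative attribute template

    def predict_runtime(job, history, template=TEMPLATE):
        similar = [h["runtime"] for h in history
                   if all(h[a] == job[a] for a in template)]
        if not similar:                    # fall back to the global mean
            similar = [h["runtime"] for h in history]
        return mean(similar)

    history = [
        {"user": "amy", "queue": "batch", "cpus": 64, "runtime": 3600},
        {"user": "amy", "queue": "batch", "cpus": 64, "runtime": 4200},
        {"user": "bob", "queue": "debug", "cpus": 8,  "runtime": 300},
    ]
    print(predict_runtime({"user": "amy", "queue": "batch", "cpus": 64},
                          history))       # 3900
    ```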

  7. Attitude-Control Algorithm for Minimizing Maneuver Execution Errors

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet

    2008-01-01

    A G-RAC attitude-control algorithm is used to minimize maneuver execution error in a spacecraft with a flexible appendage when said spacecraft must induce translational momentum by firing (in open loop) large thrusters along a desired direction for a given period of time. The controller is dynamic with two integrators and requires measurement of only the angular position and velocity of the spacecraft. The global stability of the closed-loop system is guaranteed without having access to the states describing the dynamics of the appendage and with severe saturation in the available torque. Spacecraft apply open-loop thruster firings to induce a desired translational momentum with an extended appendage. This control algorithm will assist this maneuver by stabilizing the attitude dynamics around a desired orientation, and consequently minimize the maneuver execution errors.

  8. Execution time support for scientific programs on distributed memory machines

    NASA Technical Reports Server (NTRS)

    Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey

    1990-01-01

    Optimizations are considered that are required for efficient execution of code segments that consists of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives an appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communications patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.

  9. Linking from Earth Observation Data and Products to executable web-based Algorithms based on Metadata

    NASA Astrophysics Data System (ADS)

    Eberle, J.; Hese, S.; Schmullius, C.

    2012-04-01

    The Siberian Earth System Science Cluster (SIB-ESS-C) is a spatial data infrastructure for earth observation products for Siberia, implemented at the University of Jena (Germany), Department for Earth Observation. Using standards for data discovery, data access, and data processing, earth observation data are described with standards from the International Organization for Standardization (ISO). ISO 19115 and ISO 19115-2 were used to describe these data and products in detail. For raster data, every band was described precisely so that this kind of data can be linked as input to algorithms implemented as web processing services. With an integrated catalogue system, data can be searched, found, visualised, and downloaded. The integration of raw earth observation data and derived products also leads to processing of these data, for example when a user wants another projection, another format, or further analysis. Given a pool of algorithms, it should be possible to find an algorithm that can be used with the given or available input data. To describe each input and output of an algorithm, metadata similar to the description of possible input data were used. Unfortunately, the inputs and outputs of OGC-compliant Web Processing Services carry little metadata, so custom metadata keys were added to fit the ISO-compliant metadata described above. The main objective of this work was to implement a system that knows what kind of data is available and what data is needed to run each algorithm. The final system knows which algorithms can be executed with the available data and which data are needed to execute specific algorithms. Whenever new data are ingested into the system, it automatically executes the applicable algorithms, producing final earth observation products or further analysis on the fly and in real time.
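
    As a rough, hypothetical illustration of the matching idea (the record does not publish its data model), an algorithm can be declared executable when every one of its metadata-described inputs is satisfied by some available dataset:

    ```python
    # Describe dataset bands and algorithm inputs with the same metadata keys;
    # an algorithm is executable when all its inputs are satisfied.
    # Keys and values below are illustrative only.

    datasets = [
        {"sensor": "MODIS", "band": "NDVI", "projection": "EPSG:4326"},
    ]
    algorithms = {
        "trend_analysis": [{"band": "NDVI"}],
        "pansharpen":     [{"band": "PAN"}, {"band": "MS"}],
    }

    def executable(algorithms, datasets):
        def satisfied(req):
            return any(all(d.get(k) == v for k, v in req.items())
                       for d in datasets)
        return [name for name, inputs in algorithms.items()
                if all(satisfied(req) for req in inputs)]

    print(executable(algorithms, datasets))   # ['trend_analysis']
    ```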

  10. Motor and Executive Control in Repetitive Timing of Brief Intervals

    ERIC Educational Resources Information Center

    Holm, Linus; Ullen, Fredrik; Madison, Guy

    2013-01-01

    We investigated the causal role of executive control functions in the production of brief time intervals by means of a concurrent task paradigm. To isolate the influence of executive functions on timing from motor coordination effects, we dissociated executive load from the number of effectors used in the dual task situation. In 3 experiments,…

  11. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. The performance study of the proposed system shows that it is time-, I/O-, and memory-efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
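
    A toy, plain-Python sketch of the multi-key idea (not the MRPack implementation): tag every intermediate record with an algorithm id so that one job's map and shuffle phases carry several related algorithms at once.

    ```python
    from collections import defaultdict

    # Two related "algorithms" sharing one map/reduce pass; a multi-key
    # (algorithm_id, key) keeps their intermediate data separated in the
    # shuffle. Plain-Python stand-in for Hadoop MR, for illustration only.
    ALGORITHMS = {
        "wordcount": lambda rec: [(w, 1) for w in rec.split()],
        "charcount": lambda rec: [(c, 1) for c in rec if not c.isspace()],
    }

    def mapper(record):
        for algo_id, fn in ALGORITHMS.items():
            for key, value in fn(record):
                yield (algo_id, key), value     # multi-key output

    def reducer(shuffled):
        return {k: sum(vs) for k, vs in shuffled.items()}

    shuffled = defaultdict(list)
    for rec in ["to be or not to be"]:
        for k, v in mapper(rec):
            shuffled[k].append(v)
    print(reducer(shuffled)[("wordcount", "to")])   # 2
    ```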

  12. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    PubMed Central

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. The performance study of the proposed system shows that it is time-, I/O-, and memory-efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223

  13. Sorting on STAR. [CDC computer algorithm timing comparison

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
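
    The abstract's closing point, that unusual (vector) instruction sets can overturn rankings predicted by asymptotic analysis, is easy to reproduce on modern hardware; the sketch below compares a pure-Python Quicksort against a vectorized sort (illustrative only, not the STAR code):

    ```python
    import timeit
    import numpy as np

    # Operation counts alone say both sorts are N log N, but the vectorized
    # implementation wins by a large constant factor on SIMD-style hardware.

    def quicksort(xs):
        if len(xs) <= 1:
            return xs
        pivot = xs[len(xs) // 2]
        return (quicksort([x for x in xs if x < pivot])
                + [x for x in xs if x == pivot]
                + quicksort([x for x in xs if x > pivot]))

    data = np.random.default_rng(0).random(100_000)
    t_py  = timeit.timeit(lambda: quicksort(list(data)), number=3)
    t_vec = timeit.timeit(lambda: np.sort(data), number=3)
    print(f"python quicksort: {t_py:.3f}s, vectorized sort: {t_vec:.3f}s")
    ```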

  14. Time Monitoring and Executive Functioning in Children and Adults

    ERIC Educational Resources Information Center

    Mantyla, Timo; Carelli, Maria Grazia; Forman, Helen

    2007-01-01

    This study examined time-based prospective memory performance in relation to individual and developmental differences in executive functioning. School-age children and young adults completed six experimental tasks that tapped three basic components of executive functioning: inhibition, updating, and mental shifting. Monitoring performance was…

  15. Programming real-time executives in higher order language

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.

    1982-01-01

    Methods by which real-time executive programs can be implemented in a higher order language are discussed, using HAL/S and Path Pascal languages as program examples. Techniques are presented by which noncyclic tasks can readily be incorporated into the executive system. Situations are shown where the executive system can fail to meet its task scheduling and yet be able to recover either by rephasing the clock or stacking the information for later processing. The concept of deadline processing is shown to enable more effective mixing of time and information synchronized systems.

  16. An algorithm to find critical execution paths of software based on complex network

    NASA Astrophysics Data System (ADS)

    Huang, Guoyan; Zhang, Bing; Ren, Rong; Ren, Jiadong

    2015-01-01

    The critical execution paths play an important role in software systems in terms of reducing the number of test data, detecting vulnerabilities in the software structure, and analyzing software reliability. However, there are no efficient methods to discover them so far. Thus, in this paper, a complex network-based software algorithm is put forward to find critical execution paths (FCEP) in a software execution network. First, by analyzing the number of sources and sinks in FCEP, the software execution network is divided into AOE subgraphs, and meanwhile, a Software Execution Network Serialization (SENS) approach is designed to generate an execution path set in each AOE subgraph, which not only reduces the ring structure's influence on path generation, but also guarantees the nodes' integrity in the network. Second, according to a novel path similarity metric, a similarity matrix is created to calculate the similarity among sets of path sequences. Third, an efficient method is used to cluster paths through the similarity matrices, and the maximum-length path in each cluster is extracted as the critical execution path. Finally, a set of critical execution paths is derived. The experimental results show that the FCEP algorithm is efficient in mining critical execution paths in a software complex network.
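
    A hedged sketch of the final clustering step, with Jaccard similarity standing in for the paper's path-similarity metric: group similar paths and keep the longest per cluster as the critical one.

    ```python
    # Greedy single-pass clustering of execution paths by node-set similarity;
    # the longest path in each cluster is reported as critical.

    def similarity(p, q):
        a, b = set(p), set(q)
        return len(a & b) / len(a | b)      # Jaccard, a stand-in metric

    def critical_paths(paths, threshold=0.5):
        clusters = []
        for p in paths:
            for c in clusters:
                if similarity(p, c[0]) >= threshold:
                    c.append(p)
                    break
            else:
                clusters.append([p])
        return [max(c, key=len) for c in clusters]

    paths = [["a", "b", "c"], ["a", "b", "c", "d"], ["x", "y"]]
    print(critical_paths(paths))   # [['a', 'b', 'c', 'd'], ['x', 'y']]
    ```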

  17. Timing, Sequencing, and Executive Control in Repetitive Movement Production.

    ERIC Educational Resources Information Center

    Krampe, Ralf Th.; Mayr, Ulrich; Kliegl, Reinhold

    2005-01-01

    The authors demonstrate that the timing and sequencing of target durations require low-level timing and executive control. Sixteen young (mean age = 19 years) and 16 older (mean age = 70 years) adults participated in 2 experiments. In Experiment 1, individual mean-variance functions for low-level timing (isochronous tapping) and the sequencing…

  18. SAMPLE: software for VAX FORTRAN execution timing

    SciTech Connect

    Lowe, L.H.

    1983-01-01

    SAMPLE is a set of subroutines in use at the Los Alamos National Laboratory for collecting CPU timings of various FORTRAN program sections - usually individual subroutines. These measurements have been useful in making programs run faster. The presentation includes a description of the software and examples of its use. The software is available on the directory (SAMPLE) of the VAX SIG tape.
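
    A rough modern analogue (not a port of the VAX software) is a decorator that accumulates per-function CPU time, which serves the same purpose of pointing at the program sections worth optimizing:

    ```python
    import time
    from collections import defaultdict
    from functools import wraps

    # Accumulate process (CPU) time per decorated function, in the spirit of
    # SAMPLE's per-subroutine timings.
    cpu_totals = defaultdict(float)

    def sample(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.process_time()
            try:
                return fn(*args, **kwargs)
            finally:
                cpu_totals[fn.__name__] += time.process_time() - start
        return wrapper

    @sample
    def hot_loop(n):
        return sum(i * i for i in range(n))

    hot_loop(1_000_000)
    print(dict(cpu_totals))   # e.g. {'hot_loop': 0.05}
    ```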

  19. Real Time Monitor of Grid job executions

    NASA Astrophysics Data System (ADS)

    Colling, D. J.; Martyniak, J.; McGough, A. S.; Křenek, A.; Sitera, J.; Mulač, M.; Dvořák, F.

    2010-04-01

    In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. This is arguably the most popular dissemination tool within the EGEE [1] Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites hosting Logging and Bookkeeping (LB) services. Information is cached locally at a dedicated server at Imperial College London and made available for clients to use in near real time. The system consists of three main components: the RTM server, the enquirer, and an Apache web server which is queried by clients. The RTM server queries the LB servers at fixed time intervals, collecting job-related information and storing it in a local database. Job-related data include not only job state (i.e., Scheduled, Waiting, Running or Done) along with timing information, but also other attributes such as Virtual Organization and Computing Element (CE) queue, if known. The job data stored in the RTM database are read by the enquirer every minute and converted to an XML format which is stored on a web server. This decouples the RTM server database from the client, removing the bottleneck caused by many clients simultaneously accessing the database. This information can be visualized through either a 2D or 3D Java-based client, with live job data either overlaid on a 2-dimensional map of the world or rendered in 3 dimensions over a globe using OpenGL.

  20. Predicting Operator Execution Times Using CogTool

    NASA Technical Reports Server (NTRS)

    Santiago-Espada, Yamira; Latorella, Kara A.

    2013-01-01

    Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes the CogTool models for a two-pilot crew executing two different types of datalink clearance acceptance tasks on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in video tapes and registered in simulation files. Results indicate no statistically significant difference between the empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.

  1. TVFMCATS. Time Variant Floating Mean Counting Algorithm

    SciTech Connect

    Huffman, R.K.

    1999-05-01

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  2. Time Variant Floating Mean Counting Algorithm

    1999-06-03

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  3. Kalman plus weights: a time scale algorithm

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2001-01-01

    KPW is a time scale algorithm that combines Kalman filtering with the basic time scale equation (BTSE). A single Kalman filter that estimates all clocks simultaneously is used to generate the BTSE frequency estimates, while the BTSE weights are inversely proportional to the white FM variances of the clocks. Results from simulated clock ensembles are compared to previous simulation results from other algorithms.
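
    A minimal sketch of the basic time scale equation (BTSE) step, assuming weights inversely proportional to the clocks' white-FM variances, as the abstract states; in KPW the per-clock predictions would come from the single Kalman filter, which is not reproduced here:

    ```python
    import numpy as np

    # BTSE step: the ensemble time correction is a weighted average of each
    # clock's deviation from its predicted reading. Predictions are inputs
    # here (KPW derives them from a Kalman filter over all clocks).

    def btse_correction(readings, predictions, white_fm_var):
        w = 1.0 / np.asarray(white_fm_var)   # weights ~ 1 / white-FM variance
        w /= w.sum()
        return float(np.dot(w, np.asarray(readings) - np.asarray(predictions)))

    readings    = [12e-9, 15e-9, 9e-9]    # clock minus ensemble, seconds
    predictions = [11e-9, 16e-9, 10e-9]   # predicted readings (illustrative)
    white_fm    = [1e-26, 4e-26, 2e-26]   # white-FM variances
    print(btse_correction(readings, predictions, white_fm))
    ```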

  4. Flexible, efficient and robust algorithm for parallel execution and coupling of components in a framework

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor

    2006-05-01

    We describe a general algorithm suitable for executing and coupling components of a software framework on a parallel computer. The requirements of a flexible, efficient and robust algorithm are defined precisely, and the motivation for the requirements is demonstrated on several examples. In short, the requirements are the following: (i) the algorithm should allow arbitrary distribution of processors among the components, (ii) it should allow an arbitrary coupling schedule between the components, (iii) it should not use any inter-processor communication other than that already required by the components and their couplings, and (iv) it should never get into a deadlock. We show that the proposed algorithm based on the Temporal and Predefined Ordering of Tasks (TPOT) satisfies all these requirements. The TPOT algorithm has been implemented in the Space Weather Modeling Framework. The flexibility and efficiency of the algorithm are demonstrated with several examples.

  5. Discrete Event Execution with One-Sided and Two-Sided GVT Algorithms on 216,000 Processor Cores

    SciTech Connect

    Perumalla, Kalyan S; Park, Alfred J; Tipparaju, Vinod

    2014-01-01

    Global virtual time (GVT) computation is a key determinant of the efficiency and runtime dynamics of parallel discrete event simulations (PDES), especially on large-scale parallel platforms. Here, three execution modes of a generalized GVT computation algorithm are studied on high-performance parallel computing systems: (1) a synchronous GVT algorithm that affords ease of implementation, (2) an asynchronous GVT algorithm that is more complex to implement but can relieve blocking latencies, and (3) a variant of the asynchronous GVT algorithm to exploit one-sided communication in extant supercomputing platforms. Performance results are presented of implementations of these algorithms on up to 216,000 cores of a Cray XT5 system, exercised on a range of parameters: optimistic and conservative synchronization, fine- to medium-grained event computation, synthetic and non-synthetic applications, and different lookahead values. Performance of up to 54 billion events executed per second is registered. Detailed PDES-specific runtime metrics are presented to further the understanding of tightly-coupled discrete event dynamics on massively parallel platforms.

  6. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
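
    A minimal sketch of the block-wise scheme as described, using NumPy's Chebyshev routines; the block length and polynomial degree below are illustrative choices, not the values used for the flight instruments:

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Fit a short Chebyshev series per fixed-length block ("fitting
    # interval") and keep only the coefficients; decompression re-evaluates
    # the series. Lossy, but with near-uniform error over each block.

    def compress(signal, block=64, degree=7):
        x = np.linspace(-1.0, 1.0, block)     # Chebyshev domain per block
        return [C.chebfit(x, signal[i:i + block], degree)
                for i in range(0, len(signal) - block + 1, block)]

    def decompress(coeffs, block=64):
        x = np.linspace(-1.0, 1.0, block)
        return np.concatenate([C.chebval(x, c) for c in coeffs])

    t = np.linspace(0, 4 * np.pi, 512)
    sig = np.sin(t) + 0.1 * np.cos(5 * t)
    coeffs = compress(sig)
    rec = decompress(coeffs)
    ratio = sig.size / (len(coeffs) * len(coeffs[0]))
    print(f"compression {ratio:.1f}x, max error {np.abs(rec - sig).max():.2e}")
    ```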

  7. Univariate time series forecasting algorithm validation

    NASA Astrophysics Data System (ADS)

    Ismail, Suzilah; Zakaria, Rohaiza; Muda, Tuan Zalizam Tuan

    2014-12-01

    Forecasting is a complex process which requires expert tacit knowledge to produce accurate forecast values. This complexity contributes to the gap between end users and experts. Automating this process with an algorithm can act as a bridge between them. An algorithm is a well-defined rule for solving a problem. In this study a univariate time series forecasting algorithm was developed in JAVA and validated using SPSS and Excel. Two sets of simulated data (yearly and non-yearly), several univariate forecasting techniques (i.e., Moving Average, Decomposition, Exponential Smoothing, Time Series Regression and ARIMA) and recent forecasting processes (such as data partitioning, several error measures, recursive evaluation, etc.) were employed. The results of the algorithm tally with the results of SPSS and Excel. This algorithm will benefit not just forecasters but also end users lacking in-depth knowledge of the forecasting process.

  8. Real-Time Projection to Verify Plan Success During Execution

    NASA Technical Reports Server (NTRS)

    Wagner, David A.; Dvorak, Daniel L.; Rasmussen, Robert D.; Knight, Russell L.; Morris, John R.; Bennett, Matthew B.; Ingham, Michel D.

    2012-01-01

    The Mission Data System provides a framework for modeling complex systems in terms of system behaviors and goals that express intent. Complex activity plans can be represented as goal networks that express the coordination of goals on different state variables of the system. Real-time projection extends the ability of this system to verify plan achievability (all goals can be satisfied over the entire plan) into the execution domain so that the system is able to continuously re-verify a plan as it is executed, and as the states of the system change in response to goals and the environment. Previous versions were able to detect and respond to goal violations when they actually occur during execution. This new capability enables the prediction of future goal failures; specifically, goals that were previously found to be achievable but are no longer achievable due to unanticipated faults or environmental conditions. Early detection of such situations enables operators or an autonomous fault response capability to deal with the problem at a point that maximizes the available options. For example, this system has been applied to the problem of managing battery energy on a lunar rover as it is used to explore the Moon. Astronauts drive the rover to waypoints and conduct science observations according to a plan that is scheduled and verified to be achievable with the energy resources available. As the astronauts execute this plan, the system uses this new capability to continuously re-verify the plan as energy is consumed to ensure that the battery will never be depleted below safe levels across the entire plan.

  9. Plan generation and hard real-time execution with application to safe, autonomous flight

    NASA Astrophysics Data System (ADS)

    Atkins, Ella Marie

    We address the problem of constructing and executing control plans for safe, fully-autonomous operation within a complex real-time domain where the combination of an incomplete knowledge base, limited computational resources, and hard real-time deadlines precludes the success of traditional planning and scheduling algorithms. To meet hard deadlines with limited computational resources, we employ a stochastic world model to prioritize the state-space during planning, then utilize feedback from the scheduler to set a threshold below which the planner removes unlikely states from consideration in order to generate a schedulable plan. Our probabilistic planning algorithm minimizes domain knowledge size and explicitly provides for the construction of real-time control plans. Although approximate instead of optimal, the representational efficiency gained by our approach makes it a viable alternative to the well-established Markov Decision Process for complex real-time problem domains. When resource limits require plan modification, our heuristic algorithms for communicating task resource utilization information from real-time scheduler to planner provide a novel method for directing the expensive planner backtracking process specifically toward a schedulable plan. The tradeoff in ignoring reachable but unlikely states, as well as allowing incomplete domain knowledge, is that we must now provide explicitly for the detection of and reaction to these "unexpected" states our system may encounter while executing a plan. By detecting such unhandled states and caching contingency plans for events which, though unlikely, could lead to catastrophic failure, we can still guarantee system safety in the probabilistic sense. Ultimately, however, we are still constrained by plan-execution resource limits regardless of the tradeoff algorithms employed. We apply the resultant architecture (CIRCA-II) to simulated autonomous aircraft flight and demonstrate its utility for intelligently

  10. Execution environment for intelligent real-time control systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, Janos

    1987-01-01

    Modern telerobot control technology requires the integration of symbolic and non-symbolic programming techniques, different models of parallel computations, and various programming paradigms. The Multigraph Architecture, which has been developed for the implementation of intelligent real-time control systems is described. The layered architecture includes specific computational models, integrated execution environment and various high-level tools. A special feature of the architecture is the tight coupling between the symbolic and non-symbolic computations. It supports not only a data interface, but also the integration of the control structures in a parallel computing environment.

  11. Timing issues in the distributed execution of Ada programs

    NASA Technical Reports Server (NTRS)

    Volz, Richard A.; Mudge, Trevor N.

    1987-01-01

    This paper examines, in the context of distributed execution, the meaning of Ada constructs involving time. In the process, unresolved questions of interpretation and problems with the implementation of a consistent notion of time across a network are uncovered. It is observed that there are two Ada mechanisms that can involve a distributed sense of time: the conditional entry call and the timed entry call. It is shown that a recent interpretation by the Language Maintenance Committee resolves the questions for conditional entry calls but results in an anomaly for timed entry calls. A detailed discussion of alternative implementations for the timed entry call is made, and it is argued that: (1) timed entry calls imply a common sense of time between the machines holding the calling and called tasks; and (2) the measurement of time for the expiration of the delay and the decision of whether or not to perform the rendezvous should be made on the machine holding the called task. The need to distinguish the unreadiness of the called task from timeouts caused by network failure is pointed out. Finally, techniques for realizing a single sense of time across the distributed system (at least to within an acceptable degree of uncertainty) are also discussed.

  12. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10(exp -15) for periods of 30-100 days.
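
    The noise model discussed here is conventionally characterized with the Allan variance; a minimal (non-overlapping) estimator from fractional frequency data, which reproduces the expected 1/tau fall-off for white FM noise, is:

    ```python
    import numpy as np

    # Non-overlapping Allan variance from fractional frequency samples y:
    # AVAR(m*tau0) = 0.5 * mean((ybar_{k+1} - ybar_k)^2), where ybar are
    # averages over adjacent groups of m samples.

    def allan_variance(y, m):
        ybar = np.mean(np.reshape(y[:len(y) // m * m], (-1, m)), axis=1)
        return 0.5 * np.mean(np.diff(ybar) ** 2)

    rng = np.random.default_rng(1)
    y = rng.normal(0.0, 1e-13, 100_000)   # white-FM-like fractional frequency
    for m in (1, 10, 100):
        print(m, allan_variance(y, m))    # for white FM, AVAR falls as 1/m
    ```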

  13. A class of kernel based real-time elastography algorithms.

    PubMed

    Kibria, Md Golam; Hasan, Md Kamrul

    2015-08-01

    In this paper, a novel real-time kernel-based and gradient-based Phase Root Seeking (PRS) algorithm for ultrasound elastography is proposed. The signal-to-noise ratio of the strain image resulting from this method is improved by minimizing the cross-correlation discrepancy between the pre- and post-compression radio frequency signals with an adaptive temporal stretching method and by employing built-in smoothing through an exponentially weighted neighborhood kernel in the displacement calculation. Unlike conventional PRS algorithms, displacement due to tissue compression is estimated from the root of the weighted average of the zero-lag cross-correlation phases of the pairs of corresponding analytic pre- and post-compression windows in the neighborhood kernel. In addition to the proposed one, the other time- and frequency-domain elastography algorithms (Ara et al., 2013; Hussain et al., 2012; Hasan et al., 2012) proposed by our group are also implemented in real time in Java, where the computations are executed serially or in parallel on multiple processors with efficient memory management. Simulation results using a finite element modeling simulation phantom show that the proposed method significantly improves strain image quality in terms of elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe) and mean structural similarity (MSSIM) for strains as high as 4% as compared to other techniques reported in the literature. Strain images obtained for the experimental phantom as well as for in vivo breast data of malignant or benign masses also show the efficacy of our proposed method over the other techniques reported in the literature. PMID:25929595

  14. Conversion-Integration of MSFC Nonlinear Signal Diagnostic Analysis Algorithms for Realtime Execution of MSFC's MPP Prototype System

    NASA Technical Reports Server (NTRS)

    Jong, Jen-Yi

    1996-01-01

    NASA's advanced propulsion system, the Space Shuttle Main Engine (SSME/ATD), has been undergoing extensive flight certification and developmental testing, which involves large numbers of health monitoring measurements. To enhance engine safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess the engine's dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce the risk of catastrophic system failures and expedite the evaluation of both flight and ground test data, and thereby reduce launch turn-around time. During the development of the SSME, ASRI participated in the research and development of several advanced nonlinear signal diagnostic methods for health monitoring and failure prediction in turbomachinery components. However, due to the intensive computational requirements associated with such advanced analysis tasks, current SSME dynamic data analysis and diagnostic evaluation is performed off-line following flight or ground test, with a typical diagnostic turnaround time of one to two days. The objective of MSFC's MPP Prototype System is to eliminate such 'diagnostic lag time' by achieving signal processing and analysis in real time. Such an on-line diagnostic system can provide sufficient lead time to initiate corrective action and also enable efficient scheduling of inspection, maintenance and repair activities. The major objective of this project was to convert and implement a number of advanced nonlinear diagnostic DSP algorithms in a format consistent with that required for integration into the Vanderbilt Multigraph Architecture (MGA) Model Based Programming environment. This effort will allow the real-time execution of these algorithms using the MSFC MPP Prototype System. ASRI has completed the software conversion and integration of a sequence of nonlinear signal analysis techniques specified in the SOW for real-time

  15. A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform.

    PubMed

    Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A; Kiefer, Richard; Rasmussen, Luke V; Pathak, Jyotishman; Denny, Joshua C; Thompson, William K

    2016-01-01

    The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. Informatics for Integrating Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows for complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages, and execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository. PMID:27570665

  16. A Decompositional Approach to Executing Quality Data Model Algorithms on the i2b2 Platform

    PubMed Central

    Mo, Huan; Jiang, Guoqian; Pacheco, Jennifer A.; Kiefer, Richard; Rasmussen, Luke V.; Pathak, Jyotishman; Denny, Joshua C.; Thompson, William K.

    2016-01-01

    The Quality Data Model (QDM) is an established standard for representing electronic clinical quality measures on electronic health record (EHR) repositories. Informatics for Integrating Biology and the Bedside (i2b2) is a widely used platform for implementing clinical data repositories. However, translation from QDM to i2b2 is challenging, since QDM allows for complex queries beyond the capability of single i2b2 messages. We have developed an approach to decompose complex QDM algorithms into workflows of single i2b2 messages, and execute them on the KNIME data analytics platform. Each workflow operation module is composed of parameter lists, a template for the i2b2 message, a mechanism to create parameter updates, and a web service call to i2b2. The communication between workflow modules relies on passing keys of i2b2 result sets. As a demonstration of validity, we describe the implementation and execution of a type 2 diabetes mellitus phenotype algorithm against an i2b2 data repository. PMID:27570665

  17. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.

    PubMed

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-01-01

    Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data of each task have been analyzed, with a detailed analysis report being made. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs. PMID:27589753
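
    A hedged sketch of a two-phase regression estimate under stated assumptions (two linear segments over progress-vs-elapsed-time samples, a brute-force breakpoint search, extrapolation of the later segment); the paper's exact formulation may differ:

    ```python
    import numpy as np

    def fit_line(x, y):
        """Least-squares line fit; returns (coefficients, sum sq. error)."""
        A = np.vstack([x, np.ones_like(x)]).T
        coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef, (res[0] if res.size else 0.0)

    def tpr_predict(progress, elapsed):
        x, y = np.asarray(progress, float), np.asarray(elapsed, float)
        best = None
        for b in range(2, len(x) - 1):             # candidate breakpoints
            (c1, e1), (c2, e2) = fit_line(x[:b], y[:b]), fit_line(x[b:], y[b:])
            if best is None or e1 + e2 < best[0]:
                best = (e1 + e2, c2)               # keep the later phase
        slope, intercept = best[1]
        return slope * 1.0 + intercept             # time at 100% progress

    progress = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
    elapsed  = [5, 10, 15, 20, 30, 40, 50, 60]     # rate changes near 0.4
    print(f"predicted finish: {tpr_predict(progress, elapsed):.1f}s")  # 80.0s
    ```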

  18. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    PubMed

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-01

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described on a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus. PMID:26575558
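
    The dataflow idea is easy to sketch: once tasks declare their inputs, anything whose dependencies are satisfied can run concurrently, so parallelism falls out of the declarations alone. A toy scheduler (not the Copernicus API) follows:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # Toy dataflow executor: run every task whose dependencies are done,
    # wave by wave, feeding each task the results of its dependencies.

    def run_dataflow(tasks, deps):
        done, results = set(), {}
        with ThreadPoolExecutor() as pool:
            while len(done) < len(tasks):
                ready = [t for t in tasks if t not in done and deps[t] <= done]
                futures = {t: pool.submit(tasks[t],
                                          *(results[d] for d in deps[t]))
                           for t in ready}
                for t, f in futures.items():
                    results[t] = f.result()
                    done.add(t)
        return results

    tasks = {"sim1": lambda: 1.0, "sim2": lambda: 3.0,
             "combine": lambda a, b: (a + b) / 2}   # e.g. merge sampled runs
    deps = {"sim1": set(), "sim2": set(), "combine": {"sim1", "sim2"}}
    print(run_dataflow(tasks, deps)["combine"])     # 2.0
    ```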

  19. EDITORIAL: Special issue on time scale algorithms

    NASA Astrophysics Data System (ADS)

    Matsakis, Demetrios; Tavella, Patrizia

    2008-12-01

    This special issue of Metrologia presents selected papers from the Fifth International Time Scale Algorithm Symposium (VITSAS), including some of the tutorials presented on the first day. The symposium was attended by 76 persons, from every continent except Antarctica, by students as well as senior scientists, and hosted by the Real Instituto y Observatorio de la Armada (ROA) in San Fernando, Spain, whose staff further enhanced their nation's high reputation for hospitality. Although a timescale can be simply defined as a weighted average of clocks, whose purpose is to measure time better than any individual clock, timescale theory has long been and continues to be a vibrant field of research that has both followed and helped to create advances in the art of timekeeping. There is no perfect timescale algorithm, because every one embodies a compromise involving user needs. Some users wish to generate a constant frequency, perhaps not necessarily one that is well-defined with respect to the definition of a second. Other users might want a clock which is as close to UTC or a particular reference clock as possible, or perhaps wish to minimize the maximum variation from that standard. In contrast to the steered timescales that would be required by those users, other users may need free-running timescales, which are independent of external information. While no algorithm can meet all these needs, every algorithm can benefit from some form of tuning. The optimal tuning, and even the optimal algorithm, can depend on the noise characteristics of the frequency standards, or of their comparison systems, the most precise and accurate of which are currently Two Way Satellite Time and Frequency Transfer (TWSTFT) and GPS carrier phase time transfer. The interest in time scale algorithms and its associated statistical methodology began around 40 years ago when the Allan variance appeared and when the metrological institutions started realizing ensemble atomic time using more than

  20. Integrated Planning: Consolidating Annual Facility Planning - More Time for Execution

    SciTech Connect

    Nelson, J. G.; R., L. Morton; Ramirez, C.; Morris, P. S.; McSwain, J. T.

    2011-02-02

    Previously, annual planning for Readiness in Technical Base and Facilities (RTBF) at the Nevada National Security Site (NNSS) was fragmented, disconnected, and circular, and occurred constantly throughout the fiscal year (FY), consuming 9 of the 12 months and reducing the focus on implementation and execution. This required constant “looking back” instead of “looking forward.” In FY 2009, annual planning was consolidated into one comprehensive integrated plan (IP) for each facility/project, which comprised annual task planning/outyear budgeting, AMPs, and investment planning (i.e., TYIP). In FY 2010, the Risk Management Plans were added to the IPs. The integrated planning process achieved the following: 1) Eliminated fragmented, circular planning and moved the plan to be more forward-looking; 2) Achieved a 90% reduction in the schedule planning timeframe, from 40 weeks (9 months) to 6 weeks; 3) Achieved an 80% reduction in cost, from just under $1.0M to just over $200K, for a cost savings of nearly $800K (reduced combined effort from over 200 person-weeks to less than 40); 4) Reduced the number of plans generated from 21 plans (1 per facility per plan) per year to 8 plans per year (1 per facility plus 1 program-level IP); 5) Eliminated redundancy in common content between plans and improved consistency and overall quality; 6) Reduced the preparation time and cost of the FY 2010 SEP by 50% due to information provided in the IP; 7) Met the requirements for annual task planning, annual maintenance planning, ten-year investment planning, and risk management plans.

  2. Enhancing real-time flight simulation execution by intercepting Run-Time Library calls

    NASA Technical Reports Server (NTRS)

    Reinbachs, Namejs

    1993-01-01

    Standard operating system input-output (I/O) procedures impose a large time penalty on real-time program execution. These procedures are generally invoked by way of Run-Time Library (RTL) calls. To reduce the time penalty, as well as add flexibility, a technique has been developed to dynamically intercept these calls. The design and implementation of this technique, as applied to FORTRAN WRITE statements, are described. Measured performance gains using this RTL intercept technique are on the order of 1000 percent.

  3. Developing a data element repository to support EHR-driven phenotype algorithm authoring and execution.

    PubMed

    Jiang, Guoqian; Kiefer, Richard C; Rasmussen, Luke V; Solbrig, Harold R; Mo, Huan; Pacheco, Jennifer A; Xu, Jie; Montague, Enid; Thompson, William K; Denny, Joshua C; Chute, Christopher G; Pathak, Jyotishman

    2016-08-01

    The Quality Data Model (QDM) is an information model developed by the National Quality Forum for representing electronic health record (EHR)-based electronic clinical quality measures (eCQMs). In conjunction with the HL7 Health Quality Measures Format (HQMF), QDM contains core elements that make it a promising model for representing EHR-driven phenotype algorithms for clinical research. However, the current QDM specification is available only as descriptive documents suitable for human readability and interpretation, but not for machine consumption. The objective of the present study is to develop and evaluate a data element repository (DER) for providing machine-readable QDM data element service APIs to support phenotype algorithm authoring and execution. We used the ISO/IEC 11179 metadata standard to capture the structure for each data element, and leveraged Semantic Web technologies to facilitate semantic representation of these metadata. We observed there are a number of underspecified areas in the QDM, including the lack of model constraints and pre-defined value sets. We propose a harmonization with the models developed in HL7 Fast Healthcare Interoperability Resources (FHIR) and Clinical Information Modeling Initiatives (CIMI) to enhance the QDM specification and enable the extensibility and better coverage of the DER. We also compared the DER with the existing QDM implementation utilized within the Measure Authoring Tool (MAT) to demonstrate the scalability and extensibility of our DER-based approach. PMID:27392645

  4. Response-Time Variability Is Related to Parent Ratings of Inattention, Hyperactivity, and Executive Function

    ERIC Educational Resources Information Center

    Gomez-Guerrero, Lorena; Martin, Cristina Dominguez; Mairena, Maria Angeles; Di Martino, Adriana; Wang, Jing; Mendelsohn, Alan L.; Dreyer, Benard P.; Isquith, Peter K.; Gioia, Gerard; Petkova, Eva; Castellanos, F. Xavier

    2011-01-01

    Objective: Individuals with ADHD are often characterized as inconsistent across many contexts. ADHD is also associated with deficits in executive function. We examined the relationships between response time (RT) variability on five brief computer tasks and parents' ratings of ADHD-related features and executive function in a group of children with…

  5. Easy and hard testbeds for real-time search algorithms

    SciTech Connect

    Koenig, S.; Simmons, R.G.

    1996-12-31

    Although researchers have studied which factors influence the behavior of traditional search algorithms, currently not much is known about how domain properties influence the performance of real-time search algorithms. In this paper we demonstrate, both theoretically and experimentally, that Eulerian state spaces (a super set of undirected state spaces) are very easy for some existing real-time search algorithms to solve: even real-time search algorithms that can be intractable, in general, are efficient for Eulerian state spaces. Because traditional real-time search testbeds (such as the eight puzzle and gridworlds) are Eulerian, they cannot be used to distinguish between efficient and inefficient real-time search algorithms. It follows that one has to use non-Eulerian domains to demonstrate the general superiority of a given algorithm. To this end, we present two classes of hard-to-search state spaces and demonstrate the performance of various real-time search algorithms on them.
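
    For readers unfamiliar with the algorithms under test, the sketch below shows one step of LRTA*-style real-time search (a representative of the family the paper studies) on a small undirected gridworld; the helper names are ours, and real testbeds would of course be far larger:

        def lrta_star_step(s, succ, cost, h):
            # One LRTA* move: update h(s), then step to the most
            # promising successor. h is learned in place.
            best_t, best_f = None, float("inf")
            for t in succ(s):
                f = cost(s, t) + h.get(t, 0.0)
                if f < best_f:
                    best_t, best_f = t, f
            h[s] = max(h.get(s, 0.0), best_f)   # learning update
            return best_t

        # 5x5 gridworld; undirected, hence Eulerian (i.e. an "easy" domain)
        def succ(s):
            x, y = s
            steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            return [(a, b) for a, b in steps if 0 <= a < 5 and 0 <= b < 5]

        goal, s, h = (4, 0), (0, 0), {(4, 0): 0.0}
        while s != goal:
            s = lrta_star_step(s, succ, lambda a, b: 1.0, h)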

  6. A time-accurate multiple-grid algorithm

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.

    1985-01-01

    A time-accurate multiple-grid algorithm is described. The algorithm allows one to take much larger time steps with an explicit time-marching scheme than would otherwise be the case. Sample calculations of a scalar advection equation and the Euler equations for an oscillating airfoil are shown. For the oscillating airfoil, time steps an order of magnitude larger than the single-grid algorithm are possible.

  7. Less-structured time in children's daily lives predicts self-directed executive functioning

    PubMed Central

    Barker, Jane E.; Semenov, Andrei D.; Michaelson, Laura; Provan, Lindsay S.; Snyder, Hannah R.; Munakata, Yuko

    2014-01-01

    Executive functions (EFs) in childhood predict important life outcomes. Thus, there is great interest in attempts to improve EFs early in life. Many interventions are led by trained adults, including structured training activities in the lab, and less-structured activities implemented in schools. Such programs have yielded gains in children's externally-driven executive functioning, where they are instructed on what goal-directed actions to carry out and when. However, it is less clear how children's experiences relate to their development of self-directed executive functioning, where they must determine on their own what goal-directed actions to carry out and when. We hypothesized that time spent in less-structured activities would give children opportunities to practice self-directed executive functioning, and lead to benefits. To investigate this possibility, we collected information from parents about their 6–7 year-old children's daily, annual, and typical schedules. We categorized children's activities as “structured” or “less-structured” based on categorization schemes from prior studies on child leisure time use. We assessed children's self-directed executive functioning using a well-established verbal fluency task, in which children generate members of a category and can decide on their own when to switch from one subcategory to another. The more time that children spent in less-structured activities, the better their self-directed executive functioning. The opposite was true of structured activities, which predicted poorer self-directed executive functioning. These relationships were robust (holding across increasingly strict classifications of structured and less-structured time) and specific (time use did not predict externally-driven executive functioning). We discuss implications, caveats, and ways in which potential interpretations can be distinguished in future work, to advance an understanding of this fundamental aspect of growing up

  8. Global convergence analysis of a discrete time nonnegative ICA algorithm.

    PubMed

    Ye, Mao

    2006-01-01

    When the independent sources are known to be nonnegative and well-grounded, which means that they have a nonzero pdf in the region of zero, Oja and Plumbley have proposed a "Nonnegative principal component analysis (PCA)" algorithm to separate these positive sources. Generally, it is very difficult to prove the convergence of a discrete-time independent component analysis (ICA) learning algorithm. However, by using the skew-symmetry property of this discrete-time "Nonnegative PCA" algorithm, the global convergence of this discrete-time algorithm can be proven, provided that the learning rate satisfies a suitable condition. Simulation results are employed to further illustrate the advantages of this theory. PMID:16526495

  9. Time optimal route planning algorithm of LBS online navigation

    NASA Astrophysics Data System (ADS)

    Li, Yong; Bao, Shitai; Su, Kui; Fang, Qiushui; Yang, Jingfeng

    2011-02-01

    This paper proposes a time-optimal route planning algorithm for LBS online navigation, based on an improved Dijkstra algorithm. Combined with real-time location information returned by online users' handheld terminals, the algorithm satisfies the optimal-time requirement of LBS online navigation. A navigation system was developed and applied in actual navigation operations. Operating results show that the algorithm reasonably balances shortest route and fastest speed to meet the optimal-time requirement. The algorithm also stores the computed real-time route information in a cache to improve the efficiency of route planning and to reduce planning time.
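
    The core of such a planner is a shortest-path search on travel times. A minimal sketch of that core, assuming Python's heapq (the paper's contributions of live location updates, velocity data, and route caching are omitted):

        import heapq

        def quickest_route(graph, src, dst):
            # Dijkstra on edge travel times.
            # graph: {node: [(neighbor, seconds), ...]}
            dist, prev = {src: 0.0}, {}
            pq = [(0.0, src)]
            while pq:
                d, u = heapq.heappop(pq)
                if u == dst:
                    break
                if d > dist.get(u, float("inf")):
                    continue
                for v, t in graph.get(u, []):
                    nd = d + t
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(pq, (nd, v))
            path, u = [dst], dst
            while u != src:
                u = prev[u]
                path.append(u)
            return dist[dst], path[::-1]

        g = {"A": [("B", 60), ("C", 90)], "B": [("C", 20)], "C": []}
        print(quickest_route(g, "A", "C"))   # (80.0, ['A', 'B', 'C'])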

  10. Time Perception, Phonological Skills and Executive Function in Children with Dyslexia and/or ADHD Symptoms

    ERIC Educational Resources Information Center

    Gooch, Debbie; Snowling, Margaret; Hulme, Charles

    2011-01-01

    Background: Deficits in time perception (the ability to judge the duration of time intervals) have been found in children with both attention-deficit/hyperactivity disorder (ADHD) and dyslexia. This paper investigates time perception, phonological skills and executive functions in children with dyslexia and/or ADHD symptoms (AS). Method: Children…

  11. 28 CFR 26.3 - Date, time, place, and method of execution.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Implementation of Death Sentences in Federal Cases § 26.3 Date, time, place, and method of execution. (a) Except... time designated by the Director of the Federal Bureau of Prisons, which date shall be no sooner than...

  12. Making Time for Instructional Leadership. Volume 1: Executive Summary

    ERIC Educational Resources Information Center

    Goldring, Ellen; Grissom, Jason A.; Neumerski, Christine M.; Murphy, Joseph; Blissett, Richard; Porter, Andy

    2015-01-01

    This three-volume report describes the "SAM (School Administration Manager) process," an approach that about 700 schools around the nation are using to direct more of principals' time and effort to improve teaching and learning in classrooms. Research has shown that a principal's instructional leadership is second only to teaching among…

  13. Processing Time Shifts Affects the Execution of Motor Responses

    ERIC Educational Resources Information Center

    Sell, Andrea J.; Kaschak, Michael P.

    2011-01-01

    We explore whether time shifts in text comprehension are represented spatially. Participants read sentences involving past or future events and made sensibility judgment responses in one of two ways: (1) moving toward or away from their body and (2) pressing the toward or away buttons without moving. Previous work suggests that spatial…

  14. Combing the Communication Hairball: Visualizing Parallel Execution Traces using Logical Time.

    PubMed

    Isaacs, Katherine E; Bremer, Peer-Timo; Jusufi, Ilir; Gamblin, Todd; Bhatele, Abhinav; Schulz, Martin; Hamann, Bernd

    2014-12-01

    With the continuous rise in complexity of modern supercomputers, optimizing the performance of large-scale parallel programs is becoming increasingly challenging. Simultaneously, the growth in scale magnifies the impact of even minor inefficiencies--potentially millions of compute hours and megawatts in power consumption can be wasted on avoidable mistakes or sub-optimal algorithms. This makes performance analysis and optimization critical elements in the software development process. One of the most common forms of performance analysis is to study execution traces, which record a history of per-process events and interprocess messages in a parallel application. Trace visualizations allow users to browse this event history and search for insights into the observed performance behavior. However, current visualizations are difficult to understand even for small process counts and do not scale gracefully beyond a few hundred processes. Organizing events in time leads to a virtually unintelligible conglomerate of interleaved events and moderately high process counts overtax even the largest display. As an alternative, we present a new trace visualization approach based on transforming the event history into logical time inferred directly from happened-before relationships. This emphasizes the code's structural behavior, which is much more familiar to the application developer. The original timing data, or other information, is then encoded through color, leading to a more intuitive visualization. Furthermore, we use the discrete nature of logical timelines to cluster processes according to their local behavior leading to a scalable visualization of even long traces on large process counts. We demonstrate our system using two case studies on large-scale parallel codes. PMID:26356949
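
    The logical-time transformation at the heart of this approach can be illustrated with Lamport timestamps derived from happened-before relations. A toy Python sketch (the paper's structure inference, clustering, and visualization layers are far richer; the event encoding is ours):

        def lamport_timestamps(events):
            # events: (process, kind, msg_id) in observed order,
            # kind in {'send', 'recv', 'local'}.
            clock, sent, stamped = {}, {}, []
            for proc, kind, msg in events:
                c = clock.get(proc, 0) + 1
                if kind == 'recv':
                    c = max(c, sent[msg] + 1)   # send happened-before recv
                clock[proc] = c
                if kind == 'send':
                    sent[msg] = c
                stamped.append((proc, kind, msg, c))
            return stamped

        trace = [(0, 'send', 'a'), (1, 'local', None),
                 (1, 'recv', 'a'), (1, 'send', 'b'), (0, 'recv', 'b')]
        for e in lamport_timestamps(trace):
            print(e)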

  15. Two linear time, low overhead algorithms for graph layout

    2008-01-10

    The software comprises two algorithms designed to perform a 2D layout of a graph structure in time linear with respect to the vertices and edges in the graph, whereas most other layout algorithms have a running time that is quadratic with respect to the number of vertices or greater. Although these layout algorithms run in a fraction of the time of their competitors, they provide competitive results when applied to most real-world graphs. These algorithms also have a low constant running time and a small memory footprint, making them useful for small to large graphs.

  16. A fast and Robust Algorithm for general inequality/equality constrained minimum time problems

    SciTech Connect

    Briessen, B.; Sadegh, N.

    1995-12-01

    This paper presents a new algorithm for solving general inequality/equality constrained minimum time problems. The algorithm's solution time is linear in the number of Runge-Kutta steps and the number of parameters used to discretize the control input history. The method is being applied to a three link redundant robotic arm with torque bounds, joint angle bounds, and a specified tip path. It solves case after case within a graphical user interface in which the user chooses the initial joint angles and the tip path with a mouse. Solve times are from 30 to 120 seconds on a Hewlett Packard workstation. A zero torque history is always used in the initial guess, and the algorithm has never crashed, indicating its robustness. The algorithm solves for a feasible solution for a large trajectory execution time t_f, then reduces t_f by a small amount and re-solves. The fixed-time re-solve uses a new method of finding a near-minimum-2-norm solution to a set of linear equations and inequalities that achieves quadratic convergence to a feasible solution of the full nonlinear problem.

  17. The time course effect of moderate intensity exercise on response execution and response inhibition.

    PubMed

    Joyce, Jennifer; Graydon, Jan; McMorris, Terry; Davranche, Karen

    2009-10-01

    This research aimed to investigate the time course effect of a moderate steady-state exercise session on response execution and response inhibition using a stop-task paradigm. Ten participants performed a stop-signal task whilst cycling at a carefully controlled workload intensity (40% of maximal aerobic power), immediately following exercise and 30 min after exercise cessation. Results showed that moderate exercise not only enhances participants' ability to execute responses under time pressure (shorter Go reaction time, RT, without a change in accuracy) but also enhances their ability to withhold ongoing motor responses (shorter stop-signal RT). The present outcomes reveal that the beneficial effect of exercise is limited neither to motor response tasks nor to cognitive tasks performed during exercise. Beneficial effects of exercise remain present on both response execution and response inhibition performance for up to 52 min after exercise cessation. PMID:19346049

  18. Two criteria for the selection of assembly plans - Maximizing the flexibility of sequencing the assembly tasks and minimizing the assembly time through parallel execution of assembly tasks

    NASA Technical Reports Server (NTRS)

    Homem De Mello, Luiz S.; Sanderson, Arthur C.

    1991-01-01

    The authors introduce two criteria for the evaluation and selection of assembly plans. The first criterion is to maximize the number of different sequences in which the assembly tasks can be executed. The second criterion is to minimize the total assembly time through simultaneous execution of assembly tasks. An algorithm that performs a heuristic search for the best assembly plan over the AND/OR graph representation of assembly plans is discussed. Admissible heuristics for each of the two criteria introduced are presented. Some implementation issues that affect the computational efficiency are addressed.

  19. Simulating the time-dependent Schrödinger equation with a quantum lattice-gas algorithm

    NASA Astrophysics Data System (ADS)

    Prezkuta, Zachary; Coffey, Mark

    2007-03-01

    Quantum computing algorithms promise remarkable improvements in speed or memory for certain applications. Currently, the Type II (or hybrid) quantum computer is the most feasible to build. This consists of a large number of small Type I (pure) quantum computers that compute with quantum logic, but communicate with nearest neighbors in a classical way. The arrangement thus formed is suitable for computations that execute a quantum lattice gas algorithm (QLGA). We report QLGA simulations for both the linear and nonlinear time-dependent Schrödinger equation. These evidence the stable, efficient, and at least second order convergent properties of the algorithm. The simulation capability provides a computational tool for applications in nonlinear optics, superconducting and superfluid materials, Bose-Einstein condensates, and elsewhere.

  20. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
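
    PSLQ implementations are now widely available; for instance, Python's mpmath library ships one. The sketch below recovers the relation 1 + phi - phi^2 = 0 for the golden ratio (our example, not one from the paper):

        from mpmath import mp, sqrt, pslq

        mp.dps = 50                      # working precision, in digits
        phi = (1 + sqrt(5)) / 2
        rel = pslq([1, phi, phi**2])     # integers a_i with sum(a_i * x_i) = 0
        print(rel)                       # e.g. [1, 1, -1]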

  1. Filter model based dwell time algorithm for ion beam figuring

    NASA Astrophysics Data System (ADS)

    Li, Yun; Xing, Tingwen; Jia, Xin; Wei, Haoming

    2010-10-01

    The process of Ion Beam Figuring (IBF) can be described by a two-dimensional convolution equation that includes the dwell time. Solving for the dwell time is a key problem in IBF. Theoretically, the dwell time can be obtained from a two-dimensional deconvolution. However, this deconvolution is often ill-posed, and a suitable solution is hard to obtain. In this article, a dwell time algorithm is proposed that exploits the characteristics of IBF. Usually, the Beam Removal Function (BRF) in IBF is Gaussian, which can be regarded as an inverted Gaussian filter. In its stop-band, the filter has different filtering abilities at different frequencies. The dwell time algorithm proposed in this article is based on this concept. The Curved Surface Smooth Extension (CSSE) method and the Fast Fourier Transform (FFT) algorithm are also used. The simulation results show that this algorithm is highly precise, effective, and suitable for practical application.
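
    A minimal numerical sketch of the underlying model (removal = BRF convolved with dwell time) and a regularized FFT inversion is shown below, assuming numpy; the regularization constant stands in for the frequency-dependent filtering the paper develops, and the CSSE extension step is omitted:

        import numpy as np

        def dwell_from_removal(removal, brf, eps=1e-3):
            # Wiener-style regularized deconvolution of
            # removal ~ brf (*) dwell; eps damps the ill-posed
            # high frequencies.
            R = np.fft.fft2(removal)
            B = np.fft.fft2(np.fft.ifftshift(brf), s=removal.shape)
            D = R * np.conj(B) / (np.abs(B) ** 2 + eps)
            return np.real(np.fft.ifft2(D))

        n = 128
        y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
        brf = np.exp(-(x**2 + y**2) / (2 * 8.0**2))   # Gaussian beam
        removal = np.zeros((n, n)); removal[40:90, 30:100] = 1.0
        dwell = dwell_from_removal(removal, brf)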

  2. Embedded algorithms within an FPGA-based system to process nonlinear time series data

    NASA Astrophysics Data System (ADS)

    Jones, Jonathan D.; Pei, Jin-Song; Tull, Monte P.

    2008-03-01

    This paper presents some preliminary results of an ongoing project. A pattern classification algorithm is being developed and embedded into a Field-Programmable Gate Array (FPGA) and microprocessor-based data processing core in this project. The goal is to enable and optimize the functionality of onboard data processing of nonlinear, nonstationary data for smart wireless sensing in structural health monitoring. Compared with traditional microprocessor-based systems, fast growing FPGA technology offers a more powerful, efficient, and flexible hardware platform including on-site (field-programmable) reconfiguration capability of hardware. An existing nonlinear identification algorithm is used as the baseline in this study. The implementation within a hardware-based system is presented in this paper, detailing the design requirements, validation, tradeoffs, optimization, and challenges in embedding this algorithm. An off-the-shelf high-level abstraction tool along with the Matlab/Simulink environment is utilized to program the FPGA, rather than coding the hardware description language (HDL) manually. The implementation is validated by comparing the simulation results with those from Matlab. In particular, the Hilbert Transform is embedded into the FPGA hardware and applied to the baseline algorithm as the centerpiece in processing nonlinear time histories and extracting instantaneous features of nonstationary dynamic data. The selection of proper numerical methods for the hardware execution of the selected identification algorithm and consideration of the fixed-point representation are elaborated. Other challenges include the issues of the timing in the hardware execution cycle of the design, resource consumption, approximation accuracy, and user flexibility of input data types limited by the simplicity of this preliminary design. Future work includes making an FPGA and microprocessor operate together to embed a further developed algorithm that yields better
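
    The Hilbert-transform step that the FPGA implements can be prototyped in a few lines of host-side Python with scipy; this is the kind of software reference the hardware results would be compared against, not the fixed-point FPGA design itself:

        import numpy as np
        from scipy.signal import hilbert

        fs = 1000.0
        t = np.arange(0, 2.0, 1 / fs)
        x = np.cos(2 * np.pi * (5 * t + 10 * t ** 2))   # chirp, 5 -> 45 Hz

        z = hilbert(x)                        # analytic signal
        inst_amp = np.abs(z)                  # instantaneous amplitude
        phase = np.unwrap(np.angle(z))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous Hz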

  3. Prediction of switching time between movement preparation and execution by neural activity in monkey premotor cortex.

    PubMed

    Li, Hongbao; Liao, Yuxi; Wang, Yiwen; Zhang, Qiaosheng; Zhang, Shaomin; Zheng, Xiaoxiang

    2015-01-01

    Premotor cortex sits above primary motor cortex in the movement-control hierarchy and contributes to both motor preparation and execution during planned movement. The mechanism mediating the transition from movement preparation to execution has attracted much scientific attention. The gateway hypothesis is one possible explanation: some neurons act as a "gate" that releases the movement intention at the go cue. We propose to utilize a local-learning based feature extraction method to target the neurons in premotor cortex that functionally contribute most to the discrimination between motor preparation and execution, without tuning information to either target or movement trajectory. A support vector machine is then utilized to predict the single-trial switching time. With the top three functional "gating" neurons, the prediction accuracy of the switching time is above 90%, which indicates the potential of asynchronous BMI control using premotor cortical activity. PMID:26736827

  4. Method for run time hardware code profiling for algorithm acceleration

    NASA Astrophysics Data System (ADS)

    Matev, Vladimir; de la Torre, Eduardo; Riesgo, Teresa

    2009-05-01

    In this paper we propose a method for run-time profiling of applications at the instruction level by analysis of loops. Instead of looking for coarse-grain blocks, we concentrate on fine-grain but still costly blocks in terms of execution time. Most code profiling is done in software by instrumenting the application under profile, which adds time overhead; in this work, data on the position of a loop, its body, its size, and its number of executions are stored and analysed using a small, non-intrusive hardware block. The paper describes the mapping of the system to run-time reconfigurable systems. Synthesis results for the fine-grain code detector block and verification of its functionality are also presented. To demonstrate the concept, the MediaBench multimedia benchmark running on the chosen development platform is used.

  5. Performance of recovery time improvement algorithms for software RAIDs

    SciTech Connect

    Riegel, J.; Menon, Jai

    1996-12-31

    A software RAID is a RAID implemented purely in software running on a host computer. One problem with software RAIDs is that they do not have access to special hardware such as NVRAM. Thus, software RAIDs may need to check every parity group of an array for consistency following a host crash or power failure. This process of checking parity groups is called recovery, and results in long delays when the software RAID is restarted. In this paper, we review two algorithms to reduce this recovery time for software RAIDs: the PGS Bitmap algorithm, which we proposed in earlier work, and the previously proposed List Algorithm. We compare the performance of these two algorithms using trace-driven simulations. Our results show that the PGS Bitmap Algorithm can reduce recovery time by a factor of 12 with a response time penalty of less than 1%, or by a factor of 50 with a response time penalty of less than 2% and a memory requirement of around 9 Kbytes. The List Algorithm can reduce recovery time by a factor of 50 but cannot achieve a response time penalty of less than 16%.

  6. A linear-time algorithm for reconstructing zero-recombinant haplotype configuration on a pedigree

    PubMed Central

    2012-01-01

    Background When studying genetic diseases in which genetic variations are passed on to offspring, the ability to distinguish between paternal and maternal alleles is essential. Determining haplotypes from genotype data is called haplotype inference. Most existing computational algorithms for haplotype inference have been designed to use genotype data collected from individuals in the form of a pedigree. A haplotype is regarded as a hereditary unit, and therefore input pedigrees are preferred that are free of mutational events and have a minimum number of genetic recombination events. These ideas motivated the zero-recombinant haplotype configuration (ZRHC) problem, which strictly follows the Mendelian law of inheritance, namely that one haplotype of each child is inherited from the father and the other haplotype is inherited from the mother, both without any mutation. So far no linear-time algorithm for ZRHC has been proposed for general pedigrees, even though the number of mating loops in a human pedigree is usually very small and can be regarded as constant. Results Given a pedigree with n individuals, m marker loci, and k mating loops, we propose an algorithm that provides a general solution to the zero-recombinant haplotype configuration problem in O(kmn + k^2 m) time. In addition, this algorithm can be modified to detect inconsistencies within the genotype data without loss of efficiency. The proposed algorithm was subjected to 12000 experiments to verify its performance using different (n, m) combinations. The value of k was uniformly distributed between zero and six throughout all experiments. The experimental results show strong linearity of execution time with respect to input size when both n and m are larger than 100. For those experiments where n or m is less than 100, the proposed algorithm runs very fast, in thousandths to hundredths of a second, on a personal desktop computer. Conclusions We have developed the first deterministic linear-time algorithm for the zero-recombinant haplotype configuration problem on general pedigrees.

  7. The Time Course Effect of Moderate Intensity Exercise on Response Execution and Response Inhibition

    ERIC Educational Resources Information Center

    Joyce, Jennifer; Graydon, Jan; McMorris, Terry; Davranche, Karen

    2009-01-01

    This research aimed to investigate the time course effect of a moderate steady-state exercise session on response execution and response inhibition using a stop-task paradigm. Ten participants performed a stop-signal task whilst cycling at a carefully controlled workload intensity (40% of maximal aerobic power), immediately following exercise and…

  8. A Scheduling Algorithm for Replicated Real-Time Tasks

    NASA Technical Reports Server (NTRS)

    Yu, Albert C.; Lin, Kwei-Jay

    1991-01-01

    We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under fault-tolerance requirements. Our approach incorporates both the redundancy-and-masking technique and the imprecise computation model. Since the tasks in hard real-time systems have stringent timing constraints, redundancy and masking are more appropriate than rollback techniques, which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task against the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.

  9. Impacts of Time Delays on Distributed Algorithms for Economic Dispatch

    SciTech Connect

    Yang, Tao; Wu, Di; Sun, Yannan; Lian, Jianming

    2015-07-26

    The economic dispatch problem (EDP) is an important problem in power systems. It can be formulated as an optimization problem whose objective is to minimize the total generation cost subject to the power balance constraint and generator capacity limits. Recently, several consensus-based algorithms have been proposed to solve the EDP in a distributed manner. However, the impacts of communication time delays on these distributed algorithms are not fully understood, especially for the case where the communication network is directed, i.e., the information exchange is unidirectional. This paper investigates communication time delay effects on a distributed algorithm for directed communication networks. The algorithm has been tested by applying time delays to different types of information exchange. Several case studies are carried out to evaluate the effectiveness and performance of the algorithm in the presence of time delays in communication networks. It is found that time delays negatively affect the convergence rate, and can even cause the algorithm to converge to an incorrect value or fail to converge at all.
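
    As a schematic of the algorithm class being studied (not the paper's exact update rule), the sketch below runs a consensus iteration on the incremental cost lambda for three generators with quadratic costs. For brevity the power-balance mismatch is computed globally here, whereas true distributed schemes estimate it locally, and no communication delays are modeled:

        import numpy as np

        # quadratic costs C_i(p) = a_i p^2 + b_i p; at the optimum all
        # units share one incremental cost: 2 a_i p_i + b_i = lambda
        a = np.array([0.10, 0.08, 0.12])
        b = np.array([2.0, 3.0, 2.5])
        demand = 150.0

        W = np.array([[0.50, 0.50, 0.00],     # row-stochastic mixing
                      [0.25, 0.50, 0.25],     # matrix of the network
                      [0.00, 0.50, 0.50]])
        lam, eps = np.zeros(3), 0.002
        for _ in range(5000):
            p = (lam - b) / (2 * a)           # local response to lambda
            mismatch = demand - p.sum()       # power-balance violation
            lam = W @ lam + eps * mismatch    # consensus + correction
        p = (lam - b) / (2 * a)
        print(lam, p, p.sum())                # lambda ~ equal, sum(p) ~ demand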

  10. Influence of timing algorithm on brachial-ankle pulse wave velocity measurement.

    PubMed

    Sun, Xin; Li, Ke; Ren, Hongwei; Li, Peng; Wang, Xinpei; Liu, Changchun

    2014-01-01

    The baPWV measurement is a non-invasive and convenient technique for assessing arterial stiffness. Despite its widespread application, the influence of different timing algorithms is still unclear. The present study was conducted to investigate the influence of six timing algorithms (MIN, MAX, D1, D2, MDP and INS) on the baPWV measurement and to evaluate their performance. Forty-five CAD patients and fifty-five healthy subjects were recruited in this study. A PVR acquisition apparatus was built for baPWV measurement. The baPWV and other related parameters were calculated separately by the six timing algorithms, and the influence and performance of the six algorithms were analyzed. The six timing algorithms generate significantly different baPWV values (left: F=29.036, P<0.001; right: F=40.076, P<0.001). In terms of reproducibility, MAX has a significantly higher CV value (≥ 18.6%) than the other methods, while INS has the lowest CV value (≤ 2.7%). In terms of classification performance, INS produces the highest AUC values (left: 0.854; right: 0.872); MIN and D2 also perform acceptably (AUC > 0.8). The choice of timing algorithm affects baPWV values and the quality of measurement. The INS method is recommended for baPWV measurement. PMID:24211905

  11. The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map

    NASA Astrophysics Data System (ADS)

    Rosselot, Donald; Hall, Ernest L.

    2005-10-01

    This paper presents a novel, simple and fast algorithm to produce a "floor plan" obstacle map in real time using video. The XH-map algorithm is a transformation of stereo vision data in disparity map space into a two-dimensional obstacle map space, using a method that can be likened to a histogram reduction of image information. The classic floor-ground background noise problem is addressed with a simple one-time semi-automatic calibration method incorporated into the algorithm. The implementation of this algorithm utilizes the Intel Performance Primitives library and OpenCV libraries for extremely fast and efficient execution, creating a scaled obstacle map from a 480x640x256 stereo pair in 1.4 milliseconds. This algorithm has many applications in robotics and computer vision, including enabling an intelligent robot to "see" for path planning and obstacle avoidance.
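
    The histogram-reduction idea can be sketched in a few lines: collapse a disparity image along image rows into a disparity-by-column occupancy grid. A hypothetical numpy rendering (the calibration and floor-removal steps from the paper are omitted, and the real system uses optimized Intel primitives rather than numpy):

        import numpy as np

        def xh_style_map(disparity, d_levels=64):
            # Count, for every image column, how many pixels take each
            # disparity value: row = disparity (inverse range),
            # column = bearing.
            h, w = disparity.shape
            omap = np.zeros((d_levels, w), dtype=np.int32)
            cols = np.broadcast_to(np.arange(w), (h, w))
            np.add.at(omap, (np.clip(disparity, 0, d_levels - 1), cols), 1)
            return omap

        disp = np.random.randint(0, 64, size=(480, 640))
        m = xh_style_map(disp)    # 64 x 640 "floor plan" style map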

  12. IMPROVEMENTS TO THE TIME STEPPING ALGORITHM OF RELAP5-3D

    SciTech Connect

    Cumberland, R.; Mesina, G.

    2009-01-01

    The RELAP5-3D time step method is used to perform thermo-hydraulic and neutronic simulations of nuclear reactors and other devices. It discretizes time and space by numerically solving several differential equations. Previously, time step size was controlled by halving or doubling the size of a previous time step. This process caused the code to run slower than it potentially could. In this research project, the RELAP5-3D time step method was modified to allow a new method of changing time steps, to improve execution speed and to control error. The new RELAP5-3D time step method being studied involves making the time step proportional to the material Courant limit (MCL), while ensuring that the time step does not increase by more than a factor of two between advancements. As before, if a step fails or mass error is excessive, the time step is cut in half. To examine performance of the new method, a measure of run time and a measure of error were plotted against a changing MCL proportionality constant (m) in seven test cases. The removal of the upper time step limit produced a small increase in error, but a large decrease in execution time. The best value of m was found to be 0.9. The new algorithm is capable of producing a significant increase in execution speed, with a relatively small increase in mass error. The improvements made are now under consideration for inclusion as a special option in the RELAP5-3D production code.
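
    The modified controller reduces to a few lines of logic. A sketch of our reading of the description above (constants illustrative; the production code has many more safeguards):

        def next_time_step(dt_prev, mcl, m=0.9, dt_min=1e-8, step_ok=True):
            # dt proportional to the material Courant limit, never more
            # than doubling per advancement; halve after a failed step
            # or excessive mass error.
            if not step_ok:
                return max(dt_prev / 2.0, dt_min)
            return max(min(m * mcl, 2.0 * dt_prev), dt_min)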

  13. Algorithmic properties of the midpoint predictor-corrector time integrator.

    SciTech Connect

    Rider, William J.; Love, Edward; Scovazzi, Guglielmo

    2009-03-01

    Algorithmic properties of the midpoint predictor-corrector time integration algorithm are examined. In the case of a finite number of iterations, the errors in angular momentum conservation and incremental objectivity are controlled by the number of iterations performed. Exact angular momentum conservation and exact incremental objectivity are achieved in the limit of an infinite number of iterations. A complete stability and dispersion analysis of the linearized algorithm is detailed. The main observation is that stability depends critically on the number of iterations performed.
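
    For orientation, a midpoint predictor-corrector step for a generic ODE dx/dt = f(x) looks as follows; the abstract's point is that the corrector iteration count controls the conservation errors, with the implicit midpoint rule recovered in the limit. This is a sketch assuming numpy, not the paper's solid-mechanics formulation:

        import numpy as np

        def midpoint_pc_step(f, x, dt, iterations=3):
            # Predict with forward Euler, then correct by re-evaluating
            # f at the midpoint of the old state and the current iterate.
            x_new = x + dt * f(x)
            for _ in range(iterations):
                x_new = x + dt * f(0.5 * (x + x_new))
            return x_new

        # harmonic oscillator: x = (q, p), dq/dt = p, dp/dt = -q
        f = lambda x: np.array([x[1], -x[0]])
        x = np.array([1.0, 0.0])
        for _ in range(1000):
            x = midpoint_pc_step(f, x, 0.01)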

  14. Reducing the Time Requirement of k-Means Algorithm

    PubMed Central

    Osamor, Victor Chukwudi; Adebiyi, Ezekiel Femi; Oyelade, Jelilli Olarenwaju; Doumbia, Seydou

    2012-01-01

    Traditional k-means and most k-means variants are still computationally expensive for large datasets, such as microarray data, which combine many data points with a large dimension d. In k-means clustering, we are given a set of n data points in d-dimensional space R^d and an integer k. The problem is to determine a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its nearest center. In this work, we develop a novel k-means algorithm, which is simple but more efficient than the traditional k-means and the recent enhanced k-means. Our new algorithm is based on the recently established relationship between principal component analysis and k-means clustering. We provide a correctness proof for this algorithm. Results obtained from testing the algorithm on three biological datasets and six non-biological datasets (three of these real, the other three simulated) also indicate that our algorithm is empirically faster than other known k-means algorithms. We assessed the quality of our algorithm's clusters against clusters of known structure using the Hubert-Arabie Adjusted Rand index (ARIHA). We found that when k is close to d, the quality is good (ARIHA > 0.8), and when k is not close to d, the quality of our new k-means algorithm is excellent (ARIHA > 0.9). In this paper, the emphasis is on reducing the time requirement of the k-means algorithm and on its application to microarray data, motivated by the desire to create a tool for clustering and malaria research. However, the new clustering algorithm can be used for other clustering needs as long as an appropriate measure of distance between the centroids and the members is used. This has been demonstrated in this work on six non-biological datasets. PMID:23239974
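
    The PCA/k-means connection the authors exploit is commonly used in the simpler form sketched below: reduce with PCA first, then cluster in the subspace. This assumes scikit-learn and is only an illustration of the relationship, not the authors' exact algorithm:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 200))   # stand-in for microarray data

        Z = PCA(n_components=10).fit_transform(X)   # project first
        labels = KMeans(n_clusters=5, n_init=10,
                        random_state=0).fit_predict(Z)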

  15. Implementation of an Ada real-time executive: A case study

    NASA Technical Reports Server (NTRS)

    Laird, James D.; Burton, Bruce A.; Koppes, Mary R.

    1986-01-01

    Current Ada language implementations and runtime environments are immature and unproven, and are a key risk area for real-time embedded computer systems (ECS). A test-case environment is provided in which the concerns of the real-time ECS community are addressed. A priority-driven executive is selected to be implemented in the Ada programming language. The model selected is representative of real-time executives tailored for embedded systems used in missile, spacecraft, and avionics applications. An Ada-based design methodology is utilized, and two designs are considered. The first of these designs requires the use of vendor-supplied runtime and tasking support. An alternative high-level design is also considered for an implementation requiring no vendor-supplied runtime or tasking support. The former approach is carried through to implementation.

  16. Discrete-time minimal control synthesis adaptive algorithm

    NASA Astrophysics Data System (ADS)

    di Bernardo, M.; di Gennaro, F.; Olm, J. M.; Santini, S.

    2010-12-01

    This article proposes a discrete-time Minimal Control Synthesis (MCS) algorithm for a class of single-input single-output discrete-time systems written in controllable canonical form. As it happens with the continuous-time MCS strategy, the algorithm arises from the family of hyperstability-based discrete-time model reference adaptive controllers introduced in (Landau, Y. (1979), Adaptive Control: The Model Reference Approach, New York: Marcel Dekker, Inc.) and is able to ensure tracking of the states of a given reference model with minimal knowledge about the plant. The control design shows robustness to parameter uncertainties, slow parameter variation and matched disturbances. Furthermore, it is proved that the proposed discrete-time MCS algorithm can be used to control discretised continuous-time plants with the same performance features. Contrary to previous discrete-time implementations of the continuous-time MCS algorithm, here a formal proof of asymptotic stability is given for generic n-dimensional plants in controllable canonical form. The theoretical approach is validated by means of simulation results.

  17. Linear-time algorithms for scheduling on parallel processors

    SciTech Connect

    Monma, C.L.

    1982-01-01

    Linear-time algorithms are presented for several problems of scheduling n equal-length tasks on m identical parallel processors subject to precedence constraints. This improves upon previous time bounds for the maximum lateness problem with treelike precedence constraints, the number-of-late-tasks problem without precedence constraints, and the one machine maximum lateness problem with general precedence constraints. 5 references.

  18. ETD: an extended time delay algorithm for ventricular fibrillation detection.

    PubMed

    Kim, Jungyoon; Chu, Chao-Hsien

    2014-01-01

    Ventricular fibrillation (VF) is the most serious type of heart attack, requiring quick detection and first aid to improve patients' survival rates. To be most effective in using wearable devices for VF detection, it is vital that the detection algorithms be accurate, robust, reliable and computationally efficient. Previous studies and our experiments both indicate that the time-delay (TD) algorithm has high reliability for separating sinus rhythm (SR) from VF and is resistant to variable factors, such as window size and filtering method. However, it fails to detect some VF cases. In this paper, we propose an extended time-delay (ETD) algorithm for VF detection and conduct experiments comparing the performance of ETD against five good VF detection algorithms, including TD, using the popular Creighton University (CU) database. Our study shows that (1) TD and ETD outperform the other four algorithms considered and (2) with the same sensitivity setting, ETD improves upon TD in three other quality measures by up to 7.64%, and in terms of aggregate accuracy the ETD algorithm shows an improvement of 2.6% in the area under curve (AUC) compared to TD. PMID:25571480

  19. Algorithm Implementation for a Prototype Time-Encoded Signature Detector

    SciTech Connect

    Mercier, Theresa M.; Runkle, Robert C.; Stephens, Daniel L.; Hyronimus, Brian J.; Morris, Scott J.; Seifert, Allen; Wyatt, Cory R.

    2007-12-31

    The authors constructed a prototype Time-Encoded Signature (TES) system, complete with automated detection algorithms, useful for the detection of point-like, gamma-ray sources in search applications where detectors observe large variability in background count rates beyond statistical (Poisson) noise. The person-carried TES instrument consists of two Cesium Iodide scintillators placed on opposite sides of a Tungsten shield. This geometry mitigates systematic background variation, and induces a unique signature upon encountering point-like sources. This manuscript focuses on the development of detection algorithms that both identify point-source signatures and are computationally simple. The latter constraint derives from the instrument’s mobile (and thus low power) operation. The authors evaluated algorithms on both simulated and field data. The results of this analysis demonstrate the ability to detect sources at a wide range of source-detector distances using computationally simple algorithms.

  20. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    SciTech Connect

    Ha, Taeyoung . E-mail: tyha@math.snu.ac.kr; Shin, Changsoo . E-mail: css@model.snu.ac.kr

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the middle 1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equation, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  1. Supporting Real-Time Operations and Execution through Timeline and Scheduling Aids

    NASA Technical Reports Server (NTRS)

    Marquez, Jessica J.; Pyrzak, Guy; Hashemi, Sam; Ahmed, Samia; McMillin, Kevin Edward; Medwid, Joseph Daniel; Chen, Diana; Hurtle, Esten

    2013-01-01

    Since 2003, the NASA Ames Research Center has been actively involved in researching and advancing the state of the art of planning and scheduling tools for NASA mission operations. Our planning toolkit SPIFe (Scheduling and Planning Interface for Exploration) has supported a variety of missions and field tests, scheduling activities for Mars rovers as well as crew on board the International Space Station and NASA earth analogs. The scheduled plan is the integration of all the activities for the day or days. In turn, the agents (rovers, landers, spaceships, crew) execute from this schedule, while the mission support team members (e.g., flight controllers) follow the schedule during execution. Over the last couple of years, our team has begun to research and validate methods that will better support users during real-time operations and execution of scheduled activities. Our team utilizes human-computer interaction principles to research user needs, identify workflow processes, prototype software aids, and user-test them. This paper discusses three specific prototypes developed and user-tested to support real-time operations: Score Mobile, Playbook, and Mobile Assistant for Task Execution (MATE).

  2. Time-reversible molecular dynamics algorithms with bond constraints

    NASA Astrophysics Data System (ADS)

    Toxvaerd, Søren; Heilmann, Ole J.; Ingebrigtsen, Trond; Schrøder, Thomas B.; Dyre, Jeppe C.

    2009-08-01

    Time-reversible molecular dynamics algorithms with bond constraints are derived. The algorithms are stable with and without a thermostat and in double precision as well as in single-precision arithmetic. Time reversibility is achieved by applying a central-difference expression for the velocities in the expression for Gauss' principle of least constraint. The imposed time symmetry results in a quadratic expression for the Lagrange multiplier. For a system of complex molecules with connected constraints the corresponding set of coupled quadratic equations is easily solved by a consecutive iteration scheme. The algorithms were tested on two models. One is a dumbbell model of Toluene, the other system consists of molecules with four connected constraints forming a triangle and a branch point of constraints. The equilibrium particle distributions and the mean-square particle displacements for the dumbbell model were compared to the corresponding functions obtained by GROMACS. The agreement is perfect within statistical error.

  3. Linear-scaling source-sink algorithm for simulating time-resolved quantum transport and superconductivity

    NASA Astrophysics Data System (ADS)

    Weston, Joseph; Waintal, Xavier

    2016-04-01

    We report on a "source-sink" algorithm which allows one to calculate time-resolved physical quantities from a general nanoelectronic quantum system (described by an arbitrary time-dependent quadratic Hamiltonian) connected to infinite electrodes. Although mathematically equivalent to the nonequilibrium Green's function formalism, the approach is based on the scattering wave functions of the system. It amounts to solving a set of generalized Schrödinger equations that include an additional "source" term (coming from the time-dependent perturbation) and an absorbing "sink" term (the electrodes). The algorithm execution time scales linearly with both system size and simulation time, allowing one to simulate large systems (currently around 10^6 degrees of freedom) and/or large times (currently around 10^5 times the smallest time scale of the system). As an application we calculate the current-voltage characteristics of a Josephson junction for both short and long junctions, and recover the multiple Andreev reflection physics. We also discuss two intrinsically time-dependent situations: the relaxation time of a Josephson junction after a quench of the voltage bias, and the propagation of voltage pulses through a Josephson junction. In the case of a ballistic, long Josephson junction, we predict that a fast voltage pulse creates an oscillatory current whose frequency is controlled by the Thouless energy of the normal part. A similar effect is found for short junctions; a voltage pulse produces an oscillating current which, in the absence of electromagnetic environment, does not relax.

  4. Time scale algorithms for an inhomogeneous group of atomic clocks

    NASA Technical Reports Server (NTRS)

    Jacques, C.; Boulanger, J.-S.; Douglas, R. J.; Morris, D.; Cundy, S.; Lam, H. F.

    1993-01-01

    Through the past 17 years, the time scale requirements at the National Research Council (NRC) have been met by the unsteered output of its primary laboratory cesium clocks, supplemented by hydrogen masers when short-term stability better than 2 × 10^-12 τ^(-1/2) has been required. NRC now operates three primary laboratory cesium clocks, three hydrogen masers, and two commercial cesium clocks. NRC has been using ensemble averages for internal purposes for the past several years, and has a real-time algorithm operating on the outputs of its high-resolution (2 × 10^-13 s at 1 s) phase comparators. The slow frequency drift of the hydrogen masers has presented difficulties in incorporating their short-term stability into the ensemble average, while retaining the long-term stability of the laboratory cesium frequency standards. We report on our work on algorithms for an inhomogeneous ensemble of atomic clocks, and on our initial work on time scale algorithms that could incorporate frequency calibrations at NRC from the next generation of Zacharias fountain cesium frequency standards having frequency accuracies that might surpass 10^-15, or from single-trapped-ion frequency standards (Ba+, Sr+, ...) with even higher potential accuracies. The requirements for redundancy in all the elements (including the algorithms) of an inhomogeneous ensemble that would give a robust real-time output of the algorithms are presented and discussed.

  5. Effects of sleep inertia after daytime naps vary with executive load and time of day.

    PubMed

    Groeger, John A; Lo, June C Y; Burns, Christopher G; Dijk, Derk-Jan

    2011-04-01

    The effects of executive load on working memory performance during sleep inertia after morning or afternoon naps were assessed using a mixed design with nap/wake as a between-subjects factor and morning/afternoon condition as a within-subject factor. Thirty-two healthy adults (mean 22.5 ± 3.0 years) attended two laboratory sessions after a night of restricted sleep (6 hrs), and at first visit, were randomly assigned to the Nap or Wake group. Working memory (n-back) and subjective workload were assessed approximately 5 and 25 minutes after 90-minute morning and afternoon nap opportunities and at the corresponding times in the Wake condition. Actigraphically assessed nocturnal sleep duration, subjective sleepiness, and psychomotor vigilance performance before daytime assessments did not vary across conditions. Afternoon naps showed shorter EEG assessed sleep latencies, longer sleep duration, and more Slow Wave Sleep than morning naps. Working memory performance deteriorated, and subjective mental workload increased at higher executive loadings. After afternoon naps, participants performed less well on more executive-function intensive working memory tasks (i.e., 3-back), but waking and napping participants performed equally well on simpler tasks. After some 30 minutes of cognitive activity, there were no longer performance differences between the waking and napping groups. Subjective Task Difficulty and Mental Effort requirements were less affected by sleep inertia and dissociated from objective measures when participants had napped in the afternoon. We conclude that executive functions take longer to return to asymptotic performance after sleep than does performance of simpler tasks which are less reliant on executive functions. PMID:21463024

  6. Time-based prospective memory in young children-Exploring executive functions as a developmental mechanism.

    PubMed

    Kretschmer, Anett; Voigt, Babett; Friedrich, Sylva; Pfeiffer, Kathrin; Kliegel, Matthias

    2014-01-01

    The present study investigated time-based prospective memory (PM) during the transition from kindergarten/preschool to school age and applied mediation models to test the impact of executive functions (working memory, inhibitory control) and time monitoring on time-based PM development. Twenty-five preschool children (age: M = 5.75, SD = 0.28) and 22 primary school children (age: M = 7.83, SD = 0.39) participated. To examine time-based PM, children played a computer-based driving game requiring them to drive a car on a road without hitting other cars (ongoing task) and to refill the car regularly according to a fuel gauge, which served as a clock equivalent (PM task). The level of gas left in the fuel gauge was not displayed on the screen, and children had to monitor it via a button press (time monitoring). Results revealed a developmental increase in time-based PM performance from preschool to school age. In the mediation models, only working memory was found to influence PM development. Neither inhibitory control alone nor the mediation paths leading from both executive functions to time monitoring could explain the link between age and time-based PM. Thus, the results of the present study suggest that working memory may be one key cognitive process driving the developmental growth of time-based PM during the transition from preschool to school age. PMID:24111941

  7. A Quantum Algorithm for Estimating Hitting Times of Markov Chains

    NASA Astrophysics Data System (ADS)

    Narayan Chowdhury, Anirban; Somma, Rolando

    We present a quantum algorithm to estimate the hitting time of a reversible Markov chain faster than is classically possible. To this end, we show that the hitting time is given by an expected value of the inverse of a Hermitian matrix. To obtain this expected value, our algorithm combines three important techniques developed in the literature. One such technique is called spectral gap amplification, and we use it to amplify the gap of the Hermitian matrix or reduce its condition number. We then use a new algorithm by Childs, Kothari, and Somma to implement the inverse of a matrix, and finally use methods developed in the context of quantum metrology to reduce the complexity of expected-value estimation for a given precision. The authors acknowledge support from AFOSR Grant Number FA9550-12-1-0057 and the Google Research Award.

  8. Explicit Time-Scale Splitting Algorithm for Stiff Problems: Auto-ignition of Gaseous Mixtures behind a Steady Shock

    NASA Astrophysics Data System (ADS)

    Valorani, Mauro; Goussis, Dimitrios A.

    2001-05-01

    A new explicit algorithm based on the computational singular perturbation (CSP) method is presented. This algorithm is specifically designed to solve stiff problems, and its performance increases with stiffness. The key concept in its structure is the splitting of the fast from the slow time scales in the problem, realized by embedding CSP concepts into an explicit scheme. In simple terms, the algorithm marches in time with only the terms producing the slow time scales, while the contribution of the terms producing the fast time scales is taken into account at the end of each integration step as a correction. The new algorithm is designed for the integration of stiff systems of PDEs by means of explicit schemes. For simplicity in the presentation and discussion of the different features of the new algorithm, a simple test case is considered, involving the auto-ignition of a methane/air mixture behind a normal shock wave, which is described by a system of ODEs. The performance of the new algorithm (accuracy and computational efficiency) is then compared with the well-known LSODE package. Its merits when used for the solution of systems of PDEs are discussed. Although when dealing with a stiff system of ODEs the new algorithm is shown to provide equal accuracy with that delivered by LSODE at the cost of higher execution time, the results indicate that its performance could be superior when facing a stiff system of PDEs.

  9. Lidar detection algorithm for time and range anomalies

    NASA Astrophysics Data System (ADS)

    Ben-David, Avishai; Davidson, Charles E.; Vanderbeek, Richard G.

    2007-10-01

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t1 to t2" is addressed, and for range anomaly where the question "is a target present at time t within ranges R1 and R2" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO2 lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.

  10. Lidar detection algorithm for time and range anomalies.

    PubMed

    Ben-David, Avishai; Davidson, Charles E; Vanderbeek, Richard G

    2007-10-10

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t1 to t2" is addressed, and for range anomaly where the question "is a target present at time t within ranges R1 and R2" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO2 lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed. PMID:17932542

  11. A Real-Time Rover Executive based On Model-Based Reactive Planning

    NASA Technical Reports Server (NTRS)

    Dias, M. Bernardine; Lemai, Solange; Muscettola, Nicola; Korsmeyer, David (Technical Monitor)

    2003-01-01

    This paper reports on the experimental verification of the ability of IDEA (Intelligent Distributed Execution Architecture) to operate effectively at multiple levels of abstraction in an autonomous control system. The basic hypothesis of IDEA is that a large control system can be structured as a collection of interacting control agents, each organized around the same fundamental structure. Two IDEA agents, a system-level agent and a mission-level agent, are designed and implemented to autonomously control the K9 rover in real time. The system is evaluated in the scenario where the rover must acquire images from a specified set of locations. The IDEA agents are responsible for enabling the rover to achieve its goals while monitoring the execution and safety of the rover and recovering from dangerous states when necessary. Experiments carried out both in simulation and on the physical rover produced highly promising results.

  12. Timescape: a simple space-time interpolation geostatistical Algorithm

    NASA Astrophysics Data System (ADS)

    Ciolfi, Marco; Chiocchini, Francesca; Gravichkova, Olga; Pisanelli, Andrea; Portarena, Silvia; Scartazza, Andrea; Brugnoli, Enrico; Lauteri, Marco

    2016-04-01

    Environmental sciences include both time and space variability in their datasets. Some established tools exist for both spatial interpolation and time series analysis alone, but mixing space and time variability calls for compromise: researchers are often forced to choose which is the main source of variation, neglecting the other. We propose a simple algorithm, which can be used in many fields of Earth and environmental sciences when both time and space variability must be considered on equal grounds. The algorithm has already been implemented in the Java language and the software is currently available at https://sourceforge.net/projects/timescapeglobal/ (it is published under the GNU-GPL v3.0 Free Software License). The published version of the software, Timescape Global, is focused on continent- to Earth-wide spatial domains, using global longitude-latitude coordinates for sample localization. The companion Timescape Local software is currently under development and will be published with an open license as well; it will use projected coordinates for a local to regional space scale. The basic idea of the Timescape Algorithm consists of converting time into a sort of third spatial dimension, with the addition of some causal constraints, which drive the interpolation, including or excluding observations according to some user-defined rules. The algorithm is applicable, as a matter of principle, to anything that can be represented with a continuous variable (a scalar field, technically speaking). The input dataset should contain the position, time and observed value of all samples. Ancillary data can be included in the interpolation as well. After the time-space conversion, Timescape basically follows the old-fashioned IDW (Inverse Distance Weighted) interpolation algorithm, although users have a wide choice of customization options that, at least partially, overcome some of the known issues of IDW. The three-dimensional model produced by the Timescape Algorithm can be
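
    The core idea lends itself to a compact sketch: scale time into a third pseudo-spatial coordinate, optionally apply a causal filter, then interpolate by inverse distance weighting. The function below is a minimal illustration under those assumptions; the scale factor, power, and names are invented, and the published software offers far more customization.

      import numpy as np

      def timescape_idw(query_xyt, obs_xyt, obs_val, time_scale=1.0, power=2.0,
                        causal=True):
          obs = np.asarray(obs_xyt, dtype=float).copy()
          vals = np.asarray(obs_val, dtype=float)
          q = np.asarray(query_xyt, dtype=float)
          if causal:                        # keep only observations not later than t
              keep = obs[:, 2] <= q[2]
              obs, vals = obs[keep], vals[keep]
          obs[:, 2] *= time_scale           # time becomes a third spatial dimension
          q = q * np.array([1.0, 1.0, time_scale])
          d = np.linalg.norm(obs - q, axis=1)
          if np.any(d == 0.0):              # exact hit: return the observed value
              return float(vals[d == 0.0][0])
          w = 1.0 / d**power
          return float(np.sum(w * vals) / np.sum(w))

      obs = [[0, 0, 0], [10, 0, 1], [0, 10, 2], [10, 10, 3]]   # x(km), y(km), t(days)
      vals = [1.0, 2.0, 3.0, 4.0]
      print(timescape_idw([5, 5, 2.5], obs, vals, time_scale=5.0))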

  13. Efficient photoheating algorithms in time-dependent photoionization simulations

    NASA Astrophysics Data System (ADS)

    Lee, Kai-Yan; Mellema, Garrelt; Lundqvist, Peter

    2016-02-01

    We present an extension to the time-dependent photoionization code C2-RAY to calculate photoheating in an efficient and accurate way. In C2-RAY, the thermal calculation demands relatively small time-steps for accurate results. We describe two novel methods to reduce the computational cost associated with small time-steps, namely, an adaptive time-step algorithm and an asynchronous evolution approach. The adaptive time-step algorithm determines an optimal time-step for the next computational step. It uses a fast ray-tracing scheme to quickly locate the cells relevant for this determination and only uses these cells in the calculation of the time-step. Asynchronous evolution allows different cells to evolve with different time-steps. The asynchronized clocks of the cells are synchronized at the times at which outputs are produced. By evolving only the cells which may require short time-steps with these short time-steps, instead of imposing them on the whole grid, the computational cost of the calculation can be substantially reduced. We show that our methods work well for several cosmologically relevant test problems and validate our results by comparing to the results of another time-dependent photoionization code.

  14. Reducing the time requirement of k-means algorithm.

    PubMed

    Osamor, Victor Chukwudi; Adebiyi, Ezekiel Femi; Oyelade, Jelilli Olarenwaju; Doumbia, Seydou

    2012-01-01

    Traditional k-means and most k-means variants are still computationally expensive for large datasets, such as microarray data, which have a large dimension size d. In k-means clustering, we are given a set of n data points in d-dimensional space R^d and an integer k. The problem is to determine a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its nearest center. In this work, we develop a novel k-means algorithm, which is simple but more efficient than the traditional k-means and the recent enhanced k-means. Our new algorithm is based on the recently established relationship between principal component analysis and k-means clustering. We provide a correctness proof for this algorithm. Results obtained from testing the algorithm on three biological datasets and six non-biological datasets (three of which are real, while the other three are simulated) also indicate that our algorithm is empirically faster than other known k-means algorithms. We assessed the quality of our algorithm's clusters against the clusters of a known structure using the Hubert-Arabie Adjusted Rand index (ARI_HA). We found that when k is close to d, the quality is good (ARI_HA > 0.8) and when k is not close to d, the quality of our new k-means algorithm is excellent (ARI_HA > 0.9). In this paper, the emphasis is on the reduction of the time requirement of the k-means algorithm and its application to microarray data, driven by the desire to create a tool for clustering and malaria research. However, the new clustering algorithm can be used for other clustering needs as long as an appropriate measure of distance between the centroids and the members is used. This has been demonstrated in this work on six non-biological datasets. PMID:23239974
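
    A hedged sketch of the underlying PCA/k-means relationship (not the authors' exact algorithm): a known relaxation result relates the top k-1 principal components to the continuous solution of the k-means membership problem, so one can project onto those components and cluster in the reduced space. Data and parameters below are synthetic.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      k, d = 3, 50
      X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, d))
                     for c in (0.0, 3.0, 6.0)])        # three synthetic clusters

      Z = PCA(n_components=k - 1).fit_transform(X)     # reduce d -> k-1 dimensions
      labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(Z)
      print(np.bincount(labels))                       # roughly 100 points each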

  15. Fully efficient time-parallelized quantum optimal control algorithm

    NASA Astrophysics Data System (ADS)

    Riahi, M. K.; Salomon, J.; Glaser, S. J.; Sugny, D.

    2016-04-01

    We present a time-parallelization method that enables one to accelerate the computation of quantum optimal control algorithms. We show that this approach is approximately fully efficient when based on a gradient method as optimization solver: the computational time is approximately divided by the number of available processors. The control of spin systems, molecular orientation, and Bose-Einstein condensates are used as illustrative examples to highlight the wide range of applications of this numerical scheme.

  16. An exponential time 2-approximation algorithm for bandwidth

    SciTech Connect

    Kasiviswanathan, Shiva; Furer, Martin; Gaspers, Serge

    2009-01-01

    The bandwidth of a graph G on n vertices is the minimum b such that the vertices of G can be labeled from 1 to n such that the labels of every pair of adjacent vertices differ by at most b. In this paper, we present a 2-approximation algorithm for the Bandwidth problem that takes worst-case O(1.9797^n) = O(3^(0.6217n)) time and uses polynomial space. This improves both the previous best 2- and 3-approximation algorithms of Cygan et al., which have O*(3^n) and O*(2^n) worst-case time bounds, respectively. Our algorithm is based on constructing bucket decompositions of the input graph. A bucket decomposition partitions the vertex set of a graph into ordered sets (called buckets) of (almost) equal sizes such that all edges are either incident on vertices in the same bucket or on vertices in two consecutive buckets. The idea is to find the smallest bucket size for which there exists a bucket decomposition. The algorithm uses a simple divide-and-conquer strategy along with dynamic programming to achieve this improved time bound.
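
    To make the bucket-decomposition notion concrete, the sketch below verifies that an assignment of vertices to ordered buckets is valid (every edge inside one bucket or across consecutive buckets) and shows that labeling bucket by bucket yields bandwidth at most 2b - 1 for bucket size b. It illustrates the definition only, not the paper's divide-and-conquer and dynamic-programming algorithm.

      def is_bucket_decomposition(edges, bucket_of):
          # Valid iff every edge stays inside one bucket or crosses consecutive ones.
          return all(abs(bucket_of[u] - bucket_of[v]) <= 1 for u, v in edges)

      def labeling_from_buckets(vertices, bucket_of):
          order = sorted(vertices, key=lambda v: bucket_of[v])   # bucket by bucket
          return {v: i + 1 for i, v in enumerate(order)}

      edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]   # a 6-cycle
      bucket_of = {0: 0, 1: 0, 2: 1, 3: 2, 4: 2, 5: 1}           # bucket size b = 2
      assert is_bucket_decomposition(edges, bucket_of)
      lab = labeling_from_buckets(range(6), bucket_of)
      print(max(abs(lab[u] - lab[v]) for u, v in edges))         # 3 = 2b - 1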

  17. A post-processing algorithm for time domain pitch trackers

    NASA Astrophysics Data System (ADS)

    Specker, P.

    1983-01-01

    This paper describes a powerful post-processing algorithm for time-domain pitch trackers. On two successive passes, the post-processing algorithm eliminates errors produced during a first pass by a time-domain pitch tracker. During the second pass, incorrect pitch values are detected as outliers by computing the distribution of values over a sliding 80 msec window. During the third pass (based on artificial intelligence techniques), remaining pitch pulses are used as anchor points to reconstruct the pitch train from the original waveform. The algorithm produced a decrease in the error rate from 21% obtained with the original time domain pitch tracker to 2% for isolated words and sentences produced in an office environment by 3 male and 3 female talkers. In a noisy computer room errors decreased from 52% to 2.9% for the same stimuli produced by 2 male talkers. The algorithm is efficient, accurate, and resistant to noise. The fundamental frequency micro-structure is tracked sufficiently well to be used in extracting phonetic features in a feature-based recognition system.
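
    The second-pass outlier test admits a simple sketch: inside a sliding 80 msec window, flag pitch values that sit far from the local distribution. The robust z-score test, frame rate, and threshold below are illustrative assumptions, not the paper's exact procedure.

      import numpy as np

      def flag_pitch_outliers(pitch_hz, frame_ms=10.0, window_ms=80.0, thresh=3.0):
          half = max(1, int(window_ms / frame_ms) // 2)
          flags = np.zeros(len(pitch_hz), dtype=bool)
          for i in range(len(pitch_hz)):
              win = pitch_hz[max(0, i - half): i + half + 1]
              med = np.median(win)
              mad = np.median(np.abs(win - med)) + 1e-9      # robust local spread
              flags[i] = abs(pitch_hz[i] - med) > thresh * 1.4826 * mad
          return flags

      f0 = np.full(50, 120.0) + np.random.default_rng(2).normal(0.0, 2.0, 50)
      f0[[10, 30]] = [240.0, 60.0]     # typical doubling/halving tracker errors
      print(np.flatnonzero(flag_pitch_outliers(f0)))   # flags the two injected errors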

  18. Time-series pattern recognition with an immune algorithm

    NASA Astrophysics Data System (ADS)

    Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.

    2015-11-01

    In this paper, changes in the sequence patterns describing damage-sensitive features of an object which undergoes a failure mode are recognized using an immune algorithm. A change in frequency response is an effect of the occurrence of various failure modes. The objective of this paper is to present an immune algorithm for pattern recognition which can discover dependencies between a failure mode and its effect, a frequency response change. Changes in the effect are described with noise due to the fact that the object operates in external conditions. In the immune algorithm, antibodies encode various changes in the effect after a given mode has occurred a number of times. A pathogen encodes a noisy effect of the mode occurrence. Antibodies belonging to a given neighbourhood represent effects after the occurrence of a given type of failure mode. Antibodies from the neighbourhood undergo clonal selection and an affinity maturation process. The type of failure mode is identified from the best-matched antibody.

  19. An algorithm for the Italian atomic time scale

    NASA Technical Reports Server (NTRS)

    Cordara, F.; Vizio, G.; Tavella, P.; Pettiti, V.

    1994-01-01

    During the past twenty years, the time scale at the IEN has been realized by a commercial cesium clock, selected from an ensemble of five, whose rate has been continuously steered towards UTC to maintain a long-term agreement within 3 × 10^-13. A time scale algorithm, suitable for a small clock ensemble and capable of improving the medium- and long-term stability of the IEN time scale, has recently been designed, with care taken to reduce the effects of the seasonal variations and the sudden frequency anomalies of the single cesium clocks. The new time scale, TA(IEN), is obtained as a weighted average of the clock ensemble computed once a day from the time comparisons between the local reference UTC(IEN) and the single clocks. It is foreseen to also include in the computation ten cesium clocks maintained in other Italian laboratories to further improve its reliability and long-term stability. To implement this algorithm, a personal computer program in Quick Basic has been prepared and tested at the IEN time and frequency laboratory. Results obtained using this algorithm on real clock data covering a period of about two years are presented.
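
    A weighted-average time scale of this kind can be sketched in a few lines: each clock's weight is taken from its recent stability, and the ensemble offset is the weighted mean of the daily clock readings. The inverse-variance weighting and all numbers below are illustrative assumptions, not the TA(IEN) algorithm itself.

      import numpy as np

      def ensemble_time(readings, recent_residuals):
          var = np.array([np.var(r) for r in recent_residuals]) + 1e-30
          w = (1.0 / var) / np.sum(1.0 / var)   # normalized inverse-variance weights
          return float(np.sum(w * readings)), w

      # Daily readings UTC(lab) - clock_i in seconds, plus each clock's recent
      # residuals from which the weights are estimated. All values are invented.
      readings = np.array([12.3e-9, 11.8e-9, 12.9e-9, 12.1e-9, 12.4e-9])
      residuals = [np.random.default_rng(i).normal(0.0, s, 30)
                   for i, s in enumerate([1e-9, 2e-9, 1e-9, 5e-9, 1e-9])]
      offset, w = ensemble_time(readings, residuals)
      print(offset, w.round(3))   # the most stable clocks dominate the ensemble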

  20. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
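
    The dynamic-programming scheme can be sketched as follows, with the union of item sets used as an illustrative measure function and the segment difference counting how many items each time point lacks relative to the segment's set; dp[i][j] holds the best cost of splitting the first i points into j segments. The measure function and data are assumptions for the example, not the paper's exact definitions.

      import numpy as np

      def segment_difference(sets, i, j):            # segment = points i .. j-1
          seg = set().union(*sets[i:j])              # illustrative measure: union
          return sum(len(seg - s) for s in sets[i:j])

      def optimal_segmentation(sets, k):
          n = len(sets)
          dp = np.full((n + 1, k + 1), np.inf)       # dp[i][j]: first i points, j segments
          back = np.zeros((n + 1, k + 1), dtype=int)
          dp[0, 0] = 0.0
          for j in range(1, k + 1):
              for i in range(j, n + 1):
                  for s in range(j - 1, i):          # last segment covers s .. i-1
                      cost = dp[s, j - 1] + segment_difference(sets, s, i)
                      if cost < dp[i, j]:
                          dp[i, j], back[i, j] = cost, s
          cuts, i = [], n
          for j in range(k, 0, -1):
              cuts.append((back[i, j], i))
              i = back[i, j]
          return dp[n, k], cuts[::-1]

      data = [{"a"}, {"a"}, {"a", "b"}, {"c"}, {"c", "d"}, {"c"}]
      print(optimal_segmentation(data, 2))   # cost 4.0, cuts [(0, 3), (3, 6)]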

  1. A software architecture for hard real-time execution of automatically synthesized plans or control laws

    NASA Technical Reports Server (NTRS)

    Schoppers, Marcel

    1994-01-01

    The design of a flexible, real-time software architecture for trajectory planning and automatic control of redundant manipulators is described. Emphasis is placed on a technique of designing control systems that are both flexible and robust yet have good real-time performance. The solution presented involves an artificial intelligence algorithm that dynamically reprograms the real-time control system while planning system behavior.

  2. Color reproduction and processing algorithm based on real-time mapping for endoscopic images.

    PubMed

    Khan, Tareq H; Mohammed, Shahed K; Imtiaz, Mohammad S; Wahid, Khan A

    2016-01-01

    In this paper, we present a real-time preprocessing algorithm for image enhancement of endoscopic images. A novel dictionary-based color mapping algorithm is used for reproducing the color information from a theme image. The theme image is selected from a nearby anatomical location. A database of color endoscopy images for different locations is prepared for this purpose. The color map is dynamic, as its contents change with the change of the theme image. This method is used on low-contrast grayscale white light images and raw narrow band images to highlight the vascular and mucosa structures and to colorize the images. It can also be applied to enhance the tone of color images. Statistical visual representation and universal image quality measures show that the proposed method can highlight the mucosa structure better than other methods. The color similarity has been verified using the Delta E color difference, structure similarity index, mean structure similarity index, and structure and hue similarity. The color enhancement was measured using a color enhancement factor that shows considerable improvements. The proposed algorithm has low and linear time complexity, which results in higher execution speed than other related works. PMID:26759756

  3. Architecture and data processing alternatives for the TSE computer. Volume 3: Execution of a parallel counting algorithm using array logic (Tse) devices

    NASA Technical Reports Server (NTRS)

    Metcalfe, A. G.; Bodenheimer, R. E.

    1976-01-01

    A parallel algorithm for counting the number of logic-1 elements in a binary array or image, developed during a preliminary investigation of the Tse concept, is described. The counting algorithm is implemented using a basic combinational structure. Modifications which improve the efficiency of the basic structure are also presented. A programmable Tse computer structure is proposed, along with a hardware control unit, Tse instruction set, and software program for execution of the counting algorithm. Finally, a comparison is made between the different structures in terms of their more important characteristics.
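
    In software terms, the parallel counting idea reduces to a pairwise tree reduction whose levels could all execute simultaneously, giving O(log n) depth instead of a sequential O(n) scan. The sketch below is a software analogue only, not the Tse array-logic hardware structure.

      def tree_count(bits):
          level = list(bits)                # leaves: the individual 0/1 elements
          while len(level) > 1:
              if len(level) % 2:            # pad an odd level with a zero
                  level.append(0)
              # One "parallel step": all pairwise sums are independent and could
              # execute simultaneously in combinational logic.
              level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
          return level[0]

      image = [1, 0, 1, 1, 0, 0, 1, 0, 1]
      print(tree_count(image), sum(image))  # both print 5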

  4. Pseudo-time algorithms for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, E.

    1986-01-01

    A pseudo-time method is introduced to integrate the compressible Navier-Stokes equations to a steady state. This method is a generalization of a method used by Crocco and also by Allen and Cheng. We show that for a simple heat equation this is just a renormalization of the time. For a convection-diffusion equation the renormalization is dependent only on the viscous terms. We implement the method for the Navier-Stokes equations using a Runge-Kutta type algorithm. This permits the time step to be chosen based on the inviscid model only. We also discuss the use of residual smoothing when viscous terms are present.

  5. New approximation algorithms for flow shop total completion time problem

    NASA Astrophysics Data System (ADS)

    Bai, Danyu; Ren, Tao

    2013-09-01

    This article addresses the flow shop scheduling problem to minimize the sum of the completion times. On the basis of the properties in job sequencing, the triangular shortest processing time (TSPT) first and dynamic triangular shortest processing time first heuristics are designed to solve the static and dynamic versions of this problem, respectively. Moreover, an improvement scheme is provided for these heuristics to enhance the quality of the original solutions. For further numerical evaluation of the heuristics, two new lower bounds with performance guarantees are presented for the two versions of the problem. At the end of the article, a series of numerical experiments is conducted to demonstrate the effectiveness of the algorithms.

  6. Integrating impairments in reaction time and executive function using a diffusion model framework.

    PubMed

    Karalunas, Sarah L; Huang-Pollock, Cynthia L

    2013-07-01

    Using Ratcliff's diffusion model and ex-Gaussian decomposition, we directly evaluate the role individual differences in reaction time (RT) distribution components play in the prediction of inhibitory control and working memory (WM) capacity in children with and without ADHD. Children with (n = 91, mean age = 10.2 years, 67% male) and without ADHD (n = 62, mean age = 10.6 years, 46% male) completed four tasks of WM and a stop signal reaction time (SSRT) task. Children with ADHD had smaller WM capacities and less efficient inhibitory control. Diffusion model analyses revealed that children with ADHD had slower drift rates (v) and faster non-decision times (Ter), but there were no group differences in boundary separations (a). Similarly, using an ex-Gaussian approach, children with ADHD had larger τ values than non-ADHD controls, but did not differ in μ or σ distribution components. Drift rate mediated the association between ADHD status and performance on both inhibitory control and WM capacity. τ also mediated the ADHD-executive function impairment associations; however, models were a poorer fit to the data. Impaired performance on RT and executive functioning tasks has long been associated with childhood ADHD. Both are believed to be important cognitive mechanisms to the disorder. We demonstrate here that drift rate, or the speed at which information accumulates towards a decision, is able to explain both. PMID:23334775
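
    As a generic illustration of the ex-Gaussian decomposition used in the study (not its fitting pipeline), SciPy's exponnorm distribution can recover μ, σ, and τ from simulated reaction times; exponnorm(K, loc, scale) corresponds to μ = loc, σ = scale, τ = K·scale. The simulated parameters are invented.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      rt = rng.normal(0.45, 0.06, 2000) + rng.exponential(0.15, 2000)  # seconds

      K, loc, scale = stats.exponnorm.fit(rt)      # maximum-likelihood fit
      mu, sigma, tau = loc, scale, K * scale
      print(f"mu={mu:.3f}s sigma={sigma:.3f}s tau={tau:.3f}s")  # ~0.45, 0.06, 0.15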

  7. Integrating impairments in reaction time and executive function using a diffusion model framework

    PubMed Central

    Karalunas, Sarah L.; Huang-Pollock, Cynthia L.

    2013-01-01

    Using Ratcliff’s diffusion model and ex-Gaussian decomposition, we directly evaluate the role individual differences in reaction time (RT) distribution components play in the prediction of inhibitory control and working memory (WM) capacity in children with and without ADHD. Children with (n=92, x̄ age= 10.2 years, 67% male) and without ADHD (n=62, x̄ age=10.6 years, 46% male) completed four tasks of WM and a stop signal reaction time (SSRT) task. Children with ADHD had smaller WM capacities and less efficient inhibitory control. Diffusion model analyses revealed that children with ADHD had slower drift rates (v) and faster non-decision times (Ter), but there were no group differences in boundary separations (a). Similarly, using an ex-Gaussian approach, children with ADHD had larger τ values than non-ADHD controls, but did not differ in µ or σ distribution components. Drift rate mediated the association between ADHD status and performance on both inhibitory control and WM capacity. τ also mediated the ADHD-executive function impairment associations; however, models were a poorer fit to the data. Impaired performance on RT and executive functioning tasks has long been associated with childhood ADHD. Both are believed to be important cognitive mechanisms to the disorder. We demonstrate here that drift rate, or the speed at which information accumulates towards a decision, is able to explain both. PMID:23334775

  8. Real-time algorithm for robust coincidence search

    SciTech Connect

    Petrovic, T.; Vencelj, M.; Lipoglavsek, M.; Gajevic, J.; Pelicon, P.

    2012-10-20

    In in-beam γ-ray spectroscopy experiments, we often look for coincident detection events. For every N events detected, a naive coincidence search has complexity O(N²). When we limit the approximate width of the coincidence search window, the complexity can be reduced to O(N), permitting the implementation of the algorithm into real-time measurements, carried out indefinitely. We have built an algorithm to find simultaneous events between two detection channels. The algorithm was tested in an experiment where coincidences between X and γ rays detected in two HPGe detectors were observed in the decay of ⁶¹Cu. Functioning of the algorithm was validated by comparing the calculated experimental branching ratio for EC decay with the theoretical calculation for 3 selected γ-ray energies of the ⁶¹Cu decay. Our research opened a question on the validity of the adopted value of the total angular momentum of the 656 keV state (Jπ = 1/2⁻) in ⁶¹Ni.
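
    The windowed linear-time search can be sketched with two pointers over time-ordered event streams; the timestamps and window below are invented, and details of the published real-time implementation may differ.

      def find_coincidences(t_a, t_b, window_ns):
          # Both inputs must be sorted timestamps; the lower bound j never moves
          # backwards, so the scan is linear apart from the matches it emits.
          pairs, j = [], 0
          for i, ta in enumerate(t_a):
              while j < len(t_b) and t_b[j] < ta - window_ns:
                  j += 1                    # drop B events too old to ever match
              k = j
              while k < len(t_b) and t_b[k] <= ta + window_ns:
                  pairs.append((i, k))      # every B event inside the window
                  k += 1
          return pairs

      t_a = [100, 5_000, 9_000, 20_000]     # invented timestamps, nanoseconds
      t_b = [90, 5_400, 19_500, 30_000]
      print(find_coincidences(t_a, t_b, window_ns=500))  # [(0, 0), (1, 1), (3, 2)]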

  9. A combined algorithm for minimum time slewing of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Bainum, P. M.; Li, F.; Xu, J.

    1994-01-01

    The use of Pontryagin's Maximum Principle for the large-angle slewing of large flexible structures usually results in the so-called two-point boundary-value problem (TPBVP), in which many requirements (e.g., minimum time, small flexible amplitudes, and limited control power) must be satisfied simultaneously. The successful solution of this problem depends largely on the use of an efficient numerical computational algorithm. There are many candidate algorithms available for this problem (e.g., quasilinearization, gradient, and shooting). In this paper, a proposed algorithm, which combines the quasilinearization method with a time shortening technique and a shooting method, is applied to the minimum-time, three-dimensional, large-angle maneuver of flexible spacecraft, particularly the orbiting Spacecraft Control Laboratory Experiment (SCOLE) configuration. Theoretically, the nonlinear TPBVP can be solved only through the shooting method to find the 'exact' switching times for the bang-bang controls. However, computationally, a suitable guess for the missing initial costates is crucial because the convergence range of the unknown initial costates is usually narrow, especially for systems with high dimensions and when a multi-bang-bang control strategy is needed. On the other hand, the problems of near-minimum-time attitude maneuver of general rigid spacecraft and fast slewing of flexible spacecraft have been examined by the authors through a numerical approach based on the quasilinearization algorithm with a time shortening technique. Computational results have demonstrated its broad convergence range and insensitivity to initial costate choices. Consequently, a combined approach is naturally suggested here to solve the minimum time slewing problem. That is, in the computational process, the quasilinearization method is used first to obtain a near minimum time solution. Then, the acquired converged initial costates from the quasilinearization approach

  10. Identifying time measurement tampering in the traversal time and hop count analysis (TTHCA) wormhole detection algorithm.

    PubMed

    Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran

    2013-01-01

    Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole, where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered, thus preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ΔT Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ΔT Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm. PMID:23686143

  11. Identifying Time Measurement Tampering in the Traversal Time and Hop Count Analysis (TTHCA) Wormhole Detection Algorithm

    PubMed Central

    Karlsson, Jonny; Dooley, Laurence S.; Pulkkis, Göran

    2013-01-01

    Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ΔT Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ΔT Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm. PMID:23686143

  12. Parallel machine scheduling with step-deteriorating jobs and setup times by a hybrid discrete cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Peng; Cheng, Wenming; Wang, Yi

    2015-11-01

    This article considers the parallel machine scheduling problem with step-deteriorating jobs and sequence-dependent setup times. The objective is to minimize the total tardiness by determining the allocation and sequence of jobs on identical parallel machines. In this problem, the processing time of each job is a step function dependent upon its starting time. An individual extended time is penalized when the starting time of a job is later than a specific deterioration date. The possibility of deterioration of a job makes the parallel machine scheduling problem more challenging than ordinary ones. A mixed integer programming model for the optimal solution is derived. Due to its NP-hard nature, a hybrid discrete cuckoo search algorithm is proposed to solve this problem. In order to generate a good initial swarm, a modified Biskup-Hermann-Gupta (BHG) heuristic called MBHG is incorporated into the population initialization. Several discrete operators are proposed in the random walk of Lévy flights and the crossover search. Moreover, a local search procedure based on variable neighbourhood descent is integrated into the algorithm as a hybrid strategy in order to improve the quality of elite solutions. Computational experiments are executed on two sets of randomly generated test instances. The results show that the proposed hybrid algorithm can yield better solutions in comparison with the commercial solver CPLEX® with a one hour time limit, the discrete cuckoo search algorithm and the existing variable neighbourhood search algorithm.
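
    The objective function itself is easy to make concrete: processing time is a step function of the start time, with a penalty added after the deterioration date, and setups depend on the preceding job. The evaluator below is a hedged sketch with made-up numbers; the paper's cuckoo-search operators are not reproduced.

      def total_tardiness(machines, jobs, setup):
          tardiness = 0.0
          for seq in machines:                 # one job sequence per machine
              t, prev = 0.0, None
              for j in seq:
                  if prev is not None:
                      t += setup[prev][j]      # sequence-dependent setup time
                  base, det_date, penalty, due = jobs[j]
                  p = base + (penalty if t > det_date else 0.0)   # step function
                  t += p
                  tardiness += max(0.0, t - due)
                  prev = j
          return tardiness

      # job id -> (base time, deterioration date, penalty time, due date); invented
      jobs = {0: (4, 3, 2, 3), 1: (3, 2, 4, 6), 2: (5, 2, 3, 4)}
      setup = {0: {1: 1, 2: 2}, 1: {0: 1, 2: 1}, 2: {0: 2, 1: 2}}
      print(total_tardiness([[0, 1], [2]], jobs, setup))   # -> 8.0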

  13. Time for a Change: The Promise of Extended Time Schools for Promoting Student Achievement. Executive Summary

    ERIC Educational Resources Information Center

    Farbman, David; Kaplan, Claire

    2005-01-01

    Massachusetts 2020 is a nonprofit operating foundation with a mission to expand educational and economic opportunities for children and families across Massachusetts. Massachusetts 2020, with support from the L.G. Balfour Foundation, a Bank of America Company, set out to understand how a select group of extended-time schools in Massachusetts and…

  14. Distributed execution of recovery blocks - An approach for uniform treatment of hardware and software faults in real-time applications

    NASA Technical Reports Server (NTRS)

    Kim, K. H.; Welch, Howard O.

    1989-01-01

    The concept of distributed execution of recovery blocks is examined as an approach for uniform treatment of hardware and software faults. A useful characteristic of the approach is the relatively small time cost it requires. The approach is thus suitable for incorporation into real-time computer systems. A specific formulation of the approach that is aimed at minimizing the recovery time is presented, called the distributed recovery block (DRB) scheme. The DRB scheme is capable of effecting forward recovery while handling both hardware and software faults in a uniform manner. An approach to incorporating multiprocessing capability into the scheme is also discussed. Two experiments aimed at testing the execution efficiency of the scheme in real-time applications have been conducted on two different multimicrocomputer networks. The results clearly indicate the feasibility of achieving tolerance of hardware and software faults in a broad range of real-time computer systems by use of the schemes for distributed execution of recovery blocks.

  15. Echoed time series predictions, neural networks and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Conway, A.

    This work aims to illustrate a potentially serious and previously unrecognised problem in using Neural Networks (NNs), and possibly other techniques, to predict Time Series (TS). It also demonstrates how a new training scheme using a genetic algorithm can alleviate this problem. Although it is already established that NNs can predict TS such as Sunspot Number (SSN) with reasonable success, the accuracy of these predictions is often judged solely by an RMS or related error. The use of this type of error overlooks the presence of what we have termed echoing, where the NN outputs its most recent input as its prediction. Therefore, a method of detecting echoed predictions is introduced, called time-shifting. Reasons for the presence of echo are discussed and then related to the choice of TS sampling. Finally, a new specially designed training scheme is described, which is a hybrid of a genetic algorithm search and back propagation. With this method we have successfully trained NNs to predict without any echo.
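
    A minimal sketch of the time-shifting check (an illustration, not the paper's exact test): compare the prediction's error against the true next value with its error against the unshifted input; a near-zero error against the input exposes a pure echo even when the conventional RMS error looks acceptable.

      import numpy as np

      def echo_diagnostic(inputs, predictions):
          # predictions[i] is the forecast of inputs[i + 1] given inputs[: i + 1]
          truth_rmse = np.sqrt(np.mean((predictions[:-1] - inputs[1:]) ** 2))
          echo_rmse = np.sqrt(np.mean((predictions[:-1] - inputs[:-1]) ** 2))
          return truth_rmse, echo_rmse     # echo_rmse ~ 0 signals pure echoing

      t = np.linspace(0.0, 6.0 * np.pi, 200)
      series = np.sin(t)
      echoing_forecast = series.copy()     # a "predictor" that just repeats x[t]
      print(echo_diagnostic(series, echoing_forecast))  # small truth error, zero echo error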

  16. Two algorithms to fill cloud gaps in LST time series

    NASA Astrophysics Data System (ADS)

    Frey, Corinne; Kuenzer, Claudia

    2013-04-01

    Cloud contamination is a challenge for optical remote sensing. This is especially true for the recording of a fast-changing radiative quantity like land surface temperature (LST). The substitution of cloud-contaminated pixels with estimated values, called gap filling, is not straightforward but is possible to a certain extent, as this research shows for medium-resolution time series of MODIS data. The area of interest is the Upper Mekong Delta (UMD). The background for this work is an analysis of the temporal development of 1-km LST in the context of the WISDOM project. The climate of the UMD is characterized by peak rainfall in the summer months, which is also the time when cloud contamination is highest in the area. The average number of available daytime observations per pixel can drop below five, for example in the month of June. In winter the average number may reach 25 observations a month. This situation is not appropriate for the calculation of long-term statistics; an adequate gap filling method should be used beforehand. In this research, two different algorithms were tested on an 11-year time series: 1) a gradient-based algorithm and 2) a method based on ECMWF ERA-Interim reanalysis data. The first algorithm searches for stable inter-image gradients from a given environment and for a certain period of time. These gradients are then used to estimate LST for cloud-contaminated pixels in each acquisition. The estimated LSTs are clear-sky LSTs and are based solely on the MODIS LST time series. The second method estimates LST on the basis of adapted ECMWF ERA-Interim skin temperatures and creates a set of expected LSTs. The estimated values were used to fill the gaps in the original dataset, creating two new daily, 1-km datasets. The maps filled with the gradient-based method had more than double the number of valid pixels of the original dataset. The second method (ERA-Interim based) was able to fill all data gaps. From the gap-filled datasets then monthly

  17. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    NASA Technical Reports Server (NTRS)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.; Nowak, M. A.

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii_lc_extract) and from calculating the pixel illuminated fraction (ii_light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii_light produces meaningful results, although the overall variance of the lightcurves is not preserved.

  18. An Algorithm for Network Real Time Kinematic Processing

    NASA Astrophysics Data System (ADS)

    Malekzadeh, A.; Asgari, J.; Amiri-Simkooei, A. R.

    2015-12-01

    NRTK (Network Real Time Kinematic) is an efficient method to achieve precise real-time positioning from GNSS measurements. In this paper we attempt to improve the NRTK algorithm by introducing a new strategy. In this strategy a precise relocation of the master station observations is performed using the Sagnac effect. After processing the double differences, the tropospheric and ionospheric errors of each baseline can be estimated separately. The next step is the interpolation of these errors for the atmospheric error mitigation of the desired baseline. Linear and kriging interpolation methods are implemented in this study. In the new strategy the RINEX (Receiver Independent Exchange Format) data of the master station are relocated and converted to the desired virtual observations. Then the interpolated corrections are applied to the virtual observations. The results are compared with the classical method of VRS generation.

  19. A multilevel Cartesian non-uniform grid time domain algorithm

    SciTech Connect

    Meng Jun; Boag, Amir; Lomakin, Vitaliy; Michielssen, Eric

    2010-11-01

    A multilevel Cartesian non-uniform grid time domain algorithm (CNGTDA) is introduced to rapidly compute transient wave fields radiated by time-dependent three-dimensional source constellations. CNGTDA leverages the observation that transient wave fields generated by temporally bandlimited and spatially confined source constellations can be recovered via interpolation from appropriately delay- and amplitude-compensated field samples. This property is used in conjunction with a multilevel scheme, in which the computational domain is hierarchically decomposed into subdomains with sparse non-uniform grids used to obtain the fields. For both surface and volumetric source distributions, the computational cost of CNGTDA to compute the transient field at N_s observation locations from N_s collocated sources for N_t discrete time instances scales as O(N_t N_s log N_s) and O(N_t N_s log^2 N_s) in the low- and high-frequency regimes, respectively. Coupled with marching-on-in-time (MOT) time domain integral equations, CNGTDA can facilitate efficient analysis of large scale time domain electromagnetic and acoustic problems.

  20. Role of sleep continuity and total sleep time in executive function across the adult lifespan.

    PubMed

    Wilckens, Kristine A; Woo, Sarah G; Kirk, Afton R; Erickson, Kirk I; Wheeler, Mark E

    2014-09-01

    The importance of sleep for cognition in young adults is well established, but the role of habitual sleep behavior in cognition across the adult life span remains unknown. We examined the relationship between sleep continuity and total sleep time as assessed with a sleep-detection device, and cognitive performance using a battery of tasks in young (n = 59, mean age = 23.05) and older (n = 53, mean age = 62.68) adults. Across age groups, higher sleep continuity was associated with better cognitive performance. In the younger group, higher sleep continuity was associated with better working memory and inhibitory control. In the older group, higher sleep continuity was associated with better inhibitory control, memory recall, and verbal fluency. Very short and very long total sleep time was associated with poorer working memory and verbal fluency, specifically in the younger group. Total sleep time was not associated with cognitive performance in any domains for the older group. These findings reveal that sleep continuity is important for executive function in both young and older adults, but total sleep time may be more important for cognition in young adults. PMID:25244484

  1. The role of sleep continuity and total sleep time in executive function across the adult lifespan

    PubMed Central

    Wilckens, Kristine A.; Woo, Sarah G.; Kirk, Afton R.; Erickson, Kirk I.; Wheeler, Mark E.

    2015-01-01

    The importance of sleep for cognition in young adults is well established, but the role of habitual sleep behavior in cognition across the adult lifespan remains unknown. We examined the relationship between sleep continuity and total sleep time assessed with a sleep detection device and cognitive performance using a battery of tasks in young (n = 59, mean age = 23.05) and older (n = 53, mean age = 62.68) adults. Across age groups, higher sleep continuity was associated with better cognitive performance. In the younger group, higher sleep continuity was associated with better working memory and inhibitory control. In the older group, higher sleep continuity was associated with better inhibitory control, memory recall, and verbal fluency. Very short and very long total sleep time was associated with poorer working memory and verbal fluency, specifically in the younger group. Total sleep time was not associated with cognitive performance in any domains for the older group. These findings reveal that sleep continuity is important for executive function in both young and older adults, but total sleep time may be more important for cognition in young adults. PMID:25244484

  2. Parallel machine scheduling problem with ready times, due times and sequence-dependent setup times using meta-heuristic algorithms

    NASA Astrophysics Data System (ADS)

    Joo, Cheol Min; Kim, Byung Soo

    2012-09-01

    This article considers a parallel machine scheduling problem with ready times, due times and sequence-dependent setup times. The objective of this problem is to determine the allocation policy of jobs and the scheduling policy of machines to minimize the weighted sum of setup times, delay times and tardy times. A mathematical model for the optimal solution is derived. An in-depth analysis of the model shows that it is very complicated and difficult to obtain optimal solutions as the problem size becomes large. Therefore, two meta-heuristics, a genetic algorithm (GA) and a new population-based evolutionary meta-heuristic called the self-evolution algorithm (SEA), are proposed. The performances of the meta-heuristic algorithms are evaluated through comparison with optimal solutions using several randomly generated examples.

  3. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    PubMed Central

    Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng

    2015-01-01

    This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) by using the membrane computing optimization algorithm. Accurately predicting the trend of parameters in the electromagnetic environment is an important basis for spectrum management, as it can help decision makers adopt an optimal action. The model presented in this paper is used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares the proposed forecast model with similar conventional models. The experimental results show that for both single-step and multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249

  4. Chaos time series prediction based on membrane optimization algorithms.

    PubMed

    Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng; Peng, Hong

    2015-01-01

    This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) by using the membrane computing optimization algorithm. Accurately predicting the trend of parameters in the electromagnetic environment is an important basis for spectrum management, as it can help decision makers adopt an optimal action. The model presented in this paper is used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares the proposed forecast model with similar conventional models. The experimental results show that for both single-step and multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249

  5. A time-efficient algorithm for implementing the Catmull-Clark subdivision method

    NASA Astrophysics Data System (ADS)

    Ioannou, G.; Savva, A.; Stylianou, V.

    2015-10-01

    Splines are the most popular methods in figure modeling and CAGD (Computer Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and splines calculate the required number of points which, when displayed on a computer screen, result in a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures, such as the human and animal bodies, whose complex structure does not allow them to be defined by a regular rectangular grid. On the other hand, surface subdivision methods, which are derived from splines, generate surfaces defined by an arbitrary topology of control points. This is the reason that during the last fifteen years subdivision methods have taken the lead over regular spline methods in all areas of modeling in both industry and research. The cost of executing computer software developed to read control points and calculate the surface is its run-time, due to the fact that the surface structure required for handling arbitrary topological grids is very complicated. Many software programs have been developed related to the implementation of subdivision surfaces; however, not many algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. Catmull-Clark, which is the most popular of the subdivision methods, has been employed to illustrate the algorithm.
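
    For reference, one refinement pass of Catmull-Clark on a closed quad mesh reduces to three point rules, sketched below without the topology rebuild: face point = face centroid; edge point = average of the edge's endpoints and its two adjacent face points; new vertex = (F + 2R + (n - 3)P)/n, with F the mean of adjacent face points, R the mean of incident edge midpoints, P the old position, and n the valence. This is a compact illustration of the standard rules, not the paper's optimized algorithm.

      import numpy as np
      from collections import defaultdict

      def catmull_clark_points(verts, faces):
          V = np.asarray(verts, dtype=float)
          fp = {f: V[list(f)].mean(axis=0) for f in faces}       # face centroids
          edge_faces = defaultdict(list)                         # edge -> faces
          for f in faces:
              for a, b in zip(f, f[1:] + f[:1]):
                  edge_faces[frozenset((a, b))].append(f)
          ep = {e: (V[list(e)].sum(axis=0) + fp[fs[0]] + fp[fs[1]]) / 4.0
                for e, fs in edge_faces.items()}                 # closed mesh: 2 faces/edge
          vp = {}
          for v in range(len(V)):
              es = [e for e in edge_faces if v in e]             # incident edges
              fs = {f for e in es for f in edge_faces[e]}        # adjacent faces
              n = len(es)                                        # valence
              F = np.mean([fp[f] for f in fs], axis=0)
              R = np.mean([V[list(e)].mean(axis=0) for e in es], axis=0)
              vp[v] = (F + 2.0 * R + (n - 3.0) * V[v]) / n
          return fp, ep, vp

      verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]  # cube
      faces = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
               (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
      fp, ep, vp = catmull_clark_points(verts, faces)
      print(len(fp), len(ep), len(vp))    # 6 face points, 12 edge, 8 vertex points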

  6. How task complexity and stimulus modality affect motor execution: target accuracy, response timing and hesitations.

    PubMed

    Parrington, Lucy; MacMahon, Clare; Ball, Kevin

    2015-01-01

    Elite sports players are characterized by the ability to produce successful outcomes while attending to changing environmental conditions. Few studies have assessed whether the perceptual environment affects motor skill execution. To test the effect of changing task complexity and stimulus conditions, the authors examined response times and target accuracy of 12 elite Australian football players using a passing-based laboratory test. Data were assessed using mixed modeling and chi-square analyses. No differences were found in target accuracy for changes in complexity or stimulus condition. Decision, movement and total disposal time increased with complexity and decision hesitations were greater when distractions were present. Decision, movement and disposal time were faster for auditory in comparison to visual signals, and when free to choose, players passed more frequently to auditory rather than visual targets. These results provide perspective on how basic motor control processes such as reaction and response to stimuli are influenced in a complex motor skill. Findings suggest auditory stimuli should be included in decision-making studies and may be an important part of a decision-training environment. PMID:25584721

  7. Overlay improvements using a real time machine learning algorithm

    NASA Astrophysics Data System (ADS)

    Schmitt-Weaver, Emil; Kubis, Michael; Henke, Wolfgang; Slotboom, Daan; Hoogenboom, Tom; Mulkens, Jan; Coogans, Martyn; ten Berge, Peter; Verkleij, Dick; van de Mast, Frank

    2014-04-01

    While semiconductor manufacturing is moving towards the 14 nm node using immersion lithography, the overlay requirements are tightened to below 5 nm. Next to improvements in the immersion scanner platform, enhancements in overlay optimization and process control are needed to enable these low overlay numbers. Whereas conventional overlay control methods address wafer and lot variation autonomously with wafer pre-exposure alignment metrology and post-exposure overlay metrology, we see a need to reduce these variations by correlating more of the TWINSCAN system's sensor data directly to the post-exposure YieldStar metrology in time. In this paper we present the results of a study on applying a real-time control algorithm based on machine learning technology. Machine learning methods use context and TWINSCAN system sensor data paired with post-exposure YieldStar metrology to recognize generic behavior and train the control system to anticipate this generic behavior. Specific to this study, the data concern immersion scanner context, sensor data and on-wafer measured overlay data. By making the link between the scanner data and the wafer data we are able to establish a real-time relationship. The result is an inline controller that accounts for small changes in scanner hardware performance in time while picking up subtle lot-to-lot and wafer-to-wafer deviations introduced by wafer processing.

  8. Measuring executive function in control subjects and TBI patients with question completion time (QCT)

    PubMed Central

    Woods, David L.; Yund, E. William; Wyma, John M.; Ruff, Ron; Herron, Timothy J.

    2015-01-01

    Questionnaire completion is a complex task that places demands on cognitive functions subserving reading, introspective memory, decision-making, and motor control. Although computerized questionnaires and surveys are used with increasing frequency in clinical practice, few studies have examined question completion time (QCT), the time required to complete each question. Here, we analyzed QCTs in 172 control subjects and 31 patients with traumatic brain injury (TBI) who completed two computerized questionnaires, the 17-question Post-Traumatic Stress Disorder (PTSD) Checklist (PCL) and the 25-question Cognitive Failures Questionnaire (CFQ). In control subjects, robust correlations were found between self-paced QCTs on the PCL and CFQ (r = 0.82). QCTs on individual questions correlated strongly with the number of words in the question, indicating the critical role of reading speed. QCTs increased significantly with age, and were reduced in females and in subjects with increased education and computer experience. QCT z-scores, corrected for age, education, computer use, and sex, correlated more strongly with each other than with the results of other cognitive tests. Patients with a history of severe TBI showed significantly delayed QCTs, but QCTs fell within the normal range in patients with a history of mild TBI. When questionnaires are used to gather relevant patient information, simultaneous QCT measures provide reliable and clinically sensitive measures of processing speed and executive function. PMID:26042021

  9. A novel time-domain signal processing algorithm for real time ventricular fibrillation detection

    NASA Astrophysics Data System (ADS)

    Monte, G. E.; Scarone, N. C.; Liscovsky, P. O.; Rotter S/N, P.

    2011-12-01

    This paper presents an application of a novel algorithm for the real-time detection of ECG pathologies, especially ventricular fibrillation. It is based on a segmentation and labeling process applied to an oversampled signal. After this treatment, sequences of segments are analyzed and global signal behaviours are obtained, much as a human observer would obtain them. The entire process can be seen as a morphological filtering after a smart data sampling. The algorithm does not require any ECG digital signal pre-processing, and the computational cost is low, so it can be embedded into sensors for wearable and permanent applications. The proposed algorithm's output could serve as the input signal description for expert systems or artificial intelligence software in order to detect other pathologies.

  10. Optimising query execution time in LHCb Bookkeeping System using partition pruning and Partition-Wise joins

    NASA Astrophysics Data System (ADS)

    Mathe, Zoltan; Charpentier, Philippe

    2014-06-01

    The LHCb experiment produces a huge amount of data which has associated metadata such as run number, data taking condition (detector status when the data was taken), simulation condition, etc. The data are stored in files, replicated on the Computing Grid around the world. The LHCb Bookkeeping System provides methods for retrieving datasets based on their metadata. The metadata is stored in a hybrid database model, which is a mixture of relational and hierarchical database models and is based on the Oracle Relational Database Management System (RDBMS). The database access has to be reliable and fast. In order to achieve a high timing performance, the tables are partitioned and the queries are executed in parallel. When we store large amounts of data, partition pruning is essential for database performance, because it reduces the amount of data retrieved from the disk and optimises the resource utilisation. The research presented here focuses on the extended composite partitioning strategy, such as range-hash partitioning, partition pruning and the usage of partition-wise joins. The system has to serve thousands of queries per minute; the performance and capability of the system are measured when the above performance optimization techniques are used.

  11. Cable Damage Detection System and Algorithms Using Time Domain Reflectometry

    SciTech Connect

    Clark, G A; Robbins, C L; Wade, K A; Souza, P R

    2009-03-24

    This report describes the hardware system and the set of algorithms we have developed for detecting damage in cables for the Advanced Development and Process Technologies (ADAPT) Program. This program is part of the W80 Life Extension Program (LEP). The system could be generalized for application to other systems in the future. Critical cables can undergo various types of damage (e.g. short circuits, open circuits, punctures, compression) that manifest as changes in the dielectric/impedance properties of the cables. For our specific problem, only one end of the cable is accessible, and no exemplars of actual damage are available. This work addresses the detection of dielectric/impedance anomalies in transient time domain reflectometry (TDR) measurements on the cables. The approach is to interrogate the cable using TDR techniques, in which a known pulse is inserted into the cable, and reflections from the cable are measured. The key operating principle is that any important cable damage will manifest itself as an electrical impedance discontinuity that can be measured in the TDR response signal. Machine learning classification algorithms are effectively eliminated from consideration, because only a small number of cables is available for testing, so a sufficient sample size is not attainable. Nonetheless, a key requirement is to achieve a very high probability of detection and a very low probability of false alarm. The approach is to compare TDR signals from possibly damaged cables to signals or an empirical model derived from reference cables that are known to be undamaged. This requires that the TDR signals be reasonably repeatable from test to test on the same cable, and from cable to cable. Empirical studies show that the repeatability issue is the 'long pole in the tent' for damage detection, because it has been difficult to achieve reasonable repeatability. This one factor dominated the project. The two-step model-based approach is
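
    The comparison principle admits a small sketch: build a reference model (mean trace plus per-sample spread) from known-good cables and flag samples of a test trace that deviate by a large z-score. The signals and threshold are invented; in practice the repeatability issues discussed in the report are what set the usable spread.

      import numpy as np

      def build_reference(good_traces):
          stack = np.vstack(good_traces)
          return stack.mean(axis=0), stack.std(axis=0) + 1e-12   # per-sample model

      def damage_score(trace, ref_mean, ref_std, z_thresh=5.0):
          z = np.abs(trace - ref_mean) / ref_std                 # deviation in sigmas
          return z.max(), np.flatnonzero(z > z_thresh)           # worst z, flagged bins

      rng = np.random.default_rng(4)
      t = np.linspace(0.0, 5.0, 400)
      good = [np.exp(-t) + rng.normal(0, 0.01, 400) for _ in range(8)]
      ref_mean, ref_std = build_reference(good)

      test = np.exp(-t) + rng.normal(0, 0.01, 400)
      test[150:160] += 0.2                 # reflection from an impedance discontinuity
      worst, flagged = damage_score(test, ref_mean, ref_std)
      print(worst, flagged)                # large z at the damaged samples only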

  12. A Dynamic Era-Based Time-Symmetric Block Time-Step Algorithm with Parallel Implementations

    NASA Astrophysics Data System (ADS)

    Kaplan, Murat; Saygin, Hasan

    2012-06-01

    The time-symmetric block time-step (TSBTS) algorithm is a newly developed efficient scheme for N-body integrations. It is constructed on an era-based iteration. In this work, we re-designed the TSBTS integration scheme with a dynamically changing era size. A number of numerical tests were performed to show the importance of choosing the size of the era, especially for long-time integrations. Our second aim was to show that the TSBTS scheme is as suitable as previously known schemes for developing parallel N-body codes. In this work, we relied on a parallel scheme using the copy algorithm for the time-symmetric scheme. We implemented a hybrid of data and task parallelization for force calculation to handle load balancing problems that can appear in practice. Using the Plummer model initial conditions for different numbers of particles, we obtained the expected efficiency and speedup for a small number of particles. Although parallelization of the direct N-body codes is negatively affected by the communication/calculation ratios, we obtained good load-balanced results. Moreover, we were able to conserve the advantages of the algorithm (e.g., energy conservation for long-term simulations).

  13. Effects of Age, Intelligence and Executive Control Function on Saccadic Reaction Time in Persons with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Haishi, Koichi; Okuzumi, Hideyuki; Kokubun, Mitsuru

    2011-01-01

    The current research aimed to clarify the influence of age, intelligence and executive control function on the central tendency and intraindividual variability of saccadic reaction time in persons with intellectual disabilities. Participants were 44 persons with intellectual disabilities aged between 13 and 57 years whose IQs were between 14 and…

  14. Solving the time dependent vehicle routing problem by metaheuristic algorithms

    NASA Astrophysics Data System (ADS)

    Johar, Farhana; Potts, Chris; Bennell, Julia

    2015-02-01

    The problem we consider in this study is the Time Dependent Vehicle Routing Problem (TDVRP), which has been categorized as a non-classical VRP. It is motivated by the fact that multinational companies are currently not only manufacturing the demanded products but also distributing them to the customer locations. This implies an efficient synchronization of production and distribution activities. Hence, this study looks into the routing of vehicles which depart from the depot at varying times due to variation in the manufacturing process. We consider a single production line where demanded products are processed one at a time once orders have been received from the customers. It is assumed that orders released from the production line are loaded into the scheduled vehicle, ready to be delivered. However, the delivery can only be made once all orders scheduled in the vehicle have been released from the production line. Therefore, there can be lateness in the delivery process while awaiting the release of all customer orders on the route. Our objective is to determine a schedule for vehicle routing that minimizes the solution cost, including the travelling and tardiness costs. A mathematical formulation is developed to represent the problem and is solved by two metaheuristics: Variable Neighborhood Search (VNS) and Tabu Search (TS). These algorithms are coded in C++ and run on Solomon's 56 instances with some modification. The outcome of this experiment can be interpreted as the quality criteria of the different approximation methods. The comparison shows that VNS gave the better results while consuming reasonable computational effort.
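
    A minimal Variable Neighborhood Search skeleton for a problem of this kind; cost(), shake() and local_search() are placeholders standing in for the travel-plus-tardiness objective and the neighbourhood moves, which the abstract does not specify.

    ```python
    # Generic VNS loop: perturb in neighbourhood k, locally improve,
    # restart at k = 1 on improvement, otherwise widen the neighbourhood.
    import random

    def vns(initial_solution, cost, shake, local_search,
            k_max=3, max_iters=1000, seed=0):
        random.seed(seed)
        best, best_cost = initial_solution, cost(initial_solution)
        it = 0
        while it < max_iters:
            k = 1
            while k <= k_max and it < max_iters:
                candidate = local_search(shake(best, k))   # perturb, then improve
                c = cost(candidate)
                if c < best_cost:                          # accept and restart at k = 1
                    best, best_cost, k = candidate, c, 1
                else:
                    k += 1                                 # try a larger neighbourhood
                it += 1
        return best, best_cost
    ```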

  15. Improvement of algorithms for digital real-time n-γ discrimination

    NASA Astrophysics Data System (ADS)

    Wang, Song; Xu, Peng; Lu, Chang-Bing; Huo, Yong-Gang; Zhang, Jun-Jie

    2016-02-01

    Three algorithms (the Charge Comparison Method, n-γ Model Analysis and the Centroid Algorithm) have been revised to improve their accuracy and broaden the scope of applications to real-time digital n-γ discrimination. To evaluate the feasibility of the revised algorithms, a comparison between the improved and original versions of each is presented. To select an optimal real-time discrimination algorithm from these six algorithms (improved and original), the figure-of-merit (FOM), Peak-Threshold Ratio (PTR), Error Probability (EP) and Simulation Time (ST) for each were calculated to obtain a quantitatively comprehensive assessment of their performance. The results demonstrate that the improved algorithms have a higher accuracy, with an average improvement of 10% in FOM, 95% in PTR and 25% in EP, but all the STs are increased. Finally, the Adjustable Centroid Algorithm (ACA) is selected as the optimal algorithm for real-time digital n-γ discrimination.
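
    For illustration, a sketch of the classic (unrevised) Charge Comparison Method on a digitized pulse: neutrons deposit relatively more charge in the pulse tail than gammas do. The tail window and threshold below are illustrative assumptions, not the paper's tuned values.

    ```python
    # Classify a digitized detector pulse by its tail-to-total charge ratio.
    import numpy as np

    def charge_comparison(pulse, tail_offset=20, threshold=0.18):
        """Return ('neutron' | 'gamma', ratio) for one pulse."""
        pulse = np.asarray(pulse, dtype=float)
        peak = int(np.argmax(pulse))
        total = pulse[peak:].sum()                 # total integral from the peak
        tail = pulse[peak + tail_offset:].sum()    # delayed (tail) integral
        ratio = tail / total if total > 0 else 0.0
        return ("neutron" if ratio > threshold else "gamma"), ratio
    ```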

  16. Execution and executability

    NASA Astrophysics Data System (ADS)

    Bradford, Robert W.; Harrison, Denise

    2015-09-01

    "We have a new strategy to grow our organization." Developing the plan is just the start. Implementing it in the organization is the real challenge. Many organizations don't fail due to lack of strategy; they struggle because it isn't effectively implemented. After working with hundreds of companies on strategy development, Denise and Robert have distilled the critical areas where organizations need to focus in order to enhance profitability through superior execution. If these questions are important to your organization, you'll find useful answers in the following articles: Do you find yourself overwhelmed by too many competing priorities? How do you limit how many strategic initiatives/projects your organization is working on at one time? How do you balance your resource requirements (time and money) with the availability of these resources? How do you balance your strategic initiative requirements with the day-to-day requirements of your organization?

  17. A Time Series Approach to Random Number Generation: Using Recurrence Quantification Analysis to Capture Executive Behavior

    PubMed Central

    Oomens, Wouter; Maes, Joseph H. R.; Hasselman, Fred; Egger, Jos I. M.

    2015-01-01

    The concept of executive functions plays a prominent role in contemporary experimental and clinical studies on cognition. One paradigm used in this framework is the random number generation (RNG) task, the execution of which demands aspects of executive functioning, specifically inhibition and working memory. Data from the RNG task are best seen as a series of successive events. However, traditional RNG measures that are used to quantify executive functioning are mostly summary statistics referring to deviations from mathematical randomness. In the current study, we explore the utility of recurrence quantification analysis (RQA), a non-linear method that keeps the entire sequence intact, as a better way to describe executive functioning compared to traditional measures. To this aim, 242 first- and second-year students completed a non-paced RNG task. Principal component analysis of their data showed that traditional and RQA measures convey more or less the same information. However, RQA measures do so more parsimoniously and have a better interpretation. PMID:26097449
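
    A minimal recurrence quantification sketch under simple assumptions (scalar series, fixed radius): it builds the recurrence matrix and reports the recurrence rate and a basic determinism measure. The paper's full RQA uses more measures and careful parameter choices.

    ```python
    # Recurrence rate and determinism (%DET) for a scalar response series.
    import numpy as np

    def rqa_measures(series, radius=0.5, lmin=2):
        x = np.asarray(series, dtype=float)
        n = len(x)
        rec = np.abs(x[:, None] - x[None, :]) <= radius   # recurrence matrix
        np.fill_diagonal(rec, False)                       # ignore self-recurrence
        total = rec.sum()
        rr = total / (n * (n - 1))                         # recurrence rate

        det_points = 0
        for k in range(1, n):                              # upper off-diagonals
            run = 0
            for f in list(np.diagonal(rec, offset=k)) + [False]:  # sentinel ends runs
                if f:
                    run += 1
                else:
                    if run >= lmin:                        # points on diagonal lines
                        det_points += run
                    run = 0
        det = 2 * det_points / total if total else 0.0     # matrix is symmetric
        return rr, det
    ```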

  18. A Time Series Approach to Random Number Generation: Using Recurrence Quantification Analysis to Capture Executive Behavior.

    PubMed

    Oomens, Wouter; Maes, Joseph H R; Hasselman, Fred; Egger, Jos I M

    2015-01-01

    The concept of executive functions plays a prominent role in contemporary experimental and clinical studies on cognition. One paradigm used in this framework is the random number generation (RNG) task, the execution of which demands aspects of executive functioning, specifically inhibition and working memory. Data from the RNG task are best seen as a series of successive events. However, traditional RNG measures that are used to quantify executive functioning are mostly summary statistics referring to deviations from mathematical randomness. In the current study, we explore the utility of recurrence quantification analysis (RQA), a non-linear method that keeps the entire sequence intact, as a better way to describe executive functioning compared to traditional measures. To this aim, 242 first- and second-year students completed a non-paced RNG task. Principal component analysis of their data showed that traditional and RQA measures convey more or less the same information. However, RQA measures do so more parsimoniously and have a better interpretation. PMID:26097449

  19. Computationally efficient algorithms for real-time attitude estimation

    NASA Technical Reports Server (NTRS)

    Pringle, Steven R.

    1993-01-01

    For many practical spacecraft applications, algorithms for determining spacecraft attitude must combine inputs from diverse sensors and provide redundancy in the event of sensor failure. A Kalman filter is suitable for this task; however, it may impose a computational burden which may be avoided by suboptimal methods. A suboptimal estimator is presented which was implemented successfully on the Delta Star spacecraft, which performed a 9-month SDI flight experiment in 1989. This design sought to minimize algorithm complexity to accommodate the limitations of an 8K guidance computer. The algorithm used is interpreted in the framework of Kalman filtering and a derivation is given for the computation.

  20. Time scale algorithm: Definition of ensemble time and possible uses of the Kalman filter

    NASA Technical Reports Server (NTRS)

    Tavella, Patrizia; Thomas, Claudine

    1990-01-01

    The comparative study of two time scale algorithms, devised to satisfy different but related requirements, is presented. They are ALGOS(BIPM), producing the international reference TAI at the Bureau International des Poids et Mesures, and AT1(NIST), generating the real-time time scale AT1 at the National Institute of Standards and Technology. In each case, the time scale is a weighted average of clock readings, but the weight determination and the frequency prediction are different because they are adapted to different purposes. The possibility of using a mathematical tool, such as the Kalman filter, together with the definition of the time scale as a weighted average, is also analyzed. Results obtained by simulation are presented.
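
    A toy version of the ingredient both algorithms share, the weighted average of clock readings against their predictions; the weight determination and frequency prediction, which is where ALGOS and AT1 actually differ, are deliberately left abstract here.

    ```python
    # Ensemble time as a weighted mean of (reading - prediction) per clock.
    import numpy as np

    def ensemble_offset(readings, predictions, weights):
        """readings, predictions, weights: one entry per clock.
        Returns x(t) = sum_i w_i * (h_i(t) - h_i_hat(t))."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                            # normalize the weights
        resid = np.asarray(readings, dtype=float) - np.asarray(predictions, dtype=float)
        return float(np.dot(w, resid))
    ```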

  1. Bayesian Optimization Algorithm, Population Sizing, and Time to Convergence

    SciTech Connect

    Pelikan, M.; Goldberg, D.E.; Cantu-Paz, E.

    2000-01-19

    This paper analyzes convergence properties of the Bayesian optimization algorithm (BOA). It settles the BOA into the framework of problem decomposition used frequently in order to model and understand the behavior of simple genetic algorithms. The growth of the population size and the number of generations until convergence with respect to the size of a problem is theoretically analyzed. The theoretical results are supported by a number of experiments.

  2. O(1) time algorithms for computing histogram and Hough transform on a cross-bridge reconfigurable array of processors

    SciTech Connect

    Kao, T.; Horng, S.; Wang, Y.

    1995-04-01

    Instead of using the base-2 number system, we use a base-m number system to represent the numbers used in the proposed algorithms. Such a strategy can be used to design an O(T) time, T = log_m N + 1, prefix sum algorithm for an N-bit binary sequence on a cross-bridge reconfigurable array of processors using N processors, where the data bus is m bits wide. Then, this basic operation can be used to compute the histogram of an n x n image with G gray-level values in constant time using G x n x n processors, and to compute the Hough transform of an image with N edge pixels and an n x n parameter space in constant time using n x n x N processors, respectively. This result is better than the previously known results proposed in the literature. Also, the execution time of the proposed algorithms is tunable by the bus bandwidth. 43 refs.

  3. Time series change detection: Algorithms for land cover change

    NASA Astrophysics Data System (ADS)

    Boriah, Shyam

    can be used for decision making and policy planning purposes. In particular, previous change detection studies have primarily relied on examining differences between two or more satellite images acquired on different dates. Thus, a technological solution that detects global land cover change using high temporal resolution time series data will represent a paradigm-shift in the field of land cover change studies. To realize these ambitious goals, a number of computational challenges in spatio-temporal data mining need to be addressed. Specifically, analysis and discovery approaches need to be cognizant of climate and ecosystem data characteristics such as seasonality, non-stationarity/inter-region variability, multi-scale nature, spatio-temporal autocorrelation, high-dimensionality and massive data size. This dissertation, a step in that direction, translates earth science challenges to computer science problems, and provides computational solutions to address these problems. In particular, three key technical capabilities are developed: (1) Algorithms for time series change detection that are effective and can scale up to handle the large size of earth science data; (2) Change detection algorithms that can handle large numbers of missing and noisy values present in satellite data sets; and (3) Spatio-temporal analysis techniques to identify the scale and scope of disturbance events.

  4. A polynomial time biclustering algorithm for finding approximate expression patterns in gene expression time series

    PubMed Central

    Madeira, Sara C; Oliveira, Arlindo L

    2009-01-01

    Background The ability to monitor the change in expression patterns over time, and to observe the emergence of coherent temporal responses using gene expression time series, obtained from microarray experiments, is critical to advance our understanding of complex biological processes. In this context, biclustering algorithms have been recognized as an important tool for the discovery of local expression patterns, which are crucial to unravel potential regulatory mechanisms. Although most formulations of the biclustering problem are NP-hard, when working with time series expression data the interesting biclusters can be restricted to those with contiguous columns. This restriction leads to a tractable problem and enables the design of efficient biclustering algorithms able to identify all maximal contiguous column coherent biclusters. Methods In this work, we propose e-CCC-Biclustering, a biclustering algorithm that finds and reports all maximal contiguous column coherent biclusters with approximate expression patterns in time polynomial in the size of the time series gene expression matrix. This polynomial time complexity is achieved by manipulating a discretized version of the original matrix using efficient string processing techniques. We also propose extensions to deal with missing values, discover anticorrelated and scaled expression patterns, and different ways to compute the errors allowed in the expression patterns. We propose a scoring criterion combining the statistical significance of expression patterns with a similarity measure between overlapping biclusters. Results We present results in real data showing the effectiveness of e-CCC-Biclustering and its relevance in the discovery of regulatory modules describing the transcriptomic expression patterns occurring in Saccharomyces cerevisiae in response to heat stress. In particular, the results show the advantage of considering approximate patterns when compared to state of the art methods that require

  5. Executive Function and Mathematics Achievement: Are Effects Construct- and Time-General or Specific?

    ERIC Educational Resources Information Center

    Duncan, Robert; Nguyen, Tutrang; Miao, Alicia; McClelland, Megan; Bailey, Drew

    2016-01-01

    Executive function (EF) is considered a set of interrelated cognitive processes, including inhibitory control, working memory, and attentional shifting, that are connected to the development of the prefrontal cortex and contribute to children's problem solving skills and self-regulatory behavior (Best & Miller, 2010; Garon, Bryson, &…

  6. Real-time intelligent pattern recognition algorithm for surface EMG signals

    PubMed Central

    Khezri, Mahdi; Jahed, Mehran

    2007-01-01

    Background Electromyography (EMG) is the study of muscle function through the inquiry of electrical signals that the muscles emanate. EMG signals collected from the surface of the skin (Surface Electromyogram: sEMG) can be used in different applications such as recognizing musculoskeletal neural based patterns intercepted for hand prosthesis movements. Current systems designed for controlling prosthetic hands either have limited functions, can only be used to perform simple movements, or use an excessive number of electrodes in order to achieve acceptable results. In an attempt to overcome these problems we have proposed an intelligent system to recognize hand movements and have provided a user assessment routine to evaluate the correctness of executed movements. Methods We propose to use an intelligent approach based on an adaptive neuro-fuzzy inference system (ANFIS) integrated with a real-time learning scheme to identify hand motion commands. For this purpose, and to consider the effect of user evaluation on recognizing hand movements, vision feedback is applied to increase the capability of our system. By using this scheme the user may assess the correctness of the performed hand movement. In this work a hybrid method for training the fuzzy system, consisting of back-propagation (BP) and least mean squares (LMS), is utilized. Also, in order to optimize the number of fuzzy rules, a subtractive clustering algorithm has been developed. To design an effective system, we consider a conventional scheme of an EMG pattern recognition system. To design this system we propose to use two different sets of EMG features, namely time domain (TD) and time-frequency representation (TFR). Also, in order to decrease the undesirable effects of the dimension of these feature sets, principal component analysis (PCA) is utilized. Results In this study, the myoelectric signals considered for classification consist of six unique hand movements. Features chosen for EMG signal are time and time

  7. Fast time-reversible algorithms for molecular dynamics of rigid-body systems.

    PubMed

    Kajima, Yasuhiro; Hiyama, Miyabi; Ogata, Shuji; Kobayashi, Ryo; Tamura, Tomoyuki

    2012-06-21

    In this paper, we present time-reversible simulation algorithms for rigid bodies in the quaternion representation. By advancing a time-reversible algorithm [Y. Kajima, M. Hiyama, S. Ogata, and T. Tamura, J. Phys. Soc. Jpn. 80, 114002 (2011)] that requires iterations in calculating the angular velocity at each time step, we propose two kinds of iteration-free fast time-reversible algorithms. They are easily implemented in codes. The codes are compared with that of existing algorithms through demonstrative simulation of a nanometer-sized water droplet to find their stability of the total energy and computation speeds. PMID:22779579

  8. Minimum time acceleration of aircraft turbofan engines by using an algorithm based on nonlinear programming

    NASA Technical Reports Server (NTRS)

    Teren, F.

    1977-01-01

    Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.

  9. A Simple Algorithm for Approximating Confidence on the Modified Allan Variance and the Time Variance

    NASA Technical Reports Server (NTRS)

    Weiss, Marc A.; Greenhall, Charles A.

    1996-01-01

    An approximating algorithm for computing equivalent degrees of freedom of the Modified Allan Variance and its square root, the Modified Allan Deviation (MVAR and MDEV), and the Time Variance and Time Deviation (TVAR and TDEV) is presented, along with an algorithm for approximating the inverse chi-square distribution.

  10. Addition of random run FM noise to the KPW time scale algorithm

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2002-01-01

    The KPW (Kalman plus weights) time scale algorithm uses a Kalman filter to provide frequency and drift information to a basic time scale equation. This paper extends the algorithm to three-state clocks and gives results for a simulated eight-clock ensemble.

  11. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1983-01-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  12. A novel algorithm for real-time adaptive signal detection and identification

    SciTech Connect

    Sleefe, G.E.; Ladd, M.D.; Gallegos, D.E.; Sicking, C.W.; Erteza, I.A.

    1998-04-01

    This paper describes a novel digital signal processing algorithm for adaptively detecting and identifying signals buried in noise. The algorithm continually computes and updates the long-term statistics and spectral characteristics of the background noise. Using this noise model, a set of adaptive thresholds and matched digital filters are implemented to enhance and detect signals that are buried in the noise. The algorithm furthermore automatically suppresses coherent noise sources and adapts to time-varying signal conditions. Signal detection is performed in both the time-domain and the frequency-domain, thereby permitting the detection of both broad-band transients and narrow-band signals. The detection algorithm also provides for the computation of important signal features such as amplitude, timing, and phase information. Signal identification is achieved through a combination of frequency-domain template matching and spectral peak picking. The algorithm described herein is well suited for real-time implementation on digital signal processing hardware. This paper presents the theory of the adaptive algorithm, provides an algorithmic block diagram, and demonstrates its implementation and performance with real-world data. The computational efficiency of the algorithm is demonstrated through benchmarks on specific DSP hardware. The applications for this algorithm, which range from vibration analysis to real-time image processing, are also discussed.
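
    A sketch of the adaptive-threshold idea in the time domain (constants and names are assumptions): background statistics are tracked by exponential averaging and frozen while a detection is active, so the noise model is not contaminated by the signal itself.

    ```python
    # Running-noise-statistics detector: declare a detection when the
    # instantaneous energy exceeds the running mean by k running sigmas.
    import numpy as np

    def adaptive_detect(samples, alpha=0.995, k=6.0, warmup=200):
        mean, var = 0.0, 1.0
        detections = []
        for i, s in enumerate(np.asarray(samples, dtype=float)):
            e = s * s                                  # instantaneous energy
            if i < warmup or e <= mean + k * np.sqrt(var):
                # update the noise model only when no signal is declared
                mean = alpha * mean + (1 - alpha) * e
                var = alpha * var + (1 - alpha) * (e - mean) ** 2
            else:
                detections.append(i)
        return detections
    ```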

  13. Efficient time-slot assignment algorithms for SS/TDMA systems with variable-bandwidth beams

    NASA Astrophysics Data System (ADS)

    Chalasani, Suresh; Varma, Anujan

    1994-02-01

    In this paper, we present efficient sequential and parallel algorithms for computation of time-slot assignments in SS/TDMA (satellite-switched /time-division multiple-access) systems with variable-bandwidth beams. These algorithms are based on modeling the time-slot assignment (TSA) problem as a network-flow problem. Our sequential algorithm, in general, has a better time-complexity than a previous algorithm due to Gopal et al. and generates fewer switching matrices. If M (N) is the number of uplink (downlink) beams, L is the length of any optimal TSA, and α is the maximum bandwidth of an uplink or downlink beam, our sequential algorithm takes O((M × N)^3 · min(MNα, L)) time to compute an optimal TSA when the traffic-handling capacity of the satellite is of the same order as the total bandwidth of the links.

  14. A shortest path algorithm for satellite time-varying topological network

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Liu, Zhongkan; Zhuang, Jun

    2005-11-01

    A mobile satellite network is a special time-varying network. It is different from the classical fixed network and from other time-dependent networks which have been studied, so some classical network theory, such as the shortest path algorithm, cannot be applied to it directly. However, no study of its shortest path problem has been done. In this paper, based on the proposed model of the satellite time-varying topological network, the classical shortest path algorithms of fixed networks, such as the Dijkstra algorithm, are shown to be restrictive when applied to satellite networks. A novel shortest path algorithm for the satellite time-varying topological network is given and optimized. Simulation indicates that this algorithm can be effectively applied to the satellite time-varying topological network.
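
    A sketch of one natural formulation (not necessarily the paper's): a Dijkstra-style search in which each link's delay and availability are evaluated at the arrival time at its tail node. This is valid when an earlier departure never means a later arrival (the FIFO property).

    ```python
    # Time-dependent shortest path: delay(u, v, t) returns the link delay
    # at departure time t, or None if the link is down at that time.
    import heapq

    def time_varying_dijkstra(nodes, neighbors, delay, src, dst, t0=0.0):
        dist = {n: float("inf") for n in nodes}
        dist[src] = t0
        prev = {}
        pq = [(t0, src)]
        while pq:
            t, u = heapq.heappop(pq)
            if u == dst:
                break
            if t > dist[u]:
                continue                       # stale queue entry
            for v in neighbors(u):
                d = delay(u, v, t)             # topology evaluated at arrival time
                if d is not None and t + d < dist[v]:
                    dist[v] = t + d
                    prev[v] = u
                    heapq.heappush(pq, (dist[v], v))
        return dist[dst], prev
    ```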

  15. Run-time scheduling and execution of loops on message passing machines

    NASA Technical Reports Server (NTRS)

    Crowley, Kay; Saltz, Joel; Mirchandaney, Ravi; Berryman, Harry

    1989-01-01

    Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways. Naive implementations of programs on distributed memory machines requiring general loop partitions can be extremely inefficient. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data, in an optimized manner. The design is presented for a general preprocessor and schedule executer, the structures of which do not vary, even though the details of the computation and of the type of information are problem dependent.

  16. Run-time scheduling and execution of loops on message passing machines

    NASA Technical Reports Server (NTRS)

    Saltz, Joel; Crowley, Kathleen; Mirchandaney, Ravi; Berryman, Harry

    1990-01-01

    Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways. Naive implementations of programs on distributed memory machines requiring general loop partitions can be extremely inefficient. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data, in an optimized manner. The design is presented for a general preprocessor and schedule executer, the structures of which do not vary, even though the details of the computation and of the type of information are problem dependent.

  17. A Class Of Iterative Thresholding Algorithms For Real-Time Image Segmentation

    NASA Astrophysics Data System (ADS)

    Hassan, M. H.

    1989-03-01

    Thresholding algorithms are developed for segmenting gray-level images under nonuniform illumination. The algorithms are based on learning models generated from recursive digital filters which yield continuously varying threshold tracking functions. A real-time region growing algorithm, which locates the objects in the image while thresholding, is developed and implemented. The algorithms work in a raster-scan format, thus making them attractive for real-time image segmentation in situations requiring fast data throughput such as robot vision and character recognition.
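
    A sketch of the recursive-filter idea, assuming a first-order filter and an illustrative bias: the threshold tracks the local background along a raster (boustrophedon) scan, so binarization adapts to nonuniform illumination in a single pass.

    ```python
    # Raster-scan thresholding with a first-order (IIR) tracked threshold.
    import numpy as np

    def adaptive_threshold(image, a=0.99, bias=0.92):
        img = np.asarray(image, dtype=float)
        out = np.zeros_like(img, dtype=bool)
        t = img.mean()                                 # initial threshold estimate
        for r, row in enumerate(img):
            scan = row if r % 2 == 0 else row[::-1]    # alternate scan direction
            for c, p in enumerate(scan):
                t = a * t + (1 - a) * p                # track the local background
                col = c if r % 2 == 0 else len(row) - 1 - c
                out[r, col] = p > bias * t             # varying threshold decision
        return out
    ```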

  18. Ongoing Activity in Temporally Coherent Networks Predicts Intra-Subject Fluctuation of Response Time to Sporadic Executive Control Demands

    PubMed Central

    Nozawa, Takayuki; Sugiura, Motoaki; Yokoyama, Ryoichi; Ihara, Mizuki; Kotozaki, Yuka; Miyauchi, Carlos Makoto; Kanno, Akitake; Kawashima, Ryuta

    2014-01-01

    Can ongoing fMRI BOLD signals predict fluctuations in swiftness of a person’s response to sporadic cognitive demands? This is an important issue because it clarifies whether intrinsic brain dynamics, for which spatio-temporal patterns are expressed as temporally coherent networks (TCNs), have effects not only on sensory or motor processes, but also on cognitive processes. Predictivity has been affirmed, although to a limited extent. Expecting a predictive effect on executive performance for a wider range of TCNs constituting the cingulo-opercular, fronto-parietal, and default mode networks, we conducted an fMRI study using a version of the color–word Stroop task that was specifically designed to put a higher load on executive control, with the aim of making its fluctuations more detectable. We explored the relationships between the fluctuations in ongoing pre-trial activity in TCNs and the task response time (RT). The results revealed the existence of TCNs in which fluctuations in activity several seconds before the onset of the trial predicted RT fluctuations for the subsequent trial. These TCNs were distributed in the cingulo-opercular and fronto-parietal networks, as well as in perceptual and motor networks. Our results suggest that intrinsic brain dynamics in these networks constitute “cognitive readiness,” which plays an active role especially in situations where information for anticipatory attention control is unavailable. Fluctuations in these networks lead to fluctuations in executive control performance. PMID:24901995

  19. Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed

    NASA Technical Reports Server (NTRS)

    Tian, Ye; Song, Qi; Cattafesta, Louis

    2005-01-01

    This report summarizes the activities on "Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed." The work summarized consists primarily of two parts. The first part summarizes our previous work and the extensions to adaptive ID and control algorithms. The second part concentrates on the validation of adaptive algorithms by applying them to a vibration beam test bed. Extensions to flow control problems are discussed.

  20. Numerical stability analysis of the pseudo-spectral analytical time-domain PIC algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc; Haber, Irving

    2014-02-01

    The pseudo-spectral analytical time-domain (PSATD) particle-in-cell (PIC) algorithm solves the vacuum Maxwell's equations exactly, has no Courant time-step limit (as conventionally defined), and offers substantial flexibility in plasma and particle beam simulations. It is, however, not free of the usual numerical instabilities, including the numerical Cherenkov instability, when applied to relativistic beam simulations. This paper derives and solves the numerical dispersion relation for the PSATD algorithm and compares the results with corresponding behavior of the more conventional pseudo-spectral time-domain (PSTD) and finite difference time-domain (FDTD) algorithms. In general, PSATD offers superior stability properties over a reasonable range of time steps. More importantly, one version of the PSATD algorithm, when combined with digital filtering, is almost completely free of the numerical Cherenkov instability for time steps (scaled to the speed of light) comparable to or smaller than the axial cell size.

  1. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGES Beta

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  2. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    NASA Astrophysics Data System (ADS)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-05-01

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. Subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  3. The contribution of children's time-specific and longitudinal expressive language skills on developmental trajectories of executive function.

    PubMed

    Kuhn, Laura J; Willoughby, Michael T; Vernon-Feagans, Lynne; Blair, Clancy B

    2016-08-01

    To investigate whether children's early language skills support the development of executive functions (EFs), the current study used an epidemiological sample (N=1121) to determine whether two key language indicators, vocabulary and language complexity, were predictive of EF abilities over the preschool years. We examined vocabulary and language complexity both as time-varying covariates that predicted time-specific indicators of EF at 36 and 60 months of age and as time-invariant covariates that predicted children's EF at 60 months and change in EF from 36 to 60 months. We found that the rate of change in children's vocabulary between 15 and 36 months was associated with both the trajectory of EF from 36 to 60 months and the resulting abilities at 60 months. In contrast, children's language complexity had a time-specific association with EF only at 60 months. These findings suggest that children's early gains in vocabulary may be particularly relevant for emerging EF abilities. PMID:27101154

  4. MEPSA: A flexible peak search algorithm designed for uniformly spaced time series

    NASA Astrophysics Data System (ADS)

    Guidorzi, C.

    2015-04-01

    We present a novel algorithm aimed at identifying peaks within a uniformly sampled time series affected by uncorrelated Gaussian noise. The algorithm, called "MEPSA" (multiple excess peak search algorithm), essentially scans the time series at different timescales by comparing a given peak candidate with a variable number of adjacent bins. While it was originally conceived for the analysis of gamma-ray burst (GRB) light curves, its usage can be readily extended to other astrophysical transient phenomena whose activity is recorded through different surveys. We tested and validated it through simulated featureless profiles as well as simulated GRB time profiles. We showcase the algorithm's potential by comparing with the popular algorithm by Li and Fenimore, which is frequently adopted in the literature. Thanks to its high flexibility, the mask of excess patterns used by MEPSA can be tailored and optimised to the kind of data to be analysed without modifying the code. The C code is made publicly available.
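
    A simplified peak scan in the spirit of MEPSA, though not its published mask set: a rebinned sample counts as a peak candidate at a given timescale if it exceeds its neighbours on both sides by more than k noise standard deviations.

    ```python
    # Multi-timescale excess-peak scan on a uniformly sampled series with
    # known per-bin noise sigma; scales, k and n_adj are illustrative.
    import numpy as np

    def peak_search(counts, sigma, k=5.0, scales=(1, 2, 4, 8), n_adj=2):
        x = np.asarray(counts, dtype=float)
        peaks = set()
        for s in scales:
            n = len(x) // s
            reb = x[: n * s].reshape(n, s).mean(axis=1)   # rebin by factor s
            err = sigma / np.sqrt(s)                      # noise of the rebinned mean
            for i in range(n_adj, n - n_adj):
                left = reb[i - n_adj:i]
                right = reb[i + 1:i + 1 + n_adj]
                if np.all(reb[i] - left > k * err) and np.all(reb[i] - right > k * err):
                    peaks.add(i * s)                      # index in original binning
        return sorted(peaks)
    ```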

  5. Towards Run-time Assurance of Advanced Propulsion Algorithms

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy

    2014-01-01

    This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.

  6. Genetic algorithms for adaptive real-time control in space systems

    NASA Technical Reports Server (NTRS)

    Vanderzijp, J.; Choudry, A.

    1988-01-01

    Genetic Algorithms used for learning are discussed as one way to control the combinatorial explosion associated with the generation of new rules. The Genetic Algorithm approach tends to work best when it can be applied to a domain independent knowledge representation. Applications to real-time control in space systems are discussed.

  7. A real-time implementation of an advanced sensor failure detection, isolation, and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.; Merrill, W. C.

    1983-01-01

    A sensor failure detection, isolation, and accommodation algorithm was developed which incorporates analytic sensor redundancy through software. This algorithm was implemented in a high level language on a microprocessor based controls computer. Parallel processing and state-of-the-art 16-bit microprocessors are used along with efficient programming practices to achieve real-time operation.

  8. Mapping algorithm for 360-deg profilometry with time delayed integration imaging

    NASA Astrophysics Data System (ADS)

    Asundi, Anand K.; Zhou, Wensen

    1999-02-01

    A direct phase-to-radial distance mapping algorithm for 360 deg profilometry with time delay and integration imaging is presented. This method, based on an inherent mapping relationship, is capable of speedy and accurate measurement without the determination of any geometric parameter. The capability of the mapping algorithm is demonstrated by measuring a plane and a shoe.

  9. Scaling of the running time of the quantum adiabatic algorithm for propositional satisfiability

    SciTech Connect

    Znidaric, Marko

    2005-06-15

    We numerically study the quantum adiabatic algorithm for propositional satisfiability. A new class of previously unknown hard instances is identified among random problems. We numerically find that the running time for such instances grows exponentially with their size. The worst case complexity of the quantum adiabatic algorithm therefore seems to be exponential.

  10. On the Time Complexity of Dijkstra's Three-State Mutual Exclusion Algorithm

    NASA Astrophysics Data System (ADS)

    Kimoto, Masahiro; Tsuchiya, Tatsuhiro; Kikuno, Tohru

    In this letter we give a lower bound on the worst-case time complexity of Dijkstra's three-state mutual exclusion algorithm by specifying a concrete behavior of the algorithm. We also show that our result is more accurate than the known best bound.

  11. A real-time FORTRAN implementation of a sensor failure detection, isolation and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.

    1984-01-01

    An advanced sensor failure detection, isolation, and accommodation algorithm has been developed by NASA for the F100 turbofan engine. The algorithm takes advantage of the analytical redundancy of the sensors to improve the reliability of the sensor set. The method requires the controls computer to determine when a sensor failure has occurred without the help of redundant hardware sensors in the control system. The controls computer provides an estimate of the correct value of the output of the failed sensor. The algorithm has been programmed in FORTRAN using a real-time microprocessor-based controls computer. A detailed description of the algorithm and its implementation on a microprocessor is given.

  12. Real-time robot deliberation by compilation and monitoring of anytime algorithms

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo

    1994-01-01

    Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off between decision quality and the computational resources used to produce it. Anytime algorithms were designed to offer such a trade-off. A model of the compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and provides independent real-time robotic systems that automatically adjust resource allocation to yield optimum performance.

  13. Algorithm to extract the spanning clusters and calculate conductivity in strip geometries

    NASA Astrophysics Data System (ADS)

    Babalievski, F.

    1995-06-01

    I present an improved algorithm to solve the random resistor problem using a transfer-matrix technique. Preconditioning by spanning cluster extraction both reduces the size of the matrix and yields faster execution times when compared to previous algorithms.

  14. Scaling Time Warp-based Discrete Event Execution to 10^4 Processors on Blue Gene Supercomputer

    SciTech Connect

    Perumalla, Kalyan S

    2007-01-01

    Lately, important large-scale simulation applications, such as emergency/event planning and response, are emerging that are based on discrete event models. The applications are characterized by their scale (several millions of simulated entities), their fine-grained nature of computation (microseconds per event), and their highly dynamic inter-entity event interactions. The desired scale and speed together call for highly scalable parallel discrete event simulation (PDES) engines. However, few such parallel engines have been designed or tested on platforms with thousands of processors. Here an overview is given of a unique PDES engine that has been designed to support Time Warp-style optimistic parallel execution as well as a more generalized mixed, optimistic-conservative synchronization. The engine is designed to run on massively parallel architectures with minimal overheads. A performance study of the engine is presented, including the first results to date of PDES benchmarks demonstrating scalability to as many as 16,384 processors, on an IBM Blue Gene supercomputer. The results show, for the first time, the promise of effectively sustaining very large scale discrete event execution on up to 10^4 processors.

  15. Algorithms for Blind Components Separation and Extraction from the Time-Frequency Distribution of Their Mixture

    NASA Astrophysics Data System (ADS)

    Barkat, B.; Abed-Meraim, K.

    2004-12-01

    We propose novel algorithms to select and extract separately all the components, using the time-frequency distribution (TFD), of a given multicomponent frequency-modulated (FM) signal. These algorithms do not use any a priori information about the various components. However, their performance depends strongly on the cross-term suppression ability and high time-frequency resolution of the considered TFD. To illustrate the usefulness of the proposed algorithms, we applied them to the estimation of the instantaneous frequency coefficients of a multicomponent signal, and the results are compared with those of the higher-order ambiguity function (HAF) algorithm. Monte Carlo simulation results show the superiority of the proposed algorithms over the HAF.

  16. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    PubMed

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-01-01

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms. PMID:26404291

  17. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model

    PubMed Central

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-01-01

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms. PMID:26404291

  18. False-nearest-neighbors algorithm and noise-corrupted time series

    NASA Astrophysics Data System (ADS)

    Rhodes, Carl; Morari, Manfred

    1997-05-01

    The false-nearest-neighbors (FNN) algorithm was originally developed to determine the embedding dimension for autonomous time series. For noise-free computer-generated time series, the algorithm does a good job in predicting the embedding dimension. However, the problem of predicting the embedding dimension when the time-series data are corrupted by noise was not fully examined in the original studies of the FNN algorithm. Here it is shown that with large data sets, even small amounts of noise can lead to incorrect prediction of the embedding dimension. Surprisingly, as the length of the time series analyzed by FNN grows larger, the cause of incorrect prediction becomes more pronounced. An analysis of the effect of noise on the FNN algorithm and a solution for dealing with the effects of noise are given here. Some results on the theoretically correct choice of the FNN threshold are also presented.
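
    A compact false-nearest-neighbors sketch: embed the series at dimension d, find each point's nearest neighbor, and test whether the pair separates when the (d+1)-th coordinate is added. Rtol is the classic heuristic threshold, not a tuned choice.

    ```python
    # Fraction of false nearest neighbors at embedding dimension d.
    import numpy as np

    def fnn_fraction(x, d, tau=1, Rtol=15.0):
        x = np.asarray(x, dtype=float)
        m = len(x) - d * tau                        # points embeddable in d+1 dims
        emb = np.column_stack([x[i * tau:i * tau + m] for i in range(d)])
        false = 0
        for i in range(m):
            dist = np.linalg.norm(emb - emb[i], axis=1)
            dist[i] = np.inf                        # exclude the point itself
            j = int(np.argmin(dist))                # nearest neighbor in d dims
            extra = abs(x[i + d * tau] - x[j + d * tau])
            if dist[j] > 0 and extra / dist[j] > Rtol:
                false += 1                          # neighbor separates in d+1 dims
        return false / m

    # The smallest d for which fnn_fraction(x, d) drops near zero is taken as
    # the embedding dimension; noise keeps the fraction from vanishing.
    ```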

  19. Parallel algorithms for simulating continuous time Markov chains

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
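
    A short serial sketch of uniformization itself, the kernel these parallel methods build on: pick a uniform rate no smaller than every exit rate, drive the chain with a single Poisson event stream, and let self-loops absorb the slack. It is this shared Poisson clock that parallel implementations synchronize on.

    ```python
    # Simulate a CTMC by uniformization: Poisson(L) events plus a DTMC step.
    import numpy as np

    def simulate_ctmc(Q, state0, t_end, rng=None):
        """Q: generator matrix (rows sum to 0). Returns (times, states)."""
        rng = rng or np.random.default_rng()
        Q = np.asarray(Q, dtype=float)
        L = np.max(-np.diag(Q))                     # uniformization rate
        P = np.eye(len(Q)) + Q / L                  # embedded DTMC (rows sum to 1)
        t, s = 0.0, state0
        times, states = [0.0], [state0]
        while True:
            t += rng.exponential(1.0 / L)           # next shared Poisson event
            if t >= t_end:
                break
            s = rng.choice(len(P), p=P[s])          # may self-loop with prob P[s, s]
            times.append(t)
            states.append(s)
        return times, states
    ```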

  20. Higher-Order, Space-Time Adaptive Finite Volume Methods: Algorithms, Analysis and Applications

    SciTech Connect

    Minion, Michael

    2014-04-29

    The four main goals outlined in the proposal for this project were: 1. Investigate the use of higher-order (in space and time) finite-volume methods for fluid flow problems. 2. Explore the embedding of iterative temporal methods within traditional block-structured AMR algorithms. 3. Develop parallel in time methods for ODEs and PDEs. 4. Work collaboratively with the Center for Computational Sciences and Engineering (CCSE) at Lawrence Berkeley National Lab towards incorporating new algorithms within existing DOE application codes.

  1. Piloted simulation of an algorithm for onboard control of time-optimal intercept

    NASA Technical Reports Server (NTRS)

    Price, D. B.; Calise, A. J.; Moerder, D. D.

    1985-01-01

    A piloted simulation of algorithms for onboard computation of trajectories for time-optimal intercept of a moving target by an F-8 aircraft is described. The algorithms, which use singular perturbation techniques, generate commands in the cockpit. By centering the horizontal and vertical needles, the pilot flies an approximation to a time-optimal intercept trajectory. Example simulations are shown and statistical data on the pilot's performance when presented with different display and computation modes are described.

  2. Time-oriented hierarchical method for computation of principal components using subspace learning algorithm.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2004-10-01

    Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modifying one of the most famous PSA learning algorithms--the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM). The method uses two distinct time scales. On a faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On a slower scale, output neurons compete for fulfillment of their "own interests". On this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper it is briefly analyzed how (or why) the time-oriented hierarchical method can be used to transform any existing neural network PSA method into a PCA method. PMID:15593379
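
    A sketch of one update step of the unmodified Subspace Learning Algorithm that the paper starts from; the TOHM modification (the second, slower time scale that rotates the basis toward the individual eigenvectors) is not shown.

    ```python
    # One SLA update: W <- W + eta * (x y^T - W y y^T), with y = W^T x.
    import numpy as np

    def sla_step(W, x, eta=0.01):
        """W: (input_dim, n_components) weight matrix; x: centered data vector."""
        y = W.T @ x                                     # network output
        return W + eta * (np.outer(x, y) - W @ np.outer(y, y))

    # Usage: iterate over centered data vectors x; for suitable eta the
    # columns of W converge to a basis of the principal subspace, though not
    # to the individual eigenvectors -- that is what the paper's slower,
    # hierarchical time scale adds.
    ```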

  3. A compensatory algorithm for the slow-down effect on constant-time-separation approaches

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.

    1991-01-01

    In seeking methods to improve airport capacity, the question arose as to whether an electronic display could provide information which would enable the pilot to be responsible for self-separation under instrument conditions to allow for the practical implementation of reduced separation, multiple glide path approaches. A time based, closed loop algorithm was developed and simulator validated for in-trail (one aircraft behind the other) approach and landing. The algorithm was designed to reduce the effects of approach speed reduction prior to landing for the trailing aircraft as well as the dispersion of the interarrival times. The operational task for the validation was an instrument approach to landing while following a single lead aircraft on the same approach path. The desired landing separation was 60 seconds for these approaches. An open loop algorithm, previously developed, was used as a basis for comparison. The results showed that relative to the open loop algorithm, the closed loop one could theoretically provide for a 6 pct. increase in runway throughput. Also, the use of the closed loop algorithm did not affect the path tracking performance and pilot comments indicated that the guidance from the closed loop algorithm would be acceptable from an operational standpoint. From these results, it is concluded that by using a time based, closed loop spacing algorithm, precise interarrival time intervals may be achievable with operationally acceptable pilot workload.

  4. A Gaussian Process Based Online Change Detection Algorithm for Monitoring Periodic Time Series

    SciTech Connect

    Chandola, Varun; Vatsavai, Raju

    2011-01-01

    Online time series change detection is a critical component of many monitoring systems, such as space and air-borne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically has a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to handle such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using Gaussian process in an online mode is the need to solve a large system of equations involving the associated covariance matrix which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining an O(T) memory footprint, compared to the O(T^4) time and O(T^2) memory requirements of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in Iowa state, USA. Our algorithm is able to detect different types of changes in a NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).
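
    A much-simplified stand-in for the paper's Gaussian-process model, for illustration only: each observation is predicted from the same seasonal phase in earlier periods, and a change is flagged when the standardized residual leaves a ±k control band.

    ```python
    # Seasonal prediction + control chart change detection (not the GP model).
    import numpy as np

    def seasonal_change_points(series, period, k=3.0, min_history=3):
        x = np.asarray(series, dtype=float)
        changes = []
        for t in range(period * min_history, len(x)):
            past = x[t % period::period][: t // period]   # same phase, earlier periods
            mu = past.mean()
            sd = past.std(ddof=1) + 1e-12
            if abs(x[t] - mu) > k * sd:                   # residual leaves the band
                changes.append(t)
        return changes
    ```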

  5. A Hybrid Algorithm for Clustering of Time Series Data Based on Affinity Search Technique

    PubMed Central

    Aghabozorgi, Saeed; Ying Wah, Teh; Herawan, Tutut; Jalab, Hamid A.; Shaygan, Mohammad Amin; Jalali, Alireza

    2014-01-01

    Time series clustering is an important solution to various problems in numerous fields of research, including business, medical science, and finance. However, conventional clustering algorithms are not practical for time series data because they are essentially designed for static data. This impracticality results in poor clustering accuracy in several systems. In this paper, a new hybrid clustering algorithm is proposed based on the similarity in shape of time series data. Time series data are first grouped into subclusters based on similarity in time. The subclusters are then merged using the k-Medoids algorithm based on similarity in shape. This model has two contributions: (1) it is more accurate than other conventional and hybrid approaches, and (2) it determines the similarity in shape among time series data with low complexity. To evaluate the accuracy of the proposed model, the model is tested extensively using synthetic and real-world time series datasets. PMID:24982966

  6. Robust algorithm for estimation of time-varying transfer functions.

    PubMed

    Zou, Rui; Chon, Ki H

    2004-02-01

    We introduce a new method to estimate reliable time-varying (TV) transfer functions (TFs) and TV impulse response functions. The method is based on TV autoregressive moving average models in which the TV parameters are accurately obtained using the optimal parameter search method which we have previously developed. The new method is more accurate than the recursive least-squares (RLS), and remains robust even in the case of significant noise contamination. Furthermore, the new method is able to track dynamics that change abruptly, which is certainly a deficiency of the RLS. Application of the new method to renal blood pressure and flow revealed that hypertensive rats undergo more complex and TV autoregulation in maintaining stable blood flow than do normotensive rats. This observation has not been previously revealed using time-invariant TF analyses. The newly developed approach may promote the broader use of TV system identification in studies of physiological systems and makes linear and nonlinear TV modeling possible in certain cases previously thought intractable. PMID:14765694

  7. A processor-time-minimal systolic array for cubical mesh algorithms

    SciTech Connect

    Cappello, P. (Dept. of Computer Science)

    1992-01-01

    Using a directed acyclic graph (dag) model of algorithms, the paper focuses on time-minimal multiprocessor schedules that use as few processors as possible. Such a processor-time-minimal scheduling of an algorithm's dag first is illustrated using a triangular-shaped 2-D directed mesh (representing, for example, an algorithm for solving a triangular system of linear equations). Then, algorithms represented by an n × n × n directed mesh are investigated. This cubical directed mesh is fundamental; it represents the standard algorithm for computing matrix product as well as many other algorithms. Completion of the cubical mesh requires 3n - 2 steps. It is shown that the number of processing elements needed to achieve this time bound is at least 3n^2/4. A systolic array for the cubical directed mesh is then presented. It completes the mesh using the minimum number of steps and exactly 3n^2/4 processing elements: it is processor-time-minimal. The systolic array's topology is that of a hexagonally shaped, cylindrically-connected, 2-D directed mesh.

  8. Executive Functions

    PubMed Central

    Diamond, Adele

    2014-01-01

    Executive functions (EFs) make possible mentally playing with ideas; taking the time to think before acting; meeting novel, unanticipated challenges; resisting temptations; and staying focused. Core EFs are inhibition [response inhibition (self-control—resisting temptations and resisting acting impulsively) and interference control (selective attention and cognitive inhibition)], working memory, and cognitive flexibility (including creatively thinking “outside the box,” seeing anything from different perspectives, and quickly and flexibly adapting to changed circumstances). The developmental progression and representative measures of each are discussed. Controversies are addressed (e.g., the relation between EFs and fluid intelligence, self-regulation, executive attention, and effortful control, and the relation between working memory and inhibition and attention). The importance of social, emotional, and physical health for cognitive health is discussed because stress, lack of sleep, loneliness, or lack of exercise each impair EFs. That EFs are trainable and can be improved with practice is addressed, including diverse methods tried thus far. PMID:23020641

  9. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    PubMed

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series streams are one of the most common data types in the data mining field, prevalent in areas such as stock markets, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmenting algorithms mainly focused on improving precision rather than efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both the MDL (minimum description length) and MML (minimum message length) methods, which segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by applying PRESEE to segment real-time stream datasets from ChinaFLUX sensor network data streams. PMID:23956693

  10. Time Estimation in Alzheimer's Disease and the Role of the Central Executive

    ERIC Educational Resources Information Center

    Papagno, Costanza; Allegra, Adele; Cardaci, Maurizio

    2004-01-01

    The aim of this study was to evaluate the role of short-term memory and attention in time estimation. For this purpose we studied prospective time verbal estimation in 21 patients with Alzheimer's disease (AD), and compared their performance with that of 21 matched normal controls in two different conditions: during a digit span task and during an…

  11. Real-time image denoising algorithm in teleradiology systems

    NASA Astrophysics Data System (ADS)

    Gupta, Pradeep Kumar; Kanhirodan, Rajan

    2006-02-01

    Denoising of medical images in the wavelet domain has potential application in transmission technologies such as teleradiology. This technique becomes all the more attractive when we consider progressive transmission in a teleradiology system. The transmitted images are corrupted mainly due to noisy channels. In this paper, we present a new real-time image denoising scheme based on limited restoration of bit-planes of wavelet coefficients. The proposed scheme exploits a fundamental property of the wavelet transform: its ability to analyze the image at different resolution levels, along with the edge information associated with each sub-band. The desired bit-rate control is achieved by applying the restoration to a limited number of bit-planes subject to optimal smoothing. The proposed method adapts itself to the preference of the medical expert; a single parameter can be used to balance the preservation of (expert-dependent) relevant details against the degree of noise reduction. The scheme relies on the fact that noise commonly manifests itself as a fine-grained structure in the image, and the wavelet transform allows the restoration strategy to adapt itself according to the directional features of edges. The proposed approach shows promising results in terms of error reduction when compared with the unrestored case. It also has the capability to adapt to situations where the noise level in the image varies, and to the changing requirements of medical experts. The applicability of the proposed approach has implications for the restoration of medical images in teleradiology systems. The proposed scheme is computationally efficient.

  12. A contourlet transform based algorithm for real-time video encoding

    NASA Astrophysics Data System (ADS)

    Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris

    2012-06-01

    In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to

  13. Riemannian mean and space-time adaptive processing using projection and inversion algorithms

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam; Barbaresco, Frédéric

    2013-05-01

    The estimation of the covariance matrix from real data is required in the application of space-time adaptive processing (STAP) to an airborne ground moving target indication (GMTI) radar. A natural approach to estimation of the covariance matrix that is based on the information geometry has been proposed. In this paper, the output of the Riemannian mean is used in inversion and projection algorithms. It is found that the projection class of algorithms can yield very significant gains, even when the gains due to inversion-based algorithms are marginal over standard algorithms. The performance of the projection class of algorithms does not appear to be overly sensitive to the projected subspace dimension.
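
    For orientation, the Riemannian (Karcher) mean referenced above is conventionally defined, for Hermitian positive-definite covariance matrices R_1, ..., R_N in the information-geometry setting, as the minimizer of summed squared geodesic distances; this is the standard textbook definition, not an equation taken from the paper:

        \[
        d(A,B) = \left\| \log\!\big(A^{-1/2} B A^{-1/2}\big) \right\|_F,
        \qquad
        \bar{R} = \operatorname*{arg\,min}_{R \succ 0} \sum_{i=1}^{N} d^{2}(R, R_i).
        \]

    The minimizer has no closed form for N > 2 and is typically computed by a fixed-point iteration on the manifold of positive-definite matrices.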

  14. Active mass damper system employing time delay control algorithm for vibration mitigation of building structure

    NASA Astrophysics Data System (ADS)

    Jang, Dong-Doo; Park, Jeongsu; Jung, Hyung-Jo

    2013-04-01

    The feasibility of an active mass damper (AMD) system employing the time delay control (TDC) algorithm, a robust and adaptive control algorithm, for effectively suppressing the wind-induced vibration of a building structure is investigated. The TDC algorithm has several attractive features, such as simplicity and excellent robustness to unknown system dynamics and disturbance. Based on these characteristics, it has the potential to be an effective control system for mitigating excessive vibration of civil engineering structures such as buildings, bridges, and towers. However, it has not yet been used for structural response reduction. In order to verify the effectiveness of the proposed active control method combining an AMD system with the TDC algorithm, a series of lab-scale tests are carried out.

  15. Robust and low complexity localization algorithm based on head-related impulse responses and interaural time difference.

    PubMed

    Wan, Xinwang; Liang, Juan

    2013-01-01

    This article introduces a biologically inspired localization algorithm using two microphones, for a mobile robot. The proposed algorithm has two steps. First, the coarse azimuth angle of the sound source is estimated by cross-correlation algorithm based on interaural time difference. Then, the accurate azimuth angle is obtained by cross-channel algorithm based on head-related impulse responses. The proposed algorithm has lower computational complexity compared to the cross-channel algorithm. Experimental results illustrate that the localization performance of the proposed algorithm is better than those of the cross-correlation and cross-channel algorithms. PMID:23298016
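
    As a sketch of the first, coarse stage described above, the interaural time difference can be estimated by cross-correlating the two microphone signals and mapped to azimuth with a simple far-field model. The microphone spacing d, sampling rate fs, and the sin-based mapping are illustrative assumptions, not values from the paper.

        # Coarse azimuth from ITD via cross-correlation (illustrative sketch).
        import numpy as np

        def coarse_azimuth(left, right, fs, d=0.15, c=343.0):
            corr = np.correlate(left, right, mode="full")   # cross-correlation
            lag = np.argmax(corr) - (len(right) - 1)        # best lag, samples
            itd = np.clip(lag / fs, -d / c, d / c)          # seconds, kept in range
            return np.degrees(np.arcsin(itd * c / d))       # far-field ITD model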

  16. Intelligibility of time-compressed speech: the effect of uniform versus non-uniform time-compression algorithms.

    PubMed

    Schlueter, Anne; Lemke, Ulrike; Kollmeier, Birger; Holube, Inga

    2014-03-01

    For assessing hearing aid algorithms, a method is sought to shift the threshold of a speech-in-noise test to (mostly positive) signal-to-noise ratios (SNRs) that allow discrimination across algorithmic settings and are most relevant for hearing-impaired listeners in daily life. Hence, time-compressed speech with higher speech rates was evaluated to parametrically increase the difficulty of the test while preserving most of the relevant acoustical speech cues. A uniform and a non-uniform algorithm were used to compress the sentences of the German Oldenburg Sentence Test at different speech rates. In comparison, the non-uniform algorithm exhibited greater deviations from the targeted time compression, as well as greater changes of the phoneme duration, spectra, and modulation spectra. Speech intelligibility for fast Oldenburg sentences in background noise at different SNRs was determined with 48 normal-hearing listeners. The results confirmed decreasing intelligibility with increasing speech rate. Speech had to be compressed to more than 30% of its original length to reach 50% intelligibility at positive SNRs. Characteristics influencing the discrimination ability of the test for assessing effective SNR changes were investigated. Subjective and objective measures indicated a clear advantage of the uniform algorithm in comparison to the non-uniform algorithm for the application in speech-in-noise tests. PMID:24606289

  17. Intra-individual lap time variation of the 400-m walk, an early mobility indicator of executive function decline in high-functioning older adults?

    PubMed

    Tian, Qu; Resnick, Susan M; Ferrucci, Luigi; Studenski, Stephanie A

    2015-12-01

    Higher intra-individual lap time variation (LTV) of the 400-m walk is cross-sectionally associated with poorer attention in older adults. Whether higher LTV predicts decline in executive function and whether the relationship is accounted for by slower walking remain unanswered. The main objective of this study was to examine the relationship between baseline LTV and longitudinal change in executive function. We used data from 347 participants aged 60 years and older (50.7% female) from the Baltimore Longitudinal Study of Aging. Longitudinal assessments of executive function were conducted between 2007 and 2013, including attention (Trails A, Digit Span Forward Test), cognitive flexibility and set shifting (Trails B, Delta TMT: Trials B minus Trials A), visuoperceptual speed (Digit Symbol Substitution Test), and working memory (Digit Span Backward Test). LTV and mean lap time (MLT) were obtained from the 400-m walk test concurrent with the baseline executive function assessment. LTV was computed as variability of lap time across ten 40-m laps based on individual trajectories. A linear mixed-effects model was used to examine LTV in relation to changes in executive function, adjusted for age, sex, education, and MLT. Higher LTV was associated with greater decline in performance on Trails B (β = 4.322, p < 0.001) and delta TMT (β = 4.230, p < 0.001), independent of covariates. Findings remained largely unchanged after further adjustment for MLT. LTV was not associated with changes in other executive function measures (all p > 0.05). In high-functioning older adults, higher LTV in the 400-m walk predicts executive function decline involving cognitive flexibility and set shifting over a long period of time. High LTV may be an early indicator of executive function decline independent of MLT. PMID:26561401

  18. Importance of variable time-step algorithms in spatial kinetics calculations

    SciTech Connect

    Aviles, B.N.

    1994-12-31

    The use of spatial kinetics codes in conjunction with advanced thermal-hydraulics codes is becoming more widespread as better methods and faster computers appear. The integrated code packages are being used for routine nuclear power plant design and analysis, including simulations with instrumentation and control systems initiating system perturbations such as rod motion and scrams. As a result, it is important to include a robust variable time-step algorithm that can accurately and efficiently follow widely varying plant neutronic behavior. This paper describes the variable time-step algorithm in SPANDEX and compares the automatic time-step scheme with a more traditional fixed time-step scheme.
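
    The abstract does not spell out the SPANDEX scheme, but the general principle of a variable time-step algorithm can be sketched with step-doubling error control: take one full step and two half steps, accept and enlarge the step when they agree, otherwise shrink and retry. The explicit Euler stage and the tolerance below are illustrative assumptions, not the SPANDEX algorithm.

        # Generic variable time-step loop with step-doubling error control.
        def integrate(f, y, t, t_end, dt, tol=1e-6):
            while t < t_end:
                h = min(dt, t_end - t)                    # do not overshoot
                full = euler(f, y, t, h)                  # one step of size h
                half = euler(f, euler(f, y, t, h / 2), t + h / 2, h / 2)
                if abs(half - full) <= tol:               # accept: advance, grow
                    y, t = half, t + h
                    dt *= 1.5
                else:                                     # reject: shrink, retry
                    dt *= 0.5
            return y

        def euler(f, y, t, h):                            # simple stage solver
            return y + h * f(t, y)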

  19. Algorithmic recognition of anomalous time intervals in sea-level observations

    NASA Astrophysics Data System (ADS)

    Getmanov, V. G.; Gvishiani, A. D.; Kamaev, D. A.; Kornilov, A. S.

    2016-03-01

    The problem of the algorithmic recognition of anomalous time intervals in the time series of sea-level observations conducted by the Russian Tsunami Warning Survey (RTWS) is considered. Normal and anomalous sea-level observations are described. Polyharmonic models describing sea-level fluctuations on short time intervals are constructed, and sea-level forecasting based on these models is proposed. The algorithm for the recognition of anomalous time intervals is developed, and its performance is tested on real RTWS data.
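
    A minimal sketch of the two ingredients named above, assuming illustrative harmonic frequencies and a simple residual threshold (the paper's actual models and recognition rules are not reproduced):

        # Fit a short-interval polyharmonic (sum-of-sinusoids) sea-level model by
        # least squares, then flag observations whose forecast residual is large.
        import numpy as np

        def design(t, freqs):                       # [1, cos, sin, cos, sin, ...]
            cols = [np.ones_like(t)]
            for f in freqs:
                cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
            return np.column_stack(cols)

        def fit_polyharmonic(t, y, freqs):
            coef, *_ = np.linalg.lstsq(design(t, freqs), y, rcond=None)
            return lambda tt: design(tt, freqs) @ coef

        def anomalous(model, t_new, y_new, sigma, k=4.0):
            return np.abs(y_new - model(t_new)) > k * sigma   # residual test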

  20. Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system

    SciTech Connect

    Fijany, A.; Milman, M.; Redding, D.

    1994-12-31

    In this paper, massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling rate requirement, the implementation of this control algorithm poses a computationally challenging problem, since it demands a sustained computational throughput on the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other Fast Poisson Solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.

  1. Two neural network algorithms for designing optimal terminal controllers with open final time

    NASA Technical Reports Server (NTRS)

    Plumer, Edward S.

    1992-01-01

    Multilayer neural networks, trained by the backpropagation through time algorithm (BPTT), have been used successfully as state-feedback controllers for nonlinear terminal control problems. Current BPTT techniques, however, are not able to deal systematically with open final-time situations such as minimum-time problems. Two approaches which extend BPTT to open final-time problems are presented. In the first, a neural network learns a mapping from initial-state to time-to-go. In the second, the optimal number of steps for each trial run is found using a line-search. Both methods are derived using Lagrange multiplier techniques. This theoretical framework is used to demonstrate that the derived algorithms are direct extensions of forward/backward sweep methods used in N-stage optimal control. The two algorithms are tested on a Zermelo problem and the resulting trajectories compare favorably to optimal control results.

  2. TaDb: A time-aware diffusion-based recommender algorithm

    NASA Astrophysics Data System (ADS)

    Li, Wen-Jun; Xu, Yuan-Yuan; Dong, Qiang; Zhou, Jun-Lin; Fu, Yan

    2015-02-01

    Traditional recommender algorithms usually employ the early and recent records indiscriminately, which overlooks the change of user interests over time. In this paper, we show that the interests of a user remain stable in a short-term interval and drift during a long-term period. Based on this observation, we propose a time-aware diffusion-based (TaDb) recommender algorithm, which assigns different temporal weights to the leading links existing before the target user's collection and the following links appearing after that in the diffusion process. Experiments on four real datasets, Netflix, MovieLens, FriendFeed and Delicious show that TaDb algorithm significantly improves the prediction accuracy compared with the algorithms not considering temporal effects.

  3. 3 CFR 13527 - Executive Order 13527 of December 30, 2009. Establishing Federal Capability for the Timely...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Establishing Federal Capability for the Timely Provision of Medical Countermeasures Following a Biological Attack ... By the authority vested in me as President by the Constitution and the laws of ... American people in the event of a biological attack in the United States through a rapid Federal ...

  4. A double dissociation between accuracy and time of execution on attentional tasks in Alzheimer's disease and multi-infarct dementia.

    PubMed

    Gainotti, G; Marra, C; Villa, G

    2001-04-01

    Two cancellation/attentional tasks, Lines Cancellation (LC) and Multiple Features Targets Cancellation (MFTC), together with a standard battery of neuropsychological tests, the Mental Deterioration Battery (MDB), were administered to 68 patients with dementia of the Alzheimer's type (DAT) and 40 patients with multi-infarct dementia (MID), accurately matched for the overall severity of dementia, and to 40 normal controls. Both accuracy and time of execution were considered in evaluating performance on the two cancellation tasks, which involved visuospatial exploration and psychomotor speed but differed in their demands on selective attention. On the first cancellation task (LC), requiring a lower attentional load, the two demented patient groups performed at the same level of accuracy. On the second cancellation task (MFTC), more demanding in terms of selective and divided attention, DAT patients were significantly less accurate than MID patients, making a higher number of 'false-alarm' errors. Conversely, MID patients took longer than DAT patients to execute both LC and MFTC, suggesting a greater impairment of psychomotor speed in MID. On the MDB, DAT patients scored significantly worse than MID patients on several measures of episodic memory (the immediate recall, delayed recall, and delayed recognition of Rey's Auditory Verbal Learning Test) and on a test of visual-spatial memory. These data suggest that, while psychomotor speed and the lower (sensorimotor) levels of attention are preferentially impaired in subcortical forms of dementia such as MID, the higher levels of selective and divided attention are more markedly disrupted in the Alzheimer type of dementia. PMID:11287373

  5. A real-time implementation of an advanced sensor failure detection, isolation, and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.; Merrill, W. C.

    1984-01-01

    A sensor failure detection, isolation, and accommodation algorithm was developed which incorporates analytic sensor redundancy through software. This algorithm was implemented in a high level language on a microprocessor based controls computer. Parallel processing and state-of-the-art 16-bit microprocessors are used along with efficient programming practices to achieve real-time operation. Previously announced in STAR as N84-13140

  6. The use of knowledge-based Genetic Algorithm for starting time optimisation in a lot-bucket MRP

    NASA Astrophysics Data System (ADS)

    Ridwan, Muhammad; Purnomo, Andi

    2016-01-01

    In production planning, Material Requirement Planning (MRP) is usually developed on a time-bucket system, in which a period in the MRP represents a span of time, usually a week. MRP has been successfully implemented in Make To Stock (MTS) manufacturing, where production activity must be started before customer demand is received. However, to be implemented successfully in Make To Order (MTO) manufacturing, the conventional MRP requires modification to bring it in line with the real situation. In MTO manufacturing, the delivery schedule to customers is defined strictly and must be fulfilled in order to increase customer satisfaction. On the other hand, the company prefers to keep a constant number of workers, hence the production lot size should be constant as well. Since a bucket in a conventional MRP system represents time, usually a week, a strict delivery schedule cannot be accommodated. Fortunately, there is a modified time-bucket MRP system, called the lot-bucket MRP system, proposed by Casimir in 1999. In the lot-bucket MRP system, a bucket represents a lot, and the lot size is preferably constant. The time to finish every lot can vary depending on the due date of the lot. The starting time of a lot must be determined so that every lot has a reasonable production time. So far there is no formal method to determine the optimum starting time in the lot-bucket MRP system. A trial-and-error process is usually used, but it sometimes leaves several lots with very short production times, making the lot-bucket MRP infeasible to execute. This paper presents the use of a Genetic Algorithm (GA) for optimisation of starting time in a lot-bucket MRP system. Even though GA is well known as a powerful search algorithm, improvement is still required to increase the possibility of GA finding the optimum solution in a shorter time. A knowledge-based system has been embedded in the proposed GA as the improvement effort, and it is proven that the

  7. A pheromone-rate-based analysis on the convergence time of ACO algorithm.

    PubMed

    Huang, Han; Wu, Chun-Guo; Hao, Zhi-Feng

    2009-08-01

    Ant colony optimization (ACO) has been widely applied to solve combinatorial optimization problems in recent years. There are few studies, however, on its convergence time, which reflects how many iterations ACO algorithms spend in converging to the optimal solution. Based on the absorbing Markov chain model, we analyze the ACO convergence time in this paper. First, we present a general result for the estimation of convergence time to reveal the relationship between convergence time and pheromone rate. This general result is then extended to a two-step analysis of the convergence time, which includes the following: 1) the iteration time that the pheromone rate spends on reaching the objective value and 2) the convergence time that is calculated with the objective pheromone rate in expectation. Furthermore, four brief ACO algorithms are investigated by using the proposed theoretical results as case studies. Finally, the conclusions of the case studies, that the pheromone rate and its deviation determine the expected convergence time, are numerically verified with the experimental results of four one-ant ACO algorithms and four ten-ant ACO algorithms. PMID:19380276

  8. A novel adaptive, real-time algorithm to detect gait events from wearable sensors.

    PubMed

    Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona

    2015-05-01

    A real-time, adaptive algorithm based on two inertial and magnetic sensors placed on the shanks was developed for gait-event detection. For each leg, the algorithm detected the Initial Contact (IC), as the minimum of the flexion/extension angle, and the End Contact (EC) and the Mid-Swing (MS), as minimum and maximum of the angular velocity, respectively. The algorithm consisted of calibration, real-time detection, and step-by-step update. Data collected from 22 healthy subjects (21 to 85 years) walking at three self-selected speeds were used to validate the algorithm against the GaitRite system. Comparable levels of accuracy and significantly lower detection delays were achieved with respect to other published methods. The algorithm robustness was tested on ten healthy subjects performing sudden speed changes and on ten stroke subjects (43 to 89 years). For healthy subjects, F1-scores of 1 and mean detection delays lower than 14 ms were obtained. For stroke subjects, F1-scores of 0.998 and 0.944 were obtained for IC and EC, respectively, with mean detection delays always below 31 ms. The algorithm accurately detected gait events in real time from a heterogeneous dataset of gait patterns and paves the way for the design of closed-loop controllers for customized gait trainings and/or assistive devices. PMID:25069118
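
    The stated detection rules translate directly into extremum searches. The offline sketch below illustrates them only; it omits the calibration and the step-by-step adaptive update that make the published algorithm real-time.

        # IC at minima of the shank flexion/extension angle; EC and MS at minima
        # and maxima of the angular velocity (rules as stated in the abstract).
        import numpy as np
        from scipy.signal import argrelextrema

        def gait_events(angle, ang_vel):
            ic = argrelextrema(angle, np.less)[0]        # Initial Contact
            ec = argrelextrema(ang_vel, np.less)[0]      # End Contact
            ms = argrelextrema(ang_vel, np.greater)[0]   # Mid-Swing
            return ic, ec, ms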

  9. An algorithm for a single machine scheduling problem with sequence dependent setup times and scheduling windows

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1975-01-01

    An enumeration algorithm is presented for solving a scheduling problem similar to the single machine job shop problem with sequence dependent setup times. The scheduling problem differs from the job shop problem in two ways. First, its objective is to select an optimum subset of the available tasks to be performed during a fixed period of time. Secondly, each task scheduled is constrained to occur within its particular scheduling window. The algorithm is currently being used to develop typical observational timelines for a telescope that will be operated in earth orbit. Computational times associated with timeline development are presented.

  10. Gray matter volume and executive functioning correlate with time since injury following mild traumatic brain injury.

    PubMed

    Killgore, William D S; Singh, Prabhjyot; Kipman, Maia; Pisner, Derek; Fridman, Andrew; Weber, Mareen

    2016-01-26

    Most people who sustain a mild traumatic brain injury (mTBI) will recover to baseline functioning within a period of several days to weeks. A substantial minority of patients, however, will show persistent symptoms and mild cognitive complaints for much longer. To more clearly delineate how the duration of time since injury (TSI) is associated with neuroplastic cortical volume changes and cognitive recovery, we employed voxel-based morphometry (VBM) and select neuropsychological measures in a cross-sectional sample of 26 patients with mTBI assessed at either two-weeks, one-month, three-months, six-months, or one-year post injury, and a sample of 12 healthy controls. Longer duration of TSI was associated with larger gray matter volume (GMV) within the ventromedial prefrontal cortex (vmPFC) and right fusiform gyrus, and better neurocognitive performance on measures of visuospatial design fluency and emotional functioning. In particular, volume within the vmPFC was positively correlated with design fluency and negatively correlated with symptoms of anxiety, whereas GMV of the fusiform gyrus was associated with greater design fluency and sustained visual psychomotor vigilance performance. Moreover, the larger GMV seen among the more chronic individuals was significantly greater than healthy controls, suggesting possible enlargement of these regions with time since injury. These findings are interpreted in light of burgeoning evidence suggesting that cortical regions often exhibit structural changes following experience or practice, and suggest that with greater time since an mTBI, the brain displays compensatory remodeling of cortical regions involved in emotional regulation, which may reduce distractibility during attention demanding visuo-motor tasks. PMID:26711488

  11. Sequential Insertion Heuristic with Adaptive Bee Colony Optimisation Algorithm for Vehicle Routing Problem with Time Windows

    PubMed Central

    Jawarneh, Sana; Abdullah, Salwani

    2015-01-01

    This paper presents a bee colony optimisation (BCO) algorithm to tackle the vehicle routing problem with time window (VRPTW). The VRPTW involves recovering an ideal set of routes for a fleet of vehicles serving a defined number of customers. The BCO algorithm is a population-based algorithm that mimics the social communication patterns of honeybees in solving problems. The performance of the BCO algorithm is dependent on its parameters, so the online (self-adaptive) parameter tuning strategy is used to improve its effectiveness and robustness. Compared with the basic BCO, the adaptive BCO performs better. Diversification is crucial to the performance of the population-based algorithm, but the initial population in the BCO algorithm is generated using a greedy heuristic, which has insufficient diversification. Therefore the ways in which the sequential insertion heuristic (SIH) for the initial population drives the population toward improved solutions are examined. Experimental comparisons indicate that the proposed adaptive BCO-SIH algorithm works well across all instances and is able to obtain 11 best results in comparison with the best-known results in the literature when tested on Solomon’s 56 VRPTW 100 customer instances. Also, a statistical test shows that there is a significant difference between the results. PMID:26132158

  12. A Parallel Algorithm for the Two-Dimensional Time Fractional Diffusion Equation with Implicit Difference Method

    PubMed Central

    Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time consuming to solve fractional differential equations. The computational complexity of the two-dimensional fractional differential equation (2D-TFDE) with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and data layout with a virtual boundary are designed for this parallel algorithm. The experimental results show that the parallel algorithm compares well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16–4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed memory cluster system. We do think that parallel computing technology will become a very basic method for computationally intensive fractional applications in the near future. PMID:24744680

  13. Automatic, Real-Time Algorithms for Anomaly Detection in High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Srivastava, A. N.; Nemani, R. R.; Votava, P.

    2008-12-01

    Earth observing satellites are generating data at an unprecedented rate, surpassing almost all other data-intensive applications. However, most of the data that arrive from the satellites are not analyzed directly. Rather, multiple scientific teams analyze only a small fraction of the total data available in the data stream. Although there are many reasons for this situation, one paramount concern is developing algorithms and methods that can analyze the vast, high dimensional, streaming satellite images. This paper describes a new set of methods that are among the fastest available algorithms for real-time anomaly detection. These algorithms were built to maximize accuracy and speed for a variety of applications in fields outside of the earth sciences. However, our studies indicate that with appropriate modifications, these algorithms can be extremely valuable for identifying anomalies rapidly using only modest computational power. We review two algorithms which are used as benchmarks in the field, Orca and One-Class Support Vector Machines, and discuss the anomalies that are discovered in MODIS data taken over the Central California region. We are especially interested in automatic identification of disturbances within ecosystems (e.g., wildfires, droughts, floods, insect/pest damage, wind damage, logging). We show the scalability of the algorithms and demonstrate that with appropriately adapted technology, the dream of real-time analysis can be made a reality.
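
    One of the benchmarks named above, the One-Class SVM, can be sketched as follows on per-pixel time-profile features; the feature construction and the nu parameter are illustrative assumptions, not the authors' pipeline.

        # Train on profiles assumed normal; -1 predictions mark anomalies.
        from sklearn.svm import OneClassSVM

        def fit_and_flag(train_profiles, new_profiles, nu=0.05):
            model = OneClassSVM(kernel="rbf", nu=nu).fit(train_profiles)
            return model.predict(new_profiles) == -1     # True where anomalous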

  14. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    PubMed

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time consuming to solve fractional differential equations. The computational complexity of the two-dimensional fractional differential equation (2D-TFDE) with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and data layout with a virtual boundary are designed for this parallel algorithm. The experimental results show that the parallel algorithm compares well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed memory cluster system. We do think that parallel computing technology will become a very basic method for computationally intensive fractional applications in the near future. PMID:24744680

  15. Real-time Imaging Orientation Determination System to Verify Imaging Polarization Navigation Algorithm

    PubMed Central

    Lu, Hao; Zhao, Kaichun; Wang, Xiaochu; You, Zheng; Huang, Kaoli

    2016-01-01

    Bio-inspired imaging polarization navigation which can provide navigation information and is capable of sensing polarization information has advantages of high-precision and anti-interference over polarization navigation sensors that use photodiodes. Although all types of imaging polarimeters exist, they may not qualify for the research on the imaging polarization navigation algorithm. To verify the algorithm, a real-time imaging orientation determination system was designed and implemented. Essential calibration procedures for the type of system that contained camera parameter calibration and the inconsistency of complementary metal oxide semiconductor calibration were discussed, designed, and implemented. Calibration results were used to undistort and rectify the multi-camera system. An orientation determination experiment was conducted. The results indicated that the system could acquire and compute the polarized skylight images throughout the calibrations and resolve orientation by the algorithm to verify in real-time. An orientation determination algorithm based on image processing was tested on the system. The performance and properties of the algorithm were evaluated. The rate of the algorithm was over 1 Hz, the error was over 0.313°, and the population standard deviation was 0.148° without any data filter. PMID:26805851

  16. Real-time Imaging Orientation Determination System to Verify Imaging Polarization Navigation Algorithm.

    PubMed

    Lu, Hao; Zhao, Kaichun; Wang, Xiaochu; You, Zheng; Huang, Kaoli

    2016-01-01

    Bio-inspired imaging polarization navigation which can provide navigation information and is capable of sensing polarization information has advantages of high-precision and anti-interference over polarization navigation sensors that use photodiodes. Although all types of imaging polarimeters exist, they may not qualify for the research on the imaging polarization navigation algorithm. To verify the algorithm, a real-time imaging orientation determination system was designed and implemented. Essential calibration procedures for the type of system that contained camera parameter calibration and the inconsistency of complementary metal oxide semiconductor calibration were discussed, designed, and implemented. Calibration results were used to undistort and rectify the multi-camera system. An orientation determination experiment was conducted. The results indicated that the system could acquire and compute the polarized skylight images throughout the calibrations and resolve orientation by the algorithm to verify in real-time. An orientation determination algorithm based on image processing was tested on the system. The performance and properties of the algorithm were evaluated. The rate of the algorithm was over 1 Hz, the error was over 0.313°, and the population standard deviation was 0.148° without any data filter. PMID:26805851

  17. Results from the New IGS Time Scale Algorithm (version 2.0)

    NASA Astrophysics Data System (ADS)

    Senior, K.; Ray, J.

    2009-12-01

    Since 2004, the IGS Rapid and Final clock products have been aligned to a highly stable time scale derived from a weighted ensemble of clocks in the IGS network. The time scale is driven mostly by Hydrogen Maser ground clocks, though the GPS satellite clocks also carry non-negligible weight, resulting in a time scale with a one-day frequency stability of about 1E-15. However, because of the relatively simple weighting scheme used in the time scale algorithm, and because the scale is aligned to UTC by steering it to GPS Time, the resulting stability beyond several days suffers. The authors present results of a new 2.0 version of the IGS time scale, highlighting the improvements to the algorithm and new modeling considerations, as well as the improved time scale stability.

  18. A Combination of Genetic Algorithm and Particle Swarm Optimization for Vehicle Routing Problem with Time Windows

    PubMed Central

    Xu, Sheng-Hua; Liu, Ji-Ping; Zhang, Fu-Hao; Wang, Liang; Sun, Li-Jian

    2015-01-01

    A combination of genetic algorithm and particle swarm optimization (PSO) for vehicle routing problems with time windows (VRPTW) is proposed in this paper. The improvements of the proposed algorithm include: using the particle real number encoding method to decode the route to alleviate the computation burden, applying a linear decreasing function based on the number of the iterations to provide balance between global and local exploration abilities, and integrating with the crossover operator of genetic algorithm to avoid the premature convergence and the local minimum. The experimental results show that the proposed algorithm is not only more efficient and competitive with other published results but can also obtain more optimal solutions for solving the VRPTW issue. One new well-known solution for this benchmark problem is also outlined in the following. PMID:26343655
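
    The two listed improvements, a linearly decreasing inertia weight and a GA crossover folded into the swarm update, can be sketched as below. Real-number position vectors stand in for the paper's route encoding, whose decoding details the abstract does not give.

        # One hybrid PSO/GA iteration (illustrative sketch).
        import numpy as np

        def hybrid_step(pos, vel, pbest, gbest, it, max_it,
                        c1=2.0, c2=2.0, p_cross=0.3):
            w = 0.9 - (0.9 - 0.4) * it / max_it          # linear inertia decrease
            r1 = np.random.rand(*pos.shape)
            r2 = np.random.rand(*pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            mask = np.random.rand(*pos.shape) < p_cross  # uniform crossover with
            pos[mask] = np.broadcast_to(gbest, pos.shape)[mask]  # the global best
            return pos, vel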

  19. Study on algorithm and real-time implementation of infrared image processing based on FPGA

    NASA Astrophysics Data System (ADS)

    Pang, Yulin; Ding, Ruijun; Liu, Shanshan; Chen, Zhe

    2010-10-01

    With the fast development of Infrared Focal Plane Arrays (IRFPA) detectors, high quality real-time image processing becomes more important in infrared imaging system. Facing the demand of better visual effect and good performance, we find FPGA is an ideal choice of hardware to realize image processing algorithm that fully taking advantage of its high speed, high reliability and processing a great amount of data in parallel. In this paper, a new idea of dynamic linear extension algorithm is introduced, which has the function of automatically finding the proper extension range. This image enhancement algorithm is designed in Verilog HDL and realized on FPGA. It works on higher speed than serial processing device like CPU and DSP. Experiment shows that this hardware unit of dynamic linear extension algorithm enhances the visual effect of infrared image effectively.

  20. A real-time ECG data compression algorithm for a digital holter system.

    PubMed

    Lee, Sangjoon; Lee, Myoungho

    2008-01-01

    This paper describes a real-time ECG compression algorithm for a digital Holter system. The proposed algorithm consists of five main procedures. The first procedure is to differentiate the signal, the second is to choose a period of the differentiated signal and store it in memory, the third is to perform the DCT (Discrete Cosine Transform) on the stored data, the fourth is to apply a window filter, and the fifth is to apply Huffman coding to compress the data. The developed algorithm has been tested by applying 12 ECGs (electrocardiograms) from the MIT-BIH database, and the PRD (Percent RMS Difference) and the CR (Compression Ratio) were calculated. It is found that the algorithm achieved a high level of compression performance, with an average PRD of 1.82 and CR of 8.82:1. PMID:19163774
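
    A sketch of the five-stage pipeline under stated assumptions (one period already buffered, illustrative retention window and quantization step):

        # Differentiate -> (buffered period) -> DCT -> window filter -> Huffman.
        import heapq, itertools
        import numpy as np
        from scipy.fft import dct

        def compress_period(x, keep=64, q=8):
            d = np.diff(x, prepend=x[0])           # stage 1: differentiate
            c = dct(d, norm="ortho")               # stage 3: DCT of the period
            c[keep:] = 0.0                         # stage 4: window filter
            symbols = np.round(c[:keep] / q).astype(int).tolist()
            return huffman(symbols)                # stage 5: entropy coding

        def huffman(symbols):
            freq = {}
            for s in symbols:
                freq[s] = freq.get(s, 0) + 1
            if len(freq) == 1:                     # degenerate one-symbol case
                return {symbols[0]: "0"}, "0" * len(symbols)
            tie = itertools.count()                # heap tie-breaker
            heap = [(f, next(tie), {s: ""}) for s, f in freq.items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                f1, _, a = heapq.heappop(heap)
                f2, _, b = heapq.heappop(heap)
                merged = {s: "0" + code for s, code in a.items()}
                merged.update({s: "1" + code for s, code in b.items()})
                heapq.heappush(heap, (f1 + f2, next(tie), merged))
            book = heap[0][2]
            return book, "".join(book[s] for s in symbols)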

  1. A Combination of Genetic Algorithm and Particle Swarm Optimization for Vehicle Routing Problem with Time Windows.

    PubMed

    Xu, Sheng-Hua; Liu, Ji-Ping; Zhang, Fu-Hao; Wang, Liang; Sun, Li-Jian

    2015-01-01

    A combination of genetic algorithm and particle swarm optimization (PSO) for vehicle routing problems with time windows (VRPTW) is proposed in this paper. The improvements of the proposed algorithm include: using the particle real number encoding method to decode the route to alleviate the computation burden, applying a linear decreasing function based on the number of the iterations to provide balance between global and local exploration abilities, and integrating with the crossover operator of genetic algorithm to avoid the premature convergence and the local minimum. The experimental results show that the proposed algorithm is not only more efficient and competitive with other published results but can also obtain more optimal solutions for solving the VRPTW issue. One new well-known solution for this benchmark problem is also outlined in the following. PMID:26343655

  2. Methods of reducing program-execution time under RT-11 FORTRAN

    SciTech Connect

    Isidoro, R J; Trellue, R E

    1982-04-01

    The Quality Assurance (QA) Department of Sandia National Laboratories is responsible to the Department of Energy for assurance that weapons remain functional throughout their stockpile life. To accomplish this, QA conducts laboratory system tests on the Sandia-designed components of the weapon system. Joint flight tests with the Department of Defense are also conducted. The data acquisition and processing system used to acquire and analyze test results was designed by the QA Systems Test Equipment Design Division. The acquisition systems are built around PDP 11/34 computers. There are six similar acquisition systems that collect data independently from many unique weapon testers. A test usually lasts several minutes. After the data are acquired, the system engineers are interested in seeing the results as soon as possible. The complete test analysis must be known before disassembling the test equipment and moving on to the next scheduled test. If anomalies were present, disassembling would compromise posttest trouble-shooting procedures. The analysis for each test is therefore performed on the acquisition machine immediately after each test and must be completed in as short a time as possible. The FORTRAN software package used to analyze the results of laboratory system tests is considered. How the software works, problems encountered when it was decided to double the number of data acquisition channels to analyze, and the solution to the problems arrived at by benchmarking the programs with optional equipment that could be added to the existing configuration are discussed. (WHK)

  3. Real Time Optima Tracking Using Harvesting Models of the Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Baskaran, Subbiah; Noever, D.

    1999-01-01

    Tracking optima in real-time propulsion control, particularly for non-stationary optimization problems, is a challenging task. Several approaches have been put forward for such a study, including the numerical method called the genetic algorithm. In brief, this approach is built upon Darwinian-style competition between numerical alternatives displayed in the form of binary strings, or by analogy to 'pseudogenes'. Breeding of improved solutions is an often-cited parallel to natural selection in evolutionary or soft computing. In this report we present our results of applying a novel model of a genetic algorithm for tracking optima in propulsion engineering and in real-time control. We specialize the algorithm to mission profiling and planning optimizations, both to select reduced propulsion needs through trajectory planning and to explore time or fuel conservation strategies.

  4. Circadian Rhythms in Executive Function during the Transition to Adolescence: The Effect of Synchrony between Chronotype and Time of Day

    ERIC Educational Resources Information Center

    Hahn, Constanze; Cowell, Jason M.; Wiprzycka, Ursula J.; Goldstein, David; Ralph, Martin; Hasher, Lynn; Zelazo, Philip David

    2012-01-01

    To explore the influence of circadian rhythms on executive function during early adolescence, we administered a battery of executive function measures (including a Go-Nogo task, the Iowa Gambling Task, a Self-ordered Pointing task, and an Intra/Extradimensional Shift task) to Morning-preference and Evening-preference participants (N = 80) between…

  5. Development of a rule-based algorithm for rice cultivation mapping using Landsat 8 time series

    NASA Astrophysics Data System (ADS)

    Karydas, Christos G.; Toukiloglou, Pericles; Minakou, Chara; Gitas, Ioannis Z.

    2015-06-01

    In the framework of the ERMES project (FP7 66983), an algorithm for mapping rice cultivation extents using medium-high resolution satellite data was developed. ERMES (An Earth obseRvation Model based RicE information Service) aims to develop a prototype of a downstream service for rice yield modelling based on a combination of Earth Observation and in situ data. The algorithm was designed as a set of rules applied to a time series of Landsat 8 images, acquired throughout the rice cultivation season of 2014 from the plain of Thessaloniki, Greece. The rules rely on the use of spectral indices, such as the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), and the Normalized Seasonal Wetness Index (NSWI), extracted from the Landsat 8 dataset. The algorithm is subdivided into two phases: a) a hard classification phase, resulting in a binary map (rice/no-rice), where pixels are judged according to their performance in all the images of the time series, with index thresholds defined by a trial-and-error approach; b) a soft classification phase, resulting in a fuzzy map, by assigning scores to the pixels which passed (as 'rice') the first phase. Finally, a user-defined threshold on the fuzzy score discriminates rice from no-rice pixels in the output map. The algorithm was tested on a subset of the Thessaloniki plain against a set of selected field data. The results indicated an overall accuracy of the algorithm higher than 97%. The algorithm was also applied to a study area in Spain (Valencia), and a preliminary test indicated a similar performance, i.e. about 98%. Currently, the algorithm is being modified so as to map rice extents early in the cultivation season (by the end of June), with a view to contributing more substantially to the rice yield prediction service of ERMES. Both algorithm modes (late and early) are planned to be tested in additional Mediterranean study areas in Greece, Italy, and Spain.
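
    A minimal sketch of the hard-classification phase, assuming illustrative band inputs and thresholds; the paper's actual rules, the NSWI index, and the fuzzy scoring phase are not reproduced.

        # Mark a pixel 'rice' if it looks flooded early in the season (high NDWI)
        # and strongly vegetated at peak growth (high NDVI).
        import numpy as np

        def rice_mask(green_early, nir_early, red_peak, nir_peak,
                      ndwi_min=0.2, ndvi_min=0.6):
            ndwi = (green_early - nir_early) / (green_early + nir_early + 1e-9)
            ndvi = (nir_peak - red_peak) / (nir_peak + red_peak + 1e-9)
            return (ndwi > ndwi_min) & (ndvi > ndvi_min)   # binary rice/no-rice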

  6. A Hybrid Procedural/Deductive Executive for Autonomous Spacecraft

    NASA Technical Reports Server (NTRS)

    Pell, Barney; Gamble, Edward B.; Gat, Erann; Kessing, Ron; Kurien, James; Millar, William; Nayak, P. Pandurang; Plaunt, Christian; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    The New Millennium Remote Agent (NMRA) will be the first AI system to control an actual spacecraft. The spacecraft domain places a strong premium on autonomy and requires dynamic recoveries and robust concurrent execution, all in the presence of tight real-time deadlines, changing goals, scarce resource constraints, and a wide variety of possible failures. To achieve this level of execution robustness, we have integrated a procedural executive based on generic procedures with a deductive model-based executive. A procedural executive provides sophisticated control constructs such as loops, parallel activity, locks, and synchronization, which are used for robust schedule execution, hierarchical task decomposition, and routine configuration management. A deductive executive provides algorithms for sophisticated state inference and optimal failure recovery planning. The integrated executive enables designers to code knowledge via a combination of procedures and declarative models, yielding a rich modeling capability suitable to the challenges of real spacecraft control. The interface between the two executives ensures both that recovery sequences are smoothly merged into high-level schedule execution and that a high degree of reactivity is retained to effectively handle additional failures during recovery.

  7. Fourth-order algorithms for solving the imaginary-time Gross-Pitaevskii equation in a rotating anisotropic trap

    SciTech Connect

    Chin, Siu A.; Krotscheck, Eckhard

    2005-09-01

    By implementing the exact density matrix for the rotating anisotropic harmonic trap, we derive a class of very fast and accurate fourth-order algorithms for evolving the Gross-Pitaevskii equation in imaginary time. Such fourth-order algorithms are possible only with the use of forward, positive time step factorization schemes. These fourth-order algorithms converge at time-step sizes an order-of-magnitude larger than conventional second-order algorithms. Our use of time-dependent factorization schemes provides a systematic way of devising algorithms for solving this type of nonlinear equations.
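
    For orientation, the baseline second-order (Strang) factorization behind such imaginary-time propagation reads

        \[
        e^{-\Delta\tau (T+V)}
        = e^{-\frac{\Delta\tau}{2} V}\, e^{-\Delta\tau T}\, e^{-\frac{\Delta\tau}{2} V}
        + \mathcal{O}(\Delta\tau^{3}),
        \]

    and forward fourth-order factorizations of the kind used here keep every time step positive by replacing the potential in an interior stage with an effective potential of the form V + c (Δτ)^2 [V,[T,V]] for a suitable constant c. This is the standard form of such schemes; the paper's exact coefficients are not reproduced here.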

  8. Directed Incremental Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Person, Suzette; Yang, Guowei; Rungta, Neha; Khurshid, Sarfraz

    2011-01-01

    The last few years have seen a resurgence of interest in the use of symbolic execution -- a program analysis technique developed more than three decades ago to analyze program execution paths. Scaling symbolic execution and other path-sensitive analysis techniques to large systems remains challenging despite recent algorithmic and technological advances. An alternative to solving the problem of scalability is to reduce the scope of the analysis. One approach that is widely studied in the context of regression analysis is to analyze the differences between two related program versions. While such an approach is intuitive in theory, finding efficient and precise ways to identify program differences, and characterize their effects on how the program executes has proved challenging in practice. In this paper, we present Directed Incremental Symbolic Execution (DiSE), a novel technique for detecting and characterizing the effects of program changes. The novelty of DiSE is to combine the efficiencies of static analysis techniques to compute program difference information with the precision of symbolic execution to explore program execution paths and generate path conditions affected by the differences. DiSE is a complementary technique to other reduction or bounding techniques developed to improve symbolic execution. Furthermore, DiSE does not require analysis results to be carried forward as the software evolves -- only the source code for two related program versions is required. A case-study of our implementation of DiSE illustrates its effectiveness at detecting and characterizing the effects of program changes.

  9. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    NASA Astrophysics Data System (ADS)

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use times of arrival (TOAs) and time differences of arrival (TDOAs) to achieve high signal source position estimation accuracy when a transmitter is in the line-of-sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers and the unknown locations of transmitters. Estimating transmitter locations from the standard nonlinear equations may not be very accurate, because of receiver location errors and receiver measurement errors, and can impose a high computational burden. Least-squares and maximum-likelihood algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time of arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
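
    To make the hyperbolic (TDOA) formulation concrete, here is a hedged sketch of the classic linearized least-squares solver, one representative of the family reviewed above; the function and variable names are ours, and the example geometry is invented.

      # Linearized least-squares TDOA localization: treating the source
      # position x and its range r1 to a reference receiver as unknowns,
      # ||x-pi||^2 - ||x-p1||^2 = (r1+di)^2 - r1^2 becomes a linear system.
      import numpy as np

      def tdoa_locate(receivers, tdoas, c=343.0):
          """receivers: (m, dim) array; tdoas: (m-1,) delays vs. receiver 0."""
          p1, pi = receivers[0], receivers[1:]
          d = c * np.asarray(tdoas)            # range differences to p1
          A = np.hstack([-2.0 * (pi - p1), -2.0 * d[:, None]])
          b = d**2 - np.sum(pi**2, axis=1) + np.sum(p1**2)
          sol, *_ = np.linalg.lstsq(A, b, rcond=None)
          return sol[:-1]                      # estimated source position

      # Four receivers at the corners of a square, true source at (2, 3)
      rx = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      src = np.array([2.0, 3.0])
      r = np.linalg.norm(rx - src, axis=1)
      print(tdoa_locate(rx, (r[1:] - r[0]) / 343.0))  # ~ [2. 3.]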

  10. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements.

    PubMed

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T; Carlson, Thomas J

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use times of arrival (TOAs) and time differences of arrival (TDOAs) to achieve high signal source position estimation accuracy when a transmitter is in the line-of-sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers and the unknown locations of transmitters. Estimating transmitter locations from the standard nonlinear equations may not be very accurate, because of receiver location errors and receiver measurement errors, and can impose a high computational burden. Least-squares and maximum-likelihood algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time of arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia. PMID:27131647

  11. Enhancing Sensitivity of a Miniature Spectrometer Using a Real-Time Image Processing Algorithm.

    PubMed

    Chandramohan, Sabarish; Avrutsky, Ivan

    2016-05-01

    A real-time image processing algorithm is developed to enhance the sensitivity of a planar single-mode waveguide miniature spectrometer with integrated waveguide gratings. A novel approach of averaging along arcs in a curved coordinate system is introduced, which allows more light to be collected, thereby enhancing the sensitivity. The algorithm is tested using CdSeS/ZnS quantum dots drop-cast on the surface of a single-mode waveguide. Measurements indicate that a monolayer of quantum dots is expected to produce guided-mode attenuation approximately 11 times above the noise level. PMID:27170777

  12. The FPGA realization of a real-time Bayer image restoration algorithm with better performance

    NASA Astrophysics Data System (ADS)

    Ma, Huaping; Liu, Shuang; Zhou, Jiangyong; Tang, Zunlie; Deng, Qilin; Zhang, Hongliu

    2014-11-01

    As FPGA implementations of Bayer color-interpolation algorithms have come into wide use, better performance, real-time processing, and lower resource consumption have become the goals pursued by users. To achieve high-speed, high-quality Bayer image restoration with low resource consumption, this article designs and optimizes the color reconstruction at both the interpolation-algorithm level and the FPGA-implementation level. The hardware design is then completed on an FPGA development platform, realizing real-time, high-fidelity image processing with low resource consumption for embedded image acquisition systems.
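
    As a point of reference, the sketch below implements plain bilinear demosaicing of an RGGB mosaic, the simplest member of the interpolation family such FPGA designs optimize; it is illustrative only, not the authors' kernel, and assumes SciPy is available.

      # Bilinear Bayer demosaicing via normalized neighborhood averaging.
      import numpy as np
      from scipy.ndimage import convolve

      def demosaic_bilinear(raw):
          """raw: 2-D float array holding an RGGB Bayer mosaic."""
          h, w = raw.shape
          r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
          b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
          g_mask = 1 - r_mask - b_mask
          k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
          k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
          r = convolve(raw * r_mask, k_rb)     # fills missing red samples
          g = convolve(raw * g_mask, k_g)      # fills missing green samples
          b = convolve(raw * b_mask, k_rb)     # fills missing blue samples
          return np.dstack([r, g, b])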

  13. Real time tracking with a silicon telescope prototype using the "artificial retina" algorithm

    NASA Astrophysics Data System (ADS)

    Abba, A.; Bedeschi, F.; Caponio, F.; Cenci, R.; Citterio, M.; Coelli, S.; Fu, J.; Geraci, A.; Grizzuti, M.; Lusardi, N.; Marino, P.; Monti, M.; Morello, M. J.; Neri, N.; Ninci, D.; Petruzzo, M.; Piucci, A.; Punzi, G.; Ristori, L.; Spinella, F.; Stracka, S.; Tonelli, D.; Walsh, J.

    2016-07-01

    We present the first prototype of a silicon tracker using the artificial retina algorithm for fast track finding. The algorithm is inspired by the neurobiological mechanism of edge recognition in the mammalian visual cortex. It is based on extensive parallelization and is implemented on commercial FPGAs, allowing us to reconstruct tracks in real time with offline-like quality and < 1 μs latencies. The practical device consists of a telescope with 8 single-sided silicon strip sensors and custom DAQ boards equipped with Xilinx Kintex 7 FPGAs that perform the readout of the sensors and the track reconstruction in real time.

  14. Novel algorithm for coexpression detection in time-varying microarray data sets.

    PubMed

    Yin, Zong-Xian; Chiang, Jung-Hsien

    2008-01-01

    When analyzing the results of microarray experiments, biologists generally use unsupervised categorization tools. However, such tools regard each time point as an independent dimension and utilize the Euclidean distance to compute the similarities between expressions. Furthermore, some of these methods require the number of clusters to be determined in advance, which is clearly impossible in the case of a new dataset. Therefore, this study proposes a novel scheme, designated as the Variation-based Coexpression Detection (VCD) algorithm, to analyze the trends of expressions based on their variation over time. The proposed algorithm has two advantages. First, it is unnecessary to determine the number of clusters in advance since the algorithm automatically detects those genes whose profiles are grouped together and creates patterns for these groups. Second, the algorithm features a new measurement criterion for calculating the degree of change of the expressions between adjacent time points and evaluating their trend similarities. Three real-world microarray datasets are employed to evaluate the performance of the proposed algorithm. PMID:18245881
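
    The core idea, comparing how expressions change between adjacent time points rather than their Euclidean distance, can be sketched as below; the exact VCD criterion differs, so this is illustrative only.

      # Trend similarity from variations between adjacent time points:
      # 1 = identical trends, -1 = opposite trends, independent of level.
      import numpy as np

      def trend_similarity(a, b):
          da, db = np.diff(a), np.diff(b)      # change at each step
          return np.dot(da, db) / (np.linalg.norm(da) * np.linalg.norm(db))

      up = np.array([1.0, 2.0, 4.0, 8.0])
      scaled = 10 * up + 5                     # same trend, different level
      print(trend_similarity(up, scaled))      # 1.0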

  15. One-time collision arbitration algorithm in radio-frequency identification based on the Manchester code

    NASA Astrophysics Data System (ADS)

    Liu, Chen-Chung; Chan, Yin-Tsung

    2011-02-01

    In radio-frequency identification (RFID) systems, when multiple tags transmit data to a reader simultaneously, the data may collide and create unsuccessful identifications; hence, anticollision algorithms are needed to reduce collisions (collision cycles) and improve tag identification speed. We propose a one-time collision arbitration algorithm to reduce both the number of collisions and the time consumed in tag identification in RFID. The proposed algorithm uses Manchester coding to detect the locations of collided bits, uses a divide-and-conquer strategy to find the structure of the colliding bits and generate 96-bit candidate query strings (96BCQSs), and uses query-tree anticollision schemes with the 96BCQSs to identify tags. The performance analysis and experimental results show that the proposed algorithm has three advantages: (i) it reduces the number of collisions to only one, so that the time complexity of tag identification is O(1); (ii) it stores identified identification numbers (IDs) and the 96BCQSs in a register, saving memory; and (iii) the number of bits transmitted by both the reader and the tags is evidently smaller than in other algorithms, whether identifying one tag or all tags.
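
    The role of Manchester coding can be illustrated with a toy sketch: bit positions on which simultaneously answering tags disagree lose their mid-bit transition and are therefore detectable as collisions. The 8-bit IDs below are illustrative, not real 96-bit tag IDs.

      # Mark each bit position where responding tags disagree as 'x'.
      def collided_positions(tag_ids):
          length = len(tag_ids[0])
          reading = []
          for i in range(length):
              bits = {tag[i] for tag in tag_ids}
              reading.append(bits.pop() if len(bits) == 1 else "x")
          return "".join(reading)

      print(collided_positions(["10110010", "10010110"]))  # 10x10x10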

  16. Flexible algorithm for real-time convolution supporting dynamic event-related fMRI

    NASA Astrophysics Data System (ADS)

    Eaton, Brent L.; Frank, Randall J.; Bolinger, Lizann; Grabowski, Thomas J.

    2002-04-01

    An efficient algorithm for generation of the task reference function has been developed that allows real-time statistical analysis of fMRI data, within the framework of the general linear model, for experiments with event-related stimulus designs. By leveraging time-stamped data collection in the Input/Output time-aWare Architecture (I/OWA), we detect the onset time of a stimulus as it is delivered to a subject. A dynamically updated list of detected stimulus event times is maintained in shared memory as a data stream and delivered as input to a real-time convolution algorithm. As each image is acquired from the MR scanner, the time-stamp of its acquisition is delivered via a second dynamically updated stream to the convolution algorithm, where a running convolution of the events with an estimated hemodynamic response function is computed at the image acquisition time and written to a third stream in memory. Output is interpreted as the activation reference function and treated as the covariate of interest in the I/OWA implementation of the general linear model. Statistical parametric maps are computed and displayed to the I/OWA user interface in less than the time between successive image acquisitions.
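
    A hedged sketch of the running convolution described above: at each image acquisition time, the reference value is the sum of a hemodynamic response function evaluated at the lag since each stimulus onset seen so far. The gamma-variate HRF and its parameters are common illustrative choices, not I/OWA specifics.

      import math

      def hrf(t, peak=6.0):
          """Simple gamma-variate hemodynamic response (t in seconds)."""
          return 0.0 if t < 0 else (t / peak) ** peak * math.exp(peak - t)

      def reference_value(acq_time, event_onsets):
          """Reference-function sample for one image acquisition."""
          return sum(hrf(acq_time - onset) for onset in event_onsets)

      events = [2.0, 14.5, 20.0]          # detected stimulus onsets (s)
      for tr in range(10):                # images acquired every 2 s
          print(reference_value(2.0 * tr, events))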

  17. Detrending Algorithms in Large Time Series: Application to TFRM-PSES Data

    NASA Astrophysics Data System (ADS)

    del Ser, D.; Fors, O.; Núñez, J.; Voss, H.; Rosich, A.; Kouprianov, V.

    2015-07-01

    Certain instrumental effects and data reduction anomalies introduce systematic errors in photometric time series. Detrending algorithms such as the Trend Filtering Algorithm (TFA; Kovács et al. 2004) have played a key role in minimizing the effects caused by these systematics. Here we present the results obtained after applying the TFA, Savitzky & Golay (1964) detrending algorithms, and the Box Least Square phase-folding algorithm (Kovács et al. 2002) to the TFRM-PSES data (Fors et al. 2013). Tests performed on these data show that by applying these two filtering methods together the photometric RMS is on average improved by a factor of 3-4, with better efficiency towards brighter magnitudes, while applying TFA alone yields an improvement of a factor 1-2. As a result of this improvement, we are able to detect and analyze a large number of stars per TFRM-PSES field which present some kind of variability. Also, after porting these algorithms to Python and parallelizing them, we have improved, even for large data samples, the computational performance of the overall detrending+BLS algorithm by a factor of ~10 with respect to Kovács et al. (2004).

  18. Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm

    SciTech Connect

    Thanh, Vo Hong; Priami, Corrado

    2015-08-07

    We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The computation for selecting next reaction firings by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is very computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving the exactness.
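
    The rejection idea can be sketched for a single reaction channel with a time-dependent rate a(t) bounded by a constant a_max: candidate firing times are drawn from the bound and accepted with probability a(t)/a_max, so no integral of a(t) is ever computed. This is the generic thinning scheme, not the full tRSSA.

      import math, random

      def next_firing(a, a_max, t0):
          """a: rate function of time, with a(t) <= a_max for t >= t0."""
          t = t0
          while True:
              t += random.expovariate(a_max)      # candidate from the bound
              if random.random() < a(t) / a_max:  # thinning acceptance test
                  return t                        # exact sample, no integrals

      rate = lambda t: 1.0 + math.sin(t)          # toy time-dependent rate
      print(next_firing(rate, a_max=2.0, t0=0.0))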

  19. A Linear Time Algorithm for the Minimum Spanning Caterpillar Problem for Bounded Treewidth Graphs

    NASA Astrophysics Data System (ADS)

    Dinneen, Michael J.; Khosravani, Masoud

    We consider the Minimum Spanning Caterpillar Problem (MSCP) in a graph where each edge has two costs, spine (path) cost and leaf cost, depending on whether it is used as a spine or a leaf edge. The goal is to find a spanning caterpillar in which the sum of the edge costs is minimum. We show that the problem has a linear time algorithm when a tree decomposition of the graph is given as part of the input. Despite the fast-growing constant factor in the time complexity of our algorithm, it is still practical and efficient for some classes of graphs, such as outerplanar, series-parallel (K4 minor-free), and Halin graphs. We also briefly explain how one can modify our algorithm to solve the Minimum Spanning Ring Star and the Dual Cost Minimum Spanning Tree Problems.

  20. An automatic cloud mask algorithm based on time series of MODIS measurements

    NASA Astrophysics Data System (ADS)

    Lyapustin, A.; Wang, Y.; Frey, R.

    2008-08-01

    Quality of aerosol retrievals and atmospheric correction over land depends strongly on the accuracy of the cloud mask (CM) algorithm. The heritage CM algorithms developed for AVHRR and MODIS use the latest sensor measurements of spectral reflectance and brightness temperature and perform processing at the pixel level. The algorithms are threshold-based and empirically tuned. They do not explicitly address the classical problem of cloud search, wherein the baseline clear-skies scene is defined for comparison. Here we report on a new land CM algorithm, which explicitly builds and maintains a reference clear-skies image of the surface (refcm) using a time series of MODIS measurements. The new algorithm, developed as part of the multiangle implementation of atmospheric correction (MAIAC) algorithm for MODIS, relies on the fact that clear-skies images of the same surface area have a common textural pattern, defined by the surface topography, boundaries of rivers and lakes, distribution of soils and vegetation, etc. This pattern changes slowly given the daily rate of global Earth observations, whereas clouds introduce high-frequency random disturbances. Under clear skies, consecutive gridded images of the same surface area have a high covariance, whereas in the presence of clouds the covariance is usually low. This idea is central to the initialization of the refcm, which is used to derive the cloud mask in combination with spectral and brightness temperature tests. The refcm is continuously updated with the latest clear-skies MODIS measurements, thus adapting to seasonal and rapid surface changes. The algorithm is enhanced by an internal dynamic land-water-snow classification coupled with a surface change mask. An initial comparison shows that the new algorithm offers the potential to perform better than the MODIS MOD35 cloud mask in situations where the land surface is changing rapidly and over Earth regions covered by snow and ice.

  1. [A study for time-history waveform synthesis of algorithm in shock response spectrum (SRS)].

    PubMed

    Liu, Hong-ying; Ma, Ai-jun

    2002-12-01

    Objective. To present an effective on-line SRS time-history waveform synthesis method for simulating pyrotechnic shock environments with electrodynamic shakers. Method. A procedure was developed for synthesizing an SRS time-history waveform according to a general principle. The effects of three main parameters on the waveform's shape, acceleration amplitude, and duration were investigated. A method for modifying the SRS amplitude and an optimization algorithm for the time-history waveform are presented. Result. The algorithm was used to generate a time-history waveform that satisfies the SRS accuracy requirement and the electrodynamic shaker's acceleration limitation. Conclusion. The numerical example indicates that the developed method is effective. The synthesized time-history waveform can be used to simulate pyrotechnic shock environments using electrodynamic shakers. PMID:12622083

  2. Is it time to pull the plug on 12-hour shifts?: Part 2. Barriers to change and executive leadership strategies.

    PubMed

    Lothschuetz Montgomery, Kathryn; Geiger-Brown, Jeanne

    2010-04-01

    This article is part 2 of the series "Pulling the Plug on 12-Hour Shifts." In part 1 (March 2010), the authors provided an update on recent evidence that challenges the current scheduling paradigm that supports the lack of safety of long work hours. Part 2 describes the barriers to change and challenges for the nurse executive in moving away from the practice of 12-hour shifts. This is an executive-level analysis of barriers and recommends strategies for change. Translation of evidence into administrative practice requires examination of external environmental factors, internal system consequences, organizational culture, and measures of executive performance. PMID:20305457

  3. Software algorithm and hardware design for real-time implementation of new spectral estimator

    PubMed Central

    2014-01-01

    Background Real-time spectral analyzers can be difficult to implement for PC computer-based systems because of the potential for high computational cost, and algorithm complexity. In this work a new spectral estimator (NSE) is developed for real-time analysis, and compared with the discrete Fourier transform (DFT). Method Clinical data in the form of 216 fractionated atrial electrogram sequences were used as inputs. The sample rate for acquisition was 977 Hz, or approximately 1 millisecond between digital samples. Real-time NSE power spectra were generated for 16,384 consecutive data points. The same data sequences were used for spectral calculation using a radix-2 implementation of the DFT. The NSE algorithm was also developed for implementation as a real-time spectral analyzer electronic circuit board. Results The average interval for a single real-time spectral calculation in software was 3.29 μs for NSE versus 504.5 μs for DFT. Thus for real-time spectral analysis, the NSE algorithm is approximately 150× faster than the DFT. Over a 1 millisecond sampling period, the NSE algorithm had the capability to spectrally analyze a maximum of 303 data channels, while the DFT algorithm could only analyze a single channel. Moreover, for the 8 second sequences, the NSE spectral resolution in the 3-12 Hz range was 0.037 Hz while the DFT spectral resolution was only 0.122 Hz. The NSE was also found to be implementable as a standalone spectral analyzer board using approximately 26 integrated circuits at a cost of approximately $500. The software files used for analysis are included as a supplement, please see the Additional files 1 and 2. Conclusions The NSE real-time algorithm has low computational cost and complexity, and is implementable in both software and hardware for 1 millisecond updates of multichannel spectra. The algorithm may be helpful to guide radiofrequency catheter ablation in real time. PMID:24886214

  4. Independent component analysis algorithm FPGA design to perform real-time blind source separation

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke

    2015-05-01

    The conditions that arise in the cocktail party problem prevail across many fields, creating a need for blind source separation (BSS). BSS has become prevalent in fields including array processing, communications, medical signal processing, speech processing, wireless communication, audio and acoustics, and biomedical engineering. The concept of the cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms. ICA proves useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms to perform blind source separation on mixed signals, both in software and in a hardware implementation on a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA algorithms were examined and compared. The best algorithm, requiring the least complexity and the fewest resources while effectively separating the mixed sources, was the EASI algorithm. The EASI ICA was implemented in hardware on FPGAs to perform and analyze its performance in real time.
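
    For reference, a hedged sketch of the serial EASI update (Cardoso and Laheld's relative-gradient rule) is given below; the mixing matrix, step size, and cubic nonlinearity are illustrative assumptions.

      # Serial EASI: W <- W - lam * (y y' - I + g(y) y' - y g(y)') W
      import numpy as np

      def easi_separate(X, lam=0.002, g=lambda y: y**3):
          """X: (n_sources, n_samples) array of mixed observations."""
          n = X.shape[0]
          W, I = np.eye(n), np.eye(n)
          for x in X.T:                       # one sample at a time
              y = W @ x
              gy = g(y)
              W -= lam * (np.outer(y, y) - I
                          + np.outer(gy, y) - np.outer(y, gy)) @ W
          return W

      rng = np.random.default_rng(0)
      S = np.sign(rng.standard_normal((2, 20000)))  # two binary sources
      A = np.array([[1.0, 0.6], [0.4, 1.0]])        # toy mixing matrix
      W = easi_separate(A @ S)
      print(W @ A)   # close to a scaled permutation if separation succeeds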

  5. Algorithms for real-time fault detection of the Space Shuttle Main Engine

    NASA Astrophysics Data System (ADS)

    Ruiz, C. A.; Hawman, M. W.; Galinaitis, W. S.

    1992-07-01

    This paper reports on the results of a program to develop and demonstrate concepts related to a real-time health management system (HMS) for the Space Shuttle Main Engine (SSME). An HMS framework was developed on the basis of a top-down analysis of current rocket engine failure modes and engine monitoring requirements. One result of Phase I of this program was the identification of algorithmic approaches for detecting failures of the SSME. Three different analytical techniques were developed which demonstrated the capability to detect failures significantly earlier than the existing redlines. Based on promising initial results, Phase II of the program was initiated to further validate and refine the fault detection strategy on a large database of 140 SSME test firings, and to implement the resultant algorithms in real time. The paper begins with an overview of the refined algorithms used to detect failures during SSME start-up and main-stage operation. Results of testing these algorithms on a database of nominal and off-nominal SSME test firings are discussed. The paper concludes with a discussion of the performance of the algorithms operating on a real-time computer system.

  6. Algorithms for real-time fault detection of the Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Ruiz, C. A.; Hawman, M. W.; Galinaitis, W. S.

    1992-01-01

    This paper reports on the results of a program to develop and demonstrate concepts related to a real-time health management system (HMS) for the Space Shuttle Main Engine (SSME). An HMS framework was developed on the basis of a top-down analysis of current rocket engine failure modes and engine monitoring requirements. One result of Phase I of this program was the identification of algorithmic approaches for detecting failures of the SSME. Three different analytical techniques were developed which demonstrated the capability to detect failures significantly earlier than the existing redlines. Based on promising initial results, Phase II of the program was initiated to further validate and refine the fault detection strategy on a large database of 140 SSME test firings, and to implement the resultant algorithms in real time. The paper begins with an overview of the refined algorithms used to detect failures during SSME start-up and main-stage operation. Results of testing these algorithms on a database of nominal and off-nominal SSME test firings are discussed. The paper concludes with a discussion of the performance of the algorithms operating on a real-time computer system.

  7. Validation of Learning Effort Algorithm for Real-Time Non-Interfering Based Diagnostic Technique

    ERIC Educational Resources Information Center

    Hsu, Pi-Shan; Chang, Te-Jeng

    2011-01-01

    The objective of this research is to validate the algorithm of learning effort, an indicator of a new real-time, non-interfering diagnostic technique. IC3 Mentor, an adaptive e-learning platform fulfilling the requirements of an intelligent tutoring system, was applied to 165 university students. The learning records of the subjects…

  8. Algorithmic improvements to the real-time implementation of a synthetic aperture sonar beam former

    NASA Astrophysics Data System (ADS)

    Freeman, Douglas K.

    1997-07-01

    Coastal Systems Station has translated its synthetic aperture sonar beamformer from linear processing to parallel processing. The initial implementation included many linear processes delegated to individual processors and neglected algorithmic refinements available to parallel processing. The steps taken to achieve increased computational speed for real-time beam forming are presented.

  9. Image/Time Series Mining Algorithms: Applications to Developmental Biology, Document Processing and Data Streams

    ERIC Educational Resources Information Center

    Tataw, Oben Moses

    2013-01-01

    Interdisciplinary research in computer science requires the development of computational techniques for practical application in different domains. This usually requires careful integration of different areas of technical expertise. This dissertation presents image and time series analysis algorithms, with practical interdisciplinary applications…

  10. Maximum Principles and Application to the Analysis of An Explicit Time Marching Algorithm

    NASA Technical Reports Server (NTRS)

    LeTallec, Patrick; Tidriri, Moulay D.

    1996-01-01

    In this paper we develop local and global estimates for the solution of convection-diffusion problems. We then study the convergence properties of a Time Marching Algorithm solving Advection-Diffusion problems on two domains using incompatible discretizations. This study is based on a De-Giorgi-Nash maximum principle.

  11. Time-varying modal parameters identification of a spacecraft with rotating flexible appendage by recursive algorithm

    NASA Astrophysics Data System (ADS)

    Ni, Zhiyu; Mu, Ruinan; Xun, Guangbin; Wu, Zhigang

    2016-01-01

    The rotation of a spacecraft's flexible appendage may cause changes in modal parameters. For this time-varying system, the computational cost of the frequently used singular value decomposition (SVD) identification method is high. Some control problems, such as self-adaptive control, need the latest modal parameters to update the controller parameters in time. In this paper, the projection approximation subspace tracking (PAST) recursive algorithm is applied as an alternative method to identify the time-varying modal parameters. This method avoids the SVD by signal subspace projection and improves computational efficiency. To verify the ability of this recursive algorithm in spacecraft modal parameter identification, a spacecraft model with a rapidly rotating appendage, the Soil Moisture Active/Passive (SMAP) satellite, is established, and the time-varying modal parameters of the satellite are identified recursively from designed input and output signals. The results illustrate that this recursive algorithm can obtain the modal parameters at high signal-to-noise ratio (SNR) and has better computational efficiency than the SVD method. Moreover, to improve the identification precision of this recursive algorithm at low SNR, wavelet de-noising is used to decrease the effect of noise.

  12. A Concurrent Implementation of the Cascade-Correlation Algorithm, Using the Time Warp Operating System

    NASA Technical Reports Server (NTRS)

    Springer, P.

    1993-01-01

    This paper discusses the method by which the Cascade-Correlation algorithm was parallelized so that it could be run using the Time Warp Operating System (TWOS). TWOS is a special-purpose operating system designed to run parallel discrete event simulations with maximum efficiency on parallel or distributed computers.

  13. High-dimensional propensity score algorithm in comparative effectiveness research with time-varying interventions.

    PubMed

    Neugebauer, Romain; Schmittdiel, Julie A; Zhu, Zheng; Rassen, Jeremy A; Seeger, John D; Schneeweiss, Sebastian

    2015-02-28

    The high-dimensional propensity score (hdPS) algorithm was proposed for automation of confounding adjustment in problems involving large healthcare databases. It has been evaluated in comparative effectiveness research (CER) with point treatments to handle baseline confounding through matching or covariance adjustment on the hdPS. In observational studies with time-varying interventions, such hdPS approaches are often inadequate to handle time-dependent confounding and selection bias. Inverse probability weighting (IPW) estimation to fit marginal structural models can adequately handle these biases under the fundamental assumption of no unmeasured confounders. Upholding of this assumption relies on the selection of an adequate set of covariates for bias adjustment. We describe the application and performance of the hdPS algorithm to improve covariate selection in CER with time-varying interventions based on IPW estimation and explore stabilization of the resulting estimates using Super Learning. The evaluation is based on both the analysis of electronic health records data in a real-world CER study of adults with type 2 diabetes and a simulation study. This report (i) establishes the feasibility of IPW estimation with the hdPS algorithm based on large electronic health records databases, (ii) demonstrates little impact on inferences when supplementing the set of expert-selected covariates using the hdPS algorithm in a setting with extensive background knowledge, (iii) supports the application of the hdPS algorithm in discovery settings with little background knowledge or limited data availability, and (iv) motivates the application of Super Learning to stabilize effect estimates based on the hdPS algorithm. PMID:25488047

  14. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  15. Improve the algorithmic performance of collaborative filtering by using the interevent time distribution of human behaviors

    NASA Astrophysics Data System (ADS)

    Jia, Chun-Xiao; Liu, Run-Ran

    2015-10-01

    Recently, many scaling laws of the interevent time distribution of human behaviors have been observed, and some quantitative understanding of human behaviors has been provided by researchers. In this paper, we propose a modified collaborative filtering algorithm that makes use of this scaling law of human behaviors for information filtering. Extensive experimental analyses demonstrate that the accuracies on the MovieLens and Last.fm datasets can be improved greatly, compared with standard collaborative filtering. Surprisingly, further statistical analyses suggest that the present algorithm can simultaneously improve the novelty and diversity of recommendations. This work provides a credible way toward highly efficient information filtering.

  16. Identification of continuous-time dynamical systems: Neural network based algorithms and parallel implementation

    SciTech Connect

    Farber, R.M.; Lapedes, A.S.; Rico-Martinez, R.; Kevrekidis, I.G.

    1993-06-01

    Time-delay mappings constructed using neural networks have proven successful in performing nonlinear system identification; however, because of their discrete nature, their use in bifurcation analysis of continuous-time systems is limited. This shortcoming can be avoided by embedding the neural networks in a training algorithm that mimics a numerical integrator. Both explicit and implicit integrators can be used. The former case is based on repeated evaluations of the network in a feedforward implementation; the latter relies on a recurrent network implementation. Here the algorithms and their implementation on parallel machines (SIMD and MIMD architectures) are discussed.

  17. Identification of continuous-time dynamical systems: Neural network based algorithms and parallel implementation

    SciTech Connect

    Farber, R.M.; Lapedes, A.S.; Rico-Martinez, R.; Kevrekidis, I.G.

    1993-01-01

    Time-delay mappings constructed using neural networks have proven successful in performing nonlinear system identification; however, because of their discrete nature, their use in bifurcation analysis of continuous-time systems is limited. This shortcoming can be avoided by embedding the neural networks in a training algorithm that mimics a numerical integrator. Both explicit and implicit integrators can be used. The former case is based on repeated evaluations of the network in a feedforward implementation; the latter relies on a recurrent network implementation. Here the algorithms and their implementation on parallel machines (SIMD and MIMD architectures) are discussed.

  18. Identifying waking time in 24-h accelerometry data in adults using an automated algorithm.

    PubMed

    van der Berg, Julianne D; Willems, Paul J B; van der Velde, Jeroen H P M; Savelberg, Hans H C M; Schaper, Nicolaas C; Schram, Miranda T; Sep, Simone J S; Dagnelie, Pieter C; Bosma, Hans; Stehouwer, Coen D A; Koster, Annemarie

    2016-10-01

    As accelerometers are commonly used for 24-h measurements of daily activity, methods for separating waking from sleeping time are necessary for correct estimation of the total daily activity level accumulated during the waking period. Therefore, an algorithm to determine wake and bed times in 24-h accelerometry data was developed, and the agreement of this algorithm with self-report was examined. One hundred seventy-seven participants (aged 40-75 years) of The Maastricht Study who completed a diary and wore the activPAL3™ 24 h/day for, on average, 6 consecutive days were included. The intraclass correlation coefficient (ICC) was calculated and the Bland-Altman method was used to examine associations between the self-reported and algorithm-calculated waking hours. Mean self-reported waking time was 15.8 h/day, which was significantly correlated with the algorithm-calculated waking time (15.8 h/day, ICC = 0.79, P < 0.001). The Bland-Altman plot indicated good agreement, as the mean difference was 0.02 h (95% limits of agreement (LoA) = -1.1 to 1.2 h). The median absolute difference was 15.6 min (Q1-Q3 = 7.6-33.2 min), and 71% of absolute differences were less than 30 min. The newly developed automated algorithm to determine wake and bed times agreed closely with self-reported times and can therefore be used to identify waking time in 24-h accelerometry data in large-scale epidemiological studies. PMID:26837855
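
    The agreement statistics reported above can be reproduced in a few lines; the sketch below uses invented toy values, not Maastricht Study data.

      # Mean difference with 95% limits of agreement (Bland-Altman).
      import numpy as np

      def bland_altman(self_report, algorithm):
          diff = np.asarray(algorithm) - np.asarray(self_report)
          mean_diff = diff.mean()
          loa = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits
          return mean_diff, (mean_diff - loa, mean_diff + loa)

      waking_self = [15.5, 16.2, 15.9, 14.8, 16.0]  # hours/day, toy data
      waking_algo = [15.7, 16.0, 16.1, 14.9, 15.8]
      print(bland_altman(waking_self, waking_algo))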

  19. Dynamic acoustics for the STAR-100. [computer algorithms for time dependent sound waves in jet

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Turkel, E.

    1979-01-01

    An algorithm is described to compute time dependent acoustic waves in a jet. The method differs from previous methods in that no harmonic time dependence is assumed, thus permitting the study of nonharmonic acoustical behavior. Large grids are required to resolve the acoustic waves. Since the problem is nonstiff, explicit high order schemes can be used. These have been adapted to the STAR-100 with great efficiencies and permitted the efficient solution of problems which would not be feasible on a scalar machine.

  20. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    NASA Astrophysics Data System (ADS)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however is that strict adherence to correction limits of convergent algorithms extends the number of iterations and ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream of commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.

  1. A Solution Method of Job-shop Scheduling Problems by the Idle Time Shortening Type Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Ida, Kenichi; Osawa, Akira

    In this paper, we propose a new idle-time shortening method for job-shop scheduling problems (JSPs) and embed it in a genetic algorithm (GA). The purpose of a JSP is to find a schedule with the minimum makespan. We posit that reducing machine idle time is effective for improving the makespan. The left shift is a well-known existing algorithm for shortening idle time, but it cannot move every operation into an idle interval, so some idle time is left unshortened. We propose two algorithms that shorten such idle time, then combine these algorithms with schedule reversal. We apply the GA with these algorithms to benchmark problems and show its effectiveness.

  2. Performance assessment of an algorithm for the alignment of fMRI time series.

    PubMed

    Ciulla, Carlo; Deek, Fadi P

    2002-01-01

    This paper reports on the performance assessment of an algorithm developed to align functional Magnetic Resonance Imaging (fMRI) time series. The algorithm is based on the assumption that the human brain is subject to rigid-body motion and was devised by pipelining fiducial-marker and tensor-based registration methodologies. Feature extraction is performed on each fMRI volume to determine tensors of inertia and the gradient image of the brain. A head coordinate system is determined on the basis of three fiducial markers found automatically at the head boundary by means of the tensors and is used to compute a point-based rigid matching transformation. Intensity correction is performed with sub-voxel accuracy by trilinear interpolation. Performance of the algorithm was preliminarily assessed on fMR brain images in which controlled motion had been simulated. Further experimentation was conducted with real fMRI time series. Rigid-body transformations were retrieved automatically, and the motion parameter values were compared to those obtained with Statistical Parametric Mapping (SPM99) and Automatic Image Registration (AIR 3.08). Results indicate that the algorithm offers sub-voxel accuracy in performing both misalignment and intensity correction of fMRI time series. PMID:12137364

  3. Online learning algorithm for time series forecasting suitable for low cost wireless sensor networks nodes.

    PubMed

    Pardo, Juan; Zamora-Martínez, Francisco; Botella-Rocamora, Paloma

    2015-01-01

    Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. To that end, this paper describes how to implement an Artificial Neural Network (ANN) algorithm on a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An on-line learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It trains the model with every new datum that arrives, without saving enormous quantities of data in a historical database as is usual, i.e., without previous knowledge. To validate the approach, a simulation study using a Bayesian baseline model was compared against a database from a real application in order to assess performance and accuracy. The core of the paper is a new algorithm, based on the BP one, which is described in detail; the challenge was to implement a computationally demanding algorithm in a simple architecture with very few hardware resources. PMID:25905698
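
    A minimal sketch of the per-sample on-line training loop described above is given below for a tiny one-hidden-layer network; layer sizes, learning rate, and the tanh activation are assumptions, not the paper's 8051 implementation.

      import numpy as np

      class OnlineMLP:
          def __init__(self, n_in, n_hidden, lr=0.01, seed=0):
              rng = np.random.default_rng(seed)
              self.W1 = rng.normal(0, 0.5, (n_hidden, n_in))
              self.b1 = np.zeros(n_hidden)
              self.W2 = rng.normal(0, 0.5, n_hidden)
              self.b2 = 0.0
              self.lr = lr

          def predict(self, x):
              self.h = np.tanh(self.W1 @ x + self.b1)
              return self.W2 @ self.h + self.b2

          def update(self, x, target):
              """Train on one sample as it arrives -- no stored history."""
              err = self.predict(x) - target
              grad_h = err * self.W2 * (1.0 - self.h**2)  # backprop via tanh
              self.W2 -= self.lr * err * self.h
              self.b2 -= self.lr * err
              self.W1 -= self.lr * np.outer(grad_h, x)
              self.b1 -= self.lr * grad_h

      # Forecast the next temperature from the last 3 readings (toy stream)
      net = OnlineMLP(n_in=3, n_hidden=8)
      net.update(np.array([20.1, 20.3, 20.2]), target=20.4)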

  4. Closed form and geometric algorithms for real-time control of an avatar

    SciTech Connect

    Semwall, S.K.; Hightower, R.; Stansfield, S.

    1995-12-31

    In a virtual environment with multiple participants, it is necessary that the user's actions be replicated by synthetic human forms. Whole-body digitizers would be the most realistic solution for capturing each participant's human form; however, the best digitizers available are not interactive and are therefore not suitable for real-time interaction. Usually, a limited number of sensors are used as constraints on the synthetic human form, and inverse kinematics algorithms are applied to satisfy these sensor constraints. These algorithms result in slower interaction because of their iterative nature, especially when there are a large number of participants. To support real-time interaction in a virtual environment, there is a need for closed-form solutions and fast search algorithms. In this paper, a new closed-form solution for the arms (and legs) is developed using two magnetic sensors. In developing this solution, we use the biomechanical relationship between the lower arm and the upper arm to provide an analytical, non-iterative solution. We have also outlined a solution for the whole human body that uses up to ten magnetic sensors to break the human skeleton into smaller kinematic chains. In developing our algorithms, we use knowledge of natural body postures to generate faster solutions for real-time interaction.

  5. Online Learning Algorithm for Time Series Forecasting Suitable for Low Cost Wireless Sensor Networks Nodes

    PubMed Central

    Pardo, Juan; Zamora-Martínez, Francisco; Botella-Rocamora, Paloma

    2015-01-01

    Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. To that end, this paper describes how to implement an Artificial Neural Network (ANN) algorithm on a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An on-line learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It trains the model with every new datum that arrives, without saving enormous quantities of data in a historical database as is usual, i.e., without previous knowledge. To validate the approach, a simulation study using a Bayesian baseline model was compared against a database from a real application in order to assess performance and accuracy. The core of the paper is a new algorithm, based on the BP one, which is described in detail; the challenge was to implement a computationally demanding algorithm in a simple architecture with very few hardware resources. PMID:25905698

  6. 36 CFR 51.34 - What will the Director do if a selected preferred offeror does not timely execute the new...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    36 CFR § 51.34 (2011-07-01), Parks, Forests, and Public Property, Concession Contracts, Right of Preference to a New Concession Contract: What will the Director do if a selected preferred offeror does not timely execute the new concession contract?

  7. Space-Time Areal Mixture Model: Relabeling Algorithm and Model Selection Issues.

    PubMed

    Hossain, M M; Lawson, A B; Cai, B; Choi, J; Liu, J; Kirby, R S

    2014-03-01

    With the growing popularity of spatial mixture models in cluster analysis, model selection criteria have become an established tool in the search for parsimony. However, the label-switching problem is often inherent in Bayesian implementation of mixture models and a variety of relabeling algorithms have been proposed. We use a space-time mixture of Poisson regression models with homogeneous covariate effects to illustrate that the best model selected by using model selection criteria does not always support the model that is chosen by the optimal relabeling algorithm. The results are illustrated for real and simulated datasets. The objective is to make the reader aware that if the purpose of statistical modeling is to identify clusters, applying a relabeling algorithm to the model with the best fit may not generate the optimal relabeling. PMID:25221430

  8. Space-Time Areal Mixture Model: Relabeling Algorithm and Model Selection Issues

    PubMed Central

    Hossain, M.M.; Lawson, A.B.; Cai, B.; Choi, J.; Liu, J.; Kirby, R. S.

    2014-01-01

    With the growing popularity of spatial mixture models in cluster analysis, model selection criteria have become an established tool in the search for parsimony. However, the label-switching problem is often inherent in Bayesian implementation of mixture models and a variety of relabeling algorithms have been proposed. We use a space-time mixture of Poisson regression models with homogeneous covariate effects to illustrate that the best model selected by using model selection criteria does not always support the model that is chosen by the optimal relabeling algorithm. The results are illustrated for real and simulated datasets. The objective is to make the reader aware that if the purpose of statistical modeling is to identify clusters, applying a relabeling algorithm to the model with the best fit may not generate the optimal relabeling. PMID:25221430

  9. Near linear time algorithm to detect community structures in large-scale networks

    NASA Astrophysics Data System (ADS)

    Raghavan, Usha Nandini; Albert, Réka; Kumara, Soundar

    2007-09-01

    Community detection and analysis is an important methodology for understanding the organization of various real-world networks and has applications in problems as diverse as consensus formation in social communities or the identification of functional modules in biochemical networks. Currently used algorithms that identify the community structures in large-scale real-world networks require a priori information such as the number and sizes of communities or are computationally expensive. In this paper we investigate a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities. In our algorithm every node is initialized with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities. We validate the algorithm by applying it to networks whose community structures are known. We also demonstrate that the algorithm runs in near-linear time and hence is computationally less expensive than what was previously possible.
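
    The procedure is simple enough to sketch directly; the toy graph below (two triangles joined by one edge) is an illustrative assumption.

      # Label propagation: each node repeatedly adopts its neighbors'
      # most frequent label (ties broken at random) until labels stabilize.
      import random
      from collections import Counter

      def label_propagation(adj, max_iters=100, seed=1):
          random.seed(seed)
          labels = {v: v for v in adj}       # unique initial labels
          nodes = list(adj)
          for _ in range(max_iters):
              random.shuffle(nodes)          # asynchronous random order
              changed = False
              for v in nodes:
                  counts = Counter(labels[u] for u in adj[v])
                  best = max(counts.values())
                  pick = random.choice([l for l, c in counts.items() if c == best])
                  if pick != labels[v]:
                      labels[v], changed = pick, True
              if not changed:
                  break
          return labels

      graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
               3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
      print(label_propagation(graph))        # two communities expected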

  10. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks

    PubMed Central

    Vestergaard, Christian L.; Génois, Mathieu

    2015-01-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
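
    For contrast with the temporal variant above, here is a minimal sketch of the standard Gillespie direct method for constant-rate processes; the well-mixed SIS-style rates are toy assumptions, not the paper's network model.

      import random

      def gillespie(reactions, state, t_end):
          """reactions: list of (propensity_fn, update_fn); state: dict."""
          t, trace = 0.0, []
          while t < t_end:
              a = [f(state) for f, _ in reactions]
              a0 = sum(a)
              if a0 == 0.0:
                  break                          # absorbing state reached
              t += random.expovariate(a0)        # exponential waiting time
              r, acc = random.uniform(0, a0), 0.0
              for ai, (_, update) in zip(a, reactions):
                  acc += ai
                  if r < acc:                    # pick reaction i w.p. a_i/a0
                      update(state)
                      break
              trace.append((t, dict(state)))
          return trace

      beta, mu, n = 0.3, 0.1, 100                # toy well-mixed SIS model
      reactions = [
          (lambda s: beta * s["I"] * (n - s["I"]) / n,   # infection
           lambda s: s.update(I=s["I"] + 1)),
          (lambda s: mu * s["I"],                        # recovery
           lambda s: s.update(I=s["I"] - 1)),
      ]
      print(len(gillespie(reactions, {"I": 5}, t_end=50.0)))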

  11. An implementation of the Gillespie algorithm for RNA kinetics with logarithmic time update

    PubMed Central

    Dykeman, Eric C.

    2015-01-01

    In this paper I outline a fast method called KFOLD for implementing the Gillespie algorithm to stochastically sample the folding kinetics of an RNA molecule at single base-pair resolution. In the same fashion as the KINFOLD algorithm, which also uses the Gillespie algorithm to predict folding kinetics, KFOLD stochastically chooses a new RNA secondary structure state that is accessible from the current state by a single base-pair addition/deletion following the Gillespie procedure. However, unlike KINFOLD, the KFOLD algorithm utilizes the fact that many of the base-pair addition/deletion reactions and their corresponding rates do not change between each step in the algorithm. This allows KFOLD to achieve a substantial speed-up in the time required to compute a prediction of the folding pathway and, for a fixed number of base-pair moves, scales logarithmically with sequence size. This increase in speed opens up the possibility of studying the kinetics of much longer RNA sequences at single base-pair resolution while also allowing for the RNA folding statistics of smaller RNA sequences to be computed much more quickly. PMID:25990741

  12. Constant-thrust glideslope guidance algorithm for time-fixed rendezvous in real halo orbit

    NASA Astrophysics Data System (ADS)

    Lian, Yijun; Meng, Yunhe; Tang, Guojian; Liu, Luhua

    2012-10-01

    This paper presents a fixed-time glideslope guidance algorithm that is capable of guiding the spacecraft approaching a target vehicle on a quasi-periodic halo orbit in the real Earth-Moon system. To guarantee that the flight time is fixed, a novel strategy for designing the parameters of the algorithm is given. Based on the numerical solution of the linearized relative dynamics of the Restricted Three-Body Problem (expressed in inertial coordinates with a time-variant nature), the proposed algorithm breaks down the whole rendezvous trajectory into several arcs. For each arc, a two-impulse transfer is employed to obtain the velocity increment (delta-v) at the joint between arcs. Here we respect the fact that an instantaneous delta-v cannot be implemented by any real engine, since the thrust magnitude is always finite. To diminish its effect on the control, a thrust duration as well as a thrust direction are translated from the delta-v in the context of a constant-thrust engine (the most robust type in real applications). Furthermore, the ignition and cutoff delays of the thruster are considered as well. With this high-fidelity thrust model, the relative state is then propagated to the next arc by numerical integration using a complete Solar System model. In the end, final corrective control is applied to ensure the rendezvous velocity accuracy. To fully validate the proposed guidance algorithm, Monte Carlo simulations are performed incorporating navigational error and thrust-direction error. Results show that our algorithm can effectively maintain control over the time-fixed rendezvous transfer, with satisfactory final position and velocity accuracies for the near-range guided phase.

  13. An efficient and robust algorithm for two dimensional time dependent incompressible Navier-Stokes equations - High Reynolds number flows

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1991-01-01

    An algorithm is presented for unsteady two-dimensional incompressible Navier-Stokes calculations. This algorithm is based on the fourth order partial differential equation for incompressible fluid flow which uses the streamfunction as the only dependent variable. The algorithm is second order accurate in both time and space. It uses a multigrid solver at each time step. It is extremely efficient with respect to the use of both CPU time and physical memory. It is extremely robust with respect to Reynolds number.

  14. An efficient and robust algorithm for two dimensional time dependent incompressible Navier-Stokes equations: High Reynolds number flows

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1991-01-01

    An algorithm is presented for unsteady two-dimensional incompressible Navier-Stokes calculations. This algorithm is based on the fourth order partial differential equation for incompressible fluid flow which uses the streamfunction as the only dependent variable. The algorithm is second order accurate in both time and space. It uses a multigrid solver at each time step. It is extremely efficient with respect to the use of both CPU time and physical memory. It is extremely robust with respect to Reynolds number.

  15. Online Planning Algorithm

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower-priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
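
    A minimal sketch of strict-priority goal selection under oversubscription is given below in Python. It assumes a single aggregate resource and illustrative goal tuples; the actual VML extension selects incrementally over timelines and multiple shared resources.

      def select_goals(goals, capacity):
          """Admit goals in strict priority order while they fit.
          goals: (priority, resource_use, goal_id) tuples, lower = higher priority.
          """
          selected, used = [], 0.0
          for priority, use, gid in sorted(goals):
              if used + use <= capacity:        # a goal never pre-empts a
                  selected.append(gid)          # higher-priority one already admitted
                  used += use
          return selected

      # Goal "c" is dropped because higher-priority goals exhaust the resource.
      print(select_goals([(1, 6.0, "a"), (2, 3.0, "b"), (3, 2.0, "c")], 10.0))
      # -> ['a', 'b']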

  16. A Stochastic, Resonance-Free Multiple Time-Step Algorithm for Polarizable Models That Permits Very Large Time Steps.

    PubMed

    Margul, Daniel T; Tuckerman, Mark E

    2016-05-10

    Molecular dynamics remains one of the most widely used computational tools in the theoretical molecular sciences to sample an equilibrium ensemble distribution and/or to study the dynamical properties of a system. The efficiency of a molecular dynamics calculation is limited by the size of the time step that can be employed, which is dictated by the highest frequencies in the system. However, many properties of interest are connected to low-frequency, long time-scale phenomena, requiring many small time steps to capture. This ubiquitous problem can be ameliorated by employing multiple time-step algorithms, which assign different time steps to forces acting on different time scales. In such a scheme, fast forces are evaluated more frequently than slow forces, and as the former are often computationally much cheaper to evaluate, the savings can be significant. Standard multiple time-step approaches are limited, however, by resonance phenomena, wherein motion on the fastest time scales limits the step sizes that can be chosen for the slower time scales. In atomistic models of biomolecular systems, for example, the largest time step is typically limited to around 5 fs. Previously, we introduced an isokinetic extended phase-space algorithm (Minary et al. Phys. Rev. Lett. 2004, 93, 150201) and its stochastic analog (Leimkuhler et al. Mol. Phys. 2013, 111, 3579) that eliminate resonance phenomena through a set of kinetic energy constraints. In simulations of a fixed-charge flexible model of liquid water, for example, the time step that could be assigned to the slow forces approached 100 fs. In this paper, we develop a stochastic isokinetic algorithm for multiple time-step molecular dynamics calculations using a polarizable model based on fluctuating dipoles. The scheme developed here employs two sets of induced dipole moments, specifically, those associated with short-range interactions and those associated with a full set of interactions. The scheme is demonstrated on

  17. A genetic algorithm for dynamic inbound ordering and outbound dispatching problem with delivery time windows

    NASA Astrophysics Data System (ADS)

    Kim, Byung Soo; Lee, Woon-Seek; Koh, Shiegheun

    2012-07-01

    This article considers an inbound ordering and outbound dispatching problem for a single product in a third-party warehouse, where the demands are dynamic over a discrete and finite time horizon and, moreover, each demand has a time window in which it must be satisfied. Replenishing orders are shipped in containers, and the freight cost is proportional to the number of containers used. The problem is classified into two cases, i.e. the non-split demand case and the split demand case, and a mathematical model for each case is presented. An in-depth analysis of the models shows that they are very complicated and that optimal solutions become difficult to find as the problem size grows. Therefore, genetic algorithm (GA) based heuristic approaches are designed to solve the problems in a reasonable time. Finally, to validate and evaluate the algorithms, some computational experiments are conducted.

  18. Iris unwrapping using the Bresenham circle algorithm for real-time iris recognition

    NASA Astrophysics Data System (ADS)

    Carothers, Matthew T.; Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.

    2015-02-01

    An efficient parallel architecture design for the iris unwrapping process in a real-time iris recognition system using the Bresenham circle algorithm is presented in this paper. Based on the characteristics of the model parameters, this algorithm was chosen over the widely used polar conversion technique as the iris unwrapping model. The architecture design is parallelized to increase the throughput of the system and is suitable for processing an input image of 320 × 240 pixels in real time using Field Programmable Gate Array (FPGA) technology. Quartus software is used to implement, verify, and analyze the design's performance using the VHSIC Hardware Description Language. The system's predicted processing time is faster than that of the modern iris unwrapping techniques used today.
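
    For reference, a compact software version of the Bresenham (midpoint) circle algorithm named above is sketched below in Python; the FPGA design pipelines the same add/shift/compare operations in hardware, and the unwrapping step then samples such circles at increasing radii between the pupil and limbus boundaries.

      def bresenham_circle(cx, cy, r):
          """All integer pixels on a circle of radius r centered at (cx, cy),
          using only additions and comparisons (no trigonometry)."""
          points, x, y, d = [], 0, r, 3 - 2 * r
          while x <= y:
              # Each (x, y) pair yields eight symmetric pixels.
              for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                             (x, -y), (y, -x), (-x, -y), (-y, -x)):
                  points.append((cx + px, cy + py))
              if d < 0:
                  d += 4 * x + 6
              else:
                  d += 4 * (x - y) + 10
                  y -= 1
              x += 1
          return points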

  19. Associations Between Physical Performance and Executive Function in Older Adults With Mild Cognitive Impairment: Gait Speed and the Timed “Up & Go” Test

    PubMed Central

    Kelly, Valerie E.; Logsdon, Rebecca G.; McCurry, Susan M.; Cochrane, Barbara B.; Engel, Joyce M.; Teri, Linda

    2011-01-01

    Background Older adults with amnestic mild cognitive impairment (aMCI) are at higher risk for developing Alzheimer disease. Physical performance decline on gait and mobility tasks in conjunction with executive dysfunction has implications for accelerated functional decline, disability, and institutionalization in sedentary older adults with aMCI. Objectives The purpose of this study was to examine whether performance on 2 tests commonly used by physical therapists (usual gait speed and Timed “Up & Go” Test [TUG]) are associated with performance on 2 neuropsychological tests of executive function (Trail Making Test, part B [TMT-B], and Stroop-Interference, calculated from the Stroop Word Color Test) in sedentary older adults with aMCI. Design The study was a cross-sectional analysis of 201 sedentary older adults with memory impairment participating in a longitudinal intervention study of cognitive function, aging, exercise, and health promotion. Methods Physical performance speed on gait and mobility tasks was measured via usual gait speed and the TUG (at fast pace). Executive function was measured with the TMT-B and Stroop-Interference measures. Results Applying multiple linear regression, usual gait speed was associated with executive function on both the TMT-B (β=−0.215, P=.003) and Stroop-Interference (β=−0.195, P=.01) measures, indicating that slower usual gait speed was associated with lower executive function performance. Timed “Up & Go” Test scores (in logarithmic transformation) also were associated with executive function on both the TMT-B (β=0.256, P<.001) and Stroop-Interference (β=0.228, P=.002) measures, indicating that a longer time on the TUG was associated with lower executive function performance. All associations remained statistically significant after adjusting for age, sex, depressive symptoms, medical comorbidity, and body mass index. Limitations The cross-sectional nature of this study does not allow for inferences of

  20. A Fast Density-Based Clustering Algorithm for Real-Time Internet of Things Stream

    PubMed Central

    Ying Wah, Teh

    2014-01-01

    Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class in clustering data streams. They have the ability to detect arbitrary-shape clusters and to handle outliers, and they do not need the number of clusters in advance. Therefore, a density-based clustering algorithm is a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable in real-time IoT applications. Experimental results show that the proposed approach obtains high-quality results with low computation time on real and synthetic datasets. PMID:25110753

  1. A fast density-based clustering algorithm for real-time Internet of Things stream.

    PubMed

    Amini, Amineh; Saboohi, Hadi; Wah, Teh Ying; Herawan, Tutut

    2014-01-01

    Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class in clustering data streams. They have the ability to detect arbitrary-shape clusters and to handle outliers, and they do not need the number of clusters in advance. Therefore, a density-based clustering algorithm is a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering in limited time is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable in real-time IoT applications. Experimental results show that the proposed approach obtains high-quality results with low computation time on real and synthetic datasets. PMID:25110753
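
    To make the density-based idea concrete, here is a toy batch clusterer in the DBSCAN family (Python; O(n^2) distances and illustrative eps/min_pts values). Stream algorithms such as the one proposed instead maintain summarized micro-clusters so that memory and per-point work stay bounded.

      import numpy as np

      def dbscan_like(points, eps=0.5, min_pts=4):
          """Points with >= min_pts neighbours within eps are cores; cores
          reachable from each other share a cluster; the rest is noise (-1)."""
          pts = np.asarray(points, float)
          n = len(pts)
          d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
          core = (d < eps).sum(axis=1) >= min_pts   # count includes the point itself
          labels = -np.ones(n, dtype=int)
          cluster = 0
          for i in range(n):
              if not core[i] or labels[i] != -1:
                  continue
              stack, labels[i] = [i], cluster
              while stack:                          # grow the cluster from core i
                  j = stack.pop()
                  for k in np.flatnonzero(d[j] < eps):
                      if labels[k] == -1:
                          labels[k] = cluster
                          if core[k]:
                              stack.append(k)
              cluster += 1
          return labels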

  2. Time-step Considerations in Particle Simulation Algorithms for Coulomb Collisions in Plasmas

    SciTech Connect

    Cohen, B I; Dimits, A; Friedman, A; Caflisch, R

    2009-10-29

    The accuracy of first-order Euler and higher-order time-integration algorithms for grid-based Langevin-equation collision models in a specific relaxation test problem is assessed. We show that statistical noise errors can overshadow time-step errors and argue that statistical noise errors can be conflated with time-step effects. Using a higher-order integration scheme may not achieve any benefit in accuracy for examples of practical interest. We also investigate the collisional relaxation of an initial electron-ion relative drift and the collisional relaxation to a resistive steady state in which a quasi-steady current is driven by a constant applied electric field, as functions of the time step used to resolve the collision processes, using binary and grid-based test-particle Langevin-equation models. We compare results from two grid-based Langevin-equation collision algorithms to results from a binary collision algorithm for modeling electron-ion collisions. Some guidance is provided regarding how large a time step can be used, compared to the inverse of the characteristic collision frequency, for specific relaxation processes.
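
    As a minimal example of the kind of first-order Euler (Euler-Maruyama) Langevin update whose time-step accuracy is at issue, the Python sketch below advances a scalar test-particle velocity under drag and diffusion. The coefficients are illustrative, not the paper's grid-based model.

      import numpy as np

      def euler_langevin_step(v, nu, D, dt, rng):
          """First-order Euler step of dv = -nu*v*dt + sqrt(2*D*dt)*N(0,1)."""
          return v - nu * v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()

      rng = np.random.default_rng(1)
      v, nu, D, dt = 1.0, 0.5, 0.1, 0.01     # illustrative parameters
      for _ in range(1000):                  # relax toward the diffusive steady state
          v = euler_langevin_step(v, nu, D, dt, rng)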

  3. A Real-Time Algorithm for the Approximation of Level-Set-Based Curve Evolution

    PubMed Central

    Shi, Yonggang; Karl, William Clem

    2010-01-01

    In this paper, we present a complete and practical algorithm for the approximation of level-set-based curve evolution suitable for real-time implementation. In particular, we propose a two-cycle algorithm to approximate level-set-based curve evolution without the need of solving partial differential equations (PDEs). Our algorithm is applicable to a broad class of evolution speeds that can be viewed as composed of a data-dependent term and a curve smoothness regularization term. We achieve curve evolution corresponding to such evolution speeds by separating the evolution process into two different cycles: one cycle for the data-dependent term and a second cycle for the smoothness regularization. The smoothing term is derived from a Gaussian filtering process. In both cycles, the evolution is realized through a simple element switching mechanism between two linked lists that implicitly represent the curve using an integer-valued level-set function. By careful construction, all the key evolution steps require only integer operations. A consequence is that we obtain significant computation speedups compared to exact PDE-based approaches while obtaining excellent agreement with these methods for problems of practical engineering interest. In particular, the resulting algorithm is fast enough for use in real-time video processing applications, which we demonstrate through several image segmentation and video tracking experiments. PMID:18390371

  4. The design and hardware implementation of a low-power real-time seizure detection algorithm

    NASA Astrophysics Data System (ADS)

    Raghunathan, Shriram; Gupta, Sumeet K.; Ward, Matthew P.; Worth, Robert M.; Roy, Kaushik; Irazoqui, Pedro P.

    2009-10-01

    Epilepsy affects more than 1% of the world's population. Responsive neurostimulation is emerging as an alternative therapy for the 30% of the epileptic patient population that does not benefit from pharmacological treatment. Efficient seizure detection algorithms will enable closed-loop epilepsy prostheses by stimulating the epileptogenic focus within an early onset window. Critically, this is expected to reduce neuronal desensitization over time and lead to longer-term device efficacy. This work presents a novel event-based seizure detection algorithm along with a low-power digital circuit implementation. Hippocampal depth-electrode recordings from six kainate-treated rats are used to validate the algorithm and hardware performance in this preliminary study. The design process illustrates crucial trade-offs in translating mathematical models into hardware implementations and validates statistical optimizations made with empirical data analyses on results obtained using a real-time functioning hardware prototype. Using quantitatively predicted thresholds from the depth-electrode recordings, the auto-updating algorithm performs with an average sensitivity and selectivity of 95.3 ± 0.02% and 88.9 ± 0.01% (mean ± SE, α = 0.05), respectively, on untrained data, with a detection delay of 8.5 s [5.97, 11.04] from electrographic onset. The hardware implementation is shown to be feasible using CMOS circuits consuming under 350 nW of power from a 250 mV supply voltage, in simulations on the MIT 180 nm SOI process.

  5. Multiprocessor execution of functional programs

    SciTech Connect

    Goldberg, B.F.

    1988-01-01

    Functional languages have recently gained attention as vehicles for programming in a concise and elegant manner. In addition, it has been suggested that functional programming provides a natural methodology for programming multiprocessor computers. This dissertation demonstrates that multiprocessor execution of functional programs is feasible and results in a significant reduction in their execution times. Two implementations of the functional language ALFL were built on commercially available multiprocessors: Alfalfa, an implementation on the Intel iPSC hypercube multiprocessor, and Buckwheat, an implementation on the Encore Multimax shared-memory multiprocessor. Each implementation includes a compiler that performs automatic decomposition of ALFL programs. The compiler is responsible for detecting the inherent parallelism in a program and decomposing the program into a collection of tasks, called serial combinators, that can be executed in parallel. One of the primary goals of the compiler is to generate serial combinators exhibiting the coarsest granularity possible without sacrificing useful parallelism. This dissertation describes the algorithms used by the compiler to analyze, decompose, and optimize functional programs. The abstract machine model supported by Alfalfa and Buckwheat is called heterogeneous graph reduction, a hybrid of graph reduction and conventional stack-oriented execution. This model supports parallelism, lazy evaluation, and higher-order functions while at the same time making efficient use of the processors in the system. The Alfalfa and Buckwheat run-time systems support dynamic load balancing, interprocessor communication (if required), and storage management. A large number of experiments were performed on Alfalfa and Buckwheat for a variety of programs. The results of these experiments, as well as the conclusions drawn from them, are presented.

  6. Hardware Implementation of a Spline-Based Genetic Algorithm for Embedded Stereo Vision Sensor Providing Real-Time Visual Guidance to the Visually Impaired

    NASA Astrophysics Data System (ADS)

    Lee, Dah-Jye; Anderson, Jonathan D.; Archibald, James K.

    2008-12-01

    Many image and signal processing techniques have been applied to medical and health care applications in recent years. In this paper, we present a robust signal processing approach that can be used to solve the correspondence problem for an embedded stereo vision sensor to provide real-time visual guidance to the visually impaired. This approach is based on our new one-dimensional (1D) spline-based genetic algorithm to match signals. The algorithm processes image data lines as 1D signals to generate a dense disparity map, from which 3D information can be extracted. With recent advances in electronics technology, this 1D signal matching technique can be implemented and executed in parallel in hardware such as field-programmable gate arrays (FPGAs) to provide real-time feedback about the environment to the user. In order to complement (not replace) traditional aids for the visually impaired such as canes and Seeing Eye dogs, vision systems that provide guidance to the visually impaired must be affordable, easy to use, compact, and free from attributes that are awkward or embarrassing to the user. "Seeing Eye Glasses," an embedded stereo vision system utilizing our new algorithm, meets all these requirements.

  7. The impact of reconstruction algorithms and time of flight information on PET/CT image quality

    PubMed Central

    Suljic, Alen; Tomse, Petra; Jensterle, Luka; Skrk, Damijan

    2015-01-01

    Background The aim of the study was to explore the influence of various time-of-flight (TOF) and non-TOF reconstruction algorithms on positron emission tomography/computed tomography (PET/CT) image quality. Materials and methods Measurements were performed with a triple line source phantom, consisting of capillaries with internal diameter of ∼ 1 mm, and a standard Jaszczak phantom. Each of the data sets was reconstructed using the analytical filtered back projection (FBP) algorithm, the iterative ordered subsets expectation maximization (OSEM) algorithm (4 iterations, 24 subsets) and the iterative True-X algorithm incorporating a specific point spread function (PSF) correction (4 iterations, 21 subsets). Baseline OSEM (2 iterations, 8 subsets) was included for comparison. Procedures were undertaken following the National Electrical Manufacturers Association (NEMA) NU-2-2001 protocol. Results Measurement of spatial resolution in full width at half maximum (FWHM) was 5.2 mm, 4.5 mm and 2.9 mm for FBP, OSEM and True-X; and 5.1 mm, 4.5 mm and 2.9 mm for FBP+TOF, OSEM+TOF and True-X+TOF, respectively. Assessment of reconstructed Jaszczak images at different concentration ratios showed that incorporation of TOF information improves cold contrast, and hot contrast only slightly; however, the most prominent improvement could be seen in background variability - noise reduction. Conclusions On the basis of the results of the investigation we concluded that incorporation of TOF information in the reconstruction algorithm mostly affects reduction of the background variability (levels of noise in the image), while the improvement of spatial resolution due to incorporation of TOF information is negligible. Comparison of traditional and modern reconstruction algorithms showed that analytical FBP yields comparable results in some parameter measurements, such as cold contrast and relative count error. Iterative methods show the highest levels of hot contrast when TOF and PSF corrections were applied

  8. A multiple time stepping algorithm for efficient multiscale modeling of platelets flowing in blood plasma

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Zhang, Na; Deng, Yuefan; Bluestein, Danny

    2015-03-01

    We developed a multiple time-stepping (MTS) algorithm for multiscale modeling of the dynamics of platelets flowing in viscous blood plasma. This MTS algorithm improves considerably the computational efficiency without significant loss of accuracy. This study of the dynamic properties of flowing platelets employs a combination of the dissipative particle dynamics (DPD) and the coarse-grained molecular dynamics (CGMD) methods to describe the dynamic microstructures of deformable platelets in response to extracellular flow-induced stresses. The disparate spatial scales between the two methods are handled by a hybrid force field interface. However, the disparity in temporal scales between the DPD and CGMD, which require time stepping at microseconds and nanoseconds respectively, represents a computational challenge that may become prohibitive. Classical MTS algorithms manage to improve computing efficiency by multi-stepping within DPD or CGMD for up to one order of magnitude of scale differential. In order to handle the 3-4 orders of magnitude disparity in the temporal scales between DPD and CGMD, we introduce a new MTS scheme hybridizing DPD and CGMD by utilizing four different time stepping sizes. We advance the fluid system at the largest time step, the fluid-platelet interface at a middle timestep size, and the nonbonded and bonded potentials of the platelet structural system at the two smallest timestep sizes. Additionally, we introduce parameters to study the relationship of accuracy versus computational complexity. The numerical experiments demonstrated a 3000x reduction in computing time over standard MTS methods for solving the multiscale model. This MTS algorithm establishes a computationally feasible approach for solving a particle-based system at multiple scales and performing efficient multiscale simulations.
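
    The control flow of one multiple time-stepping level can be sketched with the classic half-kick/subcycle/half-kick pattern below (Python). This shows two levels only, with generic force callables as placeholders; the paper's scheme nests four step sizes across the DPD and CGMD subsystems.

      def mts_step(state, slow_force, fast_force, dt_slow, n_sub):
          """Apply the slow (expensive) force at the large step and subcycle
          the fast (cheap, stiff) dynamics with dt_slow / n_sub."""
          dt_fast = dt_slow / n_sub
          state = slow_force(state, dt_slow / 2)    # opening half-kick
          for _ in range(n_sub):
              state = fast_force(state, dt_fast)    # subcycled fast motion
          state = slow_force(state, dt_slow / 2)    # closing half-kick
          return state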

  9. Improving Computational Efficiency of Model Predictive Control Genetic Algorithms for Real-Time Decision Support

    NASA Astrophysics Data System (ADS)

    Minsker, B. S.; Zimmer, A. L.; Ostfeld, A.; Schmidt, A.

    2014-12-01

    Enabling real-time decision support, particularly under conditions of uncertainty, requires computationally efficient algorithms that can rapidly generate recommendations. In this paper, a suite of model predictive control (MPC) genetic algorithms are developed and tested offline to explore their value for reducing CSOs during real-time use in a deep-tunnel sewer system. MPC approaches include the micro-GA, the probability-based compact GA, and domain-specific GA methods that reduce the number of decision variable values analyzed within the sewer hydraulic model, thus reducing the algorithm search space. The minimum fitness and constraint values achieved by all GA approaches, as well as the computational times required to reach those minima, are compared to those obtained with large population sizes and long convergence times. Optimization results for a subset of the Chicago combined sewer system indicate that genetic algorithm variations with coarse decision variable representation, eventually transitioning to the entire range of decision variable values, are most efficient at addressing the CSO control problem. Although diversity-enhancing micro-GAs evaluate a larger search space and exhibit shorter convergence times, these representations do not reach minimum fitness and constraint values. The domain-specific GAs prove to be the most efficient and are used to test CSO sensitivity to energy costs, CSO penalties, and pressurization constraint values. The results show that CSO volumes are highly dependent on the tunnel pressurization constraint, with reductions of 13% to 77% possible with less conservative operational strategies. Because current management practices may not account for varying costs at CSO locations and electricity rate changes in the summer and winter, the sensitivity of the results is evaluated for variable seasonal and diurnal CSO penalty costs and electricity-related system maintenance costs, as well as different sluice gate constraint levels. These findings indicate

  10. A Multiple Time Stepping Algorithm for Efficient Multiscale Modeling of Platelets Flowing in Blood Plasma

    PubMed Central

    Zhang, Peng; Zhang, Na; Deng, Yuefan; Bluestein, Danny

    2015-01-01

    We developed a multiple time-stepping (MTS) algorithm for multiscale modeling of the dynamics of platelets flowing in viscous blood plasma. This MTS algorithm improves considerably the computational efficiency without significant loss of accuracy. This study of the dynamic properties of flowing platelets employs a combination of the dissipative particle dynamics (DPD) and the coarse-grained molecular dynamics (CGMD) methods to describe the dynamic microstructures of deformable platelets in response to extracellular flow-induced stresses. The disparate spatial scales between the two methods are handled by a hybrid force field interface. However, the disparity in temporal scales between the DPD and CGMD, which require time stepping at microseconds and nanoseconds respectively, represents a computational challenge that may become prohibitive. Classical MTS algorithms manage to improve computing efficiency by multi-stepping within DPD or CGMD for up to one order of magnitude of scale differential. In order to handle the 3–4 orders of magnitude disparity in the temporal scales between DPD and CGMD, we introduce a new MTS scheme hybridizing DPD and CGMD by utilizing four different time stepping sizes. We advance the fluid system at the largest time step, the fluid-platelet interface at a middle timestep size, and the nonbonded and bonded potentials of the platelet structural system at the two smallest timestep sizes. Additionally, we introduce parameters to study the relationship of accuracy versus computational complexity. The numerical experiments demonstrated a 3000x reduction in computing time over standard MTS methods for solving the multiscale model. This MTS algorithm establishes a computationally feasible approach for solving a particle-based system at multiple scales and performing efficient multiscale simulations. PMID:25641983

  11. Adaptive time stepping algorithm for Lagrangian transport models: Theory and idealised test cases

    NASA Astrophysics Data System (ADS)

    Shah, Syed Hyder Ali Muttaqi; Heemink, Arnold Willem; Gräwe, Ulf; Deleersnijder, Eric

    2013-08-01

    Random walk simulations have an excellent potential in marine and oceanic modelling. This is essentially due to their relative simplicity and their ability to represent advective transport without being plagued by the deficiencies of the Eulerian methods. The physical and mathematical foundations of random walk modelling of turbulent diffusion have become solid over the years. Random walk models rest on the theory of stochastic differential equations. Unfortunately, the latter and the related numerical aspects have not attracted much attention in the oceanic modelling community. The main goal of this paper is to help bridge the gap by developing an efficient adaptive time stepping algorithm for random walk models. Its performance is examined on two idealised test cases of turbulent dispersion: (i) pycnocline crossing and (ii) non-flat isopycnal diffusion, which are inspired by shallow-sea dynamics and large-scale ocean transport processes, respectively. The numerical results of the adaptive time stepping algorithm are compared with those of the fixed-time-increment Milstein scheme, showing that the adaptive time stepping algorithm for Lagrangian random walk models is more efficient than its fixed step-size counterpart without any loss in accuracy.
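
    For concreteness, a fixed-step Milstein update for a scalar random-walk model, the baseline against which the adaptive scheme is compared, is sketched below in Python. The diffusivity profile and parameters are illustrative, and the deterministic drift correction needed for a physically consistent random walk (proportional to the diffusivity gradient) is not shown.

      import numpy as np

      def milstein_step(x, a, b, db_dx, dt, rng):
          """One Milstein step for dX = a(X) dt + b(X) dW; for random-walk
          transport, b(x) = sqrt(2 K(x)) with K the diffusivity."""
          dW = np.sqrt(dt) * rng.standard_normal()
          return (x + a(x) * dt + b(x) * dW
                  + 0.5 * b(x) * db_dx(x) * (dW ** 2 - dt))

      rng = np.random.default_rng(2)
      K  = lambda z: 1e-3 * (1.0 + z)            # depth-dependent diffusivity
      b  = lambda z: np.sqrt(2.0 * K(z))
      db = lambda z: 1e-3 / np.sqrt(2.0 * K(z))  # analytic db/dz
      z = 0.5
      for _ in range(100):
          z = milstein_step(z, lambda z: 0.0, b, db, 0.1, rng)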

  12. The Semi-implicit Time-stepping Algorithm in MH4D

    NASA Astrophysics Data System (ADS)

    Vadlamani, Srinath; Shumlak, Uri; Marklin, George; Meier, Eric; Lionello, Roberto

    2006-10-01

    The Plasma Science and Innovation Center (PSI Center) at the University of Washington is developing MHD codes to accurately model Emerging Concept (EC) devices. Examination of the semi-implicit time stepping algorithm implemented in the tetrahedral mesh MHD simulation code, MH4D, is presented. The time steps for standard explicit methods, which are constrained by the Courant-Friedrichs-Lewy (CFL) condition, are typically small for simulations of EC experiments due to the large Alfven speed. The CFL constraint is more severe with a tetrahedral mesh because of the irregular cell geometry. The semi-implicit algorithm [1] removes the fast waves constraint, thus allowing for larger time steps. We will present the implementation method of this algorithm, and numerical results for test problems in simple geometry. Also, we will present the effectiveness in simulations of complex geometry, similar to the ZaP [2] experiment at the University of Washington. References: [1] Douglas S. Harned and D. D. Schnack, Semi-implicit method for long time scale magnetohydrodynamic computations in three dimensions, JCP, Volume 65, Issue 1, July 1986, Pages 57-70. [2] U. Shumlak, B. A. Nelson, R. P. Golingo, S. L. Jackson, E. A. Crawford, and D. J. Den Hartog, Sheared flow stabilization experiments in the ZaP flow Z-pinch, Phys. Plasmas 10, 1683 (2003).
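
    For reference, the explicit-MHD stability bound that the semi-implicit treatment removes is the Alfven-wave CFL condition (C an order-unity Courant number); on a tetrahedral mesh it is the small minimum cell size that makes the bound severe:

      \Delta t \;\le\; C \, \frac{\Delta x_{\min}}{v_{A}}, \qquad
      v_{A} = \frac{B}{\sqrt{\mu_{0}\,\rho}}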

  13. Solving constrained minimum-time robot problems using the sequential gradient restoration algorithm

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.

    1991-01-01

    Three constrained minimum-time control problems of a two-link manipulator are solved using the Sequential Gradient and Restoration Algorithm (SGRA). The inequality constraints considered are reduced via Valentine-type transformations to nondifferential path equality constraints. The SGRA is then used to solve these transformed problems with equality constraints. The results obtained indicate that at least one of the two controls is at its limits at any instant in time. The remaining control then adjusts itself so that none of the system constraints is violated. Hence, the minimum-time control is either a pure bang-bang control or a combined bang-bang/singular control.

  14. Building a better leapfrog. [an algorithm for ensuring time symmetry in any integration scheme

    NASA Technical Reports Server (NTRS)

    Hut, Piet; Makino, Jun; Mcmillan, Steve

    1995-01-01

    In stellar dynamical computer simulations, as well as other types of simulations using particles, time step size is often held constant in order to guarantee a high degree of energy conservation. In many applications, allowing the time step size to change in time can offer a great saving in computational cost, but variable-size time steps usually imply a substantial degradation in energy conservation. We present a 'meta-algorithm' for choosing time steps in such a way as to guarantee time symmetry in any integration scheme, thus allowing vastly improved energy conservation for orbital calculations with variable time steps. We apply the algorithm to the familiar leapfrog scheme, and generalize to higher order integration schemes, showing how the stability properties of the fixed-step leapfrog scheme can be extended to higher order, variable-step integrators such as the Hermite method. We illustrate the remarkable properties of these time-symmetric integrators for the case of a highly eccentric elliptical Kepler orbit and discuss applications to more complex problems.
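
    A minimal sketch of the time-symmetric step-size idea, applied to kick-drift-kick leapfrog, is given below in Python. The fixed-point iteration count and the step-size criterion h_of are illustrative choices; the key point is that dt is forced to equal the average of the criterion evaluated at the start and end of the step.

      def symmetric_leapfrog_step(x, v, acc, h_of, n_iter=3):
          """acc: x -> acceleration; h_of: (x, v) -> desired local step size."""
          dt = h_of(x, v)
          for _ in range(n_iter):
              # Trial step with the current dt guess.
              v_half = v + 0.5 * dt * acc(x)
              x1 = x + dt * v_half
              v1 = v_half + 0.5 * dt * acc(x1)
              # Symmetrize: dt = average of the criterion at both endpoints.
              dt = 0.5 * (h_of(x, v) + h_of(x1, v1))
          # Final step with the (approximately) time-symmetric dt.
          v_half = v + 0.5 * dt * acc(x)
          x1 = x + dt * v_half
          v1 = v_half + 0.5 * dt * acc(x1)
          return x1, v1, dt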

  15. Fast Transformation of Temporal Plans for Efficient Execution

    NASA Technical Reports Server (NTRS)

    Tsamardinos, Ioannis; Muscettola, Nicola; Morris, Paul

    1998-01-01

    Temporal plans permit significant flexibility in specifying the occurrence time of events. Plan execution can make good use of that flexibility. However, the advantage of execution flexibility is counterbalanced by the cost during execution of propagating the time of occurrence of events throughout the flexible plan. To minimize execution latency, this propagation needs to be very efficient. Previous work showed that every temporal plan can be reformulated as a dispatchable plan, i.e., one for which propagation to immediate neighbors is sufficient. A simple algorithm was given that finds a dispatchable plan with a minimum number of edges in cubic time and quadratic space. In this paper, we focus on the efficiency of the reformulation process, and improve on that result. A new algorithm is presented that uses linear space and has time complexity equivalent to Johnson's algorithm for all-pairs shortest-path problems. Experimental evidence confirms the practical effectiveness of the new algorithm. For example, on a large commercial application, the performance is improved by at least two orders of magnitude. We further show that the dispatchable plan, already minimal in the total number of edges, can also be made minimal in the maximum number of edges incoming or outgoing at any node.

  16. A simplification of the backpropagation-through-time algorithm for optimal neurocontrol.

    PubMed

    Bersini, H; Gorrini, V

    1997-01-01

    Backpropagation-through-time (BPTT) is the temporal extension of backpropagation which allows a multilayer neural network to approximate an optimal state-feedback control law provided some prior knowledge (Jacobian matrices) of the process is available. In this paper, a simplified version of the BPTT algorithm is proposed which more closely respects the principle of optimality of dynamic programming. Besides being simpler, the new algorithm is less time-consuming and allows in some cases the discovery of better control laws. A formal justification of this simplification is attempted by mixing the Lagrangian calculus underlying BPTT with Bellman-Hamilton-Jacobi equations. The improvements due to this simplification are illustrated by two optimal control problems: the rendezvous and the bioreactor. PMID:18255645

  17. Real-time low-energy fall detection algorithm with a programmable truncated MAC.

    PubMed

    de la Guia Solaz, Manuel; Bourke, Alan; Conway, Richard; Nelson, John; Olaighin, Gearoid

    2010-01-01

    The ability to discriminate between falls and activities of daily living (ADL) has been investigated using tri-axial accelerometer sensors mounted on the trunk, with simulated falls performed by young healthy subjects under supervised conditions and ADL performed by elderly subjects. In this paper we propose a power-aware real-time fall detection integrated circuit (IC) that can distinguish falls from ADL by processing the accelerations measured during 240 falls and 240 ADL. In the proposed fixed-point custom DSP architecture, a threshold algorithm was implemented to analyze the effectiveness of programmable truncated multiplication with regard to power reduction while maintaining high output accuracy. The presented system runs a real-time implementation of the algorithm on a low-power architecture that allows up to 23% power savings through its digital blocks when compared to a standard implementation, without any accuracy loss. PMID:21095956
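
    A toy two-threshold detector of the general kind described is sketched below in Python: a free-fall dip in the acceleration magnitude followed shortly by an impact peak. The thresholds and window are illustrative placeholders, not the paper's tuned values, and the truncated-MAC fixed-point arithmetic is not modeled.

      import numpy as np

      def detect_fall(acc_mag, fs, free_fall_g=0.6, impact_g=2.8, max_gap_s=0.5):
          """acc_mag: acceleration magnitude in g; fs: sample rate in Hz."""
          dips  = np.flatnonzero(acc_mag < free_fall_g)   # free-fall candidates
          peaks = np.flatnonzero(acc_mag > impact_g)      # impact candidates
          gap = int(max_gap_s * fs)
          return any(0 < p - d <= gap for d in dips for p in peaks)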

  18. Real-time noise mitigation algorithms for space and nuclear radiation environments

    NASA Astrophysics Data System (ADS)

    Redmond, Neal J.; Hill, Janeil; Lowell, Robert; Byers, Wheaton; Retzler, John P.; Andrews, Allen R.; Mackin, Paul R.

    1997-10-01

    This paper addresses small targets and signal processing from the perspective of rejecting radiation noise spikes. Nuclear and space radiation create noise spikes inside infrared detectors, causing an overwhelming number of false alarms if steps are not taken to mitigate them. Traditional radiation device/circuit hardening methods are effective, but they must be reapplied to each new technology, forcing special design point solutions and parts that are increasingly economically nonviable. Real-time noise mitigation algorithms represent a general hardening solution and have been demonstrated for both interceptor seeker and space surveillance sensor applications. A new combined HWIL/radiation synthetic test environment has been developed that enables real-time algorithm evaluation over the total system performance envelope, under flight motion simulation and fully dynamic optical sensor scene stimulation. This work was sponsored by the Defense Special Weapons Agency.

  19. New Algorithms for Computing the Time-to-Collision in Freeway Traffic Simulation Models

    PubMed Central

    Hou, Jia; List, George F.; Guo, Xiucheng

    2014-01-01

    Ways to estimate the time-to-collision are explored. In the context of traffic simulation models, classical lane-based notions of vehicle location are relaxed and new, fast, and efficient algorithms are examined. With trajectory conflicts being the main focus, computational procedures are explored which use a two-dimensional coordinate system to track the vehicle trajectories and assess conflicts. Vector-based kinematic variables are used to support the calculations. Algorithms based on boxes, circles, and ellipses are considered. Their performance is evaluated in the context of computational complexity and solution time. Results from these analyses suggest promise for effective and efficient analyses. A combined computation process is found to be very effective. PMID:25628650
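
    The circle-based variant reduces to finding the smallest nonnegative root of a quadratic in the relative motion, as sketched below in Python under a straight-line constant-velocity assumption.

      import numpy as np

      def time_to_collision(p1, v1, r1, p2, v2, r2):
          """Solve |dp + dv*t| = r1 + r2 for the earliest t >= 0;
          returns np.inf if the circles never touch."""
          dp = np.asarray(p2, float) - np.asarray(p1, float)
          dv = np.asarray(v2, float) - np.asarray(v1, float)
          R = r1 + r2
          a, b, c = dv @ dv, 2.0 * (dp @ dv), dp @ dp - R * R
          if c <= 0:                        # already overlapping
              return 0.0
          disc = b * b - 4.0 * a * c
          if a == 0.0 or disc < 0.0:        # no relative motion, or closest pass > R
              return np.inf
          t = (-b - np.sqrt(disc)) / (2.0 * a)
          return t if t >= 0.0 else np.inf

      # Head-on approach: 20 m apart, 10 m/s closing speed, 1 m radii -> 1.8 s.
      print(time_to_collision([0, 0], [5, 0], 1.0, [20, 0], [-5, 0], 1.0))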

  20. Demodulation Algorithms for the Ofdm Signals in the Time- and Frequency-Scattering Channels

    NASA Astrophysics Data System (ADS)

    Bochkov, G. N.; Gorokhov, K. V.; Kolobkov, A. V.

    2016-06-01

    We consider a method based on the generalized maximum-likelihood rule for solving the problem of reception of the signals with orthogonal frequency division multiplexing of their harmonic components (OFDM signals) in the time- and frequency-scattering channels. The coherent and incoherent demodulators effectively using the time scattering due to the fast fading of the signal are developed. Using computer simulation, we performed comparative analysis of the proposed algorithms and well-known signal-reception algorithms with equalizers. The proposed symbol-by-symbol detector with decision feedback and restriction of the number of searched variants is shown to have the best bit-error-rate performance. It is shown that under conditions of the limited accuracy of estimating the communication-channel parameters, the incoherent OFDM-signal detectors with differential phase-shift keying can ensure a better bit-error-rate performance compared with the coherent OFDM-signal detectors with absolute phase-shift keying.

  1. Representation independent algorithms for molecular response calculations in time-dependent self-consistent field theories

    NASA Astrophysics Data System (ADS)

    Tretiak, Sergei; Isborn, Christine M.; Niklasson, Anders M. N.; Challacombe, Matt

    2009-02-01

    Four different numerical algorithms suitable for a linear scaling implementation of time-dependent Hartree-Fock and Kohn-Sham self-consistent field theories are examined. We compare the performance of modified Lanczos, Arnoldi, Davidson, and Rayleigh quotient iterative procedures to solve the random-phase approximation (RPA) (non-Hermitian) and Tamm-Dancoff approximation (TDA) (Hermitian) eigenvalue equations in the molecular orbital-free framework. Semiempirical Hamiltonian models are used to numerically benchmark algorithms for the computation of excited states of realistic molecular systems (conjugated polymers and carbon nanotubes). Convergence behavior and stability are tested with respect to a numerical noise imposed to simulate linear scaling conditions. The results single out the most suitable procedures for linear scaling large-scale time-dependent perturbation theory calculations of electronic excitations.

  2. Representation independent algorithms for molecular response calculations in time-dependent self-consistent field theories

    SciTech Connect

    Tretiak, Sergei

    2008-01-01

    Four different numerical algorithms suitable for a linear scaling implementation of time-dependent Hartree-Fock and Kohn-Sham self-consistent field theories are examined. We compare the performance of modified Lanczos, Arnoldi, Davidson, and Rayleigh quotient iterative procedures to solve the random-phase approximation (RPA) (non-Hermitian) and Tamm-Dancoff approximation (TDA) (Hermitian) eigenvalue equations in the molecular orbital-free framework. Semiempirical Hamiltonian models are used to numerically benchmark algorithms for the computation of excited states of realistic molecular systems (conjugated polymers and carbon nanotubes). Convergence behavior and stability are tested with respect to a numerical noise imposed to simulate linear scaling conditions. The results single out the most suitable procedures for linear scaling large-scale time-dependent perturbation theory calculations of electronic excitations.

  3. An Adaptive Algorithm for Detection of Onset Times of Low Amplitude Seismic Phases Based on Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Gravirov, V. V.; Kislov, K. V.; Ovchinnikova, T.

    2010-12-01

    A very important task in the detection of onset times of low-amplitude seismic phases is to identify the type of seismic source, i.e., the problem of seismic signal classification. The problem consists in using the seismogram to find the cause of the recorded event, that is, to detect an earthquake in natural noise. The ultimate goal of processing is to measure the characteristics of a useful signal in a situation where the seismogram is a complicated superposition of very different types of wave motion. The very process of obtaining these characteristics can be viewed as a mathematical problem in its own right. The process is based on a search for patterns that connect the original signal to the physical parameters listed above, as well as on formulating these patterns as efficient computational techniques. Unlike the Fourier transform, the wavelet transform provides a 2D representation of the signal under study, frequency and time being treated as independent variables. As a result, we are able to examine the properties of the signal in a physical space (the time) and a scale space (the frequency). The detection of events in noise can successfully be dealt with by neural networks. The algorithm in question is designed for the fastest real-time detection of a sudden change in the properties of a process as more information becomes available. The problem is formulated so that the onset of low-amplitude seismic phases is to be automatically identified during a time interval no longer than four seconds. The algorithm is based on the continuous wavelet transform and a neural network. It is an adaptive algorithm, since it incorporates time-dependent individual characteristics of the time series of interest. This study was based on a data base of seismic signals consisting of more than 120 sample earthquakes and natural noise. Different wavelet types have been tried during the debugging of the algorithm: Haar, Daubechies of different orders, Symlet of different orders, Meyer

  4. The Time Series Technique for Aerosol Retrievals over Land from MODIS: Algorithm MAIAC

    NASA Technical Reports Server (NTRS)

    Lyapustin, Alexei; Wang, Yujie

    2008-01-01

    2.1 μm channel (B7) for the purpose of aerosol retrieval. Obviously, the subsequent atmospheric correction will produce the same SR in the red and blue bands as predicted, i.e., an empirical function of the 2.1 μm reflectance. In other words, the spectral, spatial, and temporal variability of surface reflectance in the blue and red bands appears borrowed from band B7. This may have certain implications for vegetation and global carbon analysis, because the chlorophyll-sensing bands B1 and B3 are effectively substituted, in terms of variability, by band B7, which is sensitive to plant liquid water. This chapter describes a new recently developed generic aerosol-surface retrieval algorithm for MODIS. The Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm simultaneously retrieves AOT and the surface bi-directional reflectance factor (BRF) using the time series of MODIS measurements.

  5. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1984-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  6. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1982-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  7. A real-time simulation evaluation of an advanced detection, isolation and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  8. A real-time simulation evaluation of an advanced detection, isolation and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  9. Development of a real-time model based safety monitoring algorithm for the SSME

    NASA Astrophysics Data System (ADS)

    Norman, A. M.; Maram, J.; Coleman, P.; D'Valentine, M.; Steffens, A.

    1992-07-01

    A safety monitoring system for the SSME incorporating a real time model of the engine has been developed for LeRC as a task of the LeRC Life Prediction for Rocket Engines contract, NAS3-25884. This paper describes the development of the algorithm and model to date, their capabilities and limitations, results of simulation tests, lessons learned, and the plans for implementation and test of the system.

  10. Novel real-time volumetric tool segmentation algorithm for intraoperative microscope integrated OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Viehland, Christian; Keller, Brenton; Carrasco-Zevallos, Oscar; Cunefare, David; Shen, Liangbo; Toth, Cynthia; Farsiu, Sina; Izatt, Joseph A.

    2016-03-01

    Optical coherence tomography (OCT) allows for micron-scale imaging of the human retina and cornea. Current-generation research and commercial intrasurgical OCT prototypes are limited to live B-scan imaging. Our group has developed an intraoperative microscope-integrated OCT system capable of live 4D imaging. With a heads-up display (HUD), 4D imaging allows for dynamic intrasurgical visualization of tool-tissue interaction and surgical maneuvers. Currently our system relies on operator-based manual tracking to correct for patient motion and motion caused by the surgeon, to track the surgical tool, and to select the correct B-scan to display on the HUD. Even when tracking only bulk motion, the operator sometimes lags behind and the surgical region of interest can drift out of the OCT field of view. To facilitate imaging, we report on the development of a fast volume-based tool segmentation algorithm. The algorithm is based on a previously reported volume rendering algorithm and can identify both the tool and the retinal surface. The algorithm requires 45 ms per volume for segmentation and can be used to actively place the B-scan across the tool-tissue interface. Alternatively, real-time tool segmentation can be used to allow the surgeon to use the surgical tool as an interactive B-scan pointer.

  11. Some Fractal Dimension Algorithms and Their Application to Time Series Associated with the Dst Geomagnetic Index

    NASA Astrophysics Data System (ADS)

    Cervantes, F.; Gonzalez, J.; Real, C.; Hoyos, L.

    2012-12-01

    Chaotic invariants like fractal dimensions are used to characterize non-linear time series. The fractal dimension is an important characteristic of fractals that contains information about their geometrical structure at multiple scales. In this work, four fractal dimension estimation algorithms are applied to non-linear time series: the Higuchi algorithm, the Petrosian algorithm, the Katz algorithm, and the box-counting method. The analyzed time series are associated with a natural phenomenon, the Dst geomagnetic index, which monitors worldwide magnetic storms; the Dst index is a global indicator of the state of the Earth's geomagnetic activity. The time series used in this work show self-similar behavior that depends on the time scale of the measurements. It is also observed that fractal dimensions may not be constant over all time scales.
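
    Of the four estimators named, Higuchi's is compact enough to sketch in full (Python; kmax is a user choice, and the dimension is read off as the slope of log curve length versus log(1/k)).

      import numpy as np

      def higuchi_fd(x, kmax=10):
          """Higuchi fractal dimension of a 1-D series."""
          x = np.asarray(x, float)
          N = len(x)
          L = []
          for k in range(1, kmax + 1):
              Lk = []
              for m in range(k):                    # offset m = 0..k-1
                  n = (N - 1 - m) // k              # number of increments
                  if n < 1:
                      continue
                  idx = m + np.arange(n + 1) * k
                  Lk.append(np.abs(np.diff(x[idx])).sum() * (N - 1) / (n * k * k))
              L.append(np.mean(Lk))
          k = np.arange(1, kmax + 1)
          slope, _ = np.polyfit(np.log(1.0 / k), np.log(L), 1)
          return slope

      # Sanity check: white noise should come out near 2, a smooth line near 1.
      print(higuchi_fd(np.random.default_rng(3).standard_normal(1000)))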

  12. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  13. A new algorithm for segmentation of cardiac quiescent phases and cardiac time intervals using seismocardiography

    NASA Astrophysics Data System (ADS)

    Jafari Tadi, Mojtaba; Koivisto, Tero; Pänkäälä, Mikko; Paasio, Ari; Knuutila, Timo; Teräs, Mika; Hänninen, Pekka

    2015-03-01

    Systolic time intervals (STI) have significant diagnostic value for a clinical assessment of the left ventricle in adults. This study was conducted to explore the feasibility of using seismocardiography (SCG) to measure the systolic timings of the cardiac cycle accurately. An algorithm was developed for the automatic localization of the cardiac events (e.g. the opening and closing moments of the aortic and mitral valves). Synchronously acquired SCG and electrocardiography (ECG) enabled an accurate beat-to-beat estimation of the electromechanical systole (QS2), pre-ejection period (PEP) index, and left ventricular ejection time (LVET) index. The performance of the algorithm was evaluated on a healthy test group with no evidence of cardiovascular disease (CVD). STI values were corrected based on Weissler's regression method in order to assess the correlation between the heart rate and STIs. One can see from the results that STIs correlate poorly with the heart rate (HR) in this test group. An algorithm was developed to visualize the quiescent phases of the cardiac cycle. A color map displaying the magnitude of SCG accelerations for multiple heartbeats visualizes the average cardiac motions and thereby helps to identify quiescent phases. A high correlation between the heart rate and the duration of the cardiac quiescent phases was observed.
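
    Once the valve events are localized, the intervals themselves are simple differences of the event times, as in the Python sketch below; the event detection itself, which is the part the algorithm automates, is assumed already done here.

      def systolic_intervals(q_onset, aortic_open, aortic_close):
          """Beat-wise systolic time intervals from annotated event times (s):
          ECG Q-wave onset and SCG-derived aortic valve opening/closing."""
          pep  = aortic_open - q_onset        # pre-ejection period
          lvet = aortic_close - aortic_open   # left ventricular ejection time
          qs2  = aortic_close - q_onset       # electromechanical systole (= PEP + LVET)
          return pep, lvet, qs2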

  14. Speedup properties of phases in the execution profile of distributed parallel programs

    SciTech Connect

    Carlson, B.M.; Wagner, T.D.; Dowdy, L.W.; Worley, P.H.

    1992-08-01

    The execution profile of a distributed-memory parallel program specifies the number of busy processors as a function of time. Periods of homogeneous processor utilization are manifested in many execution profiles. These periods can usually be correlated with the algorithms implemented in the underlying parallel code. Three families of methods for smoothing execution profile data are presented. These approaches simplify the problem of detecting end points of periods of homogeneous utilization. These periods, called phases, are then examined in isolation, and their speedup characteristics are explored. A specific workload executed on an Intel iPSC/860 is used for validation of the techniques described.

  15. Reducing acquisition times in multidimensional NMR with a time-optimized Fourier encoding algorithm.

    PubMed

    Zhang, Zhiyong; Smith, Pieter E S; Frydman, Lucio

    2014-11-21

    Speeding up the acquisition of multidimensional nuclear magnetic resonance (NMR) spectra is an important topic in contemporary NMR, with central roles in high-throughput investigations and analyses of marginally stable samples. A variety of fast NMR techniques have been developed, including methods based on non-uniform sampling and Hadamard encoding, that overcome the long sampling times inherent to schemes based on fast-Fourier-transform (FFT) methods. Here, we explore the potential of an alternative fast acquisition method that leverages a priori knowledge, to tailor polychromatic pulses and customized time delays for an efficient Fourier encoding of the indirect domain of an NMR experiment. By porting the encoding of the indirect-domain to the excitation process, this strategy avoids potential artifacts associated with non-uniform sampling schemes and uses a minimum number of scans equal to the number of resonances present in the indirect dimension. An added convenience is afforded by the fact that a usual 2D FFT can be used to process the generated data. Acquisitions of 2D heteronuclear correlation NMR spectra on quinine and on the anti-inflammatory drug isobutyl propionic phenolic acid illustrate the new method's performance. This method can be readily automated to deal with complex samples such as those occurring in metabolomics, in in-cell as well as in in vivo NMR applications, where speed and temporal stability are often primary concerns. PMID:25416883

  16. Reducing acquisition times in multidimensional NMR with a time-optimized Fourier encoding algorithm

    PubMed Central

    Zhang, Zhiyong; Frydman, Lucio

    2014-01-01

    Speeding up the acquisition of multidimensional nuclear magnetic resonance (NMR) spectra is an important topic in contemporary NMR, with central roles in high-throughput investigations and analyses of marginally stable samples. A variety of fast NMR techniques have been developed, including methods based on non-uniform sampling and Hadamard encoding, that overcome the long sampling times inherent to schemes based on fast-Fourier-transform (FFT) methods. Here, we explore the potential of an alternative fast acquisition method that leverages a priori knowledge, to tailor polychromatic pulses and customized time delays for an efficient Fourier encoding of the indirect domain of an NMR experiment. By porting the encoding of the indirect-domain to the excitation process, this strategy avoids potential artifacts associated with non-uniform sampling schemes and uses a minimum number of scans equal to the number of resonances present in the indirect dimension. An added convenience is afforded by the fact that a usual 2D FFT can be used to process the generated data. Acquisitions of 2D heteronuclear correlation NMR spectra on quinine and on the anti-inflammatory drug isobutyl propionic phenolic acid illustrate the new method's performance. This method can be readily automated to deal with complex samples such as those occurring in metabolomics, in in-cell as well as in in vivo NMR applications, where speed and temporal stability are often primary concerns. PMID:25416883

  17. Reducing acquisition times in multidimensional NMR with a time-optimized Fourier encoding algorithm

    SciTech Connect

    Zhang, Zhiyong; Smith, Pieter E. S.; Frydman, Lucio

    2014-11-21

    Speeding up the acquisition of multidimensional nuclear magnetic resonance (NMR) spectra is an important topic in contemporary NMR, with central roles in high-throughput investigations and analyses of marginally stable samples. A variety of fast NMR techniques have been developed, including methods based on non-uniform sampling and Hadamard encoding, that overcome the long sampling times inherent to schemes based on fast-Fourier-transform (FFT) methods. Here, we explore the potential of an alternative fast acquisition method that leverages a priori knowledge, to tailor polychromatic pulses and customized time delays for an efficient Fourier encoding of the indirect domain of an NMR experiment. By porting the encoding of the indirect-domain to the excitation process, this strategy avoids potential artifacts associated with non-uniform sampling schemes and uses a minimum number of scans equal to the number of resonances present in the indirect dimension. An added convenience is afforded by the fact that a usual 2D FFT can be used to process the generated data. Acquisitions of 2D heteronuclear correlation NMR spectra on quinine and on the anti-inflammatory drug isobutyl propionic phenolic acid illustrate the new method's performance. This method can be readily automated to deal with complex samples such as those occurring in metabolomics, in in-cell as well as in in vivo NMR applications, where speed and temporal stability are often primary concerns.
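
    The a priori encoding principle described above can be illustrated with a toy linear-algebra model: if the indirect dimension is known to contain N resonances, N suitably phased scans determine the N amplitudes. This is only a sketch of the counting argument, with hypothetical frequencies and delays, not the published pulse-design procedure:

    ```python
    import numpy as np

    # If the indirect dimension contains only known frequencies f_j, then N
    # scans acquiring phases exp(2*pi*i*f_j*t_k) suffice to recover the N
    # complex amplitudes by solving a small linear system.
    freqs = np.array([120.0, 450.0, 800.0])           # known resonances (Hz), hypothetical
    t_enc = np.array([0.0, 0.4e-3, 0.9e-3])           # one encoding delay per scan
    E = np.exp(2j * np.pi * np.outer(t_enc, freqs))   # N x N encoding matrix

    true_amps = np.array([1.0, 0.5, 0.25])            # simulated amplitudes
    signals = E @ true_amps                           # one data point per scan
    recovered = np.linalg.solve(E, signals)
    print(np.round(recovered.real, 3))                # -> [1.0, 0.5, 0.25]
    ```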

  18. Development of a new time domain-based algorithm for train detection and axle counting

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Meli, E.; Pugi, L.

    2015-12-01

    This paper presents an innovative train detection algorithm, able to localise the train and, at the same time, to estimate its speed, its crossing times at a fixed point of the track and its axle count. The proposed solution uses the same approach to evaluate all these quantities, starting from the knowledge of generic track inputs directly measured on the track (for example, the vertical forces on the sleepers, the rail deformation and the rail stress). More particularly, all the inputs are processed through cross-correlation operations to extract the required information in terms of speed, crossing time instants and axle count. This approach has the advantage of being simpler and less invasive than the standard ones (it requires less equipment) and represents a more reliable and robust solution against numerical noise because it exploits the whole shape of the input signal and not only the peak values. A suitable and accurate multibody model of a railway vehicle and flexible track has also been developed by the authors to test the algorithm when experimental data are not available and, in general, under any operating conditions (fundamental for verifying the algorithm's accuracy and robustness). The railway vehicle chosen as benchmark is the Manchester Wagon, modelled in the Adams VI-Rail environment. The physical model of the flexible track has been implemented in the Matlab and Comsol Multiphysics environments. A simulation campaign has been performed to verify the performance and the robustness of the proposed algorithm, and the results are quite promising. The research has been carried out in cooperation with Ansaldo STS and ECM Spa.
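
    A minimal sketch of the cross-correlation idea, assuming two track sensors a known distance apart: the peak lag of their cross-correlation gives the transit time and hence the speed (signals and parameters below are synthetic):

    ```python
    import numpy as np

    def train_speed(sig_a, sig_b, sensor_spacing_m, fs_hz):
        """Estimate train speed from two track sensors a known distance apart,
        using the lag that maximizes their cross-correlation."""
        a = np.asarray(sig_a, dtype=float) - np.mean(sig_a)
        b = np.asarray(sig_b, dtype=float) - np.mean(sig_b)
        xcorr = np.correlate(b, a, mode="full")
        lag = np.argmax(xcorr) - (len(a) - 1)   # samples by which b trails a
        return sensor_spacing_m * fs_hz / lag

    # Synthetic test: an axle pulse reaches sensor B 50 samples after sensor A.
    pulse = np.exp(-0.5 * ((np.arange(200) - 100) / 10.0) ** 2)
    sig_a = np.concatenate([pulse, np.zeros(300)])
    sig_b = np.concatenate([np.zeros(50), pulse, np.zeros(250)])
    print(train_speed(sig_a, sig_b, sensor_spacing_m=1.0, fs_hz=1000.0))  # ~20 m/s
    ```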

  19. DynPeak: An Algorithm for Pulse Detection and Frequency Analysis in Hormonal Time Series

    PubMed Central

    Vidal, Alexandre; Zhang, Qinghua; Médigue, Claire; Fabre, Stéphane; Clément, Frédérique

    2012-01-01

    The endocrine control of the reproductive function is often studied from the analysis of luteinizing hormone (LH) pulsatile secretion by the pituitary gland. Whereas measurements in the cavernous sinus cumulate anatomical and technical difficulties, LH levels can be easily assessed from jugular blood. However, plasma levels result from a convolution process due to clearance effects when LH enters the general circulation. Simultaneous measurements comparing LH levels in the cavernous sinus and jugular blood have revealed clear differences in the pulse shape, the amplitude and the baseline. Besides, experimental sampling occurs at a relatively low frequency (typically every 10 min) with respect to the highest-frequency LH release (one pulse per hour), and the resulting LH measurements are corrupted by both experimental and assay errors. As a result, the pattern of plasma LH may not be clearly pulsatile. Yet, reliable information on the InterPulse Intervals (IPI) is a prerequisite to precisely study the steroid feedback exerted at the pituitary level. Hence, there is a real need for robust IPI detection algorithms. In this article, we present an algorithm for the monitoring of LH pulse frequency, building both on the available endocrinological knowledge of LH pulses (shape and duration with respect to the frequency regime) and on synthetic LH data generated by a simple model. We make use of synthetic data to clarify some basic notions underlying our algorithmic choices. We focus on explaining how the process of sampling drastically affects the original pattern of secretion, and especially the amplitude of the detectable pulses. We then describe the algorithm in detail and apply it to different sets of both synthetic and experimental LH time series. We further comment on how to diagnose possible outliers from the series of IPIs, which is the main output of the algorithm. PMID:22802933
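
    For illustration, here is a minimal peak-based IPI extractor on synthetic LH data sampled every 10 min. This is not the DynPeak procedure itself (which additionally adapts to the frequency regime and pulse shape), just the basic detection-then-interval step that it refines:

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def interpulse_intervals(t, lh, min_rise=0.5, min_separation_min=30):
        """Detect LH pulses as local maxima rising sufficiently above the
        surrounding baseline and return the series of interpulse intervals."""
        dt = t[1] - t[0]                                   # sampling step (min)
        distance = max(1, int(min_separation_min / dt))
        peaks, _ = find_peaks(lh, prominence=min_rise, distance=distance)
        return np.diff(t[peaks]), t[peaks]

    # Synthetic series sampled every 10 min: pulses every ~60 min plus noise.
    rng = np.random.default_rng(1)
    t = np.arange(0, 600, 10.0)
    lh = np.zeros_like(t)
    for pulse_time in np.arange(30, 600, 60):
        lh += 2.0 * np.exp(-(t - pulse_time) / 25.0) * (t >= pulse_time)
    lh += rng.normal(0, 0.1, t.size)
    ipi, pulse_times = interpulse_intervals(t, lh)
    print(np.round(ipi))   # intervals close to 60 min
    ```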

  20. Parallel implementation of the time-evolving block decimation algorithm for the Bose-Hubbard model

    NASA Astrophysics Data System (ADS)

    Urbanek, Miroslav; Soldán, Pavel

    2016-02-01

    A system of ultracold atoms in an optical lattice represents a powerful experimental setup for testing the fundamentals of quantum mechanics. While its microscopic interaction mechanisms are well understood, the system behavior for a moderate number of particles is difficult to simulate due to a high dimension of its many-body space. This article presents TEBDOL, a parallel implementation of the time-evolving block decimation (TEBD) algorithm that can efficiently simulate time evolution of a one-dimensional chain of atoms in optical lattices. We investigate the parallelization strategy and the strong and weak scaling with the number of processes.

  1. On the effect of timing errors in run length codes. [redundancy removal algorithms for digital channels

    NASA Technical Reports Server (NTRS)

    Wilkins, L. C.; Wintz, P. A.

    1975-01-01

    Many redundancy removal algorithms employ some sort of run length code. Blocks of timing words are coded with synchronization words inserted between blocks. The probability of incorrectly reconstructing a sample because of a channel error in the timing data is a monotonically nondecreasing function of time since the last synchronization word. In this paper we compute the 'probability that the accumulated magnitude of timing errors equals zero' as a function of time since the last synchronization word for a zero-order predictor (ZOP). The result is valid for any data source that can be modeled by a first-order Markov chain and any digital channel that can be modeled by a channel transition matrix. An example is presented.
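
    The flavor of the computation can be sketched with a simplified error model: assume each timing word is decoded with a small additive error and track the distribution of the accumulated error by repeated convolution. This toy model replaces the paper's first-order Markov chain and channel transition matrix with fixed, hypothetical per-word probabilities:

    ```python
    import numpy as np

    # Per-word timing error of -1, 0, or +1 samples with fixed probabilities.
    # The accumulated error after k words is the k-fold convolution of this
    # distribution, and P(accumulated error = 0) decays with time since the
    # last synchronization word.
    single = {-1: 0.01, 0: 0.98, +1: 0.01}

    offsets = np.array(sorted(single))
    probs = np.array([single[o] for o in offsets])

    dist = np.array([1.0])          # start: error 0 with certainty
    origin = 0                      # index of error value 0 within `dist`
    for k in range(1, 11):
        dist = np.convolve(dist, probs)
        origin += 1                 # each convolution shifts the zero index by 1
        print(k, round(dist[origin], 6))
    ```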

  2. Retiring the central executive.

    PubMed

    Logie, Robert H

    2016-10-01

    Reasoning, problem solving, comprehension, learning and retrieval, inhibition, switching, updating, or multitasking are often referred to as higher cognition, thought to require control processes or the use of a central executive. However, the concept of an executive controller begs the question of what is controlling the controller and so on, leading to an infinite hierarchy of executives or "homunculi". In what is now a QJEP citation classic, Baddeley [Baddeley, A. D. (1996). Exploring the central executive. Quarterly Journal of Experimental Psychology, 49A, 5-28] referred to the concept of a central executive in cognition as a "conceptual ragbag" that acted as a placeholder umbrella term for aspects of cognition that are complex, were poorly understood at the time, and most likely involve several different cognitive functions working in concert. He suggested that with systematic empirical research, advances in understanding might progress sufficiently to allow the executive concept to be "sacked". This article offers an overview of the 1996 article and of some subsequent systematic research and argues that after two decades of research, there is sufficient advance in understanding to suggest that executive control might arise from the interaction among multiple different functions in cognition that use different, but overlapping, brain networks. The article concludes that the central executive concept might now be offered a dignified retirement. PMID:26821744

  3. Smart sensing to drive real-time loads scheduling algorithm in a domotic architecture

    NASA Astrophysics Data System (ADS)

    Santamaria, Amilcare Francesco; Raimondo, Pierfrancesco; De Rango, Floriano; Vaccaro, Andrea

    2014-05-01

    Nowadays, power consumption is a very important factor with regard to cost reduction and environmental sustainability. Automatic load control based on power consumption and usage cycles represents an optimal solution for restraining costs. The purpose of such systems is to modulate the electricity demand, avoiding unorganized operation of the loads, by using intelligent techniques to manage them based on real-time scheduling algorithms. The goal is to coordinate a set of electrical loads to optimize energy costs and consumption based on the stipulated contract terms. The proposed algorithm uses two new notions: priority-driven loads and smart-scheduling loads. Priority-driven loads can be turned off (put on standby) according to a priority policy established by the user if consumption exceeds a defined threshold; smart-scheduling loads, on the contrary, are scheduled so as not to interrupt their life cycle (LC), safeguarding the devices' functions and allowing the user to operate the devices freely without the risk of exceeding the power threshold. Using these two notions and taking user requirements into account, the algorithm manages load activation and deactivation, allowing loads to complete their operation cycles without exceeding the consumption threshold, in an off-peak time range according to the electricity fare. This logic is inspired by industrial lean manufacturing, whose focus is to minimize any kind of power waste by optimizing the available resources.
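
    A minimal sketch of the two notions, assuming a fixed contracted threshold and a static load table (all names and figures are hypothetical): priority-driven loads are shed lowest-priority-first, while smart-scheduling loads are never interrupted mid-cycle:

    ```python
    THRESHOLD_W = 3000

    loads = [  # (name, power_W, priority: lower sheds first, interruptible)
        ("water_heater", 1500, 1, True),
        ("hvac",         1200, 2, True),
        ("washer",        800, 3, False),   # smart-scheduling: cycle must finish
        ("oven",         2000, 4, True),
    ]

    def shed(active):
        """Put interruptible loads on standby, lowest priority first, until
        the total consumption falls back under the threshold."""
        total = sum(p for _, p, _, _ in active)
        for name, power, _, interruptible in sorted(active, key=lambda l: l[2]):
            if total <= THRESHOLD_W:
                break
            if interruptible:
                active = [l for l in active if l[0] != name]
                total -= power
                print(f"standby: {name} (total now {total} W)")
        return active

    remaining = shed(loads)
    print("still running:", [name for name, *_ in remaining])
    ```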

  4. A time-split finite-volume algorithm for three-dimensional flow-field simulation

    NASA Technical Reports Server (NTRS)

    Hung, C. M.; Kordulla, W.

    1983-01-01

    A general finite-volume algorithm is developed for solving three-dimensional, time-dependent, compressible Navier-Stokes equations for high Reynolds number flows over an arbitrary geometry. This algorithm adapts MacCormack's (1982) explicit-implicit scheme to a time-split, three-dimensional finite-volume concept in a general coordinate system. It is shown that the thin-layer approximation in all three spatial directions significantly reduces the evaluation of viscous terms and allows the algorithm to solve more complicated geometries with all boundaries in two or all three directions. The calculated results using this method are found to be in good agreement with the experimental measurements of a blunt-fin induced shock wave and boundary-layer interaction problems. Observations of the existence of peak pressure, primary horseshoe and secondary vortices, and reversed supersonic zones show that computational fluid dynamics can effectively supplement the wind tunnel tests for aerodynamic design as well as for understanding basic fluid dynamics.

  5. Algorithm for removing scalp signals from functional near-infrared spectroscopy signals in real time using multidistance optodes

    NASA Astrophysics Data System (ADS)

    Kiguchi, Masashi; Funane, Tsukasa

    2014-11-01

    A real-time algorithm for removing scalp-blood signals from functional near-infrared spectroscopy signals is proposed. Scalp and deep signals have different dependencies on the source-detector distance, and this characteristic was used to separate them. The algorithm was validated through an experiment using a dynamic phantom in which shallow and deep absorptions were independently changed. The algorithm for measuring oxygenated and deoxygenated hemoglobin using two wavelengths was derived explicitly. This algorithm is potentially useful for real-time systems, e.g., brain-computer interfaces and neuro-feedback systems.
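
    A common way to exploit the distance dependence is short-separation regression: scale the short (scalp-dominated) channel to the long channel by least squares and subtract. The sketch below illustrates that step on synthetic data; the published algorithm instead derives the separation explicitly from the multidistance, two-wavelength measurements:

    ```python
    import numpy as np

    def remove_scalp(long_ch, short_ch):
        """Scale the short (scalp-dominated) channel to the long channel by
        least squares and subtract, leaving mostly the deep contribution."""
        s = short_ch - short_ch.mean()
        l = long_ch - long_ch.mean()
        k = np.dot(l, s) / np.dot(s, s)
        return long_ch - k * short_ch

    # Synthetic test: a deep (brain) signal plus a scalp artifact seen by both.
    t = np.linspace(0, 60, 600)
    scalp = 0.3 * np.sin(2 * np.pi * 0.1 * t)          # Mayer-wave-like oscillation
    deep = 0.1 * (np.sin(2 * np.pi * 0.05 * t) > 0)    # slow activation blocks
    short_ch = scalp + np.random.default_rng(4).normal(0, 0.01, t.size)
    long_ch = deep + 0.8 * scalp
    cleaned = remove_scalp(long_ch, short_ch)
    print(np.round(np.corrcoef(cleaned, deep)[0, 1], 2))   # close to 1
    ```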

  6. Mining biological information from 3D short time-series gene expression data: the OPTricluster algorithm

    PubMed Central

    2012-01-01

    Background Nowadays, it is possible to collect expression levels of a set of genes from a set of biological samples during a series of time points. Such data have three dimensions: gene-sample-time (GST). Thus they are called 3D microarray gene expression data. To take advantage of the 3D data collected, and to fully understand the biological knowledge hidden in the GST data, novel subspace clustering algorithms have to be developed to effectively address the biological problem in the corresponding space. Results We developed a subspace clustering algorithm called Order Preserving Triclustering (OPTricluster), for 3D short time-series data mining. OPTricluster is able to identify 3D clusters with coherent evolution from a given 3D dataset using a combinatorial approach on the sample dimension, and the order preserving (OP) concept on the time dimension. The fusion of the two methodologies allows one to study similarities and differences between samples in terms of their temporal expression profile. OPTricluster has been successfully applied to four case studies: immune response in mice infected by malaria (Plasmodium chabaudi), systemic acquired resistance in Arabidopsis thaliana, similarities and differences between inner and outer cotyledon in Brassica napus during seed development, and to Brassica napus whole seed development. These studies showed that OPTricluster is robust to noise and is able to detect the similarities and differences between biological samples. Conclusions Our analysis showed that OPTricluster generally outperforms other well known clustering algorithms such as the TRICLUSTER, gTRICLUSTER and K-means; it is robust to noise and can effectively mine the biological knowledge hidden in the 3D short time-series gene expression data. PMID:22475802
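
    The order-preserving (OP) idea on the time dimension can be sketched in a few lines: each gene's signature is the ranking of its time points, and genes (or samples) sharing a signature evolve coherently. The data below are hypothetical:

    ```python
    import numpy as np
    from collections import defaultdict

    def op_signature(profile):
        """Order-preserving (OP) signature: the ranking of time points by
        expression level. Profiles sharing a signature have coherent
        temporal evolution, the idea behind OPTricluster's time dimension."""
        return tuple(np.argsort(profile))

    # expression[gene] = short time series (4 time points), hypothetical data
    expression = {
        "geneA": [0.1, 0.5, 0.9, 0.4],
        "geneB": [1.0, 2.2, 3.1, 1.9],   # same up-up-down pattern as geneA
        "geneC": [0.9, 0.4, 0.2, 0.1],   # monotonically decreasing
    }

    clusters = defaultdict(list)
    for gene, profile in expression.items():
        clusters[op_signature(profile)].append(gene)

    for sig, genes in clusters.items():
        print(sig, genes)                # geneA and geneB share a signature
    ```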

  7. Local algorithm for computing complex travel time based on the complex eikonal equation

    NASA Astrophysics Data System (ADS)

    Huang, Xingguo; Sun, Jianguo; Sun, Zhangqing

    2016-04-01

    The traditional algorithm for computing the complex travel time, e.g., the dynamic ray tracing method, is based on the paraxial ray approximation, which exploits the second-order Taylor expansion. Consequently, the computed results are strongly dependent on the width of the ray tube and, in regions with dramatic velocity variations, it is difficult for the method to account for the velocity variations. When solving the complex eikonal equation, the paraxial ray approximation can be avoided and no second-order Taylor expansion is required. However, this process is time-consuming. In this case, we may replace the global computation of the whole model with local computation by taking both sides of the ray as curved boundaries of the evanescent wave. For a given ray, the imaginary part of the complex travel time should be zero on the central ray. To satisfy this condition, the central ray should be taken as a curved boundary. We propose a nonuniform grid-based finite difference scheme to solve the curved boundary problem. In addition, we apply the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for obtaining the imaginary slowness used to compute the complex travel time. The numerical experiments show that the proposed method is accurate. We examine the effectiveness of the algorithm for the complex travel time by comparing the results with those from the dynamic ray tracing method and the Gauss-Newton Conjugate Gradient fast marching method.

  8. Some elements of mathematical information theory and total inversion algorithm applied to travel time inversion

    NASA Astrophysics Data System (ADS)

    Martínez, M. D.; Lana, X.

    1991-03-01

    The total inversion algorithm and some elements of Mathematical Information Theory are used in the treatment of travel-time data belonging to a seismic refraction experiment from the southern segment (Sardinia Channel) of the European Geotraverse Project. The inversion algorithm allows us to improve a preliminary propagating model obtained by means of the usual trial-and-error procedure and to quantify the resolution degree of parameters defining the crust and upper mantle of such a model. Concepts related to Mathematical Information Theory detect some seismic profiles of the refraction experiment which give the most homogeneous coverage of the model in terms of the number of trajectories crossing it. Finally, the efficiency of the inversion procedure is quantified and the uncertainties regarding knowledge of different parts of the model are also evaluated.

  9. Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae

    NASA Technical Reports Server (NTRS)

    Rosu, Grigore; Havelund, Klaus

    2001-01-01

    The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
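
    For a concrete instance of such a generated algorithm, consider checking [](p -> <>q) ("every p is eventually followed by a q") on a finite trace: one backward pass, linear time, constant memory. The code below is hand-written for this fixed formula under finite-trace semantics, illustrating the shape of the dynamic program rather than the generator itself:

    ```python
    def check_always_p_implies_eventually_q(trace):
        """Check [](p -> <>q) on a finite trace (list of sets of atomic
        propositions), scanning backwards with constant memory."""
        eventually_q = False   # <>q on the empty suffix: False
        always_impl = True     # [](...) on the empty suffix: True
        for state in reversed(trace):
            eventually_q = ("q" in state) or eventually_q
            always_impl = (("p" not in state) or eventually_q) and always_impl
        return always_impl

    print(check_always_p_implies_eventually_q([{"p"}, {}, {"q"}]))   # True
    print(check_always_p_implies_eventually_q([{"q"}, {"p"}, {}]))   # False
    ```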

  10. Cognitive programs: software for attention's executive

    PubMed Central

    Tsotsos, John K.; Kruijne, Wouter

    2014-01-01

    What are the computational tasks that an executive controller for visual attention must solve? This question is posed in the context of the Selective Tuning model of attention. The range of required computations goes beyond top-down bias signals or region-of-interest determinations, and must deal with overt and covert fixations, process timing and synchronization, information routing, memory, matching control to task, spatial localization, priming, and coordination of bottom-up with top-down information. During task execution, results must be monitored to ensure they match expectations. This description includes the kinds of elements that are common in the control of any kind of complex machine or system. We seek a mechanistic integration of the above, in other words, algorithms that accomplish control. Such algorithms operate on representations, transforming a representation of one kind into another, which then forms the input to yet another algorithm. Cognitive Programs (CPs) are hypothesized to capture exactly such representational transformations via stepwise sequences of operations. CPs, an updated and modernized offspring of Ullman's Visual Routines, impose an algorithmic structure on the set of attentional functions and play a role in the overall shaping of attentional modulation of the visual system so that it provides its best performance. This requires that we consider the visual system as a dynamic, yet general-purpose processor tuned to the task and input of the moment. This differs dramatically from the almost universal cognitive and computational views, which regard vision as a passively observing module to which simple questions about percepts can be posed, regardless of task. Differing from Visual Routines, CPs explicitly involve the critical elements of Visual Task Executive (vTE), Visual Attention Executive (vAE), and Visual Working Memory (vWM). Cognitive Programs provide the software that directs the actions of the Selective Tuning model of visual attention.

  11. Serotoninergic and dopaminergic modulation of cortico-striatal circuit in executive and attention deficits induced by NMDA receptor hypofunction in the 5-choice serial reaction time task

    PubMed Central

    Carli, Mirjana; Invernizzi, Roberto W.

    2014-01-01

    Executive functions are an emergent property of neuronal processing in circuits encompassing the frontal cortex and other cortical and subcortical brain regions such as the basal ganglia and thalamus. Glutamate serves as the major neurotransmitter in these circuits, where glutamate receptors of the NMDA type play a key role. Serotonin and dopamine afferents are in a position to modulate intrinsic glutamate neurotransmission along these circuits and in turn to optimize circuit performance for specific aspects of executive control over behavior. In this review, we focus on the 5-choice serial reaction time task, which provides various measures of attention and executive control over performance in rodents, and on the ability of prefrontocortical and striatal serotonin 5-HT1A, 5-HT2A, and 5-HT2C as well as dopamine D1- and D2-like receptors to modulate different aspects of executive and attention disturbances induced by NMDA receptor hypofunction in the prefrontal cortex. These behavioral studies are integrated with findings from microdialysis studies. These studies illustrate the control of attention selectivity by serotonin 5-HT1A, 5-HT2A, 5-HT2C, and dopamine D1- but not D2-like receptors and a distinct contribution of these cortical and striatal serotonin and dopamine receptors to the control of different aspects of executive control over performance such as impulsivity and compulsivity. An association between NMDA antagonist-induced increase in glutamate release in the prefrontal cortex and attention is suggested. Collectively, this review highlights the functional interaction of serotonin and dopamine with NMDA-dependent glutamate neurotransmission in the cortico-striatal circuitry for specific cognitive demands and may shed some light on how dysregulation of neuronal processing in these circuits may be implicated in specific neuropsychiatric disorders. PMID:24966814

  12. Faster learning algorithm convergence utilizing a combined time-frequency representation as basis

    NASA Astrophysics Data System (ADS)

    Hendriks, A. J.; Uys, Hermann; du Plessis, Anton; Steenkamp, Christine

    2013-10-01

    Light is capable of directly manipulating and probing molecular dynamics at its most fundamental level. One versatile approach to influencing such dynamics exploits temporally shaped femtosecond laser pulses. Oftentimes the control mechanisms necessary to induce a desired reaction cannot be determined theoretically a priori. However, under certain circumstances these mechanisms can be extracted experimentally through trial and error. This can be implemented systematically by using an evolutionary learning algorithm (LA) with closed-loop feedback. Most frequently, pulse shaping algorithms operate within either the time or the frequency domain, but seldom both. This may influence the physical insight gained, owing to the dependence on the search basis, as well as the speed at which the algorithm converges. As an alternative to the Fourier-domain basis, we make use of a combined time-frequency representation known as the von Neumann basis, in which temporal and spectral effects are observed at the same time. We report on the numerical and experimental results obtained using the Fourier as well as the von Neumann basis to maximize the second harmonic generation (SHG) output in a non-linear crystal. We show that searches in the von Neumann representation converge faster than searches in the Fourier domain. We also show that a reduced parameter space is required for the Fourier domain to converge efficiently, but not for the von Neumann domain. Finally, we show that the highest SHG signal is not only a consequence of the shortest pulse, but that the pulse central frequency also plays a key role. Taken together, these results suggest that the von Neumann basis can be used as a viable alternative to the Fourier domain, with improved convergence time and potentially deeper physical insight.

  13. A globally convergent matrix-free algorithm for implicit time-marching schemes arising in finite element analysis in fluids

    NASA Technical Reports Server (NTRS)

    Johan, Zdenek; Hughes, Thomas J. R.; Shakib, Farzin

    1991-01-01

    A solution procedure for solving nonlinear time-marching problems is presented. The nonsymmetric systems of equations arising from a Newton-type linearization of these time-marching problems are solved using an iterative strategy based on the generalized minimal residual (GMRES) algorithm. Matrix-free techniques leading to reduction in storage are presented. Incorporation of a linesearch algorithm in the Newton-GMRES scheme is discussed. An automatic time-increment control strategy is developed to increase the stability of the time-marching process. High-speed flow computations demonstrate the effectiveness of these algorithms.
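
    A minimal matrix-free Newton-GMRES sketch: Jacobian-vector products are approximated by finite differences of the residual, so the Jacobian is never formed or stored. This illustrates the storage-reduction idea only (no linesearch or automatic time-increment control); function names and tolerances are hypothetical:

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def newton_gmres(residual, u0, tol=1e-10, max_newton=20, fd_eps=1e-7):
        """Solve residual(u) = 0 by Newton iteration with GMRES inner solves,
        using finite-difference Jacobian-vector products (matrix-free)."""
        u = u0.copy()
        for _ in range(max_newton):
            r = residual(u)
            if np.linalg.norm(r) < tol:
                break
            def jv(v, u=u, r=r):
                eps = fd_eps / max(np.linalg.norm(v), 1e-30)
                return (residual(u + eps * v) - r) / eps   # J(u) @ v, approximately
            J = LinearOperator((u.size, u.size), matvec=jv)
            du, _ = gmres(J, -r)
            u = u + du      # a linesearch would damp this step if needed
        return u

    # Test on a small nonlinear system: u_i^3 + u_i - b_i = 0.
    b = np.array([2.0, 10.0, 0.0])
    sol = newton_gmres(lambda u: u**3 + u - b, np.zeros(3))
    print(np.round(sol, 6))   # -> approximately [1, 2, 0]
    ```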

  14. Preliminary test results of a flight management algorithm for fuel conservative descents in a time based metered traffic environment. [flight tests of an algorithm to minimize fuel consumption of aircraft based on flight time

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1979-01-01

    A flight management algorithm designed to improve the accuracy of delivering the airplane fuel efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B 737 airplane to make an idle thrust, clean configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms and the results of the flight tests are discussed.

  15. Decomposing time series data by a non-negative matrix factorization algorithm with temporally constrained coefficients.

    PubMed

    Cheung, Vincent C K; Devarajan, Karthik; Severini, Giacomo; Turolla, Andrea; Bonato, Paolo

    2015-08-01

    The non-negative matrix factorization algorithm (NMF) decomposes a data matrix into a set of non-negative basis vectors, each scaled by a coefficient. In its original formulation, the NMF assumes the data samples and dimensions to be independently distributed, making it a less-than-ideal algorithm for the analysis of time series data with temporal correlations. Here, we seek to derive an NMF that accounts for temporal dependencies in the data by explicitly incorporating a very simple temporal constraint for the coefficients into the NMF update rules. We applied the modified algorithm to 2 multi-dimensional electromyographic data sets collected from the human upper-limb to identify muscle synergies. We found that because it reduced the number of free parameters in the model, our modified NMF made it possible to use the Akaike Information Criterion to objectively identify a model order (i.e., the number of muscle synergies composing the data) that is more functionally interpretable, and closer to the numbers previously determined using ad hoc measures. PMID:26737046
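
    A rough sketch of the idea, using standard multiplicative updates plus a simple smoothing step that pulls each coefficient column toward its temporal neighbors; the paper instead incorporates its temporal constraint directly into the NMF update rules:

    ```python
    import numpy as np

    def tnmf(V, k, iters=500, smooth=0.25, seed=0):
        """NMF with a crude temporal constraint (illustrative only): standard
        multiplicative updates for W (basis) and H (coefficients), with H
        blended toward its neighbors in time to encourage smoothness."""
        rng = np.random.default_rng(seed)
        n, T = V.shape
        W = rng.random((n, k)) + 0.1
        H = rng.random((k, T)) + 0.1
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + 1e-12)        # Lee-Seung update
            Hs = H.copy()                                  # temporal smoothing
            Hs[:, 1:-1] = (1 - smooth) * H[:, 1:-1] \
                          + smooth * 0.5 * (H[:, :-2] + H[:, 2:])
            H = Hs
            W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
        return W, H

    # Synthetic "EMG": two smooth synergy activations mixed non-negatively.
    T = 200
    t = np.linspace(0, 1, T)
    H_true = np.vstack([np.exp(-((t - 0.3) / 0.1) ** 2),
                        np.exp(-((t - 0.7) / 0.15) ** 2)])
    W_true = np.random.default_rng(1).random((8, 2))
    V = W_true @ H_true + 0.01
    W, H = tnmf(V, k=2)
    print(np.round(np.linalg.norm(V - W @ H) / np.linalg.norm(V), 3))
    ```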

  16. Application of the Trend Filtering Algorithm for Photometric Time Series Data

    NASA Astrophysics Data System (ADS)

    Gopalan, Giri; Plavchan, Peter; van Eyken, Julian; Ciardi, David; von Braun, Kaspar; Kane, Stephen R.

    2016-08-01

    Detecting transient light curves (e.g., transiting planets) requires high-precision data, and thus it is important to effectively filter systematic trends affecting ground-based wide-field surveys. We apply an implementation of the Trend Filtering Algorithm (TFA) to the 2MASS calibration catalog and to selected Palomar Transient Factory (PTF) photometric time series data. TFA is successful at reducing the overall dispersion of light curves; however, it may over-filter intrinsic variables and increase “instantaneous” dispersion when a template set is not judiciously chosen. In an attempt to rectify these issues we modify the original TFA from the literature by including measurement uncertainties in its computation, including ancillary data correlated with noise, and algorithmically selecting a template set using clustering algorithms as suggested by various authors. This approach may be particularly useful for appropriately accounting for variable photometric precision surveys and/or combined data sets. In summary, our contributions are to provide a MATLAB software implementation of TFA and a number of modifications tested on synthetics and real data, summarize the performance of TFA and various modifications on real ground-based data sets (2MASS and PTF), and assess the efficacy of TFA and modifications using synthetic light curve tests consisting of transiting and sinusoidal variables. While the transiting variables test indicates that these modifications confer no advantage to transit detection, the sinusoidal variables test indicates potential improvements in detection accuracy.
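
    The core TFA step can be sketched as a linear least-squares fit of template light curves to the target, followed by subtraction; the over-filtering risk mentioned above arises when the templates can absorb intrinsic variability. Synthetic data, hypothetical parameters:

    ```python
    import numpy as np

    def tfa_detrend(target, templates):
        """Fit the target light curve as a linear combination of template
        light curves (stars sharing the same systematics) plus a constant,
        then subtract the fit. `templates` has shape (n_templates, n_obs)."""
        A = np.vstack([templates, np.ones(templates.shape[1])]).T
        coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
        return target - A @ coeffs + np.mean(target)

    # Synthetic survey: a shared systematic trend contaminates every star.
    rng = np.random.default_rng(2)
    n_obs = 500
    systematic = np.cumsum(rng.normal(0, 0.01, n_obs))           # red-noise trend
    templates = np.array([1.0 + c * systematic + rng.normal(0, 0.002, n_obs)
                          for c in (0.8, 1.1, 0.9)])
    transit = np.where((np.arange(n_obs) % 100) < 5, -0.01, 0.0)  # periodic dips
    target = 1.0 + systematic + transit + rng.normal(0, 0.002, n_obs)

    cleaned = tfa_detrend(target, templates)
    print(np.std(target), "->", np.std(cleaned))                 # dispersion drops
    ```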

  17. A real-time pressure estimation algorithm for closed-loop combustion control

    NASA Astrophysics Data System (ADS)

    Al-Durra, Ahmed; Canova, Marcello; Yurkovich, Stephen

    2013-07-01

    The cylinder pressure is arguably the most important variable characterizing the combustion process in internal combustion engines. In light of the recent advances in combustion technologies and in engine control, the use of cylinder pressure is now frequently considered as a feedback signal for closed-loop combustion control algorithms. In order to generate an accurate pressure trace for real-time combustion control and diagnostics, the output of the in-cylinder pressure transducer must be conditioned with signal processing methods to mitigate the well-known issues of offset and noise. While several techniques have been proposed for processing the cylinder pressure signal with limited computational burden, most of the available methods still require one to apply low-pass filters or moving average windows in order to mitigate the noise. This ultimately limits the opportunity of exploiting the in-cylinder pressure feedback for a cycle-by-cycle control of the combustion process. To this end, this paper presents an estimation algorithm that extracts the pressure signal from the in-cylinder sensor in real time, allowing the 50% burn rate location and IMEP to be estimated on a cycle-by-cycle basis. The proposed approach relies on a model-based estimation algorithm whose starting point is a crank-angle based engine combustion model that predicts the in-cylinder pressure from the definition of a burn rate function. Linear parameter varying (LPV) techniques are then used to expand the region of estimation to cover the engine operating map, as well as to allow for real-time estimation during transients. The estimator is tested on experimental data collected on an engine dynamometer as well as on a high-fidelity engine simulator. The results obtained show the effectiveness of the estimator in reconstructing the cylinder pressure on a crank-angle basis and in rejecting measurement noise and modeling errors, with considerably low computational effort.

  18. From analytical solutions of solute transport equations to multidimensional time-domain random walk (TDRW) algorithms

    NASA Astrophysics Data System (ADS)

    Bodin, Jacques

    2015-03-01

    In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
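
    A 1-D illustration of the TDRW idea, assuming the classical result that the first-passage time across a step dx under advection-dispersion is approximately inverse-Gaussian with mean dx/v and shape dx^2/(2D); the paper derives richer 2-D/3-D variants with mass recovery at the observation point:

    ```python
    import numpy as np

    # Instead of fixed time steps, draw the *time* needed to traverse a fixed
    # distance dx, then sum the step times over the particle's checkpoints.
    rng = np.random.default_rng(0)
    v, D = 1.0, 0.05          # advection velocity, dispersion coefficient
    dx, L = 0.1, 10.0         # checkpoint spacing and total travel distance
    n_particles = 10000

    steps = int(L / dx)
    travel_times = np.sum(
        rng.wald(mean=dx / v, scale=dx**2 / (2 * D), size=(n_particles, steps)),
        axis=1,
    )
    print("mean arrival time:", travel_times.mean())   # ~ L/v = 10
    print("spread (std):     ", travel_times.std())    # ~ sqrt(2*D*L/v**3) = 1
    ```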

  19. Multi-Objective Optimization of Heat Load and Run Time for CEBAF Linacs Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Reeves, Cody; Terzic, Balsa; Hofler, Alicia

    2014-09-01

    The Continuous Electron Beam Accelerator Facility (CEBAF) consists of two linear accelerators (Linacs) connected by arcs. Within each Linac, there are 200 niobium cavities that use superconducting radio frequency (SRF) to accelerate electrons. The gradients for the cavities are selected to optimize two competing objectives: heat load (the energy required to cool the cavities) and trip rate (how often the beam turns off within an hour). This results in a multidimensional, multi-objective, nonlinear system of equations that is not readily solved by analytical methods. This study improved a genetic algorithm (GA), which applies the concept of natural selection. The primary focus was making this GA more efficient to allow for more cost-effective solutions in the same amount of computation time. Two methods used were constraining the maximum value of the objectives and utilizing previously simulated solutions as the initial generation. A third method of interest involved refining the GA by combining the two objectives into a single weighted-sum objective, which collapses the set of optimal solutions into a single point. By combining these methods, the GA can be made 128 times as effective, reducing computation time from 30 min to 12 sec. This is crucial because, when a cavity must be turned off, a new solution needs to be computed quickly. This work is of particular interest since it provides an efficient algorithm that can be easily adapted to any Linac facility.

  20. Optimal sensor placement for time-domain identification using a wavelet-based genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mahdavi, Seyed Hossein; Razak, Hashim Abdul

    2016-06-01

    This paper presents a wavelet-based genetic algorithm strategy for optimal sensor placement (OSP) effective for time-domain structural identification. Initially, the GA-based fitness evaluation is significantly improved by using adaptive wavelet functions. Later, a multi-species decimal GA coding system is modified to be suitable for an efficient search around the local optima. In this regard, a local mutation operation is introduced in addition to regeneration and reintroduction operators. It is concluded that different characteristics of the applied force influence the features of the structural responses, and therefore the accuracy of time-domain structural identification is directly affected. Thus, a reliable OSP strategy prior to time-domain identification is achieved by methods that minimize the distance between the simulated responses of the entire system and those of the condensed system, taking the force effects into account. Numerical and experimental verification of the proposed strategy demonstrates its considerably high computational performance, in terms of both computational cost and identification accuracy. It is deduced that the robustness of the proposed OSP algorithm lies in the precise and fast fitness evaluation at larger sampling rates, which results in the optimum evaluation of the GA-based exploration and exploitation phases towards the global optimum solution.

  1. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics - Monte Carlo Canonical Propagation Algorithm.

    PubMed

    Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît

    2016-04-12

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826

  2. Tuning algorithms for fractional order internal model controllers for time delay processes

    NASA Astrophysics Data System (ADS)

    Muresan, Cristina I.; Dutta, Abhishek; Dulf, Eva H.; Pinar, Zehra; Maxim, Anca; Ionescu, Clara M.

    2016-03-01

    This paper presents two tuning algorithms for fractional-order internal model control (IMC) controllers for time delay processes. The two tuning algorithms are based on two specific closed-loop control configurations: the IMC control structure and the Smith predictor structure. In the latter, the equivalency between IMC and Smith predictor control structures is used to tune a fractional-order IMC controller as the primary controller of the Smith predictor structure. Fractional-order IMC controllers are designed in both cases in order to enhance the closed-loop performance and robustness of classical integer order IMC controllers. The tuning procedures are exemplified for both single-input-single-output as well as multivariable processes, described by first-order and second-order transfer functions with time delays. Different numerical examples are provided, including a general multivariable time delay process. Integer order IMC controllers are designed in each case, as well as fractional-order IMC controllers. The simulation results show that the proposed fractional-order IMC controller ensures an increased robustness to modelling uncertainties. Experimental results are also provided, for the design of a multivariable fractional-order IMC controller in a Smith predictor structure for a quadruple-tank system.

  3. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics — Monte Carlo Canonical Propagation Algorithm

    PubMed Central

    Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît

    2016-01-01

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826

  4. Online adaptive policy learning algorithm for H∞ state feedback control of unknown affine nonlinear discrete-time systems.

    PubMed

    Zhang, Huaguang; Qin, Chunbin; Jiang, Bin; Luo, Yanhong

    2014-12-01

    The problem of H∞ state feedback control of affine nonlinear discrete-time systems with unknown dynamics is investigated in this paper. An online adaptive policy learning algorithm (APLA) based on adaptive dynamic programming (ADP) is proposed for learning in real-time the solution to the Hamilton-Jacobi-Isaacs (HJI) equation, which appears in the H∞ control problem. In the proposed algorithm, three neural networks (NNs) are utilized to find suitable approximations of the optimal value function and the saddle point feedback control and disturbance policies. Novel weight updating laws are given to tune the critic, actor, and disturbance NNs simultaneously by using data generated in real-time along the system trajectories. Considering NN approximation errors, we provide the stability analysis of the proposed algorithm with Lyapunov approach. Moreover, the need of the system input dynamics for the proposed algorithm is relaxed by using a NN identification scheme. Finally, simulation examples show the effectiveness of the proposed algorithm. PMID:25095274

  5. The KM-Algorithm Identifies Regulated Genes in Time Series Expression Data

    PubMed Central

    Bremer, Martina; Doerge, R. W.

    2009-01-01

    We present a statistical method to rank observed genes in gene expression time series experiments according to their degree of regulation in a biological process. The ranking may be used to focus on specific genes or to select meaningful subsets of genes from which gene regulatory networks can be built. Our approach is based on a state space model that incorporates hidden regulators of gene expression. Kalman (K) smoothing and maximum (M) likelihood estimation techniques are used to derive optimal estimates of the model parameters upon which a proposed regulation criterion is based. The statistical power of the proposed algorithm is investigated, and a real data set is analyzed for the purpose of identifying regulated genes in time dependent gene expression data. This statistical approach supports the concept that meaningful biological conclusions can be drawn from gene expression time series experiments by focusing on strong regulation rather than large expression values. PMID:19956417

  6. A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.

    PubMed

    Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad

    2012-01-01

    The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable in spite of the considerably large deformations occurred. There was a good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be effectively used in real-time applications. PMID:22356947

  7. Real-time high-resolution downsampling algorithm on many-core processor for spatially scalable video coding

    NASA Astrophysics Data System (ADS)

    Buhari, Adamu Muhammad; Ling, Huo-Chong; Baskaran, Vishnu Monn; Wong, KokSheik

    2015-01-01

    The progression toward spatially scalable video coding (SVC) solutions for ubiquitous endpoint systems introduces challenges in sustaining real-time frame rates while downsampling high-resolution videos into multiple layers. In addressing these challenges, we put forward a hardware-accelerated downsampling algorithm on a parallel computing platform. First, we investigate the principal architecture of a serial downsampling algorithm in the Joint-Scalable-Video-Model reference software to identify the performance limitations for spatially SVC. Then, a parallel multicore-based downsampling algorithm is studied as a benchmark. Experimental results for this algorithm using an 8-core processor exhibit a performance speedup of 5.25× against the serial algorithm in downsampling a quantum extended graphics array at 1536p video resolution into three lower resolution layers (i.e., Full-HD at 1080p, HD at 720p, and Quarter-HD at 540p). However, the achieved speedup here does not translate into the minimum required frame rate of 15 frames per second (fps) for real-time video processing. To improve the speedup, a many-core based downsampling algorithm using the compute unified device architecture parallel computing platform is proposed. The proposed algorithm increases the performance speedup to 26.14× against the serial algorithm. Crucially, the proposed algorithm exceeds the target frame rate of 15 fps, which in turn is advantageous to the overall performance of the video encoding process.

  8. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety and security critical applications, a human observer is an obligatory part of the overall analysis system. Over the last few years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention revealed by the eye gaze direction for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, where we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy to use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  9. Parallel algorithm of real-time infrared image restoration based on total variation theory

    NASA Astrophysics Data System (ADS)

    Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei

    2015-10-01

    Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods allow us to remove the noise but penalize too much the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem, owing to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional. This converts the restoration process into an optimization problem over a functional comprising a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model fully exploits the acquired remote sensing data and preserves edge information caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm lends itself to parallelization. Therefore, a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. Massive computation of image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis measuring the restored image quality relative to the input image is presented. Experiment results show that the TV-L1 filter can restore the varying background image reasonably, and that its performance can achieve the requirement of real-time image processing.

  10. Optimizing the decomposition of soil moisture time-series data using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Kulkarni, C.; Mengshoel, O. J.; Basak, A.; Schmidt, K. M.

    2015-12-01

    The task of determining near-surface volumetric water content (VWC), using commonly available dielectric sensors (based upon capacitance or frequency domain technology), is made challenging due to the presence of "noise" such as temperature-driven diurnal variations in the recorded data. We analyzed a post-wildfire rainfall and runoff monitoring dataset for hazard studies in Southern California. VWC was measured with EC-5 sensors manufactured by Decagon Devices. Many traditional signal smoothing techniques such as moving averages, splines, and Loess smoothing exist. Unfortunately, when applied to our post-wildfire dataset, these techniques diminish maxima, introduce time shifts, and diminish signal details. A promising seasonal trend-decomposition procedure based on Loess (STL) decomposes VWC time series into trend, seasonality, and remainder components. Unfortunately, STL with its default parameters produces similar results as previously mentioned smoothing methods. We propose a novel method to optimize seasonal decomposition using STL with genetic algorithms. This method successfully reduces "noise" including diurnal variations while preserving maxima, minima, and signal detail. Better decomposition results for the post-wildfire VWC dataset were achieved by optimizing STL's control parameters using genetic algorithms. The genetic algorithms minimize an additive objective function with three weighted terms: (i) root mean squared error (RMSE) of straight line relative to STL trend line; (ii) range of STL remainder; and (iii) variance of STL remainder. Our optimized STL method, combining trend and remainder, provides an improved representation of signal details by preserving maxima and minima as compared to the traditional smoothing techniques for the post-wildfire rainfall and runoff monitoring data. This method identifies short- and long-term VWC seasonality and provides trend and remainder data suitable for forecasting VWC in response to precipitation.
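
    A sketch of the proposed optimization, assuming statsmodels' STL implementation and a toy GA over STL's seasonal and trend window sizes, minimizing the three-term weighted objective described above (weights, parameter names, and ranges are hypothetical):

    ```python
    import numpy as np
    from statsmodels.tsa.seasonal import STL

    def objective(series, period, seasonal, trend_win, w=(1.0, 1.0, 1.0)):
        """Weighted objective from the abstract: (i) RMSE of a straight line
        fitted to the STL trend, (ii) range of the remainder, (iii) variance
        of the remainder."""
        res = STL(series, period=period, seasonal=seasonal, trend=trend_win).fit()
        x = np.arange(len(series))
        line = np.polyval(np.polyfit(x, res.trend, 1), x)
        rmse = np.sqrt(np.mean((res.trend - line) ** 2))
        return w[0] * rmse + w[1] * np.ptp(res.resid) + w[2] * np.var(res.resid)

    def ga_optimize(series, period, pop=12, gens=15, seed=3):
        """Tiny GA over STL's (seasonal, trend) window sizes (odd integers)."""
        rng = np.random.default_rng(seed)
        odd = lambda v: int(v) | 1                     # STL windows must be odd
        genomes = [(odd(rng.integers(7, 51)), odd(rng.integers(period + 1, 201)))
                   for _ in range(pop)]
        for _ in range(gens):
            scored = sorted(genomes, key=lambda g: objective(series, period, *g))
            elite = scored[: pop // 2]
            children = [(odd(max(7, s + rng.integers(-6, 7))),
                         odd(max(period + 1, t + rng.integers(-20, 21))))
                        for s, t in elite]             # mutate the elite
            genomes = elite + children
        return min(genomes, key=lambda g: objective(series, period, *g))

    # Synthetic VWC-like series: slow trend + diurnal cycle + noise, 10-min sampling.
    rng = np.random.default_rng(0)
    n, period = 144 * 14, 144                          # 14 days, 144 samples/day
    t = np.arange(n)
    series = (0.25 + 0.00002 * t + 0.02 * np.sin(2 * np.pi * t / period)
              + rng.normal(0, 0.003, n))
    print(ga_optimize(series, period))
    ```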

  11. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    NASA Astrophysics Data System (ADS)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on a intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing realtime demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development

  12. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    1992-01-01

    The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and has therefore developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, a non-uniform quantizer, and a multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development.

  13. Toward a Deterministic Polynomial Time Algorithm with Optimal Additive Query Complexity

    NASA Astrophysics Data System (ADS)

    Bshouty, Nader H.; Mazzawi, Hanna

    In this paper, we study two combinatorial search problems: the coin weighing problem with a spring scale (also known as the vector reconstruction problem using additive queries) and the problem of reconstructing weighted graphs using additive queries. Suppose we are given n identical-looking coins, of which m are counterfeit and the rest are authentic, and assume that we are allowed to weigh subsets of coins with a spring scale. It is known that the optimal number of weighings for identifying the counterfeit coins and their weights is at least Ω(m log n / log m). We give a deterministic polynomial-time adaptive algorithm for identifying the counterfeit coins and their weights using O(m log n / log m + m log log m) weighings, assuming that the weights of the counterfeit coins are greater than the weight of an authentic coin. This algorithm is optimal when m ≤ n^(c/log log n), where c is any constant. Moreover, our weighing complexity is within a log log m factor of the optimal complexity for all m.
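
    As a concrete illustration of the additive (spring-scale) query model only, and not of the paper's adaptive algorithm, the toy sketch below recovers a small counterfeit set by brute force; the coin weights and queries are invented for the example.

    ```python
    # Toy additive-query model: each query returns the total weight of a chosen
    # subset. Brute-force recovery is feasible only for tiny n; the paper's
    # adaptive algorithm needs just O(m log n / log m + m log log m) weighings.
    from itertools import combinations

    def weigh(subset, weights):
        return sum(weights[i] for i in subset)

    def recover(n, m, queries, answers, heavy=2, normal=1):
        # Find an m-coin counterfeit set consistent with all recorded weighings.
        for cand in combinations(range(n), m):
            w = [heavy if i in cand else normal for i in range(n)]
            if all(weigh(q, w) == a for q, a in zip(queries, answers)):
                return set(cand)
        return None

    # Example: n=6 coins; coins 1 and 4 are counterfeit and heavier.
    true_w = [1, 2, 1, 1, 2, 1]
    queries = [(0, 1, 2), (1, 4), (3, 4, 5), (0, 2, 4)]
    answers = [weigh(q, true_w) for q in queries]
    print(recover(6, 2, queries, answers))   # {1, 4}
    ```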

  14. Time scheduling of transit systems with transfer considerations using genetic algorithms.

    PubMed

    Deb, K; Chakroborty, P

    1998-01-01

    Scheduling of a bus transit system must be formulated as an optimization problem if the level of service to passengers is to be maximized within the available resources. In this paper, we present a formulation of a transit system scheduling problem with the objective of minimizing the overall waiting time of transferring and nontransferring passengers while satisfying a number of resource- and service-related constraints. It is observed that the number of variables and constraints for even a simple transit system (a single bus station with three routes) is too large to tackle using classical mixed-integer optimization techniques. The paper shows that genetic algorithms (GAs) are ideal for these problems, mainly because they (i) naturally handle binary variables, thereby taking care of the transfer decision variables, which constitute the majority of the decision variables in the transit scheduling problem; and (ii) allow procedure-based declarations, thereby allowing complex algorithmic approaches (involving if-then-else conditions) to be handled easily. The paper also shows how easily the same GA procedure, with minimal modifications, can handle a number of other more pragmatic extensions of the simple transit scheduling problem: buses with limited capacity, buses that do not arrive exactly at their scheduled times, and a multiple-station transit system having common routes among bus stations. Simulation results show the success of GAs in all these problems and suggest the application of GAs to more complex scheduling problems. PMID:10021738

  15. Designing genetic algorithm for efficient calculation of value encoding in time-lapse gravity inversion

    NASA Astrophysics Data System (ADS)

    Wahyudi, Eko Januari

    2013-09-01

    As applications of soft-computing techniques advance in the oil and gas industry, the Genetic Algorithm (GA) is also contributing to geophysical inverse problems, improving both results and computational efficiency. In this paper, I show the progress of my work on inverse modeling of time-lapse gravity data using value encoding with an alphabet formulation. The alphabet formulation is designed to characterize positive density change (+Δρ) and negative density change (-Δρ) with respect to a reference value (0 g/cc). The inversion, which uses discrete model parameters, is computed with GA as the optimization algorithm. The challenge of working with GA is its long computation time, so the GA design in this paper is developed through evaluation of GA operator performance tests. The performance of several combinations of GA operators (selection, crossover, mutation, and replacement) is tested on a synthetic single-layer reservoir model. Analysis of a sufficient number of samples shows that the combination SUS-MPCO-QSA/G-ND gives the most promising results. A quantitative solution with a higher confidence level for characterizing sharp boundaries of density-change zones was obtained by averaging a sufficient number of model samples.

  16. Performance study of a new time-delay estimation algorithm in ultrasonic echo signals and ultrasound elastography.

    PubMed

    Shaswary, Elyas; Xu, Yuan; Tavakkoli, Jahan

    2016-07-01

    Time-delay estimation has countless applications in ultrasound medical imaging. Previously, we proposed a new time-delay estimation algorithm based on the summation of the sign function (Shaswary et al., 2015). We reported that the proposed algorithm performs similarly to the normalized cross-correlation (NCC) and sum squared differences (SSD) algorithms while being significantly more computationally efficient. In this paper, we study the performance of the proposed algorithm using statistical analysis and image quality analysis in ultrasound elastography imaging. Field II simulation software was used to generate ultrasound radio frequency (RF) echo signals for the statistical analysis, and a clinical ultrasound scanner (Sonix® RP scanner, Ultrasonix Medical Corp., Richmond, BC, Canada) was used to scan a commercial ultrasound elastography tissue-mimicking phantom for the image quality analysis. The statistical analysis confirmed that, overall, the proposed algorithm performs comparably to the NCC and SSD algorithms. The image quality analysis indicated that the proposed algorithm produces strain images with marginally higher signal-to-noise and contrast-to-noise ratios than the NCC and SSD algorithms. PMID:27010697
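
    For context, the sketch below implements the classical NCC delay estimator that serves as the comparison baseline; the authors' sign-summation estimator itself is not reproduced here, and the test signal parameters are arbitrary.

    ```python
    # Baseline NCC time-delay estimate: slide one record against the other and
    # pick the lag with the highest normalized correlation.
    import numpy as np

    def ncc_delay(ref, delayed, max_lag):
        n, best_lag, best_ncc = len(ref), 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = ref[: n - lag], delayed[lag:]
            else:
                a, b = ref[-lag:], delayed[: n + lag]
            a, b = a - a.mean(), b - b.mean()
            ncc = np.dot(a, b) / (len(a) * a.std() * b.std() + 1e-12)
            if ncc > best_ncc:
                best_lag, best_ncc = lag, ncc
        return best_lag

    # Synthetic RF-like test: a delayed copy of a noisy, windowed tone burst.
    rng = np.random.default_rng(1)
    sig = np.sin(2 * np.pi * 5e6 * np.arange(512) / 40e6) * np.hanning(512)
    noisy = sig + 0.05 * rng.standard_normal(512)
    print(ncc_delay(noisy, np.roll(noisy, 7), 20))   # expected: 7
    ```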

  17. Development of a Near-Real Time Hail Damage Swath Identification Algorithm for Vegetation

    NASA Technical Reports Server (NTRS)

    Bell, Jordan R.; Molthan, Andrew L.; Schultz, Lori A.; McGrath, Kevin M.; Burks, Jason E.

    2015-01-01

    The Midwest is home to one of the world's largest agricultural growing regions. Between late May and early September, irrigation and seasonal rainfall allow these crops to reach their full maturity. Using moderate- to high-resolution remote sensors, the vegetation can be monitored in the red and near-infrared wavelengths, which allow for the calculation of vegetation indices such as the Normalized Difference Vegetation Index (NDVI). Vegetation growth and greenness in this region evolve uniformly as the growing season progresses. However, one of the biggest threats to Midwest vegetation during this period is thunderstorms that bring large hail and damaging winds. Hail and wind damage to crops can be very expensive for growers, and damage can be spread over long swaths associated with the tracks of the damaging storms. Damage to the vegetation can be apparent in remotely sensed imagery: if storms damage the crops only slightly, changes appear slowly over time as the crops wilt, while damage is more readily apparent if the storms strip material from the crops or destroy them completely. Previous work on identifying these hail damage swaths relied on manual interpretation of moderate- and higher-resolution satellite imagery. With an automated, near-real-time hail damage swath identification algorithm, detection can be improved and damage indicators created faster and more efficiently. The automated detection of hail damage swaths will examine short-term, large changes in the vegetation by differencing near-real-time eight-day NDVI composites and comparing them to post-storm imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard Terra and Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard Suomi NPP. In addition, land surface temperatures from these instruments will be examined as
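
    The differencing step described above reduces to a few array operations, sketched below; the co-registered reflectance grids are synthetic and the -0.2 drop threshold is an illustrative placeholder, not the algorithm's tuned value.

    ```python
    # Difference a post-storm NDVI grid against the preceding 8-day composite
    # and flag cells where greenness fell sharply.
    import numpy as np

    def ndvi(nir, red):
        return (nir - red) / (nir + red + 1e-9)

    def hail_swath_mask(nir_pre, red_pre, nir_post, red_post, drop=-0.2):
        delta = ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)
        return delta < drop   # True where vegetation greenness dropped sharply

    # Synthetic scene: healthy crop everywhere, one hail-damaged stripe.
    nir_pre = np.full((100, 100), 0.5); red_pre = np.full((100, 100), 0.08)
    nir_post = nir_pre.copy();          red_post = red_pre.copy()
    nir_post[40:50, :] = 0.25;          red_post[40:50, :] = 0.20
    print(hail_swath_mask(nir_pre, red_pre, nir_post, red_post).sum())  # 1000
    ```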

  18. Wireless acoustic modules for real-time data fusion using asynchronous sniper localization algorithms

    NASA Astrophysics Data System (ADS)

    Hengy, S.; De Mezzo, S.; Duffner, P.; Naz, P.

    2012-11-01

    The presence of snipers in modern conflicts creates high insecurity for soldiers. In order to improve the soldiers' protection against this threat, the French-German Research Institute of Saint-Louis (ISL) has been conducting studies in the domain of acoustic localization of shots. Mobile antennas mounted on the soldier's helmet were initially used for real-time detection, classification, and localization of sniper shots. This approach showed good performance in land scenarios, and also in urban scenarios if the array was in the shot corridor, meaning that the microphones first detect the direct wave and then the reflections of the Mach and muzzle waves (15% error in the estimated shooter-to-array distance). Fusing data sent by multiple sensor nodes distributed on the field revealed some of the limitations of the technologies implemented in ISL's demonstrators. Among others, the determination of the arrays' orientation was not accurate enough, thereby degrading the performance of data fusion. Some new solutions have been developed in the past year to obtain better data fusion performance. Asynchronous localization algorithms have been developed and post-processed on data measured in both free-field and urban environments, with acoustic modules on the line of sight of the shooter. These results are presented in the first part of the paper. The impact of GPS position estimation error is also discussed, in order to evaluate the possible use of these algorithms for real-time processing with mobile acoustic nodes. Within the framework of ISL's transverse project IMOTEP (IMprovement Of optical and acoustical TEchnologies for the Protection), demonstrators are being developed that will allow real-time asynchronous localization of sniper shots. An embedded detection and classification algorithm is implemented on wireless acoustic modules that send the relevant information to a central PC. Data fusion is then processed and the

  19. Unit Template Synchronous Reference Frame Theory Based Control Algorithm for DSTATCOM

    NASA Astrophysics Data System (ADS)

    Bangarraju, J.; Rajagopal, V.; Jayalaxmi, A.

    2014-04-01

    This article proposes new, simplified unit templates in place of the standard phase-locked loop (PLL) for the Synchronous Reference Frame Theory (SRFT) control algorithm. Extracting the synchronizing components (sinθ and cosθ) for Park's and inverse Park's transformations with a standard PLL takes more execution time, which delays the generation of the reference source currents. The standard PLL not only takes more execution time but also increases the reactive power burden on the Distributed Static Compensator (DSTATCOM). This work proposes a unit-template-based SRFT control algorithm for a four-leg, insulated gate bipolar transistor based voltage source converter used as a DSTATCOM in distribution systems, which reduces both the execution time and the reactive power burden on the DSTATCOM. The proposed DSTATCOM suppresses harmonics and regulates the terminal voltage along with neutral current compensation. The DSTATCOM in distribution systems with the proposed control algorithm is modeled and simulated in MATLAB using the Simulink and SimPowerSystems toolboxes.
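
    The unit-template extraction that replaces the PLL amounts to a few arithmetic operations per sample. A minimal sketch in Python, assuming sampled three-phase voltages at the point of common coupling; the 50 Hz / 325 V example values are illustrative.

    ```python
    # In-phase unit templates: phase voltages divided by the terminal voltage
    # amplitude, giving unit-amplitude sinusoids without a PLL.
    import numpy as np

    def unit_templates(va, vb, vc):
        # Terminal amplitude from the instantaneous phase voltages.
        vt = np.sqrt((2.0 / 3.0) * (va**2 + vb**2 + vc**2))
        vt = np.maximum(vt, 1e-9)          # guard against divide-by-zero
        return va / vt, vb / vt, vc / vt   # in-phase unit templates

    # Balanced 50 Hz example: templates come out as unit-amplitude sinusoids.
    t = np.arange(0, 0.04, 1 / 10e3)
    va = 325 * np.sin(2 * np.pi * 50 * t)
    vb = 325 * np.sin(2 * np.pi * 50 * t - 2 * np.pi / 3)
    vc = 325 * np.sin(2 * np.pi * 50 * t + 2 * np.pi / 3)
    ua, ub, uc = unit_templates(va, vb, vc)
    print(round(float(np.max(np.abs(ua))), 3))   # ≈ 1.0
    ```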

  20. Relation between the extended time-delayed feedback control algorithm and the method of harmonic oscillators.

    PubMed

    Pyragas, Viktoras; Pyragas, Kestutis

    2015-08-01

    In a recent paper [Phys. Rev. E 91, 012920 (2015)] Olyaei and Wu have proposed a new chaos control method in which a target periodic orbit is approximated by a system of harmonic oscillators. We consider an application of such a controller to single-input single-output systems in the limit of an infinite number of oscillators. By evaluating the transfer function in this limit, we show that this controller transforms into the known extended time-delayed feedback controller. This finding gives rise to an approximate finite-dimensional theory of the extended time-delayed feedback control algorithm, which provides a simple method for estimating the leading Floquet exponents of controlled orbits. Numerical demonstrations are presented for the chaotic Rössler, Duffing, and Lorenz systems as well as the normal form of the Hopf bifurcation. PMID:26382493

  1. Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks

    SciTech Connect

    Rudinger, Kenneth; Gamble, John King; Bach, Eric; Friesen, Mark; Joynt, Robert; Coppersmith, S. N.

    2013-07-01

    Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.

  2. Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks

    DOE PAGESBeta

    Rudinger, Kenneth; Gamble, John King; Bach, Eric; Friesen, Mark; Joynt, Robert; Coppersmith, S. N.

    2013-07-01

    Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.

  3. Real-time combined heat and power operational strategy using a hierarchical optimization algorithm

    SciTech Connect

    Yun, K.; Cho, H.; Luck, R.; Mago, P. J.

    2011-06-01

    Existing attempts to optimize the operation of combined heat and power (CHP) systems for building applications have two major limitations: the electrical and thermal loads are obtained from historical weather profiles, and the CHP system models ignore transient responses by using constant equipment efficiencies. This article considers the transient response of a building combined with a hierarchical CHP optimal control algorithm to obtain a real-time integrated system that uses the most recent weather and electric load information. This is accomplished by running concurrent simulations of two transient building models. The first transient building model uses current as well as forecast input information to obtain short-term predictions of the thermal and electric building loads. The predictions are then used by an optimization algorithm (i.e., a hierarchical controller that decides the amount of fuel and of electrical energy to be allocated at the current time step). In a simulation, the actual physical building is not available; hence, to simulate a real-time environment, a second building model with similar but not identical input loads is used to represent the actual building. A state-variable feedback loop is completed at the beginning of each time step by copying (i.e., measuring) the state variables from the actual building and restarting the predictive model using these ‘measured’ values as initial conditions. The simulation environment presented in this article features non-linear effects such as the dependence of the heat exchanger effectiveness on operating conditions. Finally, the results indicate that the CHP engine operation dictated by the proposed hierarchical controller under uncertain weather conditions has the potential to yield significant savings when compared with conventional systems, using current values of electricity and fuel prices.

  4. A Real-Time and Closed-Loop Control Algorithm for Cascaded Multilevel Inverter Based on Artificial Neural Network

    PubMed Central

    Wang, Libing; Mao, Chengxiong; Wang, Dan; Lu, Jiming; Zhang, Junfeng; Chen, Xun

    2014-01-01

    In order to control the cascaded H-bridge (CHB) converter with a staircase modulation strategy in real time, a real-time, closed-loop control algorithm based on an artificial neural network (ANN) for the three-phase CHB converter is proposed in this paper. It requires little computation time and memory, and it has two steps. In the first step, a hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC) is employed to minimize the total harmonic distortion (THD) and generate the optimal switching angles offline. In the second step, part of the optimal switching angles is used to train an ANN, and the well-designed ANN can then generate optimal switching angles in real time. Compared with previous real-time algorithms, the proposed algorithm is suitable for a wider range of modulation indices and results in a smaller THD and a lower calculation time. Furthermore, the well-designed ANN is embedded into a closed-loop control algorithm for the CHB converter with variable DC voltage sources. Simulation results demonstrate that the proposed closed-loop control algorithm is able to quickly stabilize the load voltage and minimize the line current's THD (<5%) when subjected to DC source or load disturbances. In the real design stage, a switching angle pulse generation scheme is proposed, and experimental results verify its correctness. PMID:24772025

  5. Solving Energy-Aware Real-Time Tasks Scheduling Problem with Shuffled Frog Leaping Algorithm on Heterogeneous Platforms

    PubMed Central

    Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M.K.

    2015-01-01

    Reducing energy consumption is becoming very important for preserving battery life and lowering overall operational costs in heterogeneous real-time multiprocessor systems. In this paper, we first formulate this scheduling problem as a combinatorial optimization problem. Then, a successful meta-heuristic, the Shuffled Frog Leaping Algorithm (SFLA), is proposed to reduce the energy consumption. Precocity remission and local optimal avoidance techniques are proposed to avoid precocity and improve the solution quality, and convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic for finding the optimal solution is 20 and 200 times shorter than that of ACO and GA, respectively. PMID:26110406

  6. Solving Energy-Aware Real-Time Tasks Scheduling Problem with Shuffled Frog Leaping Algorithm on Heterogeneous Platforms.

    PubMed

    Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M K

    2015-01-01

    Reducing energy consumption is becoming very important for preserving battery life and lowering overall operational costs in heterogeneous real-time multiprocessor systems. In this paper, we first formulate this scheduling problem as a combinatorial optimization problem. Then, a successful meta-heuristic, the Shuffled Frog Leaping Algorithm (SFLA), is proposed to reduce the energy consumption. Precocity remission and local optimal avoidance techniques are proposed to avoid precocity and improve the solution quality, and convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic for finding the optimal solution is 20 and 200 times shorter than that of ACO and GA, respectively. PMID:26110406

  7. AN ALGORITHM FOR RADIATION MAGNETOHYDRODYNAMICS BASED ON SOLVING THE TIME-DEPENDENT TRANSFER EQUATION

    SciTech Connect

    Jiang, Yan-Fei; Stone, James M.; Davis, Shane W.

    2014-07-01

    We describe a new algorithm for solving the coupled frequency-integrated transfer equation and the equations of magnetohydrodynamics in the regime where the light-crossing time is only marginally shorter than dynamical timescales. The transfer equation is solved in the mixed frame, including velocity-dependent source terms accurate to O(v/c). An operator-split approach is used to compute the specific intensity along discrete rays, with upwind monotonic interpolation used along each ray to update the transport terms, and implicit methods used to compute the scattering and absorption source terms. Conservative differencing is used for the transport terms, which ensures that the specific intensity (as well as energy and momentum) is conserved along each ray to round-off error. The use of implicit methods for the source terms ensures the method is stable even if the source terms are very stiff. To couple the solution of the transfer equation to the MHD algorithms in the ATHENA code, we perform direct quadrature of the specific intensity over angles to compute the energy and momentum source terms. We present the results of a variety of tests of the method, such as calculating the structure of a non-LTE atmosphere, an advective diffusion test, linear wave convergence tests, and the well-known shadow test. We use new semi-analytic solutions for radiation-modified shocks to demonstrate the ability of our algorithm to capture the effects of an anisotropic radiation field accurately. Since the method uses explicit differencing of the spatial operators, it shows excellent weak scaling on parallel computers.

  8. Application of Genetic Algorithm to Predict Optimal Sowing Region and Timing for Kentucky Bluegrass in China

    PubMed Central

    Peng, Tingting; Jiang, Bo; Guo, Jiangfeng; Lu, Hongfei; Du, Liqun

    2015-01-01

    Temperature is a predominant environmental factor affecting grass germination and distribution. Various thermal-germination models for predicting grass seed germination have been reported, in which the relationship between temperature and germination is defined with kernel functions, such as quadratic or quintic functions. However, their prediction accuracies warrant further improvement. The purpose of this study is to evaluate the relative prediction accuracy of genetic algorithm (GA) models, which are automatically parameterized with observed germination data. The seeds of five P. pratensis (Kentucky bluegrass, KB) cultivars were germinated under 36 day/night temperature regimes ranging from 5/5 to 40/40°C in 5°C increments. Results showed that optimal germination percentages of all five tested KB cultivars were observed under a fluctuating temperature regime of 20/25°C, while constant temperature regimes (e.g., 5/5, 10/10, and 15/15°C) suppressed the germination of all five cultivars. Furthermore, the back propagation artificial neural network (BP-ANN) algorithm was integrated to optimize temperature-germination response models from these observed germination data. It was found that the integrated GA-BP-ANN (back propagation aided genetic algorithm artificial neural network) significantly reduced the Root Mean Square Error (RMSE) values from 0.21-0.23 to 0.02-0.09. In an effort to provide a more reliable prediction of optimum sowing time for the tested KB cultivars in various regions of the country, the optimized GA-BP-ANN models were applied to map spatial and temporal germination percentages of bluegrass cultivars in China. Our results demonstrate that the GA-BP-ANN model is a convenient and reliable option for constructing thermal-germination response models, since it automates model parameterization and has excellent prediction accuracy. PMID:26154163

  9. A lightweight messaging-based distributed processing and workflow execution framework for real-time and big data analysis

    NASA Astrophysics Data System (ADS)

    Laban, Shaban; El-Desouky, Aly

    2014-05-01

    To achieve rapid, simple, and reliable parallel processing of different types of tasks and big-data processing on any compute cluster, a lightweight messaging-based distributed application processing and workflow execution framework model is proposed. The framework is based on Apache ActiveMQ and the Simple (or Streaming) Text Oriented Message Protocol (STOMP). ActiveMQ, a popular and powerful open-source persistent messaging and integration-patterns server with scheduler capabilities, acts as the message broker in the framework. STOMP provides an interoperable wire format that allows framework programs to talk and interact with each other and with ActiveMQ easily. In order to use the message broker efficiently, a unified message and topic naming pattern is utilized. Only three Python programs, plus a simple library used to unify and simplify the use of ActiveMQ and the STOMP protocol, are needed to use the framework. A watchdog program is used to monitor, remove, add, start, and stop any machine and/or its different tasks when necessary. For every machine, exactly one dedicated zookeeper program is used to start the different functions or tasks (stompShell programs) needed for executing the user's required workflow. The stompShell instances execute workflow jobs based on received messages. A well-defined, simple, and flexible message structure, based on JavaScript Object Notation (JSON), is used to build complex workflow systems; the JSON format is also used in configuration and in communication between machines and programs. The framework is platform independent, and although it is built in Python, the actual workflow programs or jobs can be implemented in any programming language. The generic framework can be used in small national data centres for processing seismological and radionuclide data received from the International Data Centre (IDC) of the Preparatory Commission for the Comprehensive Nuclear
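
    A minimal sketch of the messaging pattern, assuming the third-party stomp.py client and an ActiveMQ broker listening for STOMP on localhost:61613; the queue name and JSON fields are invented placeholders, not the framework's actual schema.

    ```python
    # Publish a JSON-encoded job description to a worker queue via STOMP.
    import json
    import stomp

    conn = stomp.Connection([("localhost", 61613)])
    conn.connect("admin", "admin", wait=True)

    # Hypothetical job message for a stompShell-style worker.
    job = {
        "workflow": "seismic-preprocess",
        "task": "filter",
        "args": {"station": "STA01", "band": [0.5, 8.0]},
    }
    conn.send(destination="/queue/worker.machine01", body=json.dumps(job))
    conn.disconnect()
    ```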

  10. LateBiclustering: Efficient Heuristic Algorithm for Time-Lagged Bicluster Identification.

    PubMed

    Gonçalves, Joana P; Madeira, Sara C

    2014-01-01

    Identifying patterns in temporal data is key to uncover meaningful relationships in diverse domains, from stock trading to social interactions. Also of great interest are clinical and biological applications, namely monitoring patient response to treatment or characterizing activity at the molecular level. In biology, researchers seek to gain insight into gene functions and dynamics of biological processes, as well as potential perturbations of these leading to disease, through the study of patterns emerging from gene expression time series. Clustering can group genes exhibiting similar expression profiles, but focuses on global patterns denoting rather broad, unspecific responses. Biclustering reveals local patterns, which more naturally capture the intricate collaboration between biological players, particularly under a temporal setting. Despite the general biclustering formulation being NP-hard, considering specific properties of time series has led to efficient solutions for the discovery of temporally aligned patterns. Notably, the identification of biclusters with time-lagged patterns, suggestive of transcriptional cascades, remains a challenge due to the combinatorial explosion of delayed occurrences. Herein, we propose LateBiclustering, a sensible heuristic algorithm enabling a polynomial-time rather than exponential-time solution for the problem. We show that it identifies meaningful time-lagged biclusters relevant to the response of Saccharomyces cerevisiae to heat stress. PMID:26356854

  11. Time integration algorithms for the two-dimensional Euler equations on unstructured meshes

    NASA Technical Reports Server (NTRS)

    Slack, David C.; Whitaker, D. L.; Walters, Robert W.

    1994-01-01

    Explicit and implicit time integration algorithms for the two-dimensional Euler equations on unstructured grids are presented. Both cell-centered and cell-vertex finite volume upwind schemes utilizing Roe's approximate Riemann solver are developed. For the cell-vertex scheme, a four-stage Runge-Kutta time integration, a four-stage Runge-Kutta time integration with implicit residual averaging, a point Jacobi method, a symmetric point Gauss-Seidel method, and two methods utilizing preconditioned sparse matrix solvers are presented. For the cell-centered scheme, a Runge-Kutta scheme, an implicit tridiagonal relaxation scheme modeled after line Gauss-Seidel, a fully implicit lower-upper (LU) decomposition, and a hybrid scheme utilizing both Runge-Kutta and LU methods are presented. A reverse Cuthill-McKee renumbering scheme is employed for the direct solver to decrease CPU time by reducing the fill of the Jacobian matrix. A comparison of the various time integration schemes is made for both first-order and higher-order accurate solutions using several mesh sizes; higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The results obtained for a transonic flow over a circular arc suggest that the preconditioned sparse matrix solvers perform better than the other methods as the number of elements in the mesh increases.

  12. The development of a near-real time hail damage swath identification algorithm for vegetation

    NASA Astrophysics Data System (ADS)

    Bell, Jordan R.

    The central United States is primarily covered by agricultural lands, with a growing season that peaks at the same time as the region's climatological maximum for severe weather. These severe thunderstorms can bring large hail that causes extensive areas of crop damage, which can be difficult to survey from the ground. Satellite remote sensing can help with the identification of these damaged areas. This study examined three techniques for identifying damage in satellite imagery that could be used in the development of a near-real-time algorithm for detecting hail damage to agriculture. The three techniques, a short-term Normalized Difference Vegetation Index (NDVI) change product, a modified Vegetation Health Index (mVHI) that incorporates both NDVI and land surface temperature (LST), and a feature-detection technique based on NDVI and LST anomalies, were tested on a single training case and five case studies. Skill scores were computed for each technique on the training case and each case study. Among the best-performing case studies, the probability of detection (POD) for the techniques ranged from 0.527 to 0.742. Greater skill was noted for environments later in the growing season over areas where the land cover was consistently one or two types of uniform vegetation. The techniques struggled in environments where the land cover did not provide uniform vegetation, resulting in PODs of 0.067 to 0.223. The feature-detection technique was selected for the near-real-time algorithm, based on its consistent performance throughout the entire growing season.
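
    For reference, the quoted skill scores follow the standard 2x2 contingency-table definitions, sketched below; the example counts are invented for illustration.

    ```python
    # Standard verification scores from a 2x2 contingency table of detections.
    def skill_scores(hits, misses, false_alarms):
        pod = hits / (hits + misses)                # probability of detection
        far = false_alarms / (hits + false_alarms)  # false alarm ratio
        csi = hits / (hits + misses + false_alarms) # critical success index
        return pod, far, csi

    # Illustrative counts only (not from the study).
    pod, far, csi = skill_scores(hits=230, misses=80, false_alarms=45)
    print(f"POD={pod:.3f} FAR={far:.3f} CSI={csi:.3f}")   # POD=0.742 ...
    ```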

  13. Decomposition Algorithm for Global Reachability on a Time-Varying Graph

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2010-01-01

    A decomposition algorithm has been developed for global reachability analysis on a space-time grid. By exploiting the upper block-triangular structure, the planning problem is decomposed into smaller subproblems, which is much more scalable than the original approach. Recent studies have proposed the use of a hot-air (Montgolfier) balloon for possible exploration of Titan and Venus, because these bodies have thick haze or cloud layers that limit the science return from an orbiter, and their atmospheres would provide enough buoyancy for balloons. One of the important questions that needs to be addressed is what surface locations the balloon can reach from an initial location, and how long it would take. This is referred to as the global reachability problem, where the paths from starting locations to all possible target locations must be computed. The balloon could be driven with its own actuation, but its actuation capability is fairly limited. It would be more efficient to take advantage of the wind field and ride the wind, which is much stronger than what the actuator could produce. It is possible to pose the path planning problem as a graph search problem on a directed graph by discretizing the space-time world and the vehicle actuation. The decomposition algorithm provides reachability analysis of a time-varying graph. Because the balloon only moves in the positive direction in time, the adjacency matrix of the graph can be represented as an upper block-triangular matrix, and this structure can be exploited to decompose a large graph search problem. The new approach consumes a much smaller amount of memory, which also helps speed up the overall computation when the computing resource has limited physical memory compared to the problem size.
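
    Below is a minimal sketch of the layer-by-layer sweep that this upper block-triangular structure permits, assuming the wind-field transitions have already been discretized into per-layer edge lists; the tiny example graph is invented for illustration.

    ```python
    # Reachability on a time-expanded graph: nodes are (time, cell) pairs and
    # edges only point forward in time, so a single sweep over the time layers
    # suffices (the block-triangular analogue of forward substitution).
    def earliest_arrival(layers, edges, start_cell):
        # edges[t] maps a cell at time t to the cells reachable at time t+1.
        reached = {start_cell}
        arrival = {start_cell: 0}
        for t in range(layers - 1):
            nxt = set()
            for cell in reached:
                for dest in edges[t].get(cell, ()):
                    if dest not in arrival:
                        arrival[dest] = t + 1
                    nxt.add(dest)
            reached |= nxt   # cells reached earlier persist (balloon may loiter)
        return arrival

    # Tiny wind-field example: 3 time layers, 4 surface cells.
    edges = [
        {0: [1], 1: [2]},       # transitions from t=0 to t=1
        {1: [2, 3], 2: [3]},    # transitions from t=1 to t=2
    ]
    print(earliest_arrival(3, edges, start_cell=0))  # {0: 0, 1: 1, 2: 2, 3: 2}
    ```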

  14. Edgelist phase unwrapping algorithm for time series InSAR analysis.

    PubMed

    Shanker, A Piyush; Zebker, Howard

    2010-03-01

    We present here a new integer programming formulation for phase unwrapping of multidimensional data. Phase unwrapping is a key problem in many coherent imaging systems, including time series synthetic aperture radar interferometry (InSAR), with two spatial and one temporal data dimensions. The minimum cost flow (MCF) [IEEE Trans. Geosci. Remote Sens. 36, 813 (1998)] phase unwrapping algorithm describes a global cost minimization problem involving flow between phase residues computed over closed loops. Here we replace closed loops with reliable edges as the basic construct, thus leading to the name "edgelist." Our algorithm has several advantages over current methods: it simplifies the representation of multidimensional phase unwrapping; it incorporates data from external sources, such as GPS, where available, to better constrain the unwrapped solution; and it treats regularly sampled and sparsely sampled data alike. It is thus particularly applicable to time series InSAR, where data are often irregularly spaced in time and individual interferograms can be corrupted with large decorrelated regions. We show that, like the MCF network problem, the edgelist formulation also exhibits total unimodularity, which enables us to solve the integer program by using efficient linear programming tools. We apply our method to a persistent scatterer InSAR dataset from the creeping section of the Central San Andreas Fault and find that the average creep rate of 22 mm/yr is constant within 3 mm/yr over 1992-2004 but varies systematically with ground location, with a slightly higher rate in 1992-1998 than in 1999-2003. PMID:20208954

  15. Framework and algorithms for illustrative visualizations of time-varying flows on unstructured meshes

    DOE PAGESBeta

    Rattner, Alexander S.; Guillen, Donna Post; Joshi, Alark; Garimella, Srinivas

    2016-03-17

    Photo- and physically realistic techniques are often insufficient for visualization of fluid flow simulations, especially for 3D and time-varying studies. Substantial research effort has been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. However, a great deal of work has been reproduced in this field, as many research groups have developed specialized visualization software. Additionally, interoperability between illustrative visualization software is limited due to diverse processing and rendering architectures employed in different studies. In this investigation, a framework for illustrative visualization is proposed, and implemented in MarmotViz, a ParaView plug-in, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Region-of-interest identification and feature-tracking algorithms incorporated into this tool are described. Implementations of multiple illustrative effect algorithms are also presented to demonstrate the use and flexibility of this framework. Here, by providing an integrated framework for illustrative visualization of CFD data, MarmotViz can serve as a valuable asset for the interpretation of simulations of ever-growing scale.

  16. Polynomial-time algorithms for the integer minimal principle for centrosymmetric structures.

    PubMed

    Vaia, Anastasia; Sahinidis, Nikolaos V

    2005-07-01

    The minimal principle for structure determination from single-crystal X-ray diffraction measurements has recently been formulated as an integer linear optimization model for the case of centrosymmetric structures. Solution of this model via established combinatorial branch-and-bound algorithms provides the true global minimum of the minimal principle while operating exclusively in reciprocal space. However, integer programming techniques may require an exponential number of iterations to exhaust the search space. In this paper, a new approach is developed to solve the integer minimal principle to global optimality without requiring the solution of an optimization problem. Instead, properties of the solution of the optimization problem, as observed in a large number of computational experiments, are exploited in order to reduce the optimization formulation to a system of linear equations in the field of two elements, F2. Two specialized Gaussian elimination algorithms are then developed to solve this system of equations in polynomial time in the number of atoms. Computational results on a collection of 38 structures demonstrate that the proposed approach provides very fast and accurate solutions to the phase problem for centrosymmetric structures. This approach also provided much better crystallographic R values than SHELXS for all 38 structures tested. PMID:15972998
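
    The reduction replaces branch-and-bound with linear algebra over the field of two elements. Below is a minimal sketch of Gaussian elimination in F2 (not the paper's two specialized variants), with rows stored as Python integers used as bit vectors.

    ```python
    # Gaussian elimination over F2: XOR is addition, so row operations are
    # bitwise. Returns one solution with free variables set to 0.
    def solve_gf2(rows, rhs, ncols):
        system = [(r, b) for r, b in zip(rows, rhs)]
        pivots = {}
        for col in reversed(range(ncols)):
            mask = 1 << col
            pivot = next((i for i, (r, _) in enumerate(system)
                          if (r & mask) and i not in pivots.values()), None)
            if pivot is None:
                continue                      # free variable in this column
            pivots[col] = pivot
            pr, pb = system[pivot]
            for i, (r, b) in enumerate(system):
                if i != pivot and (r & mask):
                    system[i] = (r ^ pr, b ^ pb)   # eliminate the pivot bit
        x = 0
        for col, i in pivots.items():
            if system[i][1]:
                x |= 1 << col
        return x

    rows = [0b011, 0b110, 0b101]   # x0+x1 = 1, x1+x2 = 1, x0+x2 = 0
    x = solve_gf2(rows, [1, 1, 0], ncols=3)
    print([(x >> i) & 1 for i in range(3)])   # one solution: [0, 1, 0]
    ```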

  17. Generalized Framework and Algorithms for Illustrative Visualization of Time-Varying Data on Unstructured Meshes

    SciTech Connect

    Alexander S. Rattner; Donna Post Guillen; Alark Joshi

    2012-12-01

    Photo- and physically-realistic techniques are often insufficient for visualization of simulation results, especially for 3D and time-varying datasets. Substantial research efforts have been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. While these efforts have yielded valuable visualization results, a great deal of work has been reproduced across studies, as individual research groups often develop purpose-built platforms. Additionally, interoperability between illustrative visualization software is limited due to specialized processing and rendering architectures employed in different studies. In this investigation, a generalized framework for illustrative visualization is proposed, and implemented in marmotViz, a ParaView plugin, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Detailed descriptions of the region-of-interest identification and feature-tracking algorithms incorporated into this tool are provided. Additionally, implementations of multiple illustrative effect algorithms are presented to demonstrate the use and flexibility of this framework. By providing a framework and useful underlying functionality, the marmotViz tool can act as a springboard for future research in the field of illustrative visualization.

  18. A new time dependent density functional algorithm for large systems and plasmons in metal clusters

    SciTech Connect

    Baseggio, Oscar; Fronzoni, Giovanna; Stener, Mauro

    2015-07-14

    A new algorithm to solve the Time Dependent Density Functional Theory (TDDFT) equations in the space of the density fitting auxiliary basis set has been developed and implemented. The method extracts the spectrum from the imaginary part of the polarizability at any given photon energy, avoiding the bottleneck of Davidson diagonalization. The key idea that makes the present scheme very efficient is the simplification of the double sum over occupied-virtual pairs in the definition of the dielectric susceptibility, which allows this matrix to be calculated easily as a linear combination of constant matrices with photon-energy-dependent coefficients. The method has been applied to systems very different in nature and size (from H2 to [Au147]−). In all cases, the maximum deviations found for the excitation energies with respect to the Amsterdam Density Functional code are below 0.2 eV. The new algorithm has the merit not only of calculating the spectrum at any photon energy, but also of allowing a deep analysis of the results, in terms of transition contribution maps, the Jacob plasmon scaling factor, and induced density analysis, all of which have been implemented.

  19. Refinement and evaluation of helicopter real-time self-adaptive active vibration controller algorithms

    NASA Technical Reports Server (NTRS)

    Davis, M. W.

    1984-01-01

    A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that give the capability to readily define many different configurations by selecting one of three controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA), which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.

  20. Algorithms for near real-time detection of gas leaks from buried pipelines using hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Hoffmann, G. D.; Silver, E. A.; Pickles, W.; Male, E.

    2009-12-01

    Gas leaks from buried pipelines can directly impact the health of overlying vegetation, producing patches of highly stressed or dead plants. Plant health can be assessed remotely by measuring the depth of the chlorophyll absorption, located between 550 nm and 700 nm in reflectance imagery. Chlorophyll absorption is readily recognizable in multispectral and hyperspectral imagery as a strong absorption band centered on red light (typically 680 nm wavelength). We have examined several methods of measuring chlorophyll absorption with the goal of automating vegetation stress detection above underground pipelines, in order to facilitate same-day detection of potential pipeline leak locations. One method, in which we measure vegetation stress as the ratio of the measured reflectance at peak absorption to the spectral continuum, was particularly successful. We compare the results of this measurement with a manual analysis of 0.18 m resolution imagery of several controlled CO2 leaks, finding the automatic analysis to be robust. High spatial resolution is shown to greatly increase the quality of the results; however, we show that this method works even in 3 m resolution imagery of an underground pipeline methane leak. The algorithm runs very quickly on large images. We are developing the image analysis algorithm to operate in real time while flying buried pipeline rights of way with hyperspectral sensors.
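
    The continuum-ratio measurement is simple to express, as sketched below; the shoulder wavelengths (550 and 800 nm) and the example spectra are illustrative stand-ins, not the study's calibrated values.

    ```python
    # Band-depth ratio: reflectance at the chlorophyll absorption (~680 nm)
    # divided by a linear continuum drawn between two shoulder wavelengths.
    import numpy as np

    def absorption_ratio(wavelengths, reflectance,
                         left=550.0, right=800.0, center=680.0):
        r = np.interp([left, center, right], wavelengths, reflectance)
        frac = (center - left) / (right - left)
        continuum = r[0] + frac * (r[2] - r[0])  # continuum value at 680 nm
        return r[1] / continuum                  # near 1 = stressed, <1 = healthy

    # Healthy vegetation shows a deep red absorption; stressed canopies flatten.
    wl = np.array([550, 600, 650, 680, 720, 760, 800], dtype=float)
    healthy = np.array([0.12, 0.09, 0.06, 0.04, 0.30, 0.45, 0.50])
    stressed = np.array([0.15, 0.14, 0.13, 0.12, 0.20, 0.28, 0.32])
    print(absorption_ratio(wl, healthy), absorption_ratio(wl, stressed))
    ```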

  1. An asymptotic-preserving Lagrangian algorithm for the time-dependent anisotropic heat transport equation

    SciTech Connect

    Chacon, Luis; del-Castillo-Negrete, Diego; Hauck, Cory D.

    2014-09-01

    We propose a Lagrangian numerical algorithm for a time-dependent, anisotropic temperature transport equation in magnetized plasmas in the large guide field regime. The approach is based on an analytical integral formal solution of the parallel (i.e., along the magnetic field) transport equation with sources, and it is able to accommodate both local and non-local parallel heat flux closures. The numerical implementation is based on an operator-split formulation, with two straightforward steps: a perpendicular transport step (including sources), and a Lagrangian (field-line integral) parallel transport step. Algorithmically, the first step is amenable to the use of modern iterative methods, while the second step has a fixed cost per degree of freedom (and is therefore scalable). Accuracy-wise, the approach is free from the numerical pollution introduced by the discrete parallel transport term when the perpendicular-to-parallel transport coefficient ratio χ⊥/χ∥ becomes arbitrarily small, and is shown to capture the correct limiting solution when ε = χ⊥L∥²/(χ∥L⊥²) → 0 (with L∥ and L⊥ the parallel and perpendicular diffusion length scales, respectively). Therefore, the approach is asymptotic-preserving. We demonstrate the capabilities of the scheme with several numerical experiments with varying magnetic field complexity in two dimensions, including the case of transport across a magnetic island.

  2. A sensitive algorithm for automatic detection of space-time alternating signals in cardiac tissue

    PubMed Central

    Bien, Harold; Entcheva, Emilia

    2011-01-01

    Alternans, a beat-to-beat alternation in cardiac signals, may serve as a precursor to lethal cardiac arrhythmias, including ventricular tachycardia and ventricular fibrillation. Therefore, alternans is a desirable target of early arrhythmia prediction/detection. For long-term records and in the presence of noise, the definition of alternans is qualitative and ambiguous. This makes their automatic detection in large spatiotemporal data sets almost impossible. We present here a quantitative combinatorics-derived definition of alternans in the presence of random noise and a novel algorithm for automatic alternans detection using criteria like temporal persistence (TP), representative phase (RP) and alternans ratio (AR). This technique is validated by comparison to theoretically-derived probabilities and by test data sets with white noise. Finally, the algorithm is applied to ultra-high resolution optical mapping data from cultured cell monolayers, exhibiting calcium alternans. Early fine-scale alternans, close to the noise level, were revealed and linked to the later formation of larger regions and evolution of spatially discordant alternans (SDA). This robust new technique can be useful in quantification and better understanding of the onset of arrhythmias and in general analysis of space-time alternating signals. PMID:19162616

  3. A generic probability based algorithm to derive regional patterns of crops in time and space

    NASA Astrophysics Data System (ADS)

    Wattenbach, Martin; Oijen, Marcel v.; Leip, Adrian; Hutchings, Nick; Balkovic, Juraj; Smith, Pete

    2013-04-01

    Croplands are not only key to the human food supply; they also change the biophysical and biogeochemical properties of the land surface, leading to changes in the water cycle and energy partitioning, influencing soil erosion, and contributing substantially to the amount of greenhouse gases entering the atmosphere. The effects of croplands on the environment depend on the type of crop and the associated management, both of which are related to site conditions, economic boundary settings, and the preferences of individual farmers. However, at a given point in time the pattern of crops in a landscape is determined not only by environmental and socioeconomic conditions but also by compatibility with the crops grown in previous years on the current field and its surrounding cropping area. Crop compatibility is driven by factors such as pests and diseases, crop-driven changes in soil structure, and the timing of cultivation steps. Given these effects of crops on the biogeochemical cycle and their interdependence with the boundary conditions mentioned above, there is a demand in the regional and global modelling community to account for these regional patterns. Here we present a Bayesian crop distribution generator algorithm that calculates the combined and conditional probability for a crop to appear in time and space using sparse and disparate information. The input information used to define the most probable crop per year and grid cell is based on combined probabilities derived from a crop transition matrix representing good agricultural practice, crop-specific soil suitability derived from the European soil database, and statistical information about harvested area from the Eurostat database. The reported Eurostat crop area also provides the target proportion to be matched by the algorithm at the level of administrative units (Nomenclature des Unités Territoriales Statistiques, NUTS). The algorithm is applied to the EU27 to derive regional spatial and
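
    The combination step lends itself to a compact sketch. Below, the conditional probability of a crop in a cell is taken proportional to the product of a rotation-transition probability and a soil-suitability score; all crop names, matrix entries, and suitability values are invented for illustration, and the actual algorithm additionally matches draws to the reported Eurostat area shares.

    ```python
    # Combine rotation and soil probabilities into a per-cell crop draw.
    import numpy as np

    CROPS = ["wheat", "maize", "rapeseed"]
    ROTATION = {                     # P(next crop | previous crop), rows sum to 1
        "wheat":    [0.2, 0.4, 0.4],
        "maize":    [0.5, 0.2, 0.3],
        "rapeseed": [0.6, 0.3, 0.1],
    }

    def draw_crop(prev_crop, soil_suitability, rng):
        p = np.array(ROTATION[prev_crop]) * np.array(soil_suitability)
        p /= p.sum()                 # conditional probability for this cell
        return CROPS[rng.choice(len(CROPS), p=p)]

    rng = np.random.default_rng(42)
    soil = [0.9, 0.6, 0.8]           # per-crop suitability of one grid cell
    print(draw_crop("wheat", soil, rng))
    ```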

  4. Comparison and calibration of a real-time virtual stenting algorithm using Finite Element Analysis and Genetic Algorithms

    PubMed Central

    Spranger, K.; Capelli, C.; Bosi, G.M.; Schievano, S.; Ventikos, Y.

    2015-01-01

    In this paper, we perform a comparative analysis of two computational methods for virtual stent deployment: a novel fast virtual stenting method based on a spring–mass model is compared with detailed finite element analysis in a sequence of in silico experiments. Given the results of the initial comparison, we present a way to optimise the fast method by calibrating a set of parameters with the help of a genetic algorithm, which uses the outcomes of the finite element analysis as a learning reference. As a result of the calibration phase, we were able to substantially reduce the force measure discrepancy between the two methods and validate the fast stenting method by assessing the differences in the final device configurations. PMID:26664007

  5. Real-time dynamics simulation of the Cassini spacecraft using DARTS. Part 1: Functional capabilities and the spatial algebra algorithm

    NASA Technical Reports Server (NTRS)

    Jain, A.; Man, G. K.

    1993-01-01

    This paper describes the Dynamics Algorithms for Real-Time Simulation (DARTS) real-time hardware-in-the-loop dynamics simulator for the National Aeronautics and Space Administration's Cassini spacecraft. The spacecraft model consists of a central flexible body with a number of articulated rigid-body appendages. The demanding performance requirements from the spacecraft control system require the use of a high fidelity simulator for control system design and testing. The DARTS algorithm provides a new algorithmic and hardware approach to the solution of this hardware-in-the-loop simulation problem. It is based upon the efficient spatial algebra dynamics for flexible multibody systems. A parallel and vectorized version of this algorithm is implemented on a low-cost, multiprocessor computer to meet the simulation timing requirements.

  6. Time-domain filtered-x-Newton narrowband algorithms for active isolation of frequency-fluctuating vibration

    NASA Astrophysics Data System (ADS)

    Li, Yan; He, Lin; Shuai, Chang-geng; Wang, Fei

    2016-04-01

    A time-domain filtered-x Newton narrowband algorithm (the Fx-Newton algorithm) is proposed to address three major problems in active isolation of machinery vibration: multiple narrowband components, MIMO coupling, and amplitude and frequency fluctuations. In this algorithm, narrowband components are extracted by narrowband-pass filters (NBPF) and independently controlled by multiple controllers, and fast convergence of the control algorithm is achieved by inverse secondary-path filtering of the extracted sinusoidal reference signal and its orthogonal component using L×L second-order filters in the time domain. Controller adaptation and control signal generation are also implemented in the time domain, to ensure good real-time performance. The phase shift caused by the narrowband filter is compensated online to improve the robustness of the control system to frequency fluctuations. A double-reference Fx-Newton algorithm is also proposed to control double sinusoids in the same frequency band, under the precondition of acquiring two independent reference signals. Experiments are conducted with a MIMO single-deck vibration isolation system on which a 200 kW ship diesel generator is mounted, and the algorithms are tested on vibration excited alternately by the diesel generator and by inertial shakers. The results of control over sinusoidal vibration excited by inertial shakers suggest that the Fx-Newton algorithm with NBPF has a much faster convergence rate and better attenuation than the Fx-LMS algorithm. For swept, frequency-jumping, double, double frequency-swept, and double frequency-jumping sinusoidal vibration, and for multiple high-level harmonics in broadband vibration excited by the diesel generator, the proposed algorithms also demonstrate large vibration suppression at a fast convergence rate, and good robustness to vibration with frequency fluctuations.

  7. Wireless transmission of neuronal recordings using a portable real-time discrimination/compression algorithm.

    PubMed

    Goh, Aik; Craciun, Stefan; Rao, Sudhir; Cheney, David; Gugel, Karl; Sanchez, Justin C; Principe, Jose C

    2008-01-01

    A design challenge of portable wireless neural recording systems is the tradeoff between bandwidth and power consumption. This paper investigates the compression of neuronal recordings in real time using a novel discriminating Linde-Buzo-Gray algorithm (DLBG) that preserves spike shapes while filtering background noise. The technique is implemented in a low-power digital signal processor (DSP) which is capable of wirelessly transmitting raw neuronal recordings. Depending on the signal-to-noise ratio of the recording, the compression ratio can be tailored to the data to maximally preserve power and bandwidth. The approach was tested on real and synthetic data and achieved compression ratios between 184:1 and 10:1. PMID:19163699
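
    The discriminating variant (DLBG) is not spelled out in the abstract, but its starting point, the classic Linde-Buzo-Gray vector quantizer, is standard. Below is a minimal Python sketch of LBG codebook training by codeword splitting plus Lloyd iterations; the spike/noise discrimination step is omitted and all names and sizes are illustrative assumptions, not the paper's implementation.

    ```python
    # Sketch of classic LBG codebook training (split + Lloyd iterations).
    # The paper's discriminating step (spike vs. noise codewords) is omitted.
    import numpy as np

    def lbg_codebook(vectors, n_codewords, n_iter=20, eps=1e-3):
        codebook = vectors.mean(axis=0, keepdims=True)
        while codebook.shape[0] < n_codewords:
            # Split every codeword into a perturbed pair, then re-optimize.
            codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
            for _ in range(n_iter):
                # Nearest-codeword assignment (squared Euclidean distance).
                d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
                nearest = d.argmin(axis=1)
                for k in range(codebook.shape[0]):
                    members = vectors[nearest == k]
                    if len(members):
                        codebook[k] = members.mean(axis=0)
        return codebook

    # Compression: transmit one codeword index per window of samples.
    rng = np.random.default_rng(0)
    windows = rng.normal(size=(1000, 16))       # e.g. 16-sample waveform windows
    cb = lbg_codebook(windows, n_codewords=8)   # 16 floats -> one 3-bit index
    idx = ((windows[:, None, :] - cb[None]) ** 2).sum(-1).argmin(axis=1)
    ```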

  8. A time-accurate algorithm for chemical non-equilibrium viscous flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, J.-S.; Chen, K.-H.; Choi, Y.

    1992-01-01

    A time-accurate, coupled solution procedure is described for the chemical nonequilibrium Navier-Stokes equations over a wide range of Mach numbers. This method employs the strong conservation form of the governing equations, but uses primitive variables as unknowns. Real gas properties and equilibrium chemistry are considered. Numerical tests include steady convergent-divergent nozzle flows with air dissociation/recombination chemistry, dump combustor flows with n-pentane-air chemistry, nonreacting flow in a model double annular combustor, and nonreacting unsteady driven cavity flows. Numerical results for both the steady and unsteady flows demonstrate the efficiency and robustness of the present algorithm for Mach numbers ranging from the incompressible limit to supersonic speeds.

  9. Implementation of a Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.

  10. Program for the analysis of time series. [by means of fast Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Brown, T. J.; Brown, C. G.; Hardin, J. C.

    1974-01-01

    A digital computer program for the Fourier analysis of discrete time data is described. The program was designed to handle multiple channels of digitized data on general purpose computer systems. It is written, primarily, in a version of FORTRAN 2 currently in use on CDC 6000 series computers. Some small portions are written in CDC COMPASS, an assembler level code. However, functional descriptions of these portions are provided so that the program may be adapted for use on any facility possessing a FORTRAN compiler and random-access capability. Properly formatted digital data are windowed and analyzed by means of a fast Fourier transform algorithm to generate the following functions: (1) auto and/or cross power spectra, (2) autocorrelations and/or cross correlations, (3) Fourier coefficients, (4) coherence functions, (5) transfer functions, and (6) histograms.
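
    As a modern stand-in for the report's FORTRAN processing chain, the sketch below windows the data, applies an FFT, and derives two of the listed outputs: the auto power spectrum and the autocorrelation (via the Wiener-Khinchin theorem). The scaling conventions are illustrative, not those of the original program.

    ```python
    # Windowed FFT analysis of one channel: auto power spectrum plus
    # autocorrelation via the Wiener-Khinchin theorem (illustrative scaling).
    import numpy as np

    def auto_spectrum(x, fs):
        n = len(x)
        w = np.hanning(n)                        # data window
        X = np.fft.rfft(x * w)
        psd = (np.abs(X) ** 2) / (fs * (w ** 2).sum())
        psd[1:-1] *= 2                           # fold in negative frequencies
        return np.fft.rfftfreq(n, d=1.0 / fs), psd

    def autocorrelation(x):
        n = len(x)
        X = np.fft.fft(x - x.mean(), 2 * n)      # zero-pad to avoid wrap-around
        r = np.fft.ifft(np.abs(X) ** 2).real[:n]
        return r / r[0]                          # normalized, lags 0..n-1

    fs = 1000.0
    t = np.arange(2048) / fs
    x = np.sin(2 * np.pi * 60 * t) + 0.5 * np.random.randn(t.size)
    freqs, psd = auto_spectrum(x, fs)            # peak near 60 Hz
    ```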

  11. A New MANET wormhole detection algorithm based on traversal time and hop count analysis.

    PubMed

    Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran

    2011-01-01

    As demand increases for ubiquitous network facilities, infrastructure-less and self-configuring systems like Mobile Ad hoc Networks (MANET) are gaining popularity. MANET routing security however, is one of the most significant challenges to wide scale adoption, with wormhole attacks being an especially severe MANET routing threat. This is because wormholes are able to disrupt a major component of network traffic, while concomitantly being extremely difficult to detect. This paper introduces a new wormhole detection paradigm based upon Traversal Time and Hop Count Analysis (TTHCA), which in comparison to existing algorithms, consistently affords superior detection performance, allied with low false positive rates for all wormhole variants. Simulation results confirm that the TTHCA model exhibits robust wormhole route detection in various network scenarios, while incurring only a small network overhead. This feature makes TTHCA an attractive choice for MANET environments which generally comprise devices, such as wireless sensors, which possess a limited processing capability. PMID:22247657
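
    The core of the TTHCA test can be stated compactly: a wormhole stretches a route's physical length without raising its advertised hop count, so the measured per-hop traversal time becomes anomalously large. The sketch below is a minimal illustration of that check; the RTT bookkeeping, the propagation bound and the margin are assumptions, not the paper's exact model.

    ```python
    # Per-hop traversal-time test: a wormhole adds physical distance without
    # adding hops, inflating the per-hop time. Bound and margin are assumed.

    def per_hop_time(rtt_s, processing_delay_s, hop_count):
        """Average one-way per-hop traversal time along a route."""
        return (rtt_s - processing_delay_s) / (2.0 * hop_count)

    def wormhole_suspected(rtt_s, processing_delay_s, hop_count,
                           max_range_m=250.0, c=3.0e8, margin=4.0):
        # A genuine hop cannot exceed the radio range; the margin absorbs
        # residual MAC/queueing jitter not removed with processing_delay_s.
        max_legit = max_range_m / c * margin
        return per_hop_time(rtt_s, processing_delay_s, hop_count) > max_legit
    ```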

  12. A New MANET Wormhole Detection Algorithm Based on Traversal Time and Hop Count Analysis

    PubMed Central

    Karlsson, Jonny; Dooley, Laurence S.; Pulkkis, Göran

    2011-01-01

    As demand increases for ubiquitous network facilities, infrastructure-less and self-configuring systems like Mobile Ad hoc Networks (MANET) are gaining popularity. MANET routing security however, is one of the most significant challenges to wide scale adoption, with wormhole attacks being an especially severe MANET routing threat. This is because wormholes are able to disrupt a major component of network traffic, while concomitantly being extremely difficult to detect. This paper introduces a new wormhole detection paradigm based upon Traversal Time and Hop Count Analysis (TTHCA), which in comparison to existing algorithms, consistently affords superior detection performance, allied with low false positive rates for all wormhole variants. Simulation results confirm that the TTHCA model exhibits robust wormhole route detection in various network scenarios, while incurring only a small network overhead. This feature makes TTHCA an attractive choice for MANET environments which generally comprise devices, such as wireless sensors, which possess a limited processing capability. PMID:22247657

  13. Real-time infrared gas detection based on an adaptive Savitzky-Golay algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jingsong; Deng, Hao; Li, Pengfei; Yu, Benli

    2015-08-01

    Based on the Savitzky-Golay filter, we have developed in the present study a simple but robust method for real-time processing of tunable diode laser absorption spectroscopy (TDLAS) signals. The method was developed to remove the blind selection of input filter parameters and to mitigate the signal distortion that digital signal processing can induce. Application of the developed adaptive Savitzky-Golay filter algorithm to simulated and experimentally observed signals, and comparison with a wavelet-based de-noising technique, indicate that the newly developed method is effective in obtaining high-quality TDLAS data for a wide variety of applications, including atmospheric environmental monitoring and industrial process control.
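
    A minimal sketch of an adaptively parameterized Savitzky-Golay smoother in the spirit of the abstract: scan candidate window lengths and keep the largest one whose residual still looks like white noise. The whiteness criterion (lag-1 autocorrelation) and all thresholds are our assumptions; the authors' selection rule is not reproduced here.

    ```python
    # Adaptive Savitzky-Golay smoothing: grow the window while the residual
    # still looks like white noise (lag-1 autocorrelation test, an assumption).
    import numpy as np
    from scipy.signal import savgol_filter

    def adaptive_savgol(y, polyorder=3, windows=range(5, 81, 2), rho_max=0.1):
        best = savgol_filter(y, window_length=5, polyorder=polyorder)
        for w in windows:
            if w <= polyorder + 1 or w > len(y):
                continue
            smooth = savgol_filter(y, window_length=w, polyorder=polyorder)
            r = y - smooth
            r = r - r.mean()
            denom = float((r ** 2).sum()) or 1.0
            rho1 = float((r[:-1] * r[1:]).sum()) / denom
            if abs(rho1) < rho_max:   # residual still white: no distortion yet
                best = smooth         # keep the largest window that passes
        return best
    ```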

  14. Note: Ultrasonic gas flowmeter based on optimized time-of-flight algorithms

    NASA Astrophysics Data System (ADS)

    Wang, X. F.; Tang, Z. A.

    2011-04-01

    A new digital-signal-processor-based single-path ultrasonic gas flowmeter is designed, constructed, and experimentally tested. To achieve high-accuracy measurements, an optimized ultrasound drive method, combining amplitude modulation and phase modulation of the transmit-receive technique, is used to excite the transmitter. Based on regularities among the zero-crossings of the received envelope, different signal-to-noise-ratio situations of the received signal are discriminated and the appropriate time-of-flight algorithm is applied to the flow-rate calculation. Experimental results from the dry calibration indicate that the designed flowmeter prototype meets the zero-flow verification test requirements of the American Gas Association Report No. 9. Furthermore, the results of the flow calibration show that the proposed flowmeter prototype measures flow rate accurately in practical experiments, and the nominal errors after FWME adjustment are below 0.8% throughout the calibration range.

  15. Note: ultrasonic gas flowmeter based on optimized time-of-flight algorithms.

    PubMed

    Wang, X F; Tang, Z A

    2011-04-01

    A new digital-signal-processor-based single-path ultrasonic gas flowmeter is designed, constructed, and experimentally tested. To achieve high-accuracy measurements, an optimized ultrasound drive method, combining amplitude modulation and phase modulation of the transmit-receive technique, is used to excite the transmitter. Based on regularities among the zero-crossings of the received envelope, different signal-to-noise-ratio situations of the received signal are discriminated and the appropriate time-of-flight algorithm is applied to the flow-rate calculation. Experimental results from the dry calibration indicate that the designed flowmeter prototype meets the zero-flow verification test requirements of the American Gas Association Report No. 9. Furthermore, the results of the flow calibration show that the proposed flowmeter prototype measures flow rate accurately in practical experiments, and the nominal errors after FWME adjustment are below 0.8% throughout the calibration range. PMID:21529053
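
    The transit-time relations that any such time-of-flight algorithm ultimately feeds are standard: with path length L, path angle theta to the pipe axis, sound speed c and axial velocity v, the downstream and upstream flight times are t_dn = L/(c + v cos(theta)) and t_up = L/(c - v cos(theta)). The sketch below inverts them; the numbers are illustrative.

    ```python
    # Contrapropagating transit-time relations for a single-path flowmeter:
    # t_dn = L/(c + v*cos(theta)), t_up = L/(c - v*cos(theta)).
    import math

    def flow_velocity(t_dn, t_up, L, theta):
        """Axial velocity; the sound speed c cancels out."""
        return L / (2.0 * math.cos(theta)) * (1.0 / t_dn - 1.0 / t_up)

    def sound_speed(t_dn, t_up, L):
        return L / 2.0 * (1.0 / t_dn + 1.0 / t_up)

    # Illustrative numbers: 0.2 m path at 45 degrees, air at 343 m/s, 5 m/s flow.
    L, theta, c, v = 0.20, math.radians(45.0), 343.0, 5.0
    t_dn = L / (c + v * math.cos(theta))
    t_up = L / (c - v * math.cos(theta))
    print(flow_velocity(t_dn, t_up, L, theta))   # recovers ~5.0 m/s
    ```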

  16. Development of a Multiview Time Domain Imaging Algorithm (MTDI) with a Fermat Correction

    SciTech Connect

    Fisher, K A; Lehman, S K; Chambers, D H

    2004-09-22

    An imaging algorithm is presented based on the standard assumption that the total scattered field can be separated into an elastic component with monopole like dependence and an inertial component with a dipole like dependence. The resulting inversion generates two separate image maps corresponding to the monopole and dipole terms of the forward model. The complexity of imaging flaws and defects in layered elastic media is further compounded by the existence of high contrast gradients in either sound speed and/or density from layer to layer. To compensate for these gradients, we have incorporated Fermat's method of least time into our forward model to determine the appropriate delays between individual source-receiver pairs. Preliminary numerical and experimental results are in good agreement with each other.

  17. An approximation polynomial-time algorithm for a sequence bi-clustering problem

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Khamidullin, S. A.

    2015-06-01

    We consider a strongly NP-hard problem of partitioning a finite sequence of vectors in Euclidean space into two clusters using the criterion of the minimal sum of the squared distances from the elements of the clusters to the centers of the clusters. The center of one of the clusters is to be optimized and is determined as the mean value over all vectors in this cluster. The center of the other cluster is fixed at the origin. Moreover, the partition is such that the difference between the indices of two successive vectors in the first cluster is bounded above and below by prescribed constants. A 2-approximation polynomial-time algorithm is proposed for this problem.
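
    Restated in symbols, the criterion described above can be written as follows (notation is ours, with T_min and T_max the prescribed bounds on differences of successive indices i_k in the first cluster):

    ```latex
    % Criterion from the abstract: C_1 has an optimized center (its mean),
    % C_2 is centered at the origin, and successive indices in C_1 are bounded.
    \min_{\mathcal{C}_1 \,\cup\, \mathcal{C}_2 = \{1,\dots,N\}}
      \sum_{i \in \mathcal{C}_1} \bigl\| x_i - \bar{x}(\mathcal{C}_1) \bigr\|^2
      + \sum_{j \in \mathcal{C}_2} \| x_j \|^2 ,
    \qquad
    \bar{x}(\mathcal{C}_1) = \frac{1}{|\mathcal{C}_1|} \sum_{i \in \mathcal{C}_1} x_i ,
    \qquad
    T_{\min} \le i_{k+1} - i_k \le T_{\max} .
    ```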

  18. Motor Execution Affects Action Prediction

    ERIC Educational Resources Information Center

    Springer, Anne; Brandstadter, Simone; Liepelt, Roman; Birngruber, Teresa; Giese, Martin; Mechsner, Franz; Prinz, Wolfgang

    2011-01-01

    Previous studies provided evidence of the claim that the prediction of occluded action involves real-time simulation. We report two experiments that aimed to study how real-time simulation is affected by simultaneous action execution under conditions of full, partial or no overlap between observed and executed actions. This overlap was analysed by…

  19. Design of SPARC V8 superscalar pipeline applied Tomasulo's algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Xue; Yu, Lixin; Feng, Yunkai

    2014-04-01

    A superscalar pipeline applying Tomasulo's algorithm is presented in this paper. The design begins with a dual-issue superscalar processor based on LEON2. Tomasulo's algorithm is adopted to implement out-of-order execution. Instructions are separated into three different classes and executed by three different function units, so as to reduce area and increase execution speed. Results are still written back into the registers in program order, to ensure functional correctness. The mechanisms of the reservation stations, the common data bus, and the reorder buffer are presented in detail. The structure can issue and execute at most three instructions at a time. Branch prediction can also be realized through the reorder buffer. Performance of the superscalar pipeline applying Tomasulo's algorithm is improved by 41.31% compared to the single-issue pipeline.

  20. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    PubMed Central

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to the stricter recognition constraints. PMID:27579033

  1. A Power Grid Optimization Algorithm by Observing Timing Error Risk by IR Drop

    NASA Astrophysics Data System (ADS)

    Kawakami, Yoshiyuki; Terao, Makoto; Fukui, Masahiro; Tsukiyama, Shuji

    With the advent of the deep-submicron age, circuit performance is strongly impacted by process variations, and the sensitivity of circuit delay to the power-supply voltage keeps increasing as CMOS feature sizes shrink. Power grid optimization that considers the timing-error risk caused by these variations and by IR drop therefore becomes very important for stable, high-speed operation of a system-on-chip. Conventionally, many power grid optimization algorithms have been proposed, and most of them use IR drop as the objective function. However, IR drop is an indirect metric, and we suspect it is too vague a proxy for the real goal of LSI design. In this paper, first, we propose an approach that uses the “timing error risk caused by IR drop” as a direct objective function. Second, a critical path map is introduced to express the critical paths distributed across the entire chip. The timing-error risk is decreased by using the critical path map together with the new objective function. Experimental results show the effectiveness of the approach.

  2. Policy iteration adaptive dynamic programming algorithm for discrete-time nonlinear systems.

    PubMed

    Liu, Derong; Wei, Qinglai

    2014-03-01

    This paper is concerned with a new discrete-time policy iteration adaptive dynamic programming (ADP) method for solving the infinite horizon optimal control problem of nonlinear systems. The idea is to use an iterative ADP technique to obtain the iterative control law, which optimizes the iterative performance index function. The main contribution of this paper is to analyze the convergence and stability properties of policy iteration method for discrete-time nonlinear systems for the first time. It shows that the iterative performance index function is nonincreasingly convergent to the optimal solution of the Hamilton-Jacobi-Bellman equation. It is also proven that any of the iterative control laws can stabilize the nonlinear systems. Neural networks are used to approximate the performance index function and compute the optimal control law, respectively, for facilitating the implementation of the iterative ADP algorithm, where the convergence of the weight matrices is analyzed. Finally, the numerical results and analysis are presented to illustrate the performance of the developed method. PMID:24807455
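
    The iterative structure analyzed in the paper can be made concrete with the classical tabular form of policy iteration, alternating exact policy evaluation and greedy improvement. The finite random MDP below is only an analogy; the paper's setting is nonlinear systems with neural-network approximators.

    ```python
    # Tabular policy iteration: exact evaluation + greedy improvement on a
    # random finite MDP (an analogy for the paper's neural ADP setting).
    import numpy as np

    n_s, n_a, gamma = 4, 2, 0.9
    rng = np.random.default_rng(1)
    P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))  # P[s, a] -> next-state dist
    R = rng.normal(size=(n_s, n_a))                   # expected reward r(s, a)

    policy = np.zeros(n_s, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P_pi = P[np.arange(n_s), policy]
        R_pi = R[np.arange(n_s), policy]
        V = np.linalg.solve(np.eye(n_s) - gamma * P_pi, R_pi)
        # Policy improvement: act greedily w.r.t. the evaluated values.
        Q = R + gamma * P @ V                         # Q[s, a]
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            break                                     # converged to optimum
        policy = new_policy
    print(policy, V)
    ```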

  3. Robust evaluation of time series classification algorithms for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Harvey, Dustin Y.; Worden, Keith; Todd, Michael D.

    2014-03-01

    Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and mechanical infrastructure through analysis of structural response measurements. The supervised learning methodology for data-driven SHM involves computation of low-dimensional, damage-sensitive features from raw measurement data that are then used in conjunction with machine learning algorithms to detect, classify, and quantify damage states. However, these systems often suffer from performance degradation in real-world applications due to varying operational and environmental conditions. Probabilistic approaches to robust SHM system design suffer from incomplete knowledge of all conditions a system will experience over its lifetime. Info-gap decision theory enables nonprobabilistic evaluation of the robustness of competing models and systems in a variety of decision making applications. Previous work employed info-gap models to handle feature uncertainty when selecting various components of a supervised learning system, namely features from a pre-selected family and classifiers. In this work, the info-gap framework is extended to robust feature design and classifier selection for general time series classification through an efficient, interval arithmetic implementation of an info-gap data model. Experimental results are presented for a damage type classification problem on a ball bearing in a rotating machine. The info-gap framework in conjunction with an evolutionary feature design system allows for fully automated design of a time series classifier to meet performance requirements under maximum allowable uncertainty.

  4. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation.

    PubMed

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to the stricter recognition constraints. PMID:27579033

  5. A new finite element formulation for computational fluid dynamics. IX - Fourier analysis of space-time Galerkin/least-squares algorithms

    NASA Technical Reports Server (NTRS)

    Shakib, Farzin; Hughes, Thomas J. R.

    1991-01-01

    A Fourier stability and accuracy analysis of the space-time Galerkin/least-squares method as applied to a time-dependent advective-diffusive model problem is presented. Two time discretizations are studied: a constant-in-time approximation and a linear-in-time approximation. Corresponding space-time predictor multi-corrector algorithms are also derived and studied. The behavior of the space-time algorithms is compared to algorithms based on semidiscrete formulations.

  6. Design of an IRFPA nonuniformity correction algorithm to be implemented as a real-time hardware prototype

    NASA Astrophysics Data System (ADS)

    Fenner, Jonathan W.; Simon, Solomon H.; Eden, Dayton D.

    1994-07-01

    As new IR focal plane array (IRFPA) technologies become available, improved methods for coping with array errors must be developed. Traditional methods of nonuniformity correction using a simple calibration mode are not adequate to compensate for the inherent nonuniformity and 1/f noise in some arrays. In an effort to compensate for nonuniformity in a HgCdTe IRFPA, and to reduce the effects of 1/f noise over a time interval, a new dynamic neural network (NN) based algorithm was implemented. The algorithm compensates for nonuniformities and corrects for 1/f noise. A gradient descent algorithm with nearest-neighbor feedback is used for training, creating a dynamic model of the IRFPA's gains and offsets, then updating and correcting them continuously. Improvements to the NN include implementation on an IBM 486 computer system and a close examination of simulated scenes to test the algorithm's boundaries. Preliminary designs for a real-time hardware prototype have been developed as well. Simulations were run to test the algorithm's ability to correct under a variety of conditions; a wide range of background noise, 1/f noise, object intensities, and background intensities was used. Results indicate that this algorithm can correct efficiently down to the background noise. Our conclusion is that NN-based adaptive algorithms will enhance the effectiveness of IRFPAs.

  7. Phase spectrum algorithm for correction of time distortion in a wavelength demultiplexing analog-to-digital converter

    NASA Astrophysics Data System (ADS)

    Fu, Xin; Zhang, Hongming; Yao, Minyu

    2010-05-01

    An algorithm based on phase spectrum analysis is proposed that can be used to correct the timing distortion between the multiple parallel demultiplexed post-sampling pulse trains in wavelength demultiplexing analog-to-digital converters. The algorithm is theoretically presented and its operational principle is explained. The algorithm is then applied to two parallel demultiplexed post-sampling signals from a proof-of-principle system and fairly good results are obtained. This algorithm is potentially applicable in other opto-electronic hybrid systems where an interleaving and/or multiplexing mechanism is utilized, such as optical time-division multiplexing and optical clock division systems, photonic arbitrary waveform generators, and so on.

  8. A Real-Time Atrial Fibrillation Detection Algorithm Based on the Instantaneous State of Heart Rate

    PubMed Central

    Zhou, Xiaolin; Ding, Hongxia; Wu, Wanqing; Zhang, Yuanting

    2015-01-01

    Atrial fibrillation (AF), the most frequent cause of cardioembolic stroke, is increasing in prevalence as the population ages, and presents with a broad spectrum of symptoms and severity. The early identification of AF is an essential part for preventing the possibility of blood clotting and stroke. In this work, a real-time algorithm is proposed for accurately screening AF episodes in electrocardiograms. This method adopts heart rate sequence, and it involves the application of symbolic dynamics and Shannon entropy. Using novel recursive algorithms, a low-computational complexity can be obtained. Four publicly-accessible sets of clinical data (Long-Term AF, MIT-BIH AF, MIT-BIH Arrhythmia, and MIT-BIH Normal Sinus Rhythm Databases) were used for assessment. The first database was selected as a training set; the receiver operating characteristic (ROC) curve was performed, and the best performance was achieved at the threshold of 0.639: the sensitivity (Se), specificity (Sp), positive predictive value (PPV) and overall accuracy (ACC) were 96.14%, 95.73%, 97.03% and 95.97%, respectively. The other three databases were used for independent testing. Using the obtained decision-making threshold (i.e., 0.639), for the second set, the obtained parameters were 97.37%, 98.44%, 97.89% and 97.99%, respectively; for the third database, these parameters were 97.83%, 87.41%, 47.67% and 88.51%, respectively; the Sp was 99.68% for the fourth set. The latest methods were also employed for comparison. Collectively, results presented in this study indicate that the combination of symbolic dynamics and Shannon entropy yields a potent AF detector, and suggest this method could be of practical use in both clinical and out-of-clinical settings. PMID:26376341
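
    A minimal sketch of the symbolic-dynamics/Shannon-entropy idea: quantize RR intervals into symbols, form short words, and score rhythm irregularity by normalized word entropy. The symbolization, word length and the toy rhythm models below are illustrative assumptions; the paper's exact construction and its low-complexity recursive update are not reproduced.

    ```python
    # Symbolic dynamics + Shannon entropy on RR intervals: AF's irregular
    # rhythm spreads probability over many symbol words, raising the entropy.
    import numpy as np

    def entropy_score(rr, n_symbols=6, word_len=3):
        sym = np.clip((rr / rr.mean() * (n_symbols / 2)).astype(int),
                      0, n_symbols - 1)
        words = np.stack([sym[i:i + word_len]
                          for i in range(len(sym) - word_len + 1)])
        _, counts = np.unique(words, axis=0, return_counts=True)
        p = counts / counts.sum()
        h = -(p * np.log2(p)).sum()
        return h / np.log2(float(n_symbols) ** word_len)  # normalize to [0, 1]

    rng = np.random.default_rng(0)
    nsr = 0.80 + 0.02 * rng.standard_normal(120)   # regular sinus rhythm
    af = 0.80 + 0.20 * rng.standard_normal(120)    # irregularly irregular
    print(entropy_score(nsr), entropy_score(af))   # low vs. high score
    ```

    A detector would compare such a score with a trained threshold (0.639 in the paper, for their specific feature).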

  9. Testing a real-time algorithm for the detection of tsunami signals on sea-level records

    NASA Astrophysics Data System (ADS)

    Bressan, L.; Tinti, S.; Titov, V.

    2009-04-01

    One of the important tasks for the implementation of a tsunami warning system in the Mediterranean Sea is to develop a real-time detection algorithm. Unlike the Mediterranean Sea, tsunamis happen quite often in the Pacific Ocean and have been historically recorded with a proper sampling rate, so a large database of tsunami records is available for the Pacific. The Tsunami Research Team of the University of Bologna is developing a real-time detection algorithm on synthetic records. Thanks to the collaboration with NCTR of PMEL/NOAA (NOAA Center for Tsunami Research of the Pacific Marine Environmental Laboratory/National Oceanic and Atmospheric Administration), it has been possible to test this algorithm on specific events recorded by the Adak Island tide gage in Alaska and by DART buoys located offshore Alaska. This work has been undertaken in the framework of the Italian national project DPC-INGV S3. The detection algorithm has the goal of discriminating the first tsunami wave from the preceding background signal. In short, the algorithm is built on a parameter based on the standard deviation of the signal calculated over a short time window, and on the comparison of this parameter with its computed prediction through a control function. The control function indicates a tsunami detection whenever it exceeds a certain threshold. The algorithm was calibrated and tested both on coastal tide gages and on offshore buoys that measure sea-level changes. Its calibration presents different issues depending on whether the algorithm is implemented on an offshore buoy or on a coastal tide gage. In particular, the algorithm parameters are site-specific for coastal sea-level signals, because sea-level changes there are mainly characterized by oscillations induced by the coastal topography. The Adak Island background signal was analyzed and the algorithm parameters were set: it was found that there is a persistent presence of seiches with periods in the tsunami range, to which the algorithm is also
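
    A minimal sketch of the detection principle just described: compare the short-window standard deviation of the signal with a prediction formed from the preceding background, and raise an alarm when the control function exceeds a threshold. Window lengths and the threshold are placeholders for the calibrated, site-specific values.

    ```python
    # Short-window standard deviation vs. a background prediction; an alarm
    # fires when the control function exceeds a threshold.
    import numpy as np

    def detect(sea_level, short_w=10, long_w=120, threshold=3.0):
        x = np.asarray(sea_level, dtype=float)
        alarms = []
        for t in range(long_w, len(x)):
            sigma_now = x[t - short_w:t].std()
            # Predict sigma from non-overlapping windows in the background.
            background = [x[s - short_w:s].std()
                          for s in range(t - long_w + short_w, t, short_w)]
            sigma_pred = np.mean(background)
            control = sigma_now / sigma_pred if sigma_pred > 0 else 0.0
            if control > threshold:
                alarms.append(t)
        return alarms
    ```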

  10. An online algorithm for least-square spectral analysis: Applied to time-frequency analysis of heart rate.

    PubMed

    Zhang, Zhe; Leong, Philip H W

    2015-08-01

    We propose a novel online algorithm for computing least-square based periodograms, otherwise known as the Lomb-Scargle Periodogram. Our spectral analysis technique has been shown to be superior to traditional discrete Fourier transform (DFT) based methods, and we introduce an algorithm which has O(N) time complexity per input sample. The technique is suitable for real-time embedded implementations and its utility is demonstrated through an application to the high resolution time-frequency domain analysis of heart rate variability (HRV). PMID:26736732
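
    For reference, the quantity being computed is the standard Lomb-Scargle periodogram for unevenly sampled data, sketched below in batch form; the paper's contribution, an online O(N)-per-sample reformulation, is not reproduced here.

    ```python
    # Batch Lomb-Scargle periodogram for unevenly sampled data (the quantity
    # the paper computes online in O(N) per sample).
    import numpy as np

    def lomb_scargle(t, y, omegas):
        y = y - y.mean()
        p = np.empty(len(omegas))
        for k, w in enumerate(omegas):
            tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                             np.sum(np.cos(2 * w * t))) / (2 * w)
            c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
            p[k] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
        return p

    # Irregular heartbeat-like sampling with a 0.25 Hz component.
    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0.0, 60.0, 300))
    y = np.sin(2 * np.pi * 0.25 * t) + 0.3 * rng.standard_normal(t.size)
    omegas = 2 * np.pi * np.linspace(0.01, 1.0, 400)
    psd = lomb_scargle(t, y, omegas)       # peaks near omega = 2*pi*0.25
    ```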

  11. Use of NTRIP for Optimizing the Decoding Algorithm for Real-Time Data Streams

    PubMed Central

    He, Zhanke; Tang, Wenda; Yang, Xuhai; Wang, Liming; Liu, Jihua

    2014-01-01

    As a network transmission protocol, Networked Transport of RTCM via Internet Protocol (NTRIP) is widely used in GPS and Global Orbiting Navigational Satellite System (GLONASS) augmentation systems, such as the Continuous Operational Reference System (CORS), the Wide Area Augmentation System (WAAS) and Satellite Based Augmentation Systems (SBAS). With the deployment of the BeiDou Navigation Satellite System (BDS) to serve the Asia-Pacific region, there are increasing needs for ground monitoring of the BeiDou system and for the development of high-precision real-time BeiDou products. This paper aims to optimize the decoding algorithm for NTRIP Client data streams and the user authentication strategies of the NTRIP Caster. The proposed method greatly enhances the handling efficiency and significantly reduces the data transmission delay compared with the Federal Agency for Cartography and Geodesy (BKG) NTRIP. Meanwhile, a transcoding method is proposed to facilitate the data transformation from the BINary EXchange (BINEX) format to the RTCM format. The transformation scheme thus solves the problem of handling real-time data streams from Trimble receivers in the BeiDou Navigation Satellite System indigenously developed by China. PMID:25310474

  12. Use of NTRIP for optimizing the decoding algorithm for real-time data streams.

    PubMed

    He, Zhanke; Tang, Wenda; Yang, Xuhai; Wang, Liming; Liu, Jihua

    2014-01-01

    As a network transmission protocol, Networked Transport of RTCM via Internet Protocol (NTRIP) is widely used in GPS and Global Orbiting Navigational Satellite System (GLONASS) augmentation systems, such as the Continuous Operational Reference System (CORS), the Wide Area Augmentation System (WAAS) and Satellite Based Augmentation Systems (SBAS). With the deployment of the BeiDou Navigation Satellite System (BDS) to serve the Asia-Pacific region, there are increasing needs for ground monitoring of the BeiDou system and for the development of high-precision real-time BeiDou products. This paper aims to optimize the decoding algorithm for NTRIP Client data streams and the user authentication strategies of the NTRIP Caster. The proposed method greatly enhances the handling efficiency and significantly reduces the data transmission delay compared with the Federal Agency for Cartography and Geodesy (BKG) NTRIP. Meanwhile, a transcoding method is proposed to facilitate the data transformation from the BINary EXchange (BINEX) format to the RTCM format. The transformation scheme thus solves the problem of handling real-time data streams from Trimble receivers in the BeiDou Navigation Satellite System indigenously developed by China. PMID:25310474

  13. Massively Parallel and Scalable Implicit Time Integration Algorithms for Structural Dynamics

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel

    1997-01-01

    Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because of the following additional facts: (a) explicit schemes are easier to parallelize than implicit ones, and (b) explicit schemes induce short range interprocessor communications that are relatively inexpensive, while the factorization methods used in most implicit schemes induce long range interprocessor communications that often ruin the sought-after speed-up. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet be offset by the speed of the currently available parallel hardware. Therefore, it is essential to develop efficient alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating the low-frequency dynamics of aerospace structures.

  14. Improved radar data processing algorithms for quantitative rainfall estimation in real time.

    PubMed

    Krämer, S; Verworn, H R

    2009-01-01

    This paper describes a new methodology to process C-band radar data for direct use as rainfall input to hydrologic and hydrodynamic models and in real-time control of urban drainage systems. In contrast to the adjustment of radar data with the help of rain gauges, the new approach accounts for the microphysical properties of the current rainfall. In a first step, radar data are corrected for attenuation. This phenomenon has been identified as the main cause of the general underestimation of radar rainfall. Systematic variation of the attenuation coefficients within predefined bounds allows robust reflectivity profiling. Secondly, event-specific R-Z relations are applied to the corrected radar reflectivity data in order to generate quantitatively reliable radar rainfall estimates. The results of the methodology are validated against a network of 37 rain gauges located in the Emscher and Lippe river basins. Finally, the relevance of the correction methodology for radar rainfall forecasts is demonstrated. The results clearly show that the new methodology significantly improves radar rainfall estimation and rainfall forecasts. The algorithms are applicable in real time. PMID:19587415
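
    The final step of such a chain is the R-Z power law Z = a R^b applied to the corrected reflectivity. The sketch below uses the textbook Marshall-Palmer coefficients as defaults; the paper's point is precisely that (a, b) should be event-specific.

    ```python
    # Reflectivity-to-rain-rate conversion via Z = a * R^b; Marshall-Palmer
    # defaults shown, whereas the paper calibrates (a, b) per event.
    import numpy as np

    def rain_rate(dbz, a=200.0, b=1.6):
        """Rain rate R [mm/h] from reflectivity in dBZ."""
        z = 10.0 ** (dbz / 10.0)           # dBZ -> linear Z [mm^6 / m^3]
        return (z / a) ** (1.0 / b)

    print(rain_rate(np.array([20.0, 35.0, 50.0])))   # light / moderate / intense
    ```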

  15. A class of staggered grid algorithms and analysis for time-domain Maxwell systems

    NASA Astrophysics Data System (ADS)

    Charlesworth, Alexander E.

    We describe, implement, and analyze a class of staggered grid algorithms for efficient simulation and analysis of time-domain Maxwell systems in the case of heterogeneous, conductive, and nondispersive, isotropic, linear media. We provide the derivation of a continuous mathematical model from the Maxwell equations in vacuum; however, the complexity of this system necessitates the use of computational methods for approximately solving for the physical unknowns. The finite difference approximation has been used for partial differential equations and the Maxwell Equations in particular for many years. We develop staggered grid based finite difference discrete operators as a class of approximations to continuous operators based on second order in time and various order approximations to the electric and magnetic field at staggered grid locations. A generalized parameterized operator which can be specified to any of this class of discrete operators is then applied to the Maxwell system and hence we develop discrete approximations through various choices of parameters in the approximation. We describe analysis of the resulting discrete system as an approximation to the continuous system. Using the comparison of dispersion analysis for the discrete and continuous systems, we derive a third difference approximation, in addition to the known (2, 2) and (2, 4) schemes. We conclude by providing the comparison of these three methods by simulating the Maxwell system for several choices of parameters in the system.
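
    A minimal one-dimensional instance of the staggered-grid leapfrog idea analyzed above, with E and H interleaved in space and time (the (2, 2) scheme). Lossless medium, normalized units and a hard Gaussian source are simplifying assumptions.

    ```python
    # 1-D staggered-grid leapfrog (Yee-type (2,2) scheme), lossless vacuum,
    # normalized units: E on integer nodes, H on half-integer nodes.
    import numpy as np

    nx, nt = 400, 600
    c, dx = 1.0, 1.0
    dt = 0.5 * dx / c                 # satisfies the 1-D CFL bound dt <= dx/c
    E = np.zeros(nx)
    H = np.zeros(nx - 1)

    for n in range(nt):
        H += dt / dx * (E[1:] - E[:-1])          # half-step H update
        E[1:-1] += dt / dx * (H[1:] - H[:-1])    # E update; PEC ends stay 0
        E[nx // 4] += np.exp(-((n - 40) / 12.0) ** 2)   # Gaussian hard source
    ```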

  16. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

    In recent years, with the development of positioning algorithms and FPGAs, real-time, fast and accurate camera positioning implemented on an FPGA has become feasible. Through an in-depth study of embedded hardware and dual-camera positioning systems, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marker points in space. The completed work includes: (1) a CMOS sensor, driven by FPGA hardware, extracts the pixels of three visible-light LED markers, which serve as the target points of the instrument; (2) prior to extraction of the feature-point coordinates, the image is filtered (median filtering is used here) to avoid platform-induced artifacts affecting the system; (3) the marker coordinates are extracted by an FPGA hardware circuit, with a new iterative threshold-selection method used for image segmentation; the binary image is then labeled, and the coordinates of the feature points are calculated by the center-of-gravity method; (4) the direct linear transformation (DLT) and epipolar-constraint methods are applied to the three-dimensional reconstruction of space coordinates with the planar-array CMOS system, using an SOPC system-on-a-chip whose dual-core architecture lets the matching and coordinate operations run separately, thus increasing processing speed.
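
    The front end of steps (3)-(4) reduces to thresholding, blob labeling, and the center-of-gravity computation. The sketch below does this with scipy.ndimage in place of the thesis's FPGA pipeline; the synthetic image and threshold are illustrative.

    ```python
    # Threshold -> label -> center-of-gravity marker extraction, using
    # scipy.ndimage in place of the FPGA pipeline.
    import numpy as np
    from scipy import ndimage

    def marker_centroids(img, threshold):
        labels, n = ndimage.label(img > threshold)
        # Intensity-weighted center of gravity of each labeled blob.
        return ndimage.center_of_mass(img, labels, index=list(range(1, n + 1)))

    img = np.zeros((64, 64))
    img[10:13, 20:23] = 100.0          # synthetic LED spots
    img[40:43, 50:53] = 80.0
    print(marker_centroids(img, threshold=50.0))   # ~(11, 21) and ~(41, 51)
    ```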

  17. Real-Time Simulation for Verification and Validation of Diagnostic and Prognostic Algorithms

    NASA Technical Reports Server (NTRS)

    Aguilar, Robert; Luu, Chuong; Santi, Louis M.; Sowers, T. Shane

    2005-01-01

    To verify that a health management system (HMS) performs as expected, a virtual system simulation capability, including interaction with the associated platform or vehicle, very likely will need to be developed. The rationale for developing this capability is discussed and includes the limited capability to seed faults into the actual target system due to the risk of potential damage to high value hardware. The capability envisioned would accurately reproduce the propagation of a fault or failure as observed by sensors located at strategic locations on and around the target system and would also accurately reproduce the control system and vehicle response. In this way, HMS operation can be exercised over a broad range of conditions to verify that it meets requirements for accurate, timely response to actual faults with adequate margin against false and missed detections. An overview is also presented of a real-time rocket propulsion health management system laboratory which is available for future rocket engine programs. The health management elements and approaches of this lab are directly applicable for future space systems. In this paper the various components are discussed and the general fault detection, diagnosis, isolation and the response (FDIR) concept is presented. Additionally, the complexities of V&V (Verification and Validation) for advanced algorithms and the simulation capabilities required to meet the changing state-of-the-art in HMS are discussed.

  18. Offset time decision (OTD) algorithm for guaranteeing the requested QoS of high priority traffic in OBS networks

    NASA Astrophysics Data System (ADS)

    So, Won-Ho; Cha, Yun-Ho; Roh, Sun-Sik; Kim, Young-Chon

    2001-10-01

    In this paper, we propose the Offset Time Decision (OTD) algorithm for supporting QoS in optical networks based on Optical Burst Switching (OBS), a new switching paradigm, and we evaluate the performance of the OTD algorithm. The proposed algorithm decides a reasonable offset time that guarantees the burst loss rate (BLR) of high-priority traffic by considering the traffic load of the network and the number of wavelengths. To design this effective OTD algorithm, we first derive a new burst loss formula that includes the effect of the offset time of the high-priority class. However, deciding the offset time corresponding to a requested BLR requires inverting this formula, so it cannot be used directly. We therefore define a Heuristic Loss Formula (HLF), based on the new burst loss formula and a proportional relation reflecting its characteristics. Finally, we present the OTD algorithm, which decides the reasonable offset time using the HLF. Simulation results show that the requested BLR of high-priority traffic is guaranteed under various traffic loads.

  19. Design requirements and development of an airborne descent path definition algorithm for time navigation

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering function are described.

  20. Executable Code Recognition in Network Flows Using Instruction Transition Probabilities

    NASA Astrophysics Data System (ADS)

    Kim, Ikkyun; Kang, Koohong; Choi, Yangseo; Kim, Daewon; Oh, Jintae; Jang, Jongsoo; Han, Kijun

    The ability to quickly recognize whether the content of a network flow is executable is a prerequisite for malware detection. For this purpose, we introduce an instruction transition probability matrix (ITPX), which is built over the IA-32 instruction set and reveals the characteristic instruction-transition patterns of executable code. We then propose a simple algorithm to detect executable code inside network flows using a reference ITPX learned from known Windows Portable Executable files. We have tested the algorithm with thousands of executable and non-executable codes. The results show that it is promising enough for real-world use.
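
    A simplified sketch of the ITPX idea: estimate a transition-probability matrix from known executables and score an unknown byte stream by its average log-likelihood under that matrix. Real IA-32 instruction decoding is replaced here by raw bytes, so this is only a byte-level approximation of the paper's method.

    ```python
    # Byte-level approximation of ITPX scoring: learn transition counts from
    # known executables, then score a flow by its average log-likelihood.
    import numpy as np

    def train_itpx(samples, smoothing=1.0):
        counts = np.full((256, 256), smoothing)
        for data in samples:                       # data: bytes of a known PE
            ops = np.frombuffer(data, dtype=np.uint8)
            np.add.at(counts, (ops[:-1], ops[1:]), 1)
        return counts / counts.sum(axis=1, keepdims=True)   # row-stochastic

    def score(data, itpx):
        ops = np.frombuffer(data, dtype=np.uint8)
        return np.log(itpx[ops[:-1], ops[1:]]).mean()

    # Flows scoring above a threshold learned from labeled traffic would be
    # flagged as carrying executable code.
    ```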

  1. A near real time MSG-SEVIRI based algorithm for gas flaring monitoring

    NASA Astrophysics Data System (ADS)

    Faruolo, Mariapia; Coviello, Irina; Filizzola, Carolina; Lacava, Teodosio; Pergola, Nicola; Tramutoli, Valerio

    2015-04-01

    In recent decades the oil and gas industry has become responsible for important environmental issues. Gas flaring, one of the processes used to dispose of the natural gas associated with extracted crude oil, has been recognized as potentially harmful to human health and the atmosphere. Efforts to empirically assess the environmental impacts of this phenomenon are frequently hampered by limited access to official information on flare locations and volumes, and by the heterogeneity of the spatial and temporal sampling strategies and methods used to collect data. Consequently, there is a need for new methods of acquiring such information, and remote sensing techniques seem the most viable option. In this paper, with reference to this problem, the potential of a satellite-based technique for near-real-time detection and characterization of hot-spot sources is assessed. In detail, Medium Infrared (MIR) radiances acquired by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) aboard the Meteosat Second Generation (MSG) satellite were processed following the Robust Satellite Techniques (RST) prescriptions. This algorithm, based on the processing of multi-year series of satellite images co-located in the space-time domain, allows timely identification of statistically significant variations of the MIR signal, related to changes and/or malfunctions in the industrial process and to gas flaring blazes. Results for the flaring activity of the Centro Olio Val d'Agri (COVA), an oil/gas plant located in the south of Italy, are described in detail and discussed in this paper.
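
    An RST-style change index reduces to comparing the current MIR radiance with the multi-year statistics of space-time co-located acquisitions. The sketch below follows that general recipe; the index form and any threshold are assumptions, not the paper's calibrated configuration.

    ```python
    # RST-style index: deviation of the current MIR radiance from multi-year
    # statistics of space-time co-located acquisitions.
    import numpy as np

    def rst_index(current, reference_stack):
        """(signal - historical mean) / historical std, pixel by pixel."""
        mu = np.nanmean(reference_stack, axis=0)
        sigma = np.nanstd(reference_stack, axis=0)
        return (current - mu) / np.where(sigma > 0, sigma, np.nan)

    # Pixels with index > k (e.g. k = 3) at the plant location would mark a
    # statistically significant flaring anomaly in that SEVIRI slot.
    ```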

  2. Television and children's executive function.

    PubMed

    Lillard, Angeline S; Li, Hui; Boguszewski, Katie

    2015-01-01

    Children spend a lot of time watching television on its many platforms: directly, online, and via videos and DVDs. Many researchers are concerned that some types of television content appear to negatively influence children's executive function. Because (1) executive function predicts key developmental outcomes, (2) executive function appears to be influenced by some television content, and (3) American children watch large quantities of television (including the content of concern), the issues discussed here comprise a crucial public health issue. Further research is needed to reveal exactly what television content is implicated, what underlies television's effect on executive function, how long the effect lasts, and who is affected. PMID:25735946

  3. Finite-element time-domain algorithms for modeling linear Debye and Lorentz dielectric dispersions at low frequencies.

    PubMed

    Stoykov, Nikolay S; Kuiken, Todd A; Lowery, Madeleine M; Taflove, Allen

    2003-09-01

    We present what we believe to be the first algorithms that use a simple scalar-potential formulation to model linear Debye and Lorentz dielectric dispersions at low frequencies in the context of finite-element time-domain (FETD) numerical solutions of electric potential. The new algorithms, which permit treatment of multiple-pole dielectric relaxations, are based on the auxiliary differential equation method and are unconditionally stable. We validate the algorithms by comparison with the results of a previously reported method based on the Fourier transform. The new algorithms should be useful in calculating the transient response of biological materials subject to impulsive excitation. Potential applications include FETD modeling of electromyography, functional electrical stimulation, defibrillation, and effects of lightning and impulsive electric shock. PMID:12943277

  4. A new angular resampling algorithm for the bearing fault diagnosis under the time-varying rotational speed

    NASA Astrophysics Data System (ADS)

    Wang, Tianyang; Cheng, Weidong; Li, Jianyong; Chu, Fulei

    2015-07-01

    Order tracking is one of the most effective algorithms for eliminating the effect of time-varying rotational speed on rotary machines. However, the algorithm is not suitable for a faulty rolling bearing unless the peak time of the fault-induced impulse is set to zero, a condition that cannot be met in real engineering. The traditional resampling process causes uneven intervals between adjacent impulse peaks in the angular domain and thereby degrades the envelope-analysis-based diagnosis result. To solve this problem, a new resampling algorithm with three parts is proposed: (a) linearly fitting the instantaneous rotational speed measured by the tachometer, (b) resampling the vibration signal from the time domain to the angular domain using the traditional method, and (c) calculating the envelope deformation amount and compensating the resampled result accordingly. The effectiveness of the proposed method has been validated on both simulated and experimental bearing vibration signals.
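
    Steps (a) and (b) correspond to baseline computed order tracking, sketched below: fit the tachometer speed, integrate it to a shaft-angle profile, and interpolate the vibration signal at uniform angle increments. The envelope-deformation compensation of step (c) is the paper's contribution and is not reproduced; names and the samples-per-revolution value are illustrative.

    ```python
    # Baseline computed order tracking for steps (a)-(b): fit speed, integrate
    # to shaft angle, interpolate at uniform angle increments.
    import numpy as np

    def angular_resample(t, x, tacho_t, tacho_rpm, samples_per_rev=128):
        # (a) Linear fit of the measured instantaneous rotational speed [Hz].
        speed_hz = np.polyval(np.polyfit(tacho_t, tacho_rpm / 60.0, 1), t)
        # (b) Trapezoidal integration of speed -> shaft angle in revolutions,
        #     then resampling of x on a uniform angle grid.
        angle = np.concatenate(([0.0],
                                np.cumsum(0.5 * (speed_hz[1:] + speed_hz[:-1])
                                          * np.diff(t))))
        uniform = np.arange(0.0, angle[-1], 1.0 / samples_per_rev)
        return uniform, np.interp(uniform, angle, x)
    ```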

  5. Novel Algorithms Enabling Rapid, Real-Time Earthquake Monitoring and Tsunami Early Warning Worldwide

    NASA Astrophysics Data System (ADS)

    Lomax, A.; Michelini, A.

    2012-12-01

    We have recently introduced new methods to rapidly determine the tsunami potential and magnitude of large earthquakes (e.g., Lomax and Michelini, 2009ab, 2011, 2012). To validate these methods we have implemented them along with other new algorithms within the Early-est earthquake monitor at INGV-Rome (http://early-est.rm.ingv.it, http://early-est.alomax.net). Early-est is a lightweight software package for real-time earthquake monitoring (including phase picking, phase association and event detection, location, magnitude determination, first-motion mechanism determination, ...), and for tsunami early warning based on discriminants for earthquake tsunami potential. In a simulation using archived broadband seismograms for the devastating M9, 2011 Tohoku earthquake and tsunami, Early-est determines: the epicenter within 3 min after the event origin time, discriminants showing very high tsunami potential within 5-7 min, and magnitude Mwpd(RT) 9.0-9.2 and a correct shallow-thrusting mechanism within 8 min. Real-time monitoring with Early-est gives similar results for most large earthquakes using currently available, real-time seismogram data. Here we summarize some of the key algorithms within Early-est that enable rapid, real-time earthquake monitoring and tsunami early warning worldwide: >>> FilterPicker - a general purpose, broad-band, phase detector and picker (http://alomax.net/FilterPicker); >>> Robust, simultaneous association and location using a probabilistic global search; >>> Period-duration discriminants TdT0 and TdT50Ex for tsunami potential available within 5 min; >>> Mwpd(RT) magnitude for very large earthquakes available within 10 min; >>> Waveform P polarities determined on broad-band displacement traces, focal mechanisms obtained with the HASH program (Hardebeck and Shearer, 2002); >>> SeisGramWeb - a portable-device ready seismogram viewer using web-services in a browser (http://alomax.net/webtools/sgweb/info.html). References (see also: http

  6. A Discussion of the Discrete Fourier Transform Execution on a Typical Desktop PC

    NASA Technical Reports Server (NTRS)

    White, Michael J.

    2006-01-01

    This paper discusses and compares the execution times of three implementations of the Discrete Fourier Transform (DFT). The first two examples demonstrate the direct implementation of the algorithm. In the first example, the Fourier coefficients are generated during execution of the DFT. In the second example, the coefficients are generated prior to execution and indexed at execution time. The last example demonstrates the Cooley-Tukey algorithm, better known as the Fast Fourier Transform (FFT). All examples were written in C and executed on a PC with a Pentium 4 running at 1.7 GHz. As a function of N, the total complex data size, the direct-implementation DFT executes, as expected, at order N^2, and the FFT executes at order N log2 N. At N = 16K there is an increase in processing time beyond what is expected; this is not caused by the implementation but is a consequence of the effect that machine architecture and memory hierarchy have on execution. The paper also includes a brief overview of digital signal processing, along with a discussion of contemporary work in discrete Fourier processing.
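
    The three timed variants are easy to reproduce in compact form: a direct DFT with coefficients generated inline, a direct DFT indexing a precomputed twiddle table, and an FFT (numpy standing in for the paper's C code). Expected scaling is O(N^2) for the first two and O(N log2 N) for the last.

    ```python
    # The three timed variants: inline-coefficient DFT, table-lookup DFT,
    # and an FFT (numpy stands in for the paper's C code).
    import numpy as np, time

    def dft_inline(x):
        N, n = len(x), np.arange(len(x))
        return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N))
                         for k in range(N)])

    def dft_table(x, table):
        N, n = len(x), np.arange(len(x))
        return np.array([np.sum(x * table[(k * n) % N]) for k in range(N)])

    N = 1024
    x = np.random.randn(N) + 1j * np.random.randn(N)
    table = np.exp(-2j * np.pi * np.arange(N) / N)   # precomputed coefficients
    for name, f in [("inline", dft_inline),
                    ("table", lambda v: dft_table(v, table)),
                    ("fft", np.fft.fft)]:
        t0 = time.perf_counter()
        f(x)
        print(name, time.perf_counter() - t0)        # O(N^2) vs O(N log2 N)
    assert np.allclose(dft_table(x, table), np.fft.fft(x))
    ```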

  7. Executive summary

    NASA Technical Reports Server (NTRS)

    Ayon, Juan A.

    1992-01-01

    The Astrotech 21 Optical Systems Technology Workshop was held in Pasadena, California on March 6-8, 1991. The purpose of the workshop was to examine the state of Optical Systems Technology at the National Aeronautics Space Administration (NASA), and in industry and academia, in view of the potential Astrophysics mission set currently being considered for the late 1990's through the first quarter of the 21st century. The principal result of the workshop is this publication, which contains an assessment of the current state of the technology, and specific technology advances in six critical areas of optics, all necessary for the mission set. The workshop was divided into six panels, each of about a dozen experts in specific fields, representing NASA, industry, and academia. In addition, each panel contained expertise that spanned the spectrum from x-ray to submillimeter wavelengths. This executive summary contains the principal recommendations of each panel. The six technology panels and their chairs were: (1) Wavefront Sensing, Control, and Pointing, Thomas Pitts, Itek Optical Systems, A Division of Litton; (2) Fabrication, Roger Angel, Steward Observatory, University of Arizona; (3) Materials and Structures, Theodore Saito, Lawrence Livermore National Laboratory; (4) Optical Testing, James Wyant, WYKO Corporation; (5) Optical Systems Integrated Modeling, Robert R. Shannon, Optical Sciences Center, University of Arizona; and (6) Advanced Optical Instruments Technology, Michael Shao, Jet Propulsion Laboratory, California Institute of Technology. This Executive Summary contains the principal recommendations of each panel.

  8. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
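
    A minimal sketch of the first family of subalgorithms described above: search shift/mask/offset combinations until every key maps to a distinct value, yielding a collision-free, constant-time membership test. Bit widths, the search order and the example keys are illustrative.

    ```python
    # Search shift/mask/offset combinations until every key maps to a unique
    # value: a synthesized, collision-free, constant-time membership test.

    def synthesize_shift_mask(keys, max_shift=32, mask_bits=8):
        for shift in range(max_shift):
            for offset in range(mask_bits):          # rotate the isolating mask
                mask = ((1 << mask_bits) - 1) << offset
                mapped = [((k >> shift) & mask) >> offset for k in keys]
                if len(set(mapped)) == len(keys):    # unique -> perfect hash
                    return shift, mask, offset
        return None

    keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
    shift, mask, offset = synthesize_shift_mask(keys)
    table = {((k >> shift) & mask) >> offset: k for k in keys}
    probe = 0x3C4D   # membership: one shift, one mask, one compare
    print(table.get(((probe >> shift) & mask) >> offset) == probe)
    ```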

  9. Time and wavelength domain algorithms for chemical analysis by laser radar

    NASA Technical Reports Server (NTRS)

    Rosen, David L.; Gillespie, James B.

    1992-01-01

    Laser-induced fluorescence (LIF) is a promising technique for laser radar applications. Laser radar using LIF has already been applied to algae blooms and oil slicks. Laser radar using LIF has great potential for remote chemical analysis because LIF spectra are extremely sensitive to chemical composition. However, most samples in the real world contain mixtures of fluorescing components, not merely individual components. Multicomponent analysis of laser radar returns from mixtures is often difficult because LIF spectra from solids and liquids are very broad and devoid of line structure. Therefore, algorithms for interpreting LIF spectra from laser radar returns must be able to analyze spectra that overlap in multicomponent systems. This paper analyzes the possibility of using factor analysis-rank annihilation (FARA) to analyze emission-time matrices (ETM) from laser radar returns instead of excitation-emission matrices (EEM). The authors here define ETM as matrices where the rows (or columns) are emission spectra at fixed times and the columns (or rows) are temporal profiles for fixed emission wavelengths. Laser radar usually uses pulsed lasers for ranging purposes, which are suitable for measuring temporal profiles. Laser radar targets are hard instead of diffuse; that is, a definite surface emits the fluorescence instead of an extended volume. A hard target would not broaden the temporal profiles as would a diffuse target. Both fluorescence lifetimes and emission spectra are sensitive to chemical composition. Therefore, temporal profiles can be used instead of excitation spectra in FARA analysis of laser radar returns. The resulting laser radar returns would be ETM instead of EEM.

  10. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modelling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial
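
    Point (b) can be illustrated with a toy per-time-step allocation solved as a linear program; the two-reservoir layout, the cost coefficients, and the use of scipy's general-purpose linprog (rather than a specialized network simplex code) are assumptions for the sketch.

        from scipy.optimize import linprog

        # One simulation step as a linear allocation problem: two reservoir
        # releases x1, x2 must meet a water demand, subject to storage limits,
        # at minimum cost (costs here stand in for energy penalties; all
        # numbers are illustrative).
        demand = 120.0                 # water demand at this time step
        storage = [100.0, 80.0]        # available storage per reservoir
        cost = [1.0, 1.4]              # unit release cost (pumping-energy proxy)

        res = linprog(c=cost,
                      A_eq=[[1.0, 1.0]], b_eq=[demand],   # releases meet demand
                      bounds=[(0, storage[0]), (0, storage[1])],
                      method="highs")
        print(res.x)   # optimal releases, e.g. [100., 20.]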

  11. Performance evaluation of gratings applied by genetic algorithm for the real-time optical interconnection

    NASA Astrophysics Data System (ADS)

    Yoon, Jin-Seon; Kim, Nam; Suh, HoHyung; Jeon, Seok Hee

    2000-03-01

    In this paper, gratings for optical interconnection are designed using a genetic algorithm (GA) as a robust and efficient scheme. The real-time optical interconnection system architecture is composed of an LC-SLM, a CCD array detector, an IBM-PC, a He-Ne laser, and a Fourier transform lens. A pixelated binary phase grating is displayed on the LC-SLM and can freely interconnect incoming beams to desired output spots in real time. To adapt the GA to finding near-global least-cost solutions, a chromosome is coded as a binary string of length 32 × 32, the stochastic tournament method is used to decrease stochastic sampling error, and single-point crossover with a 16 × 16 block size is applied. The effects of several parameters on the desired grating design are analyzed. First, regarding the effect of the crossover probability, a grating designed with a crossover probability of 0.75 has a high diffraction efficiency of 74.7% and a uniformity of 1.73 × 10^-1, where the mutation probability is 0.001 and the population size is 300. Second, regarding the mutation probability, a grating designed with a mutation probability of 0.001 has a high efficiency of 74.4% and a uniformity of 1.61 × 10^-1, where the crossover probability is 1.0 and the population size is 300. Third, regarding the population size, a grating designed with a population size of 300 over 400 generations has above 74% diffraction efficiency, where the mutation probability is 0.001 and the crossover probability is 1.0.
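
    A scaled-down rendition of the GA just described (0/π binary phase pixels, stochastic tournament selection, block crossover, bit mutation, FFT-based diffraction efficiency as fitness); the target orders, population size and generation count are reduced assumptions, so the efficiencies it reaches will not match the paper's numbers.

        import numpy as np

        rng = np.random.default_rng(1)
        N, POP, GEN, PC, PM = 32, 30, 100, 0.75, 0.001
        targets = [(1, 0), (0, 1), (1, 1)]           # desired output spot orders

        def fitness(g):
            far = np.fft.fft2(np.exp(1j * np.pi * g)) / g.size
            power = np.abs(far) ** 2                 # fraction of incident power
            return sum(power[t] for t in targets)    # efficiency into targets

        def tournament(pop, fit):
            a, b = rng.integers(0, POP, 2)           # stochastic tournament pick
            return pop[a] if fit[a] > fit[b] else pop[b]

        pop = rng.integers(0, 2, size=(POP, N, N))
        for _ in range(GEN):
            fit = np.array([fitness(g) for g in pop])
            new = []
            while len(new) < POP:
                child = tournament(pop, fit).copy()
                if rng.random() < PC:                # 16 x 16 block crossover
                    r, c = rng.integers(0, N - 16, 2)
                    child[r:r+16, c:c+16] = tournament(pop, fit)[r:r+16, c:c+16]
                child[rng.random((N, N)) < PM] ^= 1  # bit mutation
                new.append(child)
            pop = np.array(new)
        print("best efficiency:", max(fitness(g) for g in pop))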

  12. Handling time-expensive global optimization problems through the surrogate-enhanced evolutionary annealing-simplex algorithm

    NASA Astrophysics Data System (ADS)

    Tsoukalas, Ioannis; Kossieris, Panagiotis; Efstratiadis, Andreas; Makropoulos, Christos

    2015-04-01

    In water resources optimization problems, the calculation of the objective function usually requires first running a simulation model and then evaluating its outputs. In several cases, however, long simulation times may pose significant barriers to the optimization procedure. Often, to obtain a solution within a reasonable time, the user has to substantially restrict the allowable number of function evaluations, thus terminating the search much earlier than the problem's complexity requires. A promising novel strategy to address these shortcomings is the use of surrogate modelling techniques within global optimization algorithms. Here we introduce the Surrogate-Enhanced Evolutionary Annealing-Simplex (SE-EAS) algorithm, which couples the strengths of surrogate modelling with the effectiveness and efficiency of the EAS method. The algorithm combines three different optimization approaches (evolutionary search, simulated annealing and the downhill simplex search scheme), in which key decisions are partially guided by numerical approximations of the objective function. The performance of the proposed algorithm is benchmarked against other surrogate-assisted algorithms, in both theoretical and practical applications (i.e. test functions and hydrological calibration problems, respectively), within a limited budget of trials (from 100 to 1000). Results reveal the significant potential of using SE-EAS in challenging optimization problems involving time-consuming simulations.

  13. Implementation of a Phase Detection Algorithm for Dynamic Cardiac Computed Tomography Analysis Based on Time Dependent Contrast Agent Distribution

    PubMed Central

    Kendziorra, Carsten; Meyer, Henning; Dewey, Marc

    2014-01-01

    This paper presents a phase detection algorithm for four-dimensional (4D) cardiac computed tomography (CT) analysis. The algorithm detects a phase, i.e. a specific three-dimensional (3D) image out of several time-distributed 3D images, with high contrast in the left ventricle and low contrast in the right ventricle. The purpose is to use the automatically detected phase in an existing algorithm that automatically aligns the images along the heart axis. Decision making is based on the contrast agent distribution over time. It was implemented in KardioPerfusion – a software framework currently being developed for 4D CT myocardial perfusion analysis. Agreement of the phase detection algorithm with two reference readers was 97% (95% CI: 82–100%). Mean duration for detection was 0.020 s (95% CI: 0.018–0.022 s), many times less than the time the readers needed. Thus, this algorithm is an accurate and fast tool that can improve the workflow of clinical examinations. PMID:25545863
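
    A minimal version of the selection criterion, assuming the 4D volume and ventricle masks are already available; the scoring rule (mean left-ventricle attenuation minus mean right-ventricle attenuation) is an illustrative stand-in for the paper's actual decision logic.

        import numpy as np

        def detect_phase(volumes, lv_mask, rv_mask):
            """volumes: (T, Z, Y, X) CT values over time; masks: (Z, Y, X) bool.

            Pick the frame with high LV contrast and low RV contrast.
            """
            scores = [v[lv_mask].mean() - v[rv_mask].mean() for v in volumes]
            return int(np.argmax(scores))

        # Synthetic example: contrast washes into the RV first, then the LV.
        T, shape = 10, (4, 8, 8)
        lv = np.zeros(shape, bool); lv[:, :4, :] = True
        rv = ~lv
        t = np.arange(T)[:, None, None, None]
        vols = 100 + 200*np.exp(-(t-7)**2/2)*lv + 200*np.exp(-(t-2)**2/2)*rv
        print(detect_phase(vols, lv, rv))   # -> a late phase (around 7)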

  14. Mapping algorithms on regular parallel architectures

    SciTech Connect

    Lee, P.

    1989-01-01

    It is significant that many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the newly introduced ZERO-ONE-INFINITE property. Using this classification, the first complete set of necessary and sufficient conditions for correct transformation of a nested loop algorithm onto a given systolic array of arbitrary dimension by means of linear mappings is derived. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays efficiently executing classes of algorithms. In addition, a Computer-Aided Design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.
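
    One ingredient of such correctness conditions can be sketched: a linear schedule vector t must order every data-dependence vector d strictly in time. The check below covers only this causality condition; the dissertation's complete set of conditions also constrains the space allocation so that no two iterations collide on a processing element. The dependence vectors are illustrative.

        import numpy as np

        # A linear mapping onto a systolic array schedules loop iteration i at
        # time t.i. A necessary condition is t.d >= 1 for every dependence
        # vector d, so a value is produced before it is consumed.
        def schedule_is_causal(t, deps):
            t = np.asarray(t)
            return all(t @ np.asarray(d) >= 1 for d in deps)

        deps = [(1, 0), (0, 1), (1, 1)]          # matrix-multiply style deps
        print(schedule_is_causal([1, 1], deps))  # True
        print(schedule_is_causal([1, -1], deps)) # False: violates d = (0, 1)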

  15. AP Selection Algorithm for Real-Time Communications through Mixed WLAN Environments

    NASA Astrophysics Data System (ADS)

    Morioka, Yasufumi; Higashino, Takeshi; Tsukamoto, Katsutoshi; Komaki, Shozo

    Recent rapid development of high-speed wireless access technologies has created mixed WLAN (Wireless LAN) environments where QoS-capable APs coexist with legacy APs. To provide QoS guarantees in this mixed WLAN environment, this paper proposes a new AP selection algorithm. The proposed algorithm assigns an STA (Station) to an AP in the overall WLAN service area. Simulation results show improvement in VoIP performance in terms of the eMOS (estimated Mean Opinion Score) value and in FTP throughput compared to conventional algorithms.

  16. Development of a Near Real-Time Hail Damage Swath Identification Algorithm for Vegetation

    NASA Technical Reports Server (NTRS)

    Bell, Jordan R.; Molthan, Andrew L.; Schultz, Kori A.; McGrath, Kevin M.; Burks, Jason E.

    2015-01-01

    Every year in the Midwest and Great Plains, widespread greenness forms in conjunction with the latter part of the spring-summer growing season. This prevalent greenness forms as a result of the high concentration of agricultural areas having their crops reach maturity before the fall harvest. This time of year also coincides with an enhanced hail frequency for the Great Plains (Cintineo et al. 2012). These severe thunderstorms can bring damaging winds and large hail that can result in damage to the surface vegetation. The spatial extent of the damage can be a relatively small concentrated area or a vast swath that is visible from space. These large areas of damage have been well documented over the years. In the late 1960s aerial photography was used to evaluate crop damage caused by hail. As satellite remote sensing technology has evolved, the identification of these hail damage streaks has increased. Satellites have made it possible to view these streaks in additional spectra. Parker et al. (2005) documented two streaks that occurred in South Dakota using the Moderate Resolution Imaging Spectroradiometer (MODIS). They noted the potential impact that these streaks had on the surface temperature and the associated surface fluxes that are affected by a change in temperature. Gallo et al. (2012) examined the correlation between radar signatures and ground observations from storms that produced a hail damage swath in Central Iowa, also using MODIS. Finally, Molthan et al. (2013) identified hail damage streaks through MODIS, Landsat-7, and SPOT observations of different resolutions for the development of potential near-real-time applications. The manual analysis of hail damage streaks in satellite imagery is both tedious and time consuming, and may be inconsistent from event to event. This study focuses on development of an objective and automatic algorithm to detect these areas of damage in a more efficient and timely manner. This study utilizes the
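
    The abstract's algorithm details are cut off, but the general approach of objective damage detection can be illustrated by vegetation-index differencing: compare NDVI before and after the storm and flag sharp drops. The threshold and the synthetic reflectances below are assumptions, not the study's values.

        import numpy as np

        def ndvi(nir, red):
            return (nir - red) / (nir + red + 1e-9)

        def damage_mask(nir_pre, red_pre, nir_post, red_post, drop=0.2):
            # flag pixels whose NDVI fell sharply between the two dates
            return (ndvi(nir_pre, red_pre) - ndvi(nir_post, red_post)) > drop

        rng = np.random.default_rng(0)
        nir_pre = rng.uniform(0.4, 0.6, (50, 50))
        red_pre = rng.uniform(0.05, 0.1, (50, 50))
        nir_post, red_post = nir_pre.copy(), red_pre.copy()
        nir_post[20:30, :] = 0.2     # a damage streak: NIR reflectance collapses
        mask = damage_mask(nir_pre, red_pre, nir_post, red_post)
        print(mask.sum(), "pixels flagged")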

  17. Implementation of the phase gradient algorithm

    SciTech Connect

    Wahl, D.E.; Eichel, P.H.; Jakowatz, C.V. Jr.

    1990-01-01

    The recently introduced Phase Gradient Autofocus (PGA) algorithm is a non-parametric autofocus technique which has been shown to be quite effective for phase correction of Synthetic Aperture Radar (SAR) imagery. This paper will show that this powerful algorithm can be executed at near real-time speeds and also be implemented in a relatively small piece of hardware. A brief review of the PGA will be presented along with an overview of some critical implementation considerations. In addition, a demonstration of the PGA algorithm running on a 7 in. × 10 in. printed circuit board containing a TMS320C30 digital signal processing (DSP) chip will be given. With this system, using only the 20 range bins which contain the brightest points in the image, the algorithm can correct a badly degraded 256 × 256 image in as little as 3 seconds. Using all range bins, the algorithm can correct the image in 9 seconds. 4 refs., 2 figs.
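
    A compact, single-iteration sketch of the standard PGA steps (circular shift, window, gradient estimation, integration, correction); the fixed rectangular window and the simple difference-based gradient estimator are simplifying assumptions, and practical PGA iterates with a shrinking window.

        import numpy as np

        def pga_iteration(img):
            """One PGA pass. img: complex image, shape (n_range, n_az)."""
            n_r, n_az = img.shape
            # 1. Circularly shift each range bin's brightest scatterer to center.
            shifted = np.empty_like(img)
            for r in range(n_r):
                peak = np.argmax(np.abs(img[r]))
                shifted[r] = np.roll(img[r], n_az // 2 - peak)
            # 2. Window around the center to isolate the dominant scatterer blur.
            win = np.zeros(n_az)
            win[n_az//2 - n_az//8 : n_az//2 + n_az//8] = 1.0
            g = np.fft.fft(shifted * win, axis=1)   # azimuth phase-history domain
            # 3. Estimate the phase-error gradient, pooled over range bins.
            dg = np.diff(g, axis=1)
            num = np.sum(np.imag(np.conj(g[:, :-1]) * dg), axis=0)
            den = np.sum(np.abs(g[:, :-1]) ** 2, axis=0)
            phi = np.concatenate(([0.0], np.cumsum(num / den)))  # integrate
            # 4. Remove the estimated error and return to the image domain.
            return np.fft.ifft(np.fft.fft(img, axis=1) * np.exp(-1j * phi), axis=1)

        rng = np.random.default_rng(0)
        clean = np.zeros((16, 128), complex)
        clean[np.arange(16), rng.integers(0, 128, 16)] = 1.0   # point scatterers
        err = np.exp(1j * np.cumsum(rng.normal(0, 0.3, 128)))  # phase error
        blurred = np.fft.ifft(np.fft.fft(clean, axis=1) * err, axis=1)
        focused = pga_iteration(blurred)
        print(np.abs(blurred).max(), "->", np.abs(focused).max())  # peaks sharpen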

  18. Definitions of non-stationary vibration power for time-frequency analysis and computational algorithms based upon harmonic wavelet transform

    NASA Astrophysics Data System (ADS)

    Heo, YongHwa; Kim, Kwang-joon

    2015-02-01

    While the vibration power for a set of harmonic force and velocity signals is well defined and known, it is not yet as established for a set of stationary random force and velocity processes, although it can be found in the literature. In this paper, the definition of the vibration power for a set of non-stationary random force and velocity signals is derived for the purpose of a time-frequency analysis, based on the definitions of the vibration power for harmonic and stationary random signals. The non-stationary vibration power, defined as the short-time average of the product of the force and velocity over a given frequency range of interest, can be calculated by three methods: the Wigner-Ville distribution, the short-time Fourier transform, and the harmonic wavelet transform. The latter method is selected in this paper because band-pass filtering can be done without phase distortion, and the frequency ranges can be chosen very flexibly for the time-frequency analysis. Three algorithms for the time-frequency analysis of the non-stationary vibration power using the harmonic wavelet transform are discussed. The first is an algorithm for computation according to the full definition, while the others are approximate. Noting that the force and velocity decomposed into frequency ranges of interest by the harmonic wavelet transform are constructed from coefficients and basis functions, for the second algorithm it is suggested to prepare in advance a table of time integrals of products of the basis functions, which are independent of the signals under analysis. How to prepare and use the integral table is presented. The third algorithm is based on an evolutionary spectrum. Applications of the algorithms to the time-frequency analysis of the vibration power transmitted from an excitation source to a receiver structure in a simple mechanical system consisting of a cantilever beam and a reaction wheel are presented for illustration.
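
    The first (full-definition) algorithm can be approximated with ideal band-pass filtering in the frequency domain, which, like the harmonic wavelet, avoids phase distortion; the band edges, averaging window and test signals below are illustrative assumptions, not the paper's formulation.

        import numpy as np

        def band_limited(x, fs, f_lo, f_hi):
            # ideal (phase-distortion-free) band-pass via the FFT
            X = np.fft.rfft(x)
            f = np.fft.rfftfreq(len(x), 1 / fs)
            X[(f < f_lo) | (f > f_hi)] = 0.0
            return np.fft.irfft(X, len(x))

        def vibration_power(force, vel, fs, f_lo, f_hi, win=256):
            # short-time average of the band-limited force-velocity product
            p = band_limited(force, fs, f_lo, f_hi) * band_limited(vel, fs, f_lo, f_hi)
            return np.convolve(p, np.ones(win) / win, mode="same")

        fs = 4096
        t = np.arange(fs) / fs
        force = np.sin(2*np.pi*100*t) * np.exp(-2*t)    # decaying 100 Hz input
        vel = 0.5 * np.sin(2*np.pi*100*t - 0.3) * np.exp(-2*t)
        print(vibration_power(force, vel, fs, 80, 120).max())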

  19. Performance of humans vs. exploration algorithms on the Tower of London Test.

    PubMed

    Fimbel, Eric; Lauzon, Stéphane; Rainville, Constant

    2009-01-01

    The Tower of London Test (TOL), used to assess executive functions, was inspired by Artificial Intelligence tasks used to test problem-solving algorithms. In this study, we compare the performance of humans and of exploration algorithms. Instead of absolute execution times, we focus on how the execution time varies with the tasks and/or the number of moves. This approach, used in Algorithmic Complexity, provides a fair comparison between humans and computers, although humans are several orders of magnitude slower. On easy tasks (1 to 5 moves), healthy elderly persons performed like exploration algorithms using bounded memory resources, i.e., the execution time grew exponentially with the number of moves. This result was replicated with a group of healthy young participants. However, for difficult tasks (5 to 8 moves) the execution time of young participants did not increase significantly, whereas for exploration algorithms the execution time keeps increasing exponentially. A pre- and post-test control task showed a 25% improvement of visuo-motor skills, but this was insufficient to explain this result. The findings suggest that naive participants used systematic exploration to solve the problem, but under the effect of practice they developed markedly more efficient strategies using the information acquired during the test. PMID:19787066
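
    The bounded-memory exploration baseline can be made concrete with a breadth-first search over TOL states (pegs of capacity 3, 2 and 1 holding three balls); the number of states explored grows rapidly with the length of the solution, which is the exponential trend the human data are compared against. Start and goal configurations are illustrative.

        from collections import deque

        CAP = (3, 2, 1)   # peg capacities in the Tower of London

        def moves(state):
            for src in range(3):
                if state[src]:
                    for dst in range(3):
                        if dst != src and len(state[dst]) < CAP[dst]:
                            s = [list(p) for p in state]
                            s[dst].append(s[src].pop())   # move the top ball
                            yield tuple(tuple(p) for p in s)

        def solve(start, goal):
            frontier, seen = deque([(start, 0)]), {start}
            while frontier:
                state, depth = frontier.popleft()
                if state == goal:
                    return depth, len(seen)   # moves needed, states explored
                for nxt in moves(state):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, depth + 1))

        start = (("R", "G", "B"), (), ())
        goal = (("G",), ("B", "R"), ())
        print(solve(start, goal))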

  20. Speech Planning Happens before Speech Execution: Online Reaction Time Methods in the Study of Apraxia of Speech

    ERIC Educational Resources Information Center

    Maas, Edwin; Mailend, Marja-Liisa

    2012-01-01

    Purpose: The purpose of this article is to present an argument for the use of online reaction time (RT) methods to the study of apraxia of speech (AOS) and to review the existing small literature in this area and the contributions it has made to our fundamental understanding of speech planning (deficits) in AOS. Method: Following a brief…

  1. Performance of a streaming mesh refinement algorithm.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2004-08-01

    In SAND report 2004-1617, we outline a method for edge-based tetrahedral subdivision that does not rely on saving state or communication to produce compatible tetrahedralizations. This report analyzes the performance of the technique by characterizing (a) mesh quality, (b) execution time, and (c) traits of the algorithm that could affect quality or execution time differently for different meshes. It also details the method used to debug the several hundred subdivision templates that the algorithm relies upon. Mesh quality is on par with other similar refinement schemes and throughput on modern hardware can exceed 600,000 output tetrahedra per second. But if you want to understand the traits of the algorithm, you have to read the report!

  2. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    SciTech Connect

    Lee, Youngrok

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into the data set. Finite mixture models can be used to represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, data that are not completely unlabeled but carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well to select the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed a superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
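
    The flavor of these algorithms can be conveyed by a minimal EM for a two-component exponential mixture in which a subset of samples carries known labels that are clamped in the E-step; the specific EM-OCML/EM-CPCML mechanisms for imprecise labels are not reproduced here, and all data are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.concatenate([rng.exponential(1.0, 300), rng.exponential(5.0, 200)])
        labels = np.full(t.size, -1)          # -1 = unlabeled
        labels[:50] = 0                       # a labeled subset of class 0
        labels[300:330] = 1                   # a labeled subset of class 1

        pi, lam = np.array([0.5, 0.5]), np.array([0.5, 0.2])   # initial guesses
        for _ in range(100):
            # E-step: responsibilities; known labels are clamped to 0/1.
            dens = pi * lam * np.exp(-np.outer(t, lam))        # shape (n, 2)
            r = dens / dens.sum(axis=1, keepdims=True)
            for k in (0, 1):
                r[labels == k] = np.eye(2)[k]
            # M-step: update mixing proportions and exponential rates.
            nk = r.sum(axis=0)
            pi = nk / t.size
            lam = nk / (r * t[:, None]).sum(axis=0)
        print(pi, 1 / lam)   # proportions and mean survival times per class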

  3. A real-time algorithm for integrating differential satellite and inertial navigation information during helicopter approach. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hoang, TY

    1994-01-01

    A real-time, high-rate precision navigation Kalman filter algorithm is developed and analyzed. This Navigation algorithm blends various navigation data collected during terminal area approach of an instrumented helicopter. Navigation data collected include helicopter position and velocity from a Global Positioning System in differential mode (DGPS) as well as helicopter velocity and attitude from an inertial navigation system (INS). The goal of the Navigation algorithm is to increase the DGPS accuracy while producing navigational data at the 64 Hertz INS update rate. It is important to note that while the data were post-flight processed, the Navigation algorithm was designed for real-time analysis. The design of the Navigation algorithm resulted in a nine-state Kalman filter. The Kalman filter's state matrix contains position, velocity, and velocity bias components. The filter updates positional readings with DGPS position, INS velocity, and velocity bias information. In addition, the filter incorporates a sporadic data rejection scheme. This relatively simple model met and exceeded the ten meter absolute positional requirement. The Navigation algorithm results were compared with truth data derived from a laser tracker. The helicopter flight profile included terminal glideslope angles of 3, 6, and 9 degrees. Two flight segments extracted during each terminal approach were used to evaluate the Navigation algorithm. The first segment recorded small dynamic maneuvers in the lateral plane, while motion in the vertical plane was recorded by the second segment. The longitudinal, lateral, and vertical averaged positional accuracies for all three glideslope approaches are as follows (mean plus or minus two standard deviations in meters): longitudinal (-0.03 plus or minus 1.41), lateral (-1.29 plus or minus 2.36), and vertical (-0.76 plus or minus 2.05).
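
    A one-axis, reduced-state sketch of the blending idea: the state is [position, velocity bias], high-rate INS velocity drives the prediction, and low-rate DGPS position fixes correct it. The full nine-state filter, attitude handling and data-rejection scheme are not reproduced, and all noise levels are assumptions.

        import numpy as np

        dt = 1 / 64                                  # INS update rate (64 Hz)
        F = np.array([[1.0, -dt], [0.0, 1.0]])       # p += (v_ins - b)*dt; b walks
        Q = np.diag([1e-4, 1e-6])                    # process noise
        H = np.array([[1.0, 0.0]])                   # DGPS measures position only
        R = np.array([[4.0]])                        # DGPS variance (m^2)

        x, P = np.zeros(2), np.eye(2)
        rng = np.random.default_rng(0)
        true_p, bias = 0.0, 0.3
        for k in range(64 * 60):                     # one minute of flight
            v_ins = 5.0 + bias + rng.normal(0, 0.05) # biased, noisy INS velocity
            true_p += 5.0 * dt
            x = F @ x + np.array([v_ins * dt, 0.0])  # predict with INS velocity
            P = F @ P @ F.T + Q
            if k % 64 == 0:                          # 1 Hz DGPS position fix
                z = true_p + rng.normal(0, 2.0)
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
                x = x + (K @ (z - H @ x)).ravel()
                P = (np.eye(2) - K @ H) @ P
        print("position error:", x[0] - true_p, " bias estimate:", x[1])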

  4. A hybrid meta-heuristic algorithm for the vehicle routing problem with stochastic travel times considering the driver's satisfaction

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges

    2012-05-01

    A vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. The main objectives of the vehicle routing problem are to minimize the traveled distance, total traveling time, number of vehicles and cost function of transportation. Reducing these variables leads to decreasing the total cost and increasing the driver's satisfaction level. On the other hand, this satisfaction, which decreases as the service time increases, is considered an important logistic problem for a company. Service times governed by a random variable vary stochastically, an effect that is ignored in classical routing problems. This paper investigates the problem of increasing service time by using a stochastic time for each tour, such that the total traveling time of the vehicles is limited to a specific bound with a defined probability. Since exact solution of the vehicle routing problem, which belongs to the category of NP-hard problems, is not practical at a large scale, a hybrid algorithm based on simulated annealing with genetic operators was proposed to obtain an efficient solution with reasonable computational cost and time. Finally, for some small cases, the results of the proposed algorithm were compared with results obtained by the Lingo 8 software. The obtained results indicate the efficiency of the proposed hybrid simulated annealing algorithm.
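
    A bare-bones rendition of the hybrid idea: simulated annealing over a tour with a genetic-style segment-reversal move, costing routes by expected travel times. The instance data, cooling schedule and neighborhood are illustrative, and the paper's probabilistic bound on total travel time is omitted for brevity.

        import math
        import random

        random.seed(1)
        n = 12
        mean_t = [[0 if i == j else random.uniform(5, 30) for j in range(n)]
                  for i in range(n)]

        def route_cost(route):
            # expected tour time (stochastic times enter through their means)
            return sum(mean_t[route[i]][route[(i + 1) % n]] for i in range(n))

        route = list(range(n))
        cost, T = route_cost(route), 50.0
        while T > 0.01:
            i, j = sorted(random.sample(range(n), 2))
            cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]  # reversal
            c = route_cost(cand)
            if c < cost or random.random() < math.exp((cost - c) / T):
                route, cost = cand, c          # accept better or escape uphill
            T *= 0.999                         # geometric cooling schedule
        print(cost, route)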

  5. `Inter-Arrival Time' Inspired Algorithm and its Application in Clustering and Molecular Phylogeny

    NASA Astrophysics Data System (ADS)

    Kolekar, Pandurang S.; Kale, Mohan M.; Kulkarni-Kale, Urmila

    2010-10-01

    Bioinformatics, being a multidisciplinary field, involves applications of various methods from allied areas of science for data mining using computational approaches. Clustering and molecular phylogeny is one of the key areas in Bioinformatics, which helps in the study of classification and evolution of organisms. Molecular phylogeny algorithms can be divided into distance-based and character-based methods. Most of these methods, however, depend on pre-alignment of sequences and become computationally intensive with increase in the size of data, and hence demand alternative efficient approaches. The 'inter-arrival time distribution' (IATD) is a popular concept in the theory of stochastic system modeling, but its potential in molecular data analysis has not been fully explored. The present study reports an application of IATD in Bioinformatics for clustering and molecular phylogeny. The proposed method provides IATDs of nucleotides in genomic sequences. A distance function based on statistical parameters of IATDs is proposed, and the distance matrix thus obtained is used for the purpose of clustering and molecular phylogeny. The method is applied to a dataset of 3' non-coding region (NCR) sequences of Dengue virus type 3 (DENV-3), subtype III, reported in 2008. The phylogram thus obtained revealed the geographical distribution of DENV-3 isolates. Sri Lankan DENV-3 isolates were further observed to be clustered in two sub-clades corresponding to pre- and post-Dengue-hemorrhagic-fever-emergence groups. These results are consistent with those reported earlier, which were obtained using pre-aligned sequence data as input. These findings encourage applications of the IATD-based method in molecular phylogenetic analysis in particular and data mining in general.
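
    A minimal version of the alignment-free idea: summarize each nucleotide's inter-arrival-time distribution by simple statistics and compare sequences by the distance between those summaries. The mean/std summary and the Euclidean distance are illustrative choices, not necessarily the authors' distance function.

        import numpy as np

        def iatd_features(seq):
            feats = []
            for base in "ACGT":
                pos = np.array([i for i, ch in enumerate(seq) if ch == base])
                gaps = np.diff(pos) if pos.size > 1 else np.array([len(seq)])
                feats += [gaps.mean(), gaps.std()]   # summarize the IATD
            return np.array(feats)

        def iatd_distance(s1, s2):
            return np.linalg.norm(iatd_features(s1) - iatd_features(s2))

        rng = np.random.default_rng(0)
        s1 = "".join(rng.choice(list("ACGT"), 500, p=[0.4, 0.1, 0.1, 0.4]))
        s2 = "".join(rng.choice(list("ACGT"), 500, p=[0.4, 0.1, 0.1, 0.4]))
        s3 = "".join(rng.choice(list("ACGT"), 500, p=[0.25, 0.25, 0.25, 0.25]))
        # sequences of similar composition should generally lie closer
        print(iatd_distance(s1, s2), iatd_distance(s1, s3))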

  6. A Comparison of the Misconceptions about the Time-Efficiency of Algorithms by Various Profiles of Computer-Programming Students

    ERIC Educational Resources Information Center

    Ozdener, Nesrin

    2008-01-01

    This study focuses on how students in vocational high schools and universities interpret the algorithms in structural computer programming that concerns time-efficiency. The targeted research group consisted of 242 students from two vocational high schools and two departments of the Faculty of Education in Istanbul. This study used qualitative and…

  7. Asynchronous space-time algorithm based on a domain decomposition method for structural dynamics problems on non-matching meshes

    NASA Astrophysics Data System (ADS)

    Subber, Waad; Matouš, Karel

    2016-02-01

    Large-scale practical engineering problems featuring localized phenomena often benefit from local control of mesh and time resolutions to efficiently capture the spatial and temporal scales of interest. To this end, we propose an asynchronous space-time algorithm based on a domain decomposition method for structural dynamics problems on non-matching meshes. The three-field algorithm is based on a dual-primal-like domain decomposition approach utilizing localized Lagrange multipliers along the space and time common-refinement-based interface. The proposed algorithm is parallel in nature and well suited for a heterogeneous computing environment. Moreover, two levels of parallelism are embedded in this novel scheme. For linear dynamical problems, the algorithm is unconditionally stable, shows an optimal order of convergence with respect to space and time discretizations, and ensures conservation of mass, momentum and energy across the non-matching grid interfaces. The method of manufactured solutions is used to verify the implementation, and an engineering application is considered, where a sandwich plate is impacted by a projectile.

  8. Hardware acceleration of lucky-region fusion (LRF) algorithm for high-performance real-time video processing

    NASA Astrophysics Data System (ADS)

    Browning, Tyler; Jackson, Christopher; Cayci, Furkan; Carhart, Gary W.; Liu, J. J.; Kiamilev, Fouad

    2015-06-01

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames from fast, high-resolution image sensors, and fuses the sharp regions into a final, improved image. In our previous research, the LRF algorithm had been implemented on CPU and field programmable gate array (FPGA) platforms. The CPU did not have sufficient processing power to handle real-time processing of video. Last year, we presented a real-time LRF implementation using an FPGA. However, due to the slow register-transfer level (RTL) development and simulation time, it was difficult to adjust and discover optimal LRF settings such as Gaussian kernel radius and synthetic frame buffer size. To overcome this limitation, we implemented the LRF algorithm on an off-the-shelf graphical processing unit (GPU) in order to take advantage of built-in parallelization and significantly faster development time. Our initial results show that the unoptimized GPU implementation has almost comparable turbulence mitigation to the FPGA version. In our presentation, we will explore optimization of the LRF algorithm on the GPU to achieve higher performance results, and adding new performance capabilities such as image stabilization.

  9. Dual-Byte-Marker Algorithm for Detecting JFIF Header

    NASA Astrophysics Data System (ADS)

    Mohamad, Kamaruddin Malik; Herawan, Tutut; Deris, Mustafa Mat

    The use of an efficient algorithm to detect JPEG files is vital to reduce the time taken for analyzing the ever-increasing data in hard drives or physical memory. In a previous paper, a single-byte-marker algorithm was proposed for header detection. In this paper, another novel header detection algorithm called dual-byte-marker is proposed. Based on experiments done on images from a hard disk, physical memory and the data set from the DFRWS 2006 Challenge, results showed that the dual-byte-marker algorithm gives better performance, with faster execution time for header detection, than the single-byte-marker algorithm.
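
    A sketch of a dual-byte-marker style scan over a raw byte buffer: search on the two-byte SOI marker 0xFFD8 at once and confirm the JFIF identifier in the APP0 segment. Offsets follow the standard JFIF layout; the exact markers checked by the paper's algorithm may differ, and the buffer is illustrative.

        def find_jfif_headers(buf):
            hits, i = [], 0
            while True:
                i = buf.find(b"\xff\xd8", i)     # SOI marker, two bytes at once
                if i == -1:
                    return hits
                # JFIF layout: SOI, APP0 marker 0xFFE0, 2-byte length, "JFIF\0".
                if buf[i + 2:i + 4] == b"\xff\xe0" and buf[i + 6:i + 11] == b"JFIF\x00":
                    hits.append(i)
                i += 2

        data = b"\x00" * 100 + b"\xff\xd8\xff\xe0\x00\x10JFIF\x00" + b"\x00" * 50
        print(find_jfif_headers(data))   # -> [100]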

  10. Local flow management/profile descent algorithm. Fuel-efficient, time-controlled profiles for the NASA TSRV airplane

    NASA Technical Reports Server (NTRS)

    Groce, J. L.; Izumi, K. H.; Markham, C. H.; Schwab, R. W.; Thompson, J. L.

    1986-01-01

    The Local Flow Management/Profile Descent (LFM/PD) algorithm designed for the NASA Transport System Research Vehicle program is described. The algorithm provides fuel-efficient altitude and airspeed profiles consistent with ATC restrictions in a time-based metering environment over a fixed ground track. The model design constraints include accommodation of both published profile descent procedures and unpublished profile descents, incorporation of fuel efficiency as a flight profile criterion, operation within the performance capabilities of the Boeing 737-100 airplane with JT8D-7 engines, and conformity to standard air traffic navigation and control procedures. Holding and path stretching capabilities are included for long delay situations.

  11. Applying the Ramer-Douglas-Peucker algorithm to compress and characterize time-series and spatial fields of precipitation

    NASA Astrophysics Data System (ADS)

    Ehret, Uwe; Neuper, Malte

    2014-05-01

    Well known in image processing and computer graphics, the Ramer-Douglas-Peucker (RDP) algorithm (Ramer, 1972; Douglas and Peucker, 1973) is a procedure to approximate a polygon (lines or areas) by a subset of its nodes. Typically it is used to represent a polygonal feature on a larger scale, e.g. when zooming out of an image. The algorithm is simple but effective: starting from the simplest possible approximation of the original polygon (for a line, its start and end point), the simplified polygon is built by successively adding the node of the original polygon farthest from the simplified polygon. This is repeated until a chosen agreement between the original and the simplified polygon is reached. Compared to other smoothing and compression algorithms like moving-average filters or block aggregation, the RDP algorithm has the advantages that i) the simplified polygon is built from the original points, i.e. extreme values are preserved, and ii) the variability of the original polygon is preserved in a scale-independent manner, i.e. the simplified polygon is high-resolution where necessary and low-resolution where possible. Applying the RDP algorithm to time series of precipitation or 2D spatial fields of radar rainfall often reveals a large degree of compressibility while losing almost no information. In general, this is the case for any auto-correlated polygon, such as discharge time series. While the RDP algorithm is thus interesting as a very efficient tool for compression, it can also be used to characterize time series or spatial fields with respect to their temporal or spatial structure by relating, over successive steps of simplification, the compression achieved and the information lost. We will present and discuss the characteristics of RDP-based compression and characterization with various examples, both observed (rainfall and discharge time series, 2D radar rainfall fields) and artificial (random noise fields, random fields with known
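
    The algorithm itself fits in a few lines; below is a plain recursive implementation applied to a toy rainfall-like series in which a short storm burst must survive simplification, illustrating the extreme-value preservation noted above. The tolerance eps is illustrative.

        import numpy as np

        def rdp(points, eps):
            points = np.asarray(points, float)
            a, b = points[0], points[-1]
            ab, rel = b - a, points - a
            norm = np.linalg.norm(ab)
            if norm == 0:                            # degenerate segment
                d = np.linalg.norm(rel, axis=1)
            else:                                    # perpendicular distances
                d = np.abs(ab[0] * rel[:, 1] - ab[1] * rel[:, 0]) / norm
            idx = int(np.argmax(d))
            if d[idx] > eps:                         # split at the farthest node
                left = rdp(points[:idx + 1], eps)
                return np.vstack([left[:-1], rdp(points[idx:], eps)])
            return np.array([a, b])                  # segment is close enough

        t = np.arange(100.0)
        x = np.where((t > 40) & (t < 45), 8.0, 0.1)  # storm burst in a dry series
        kept = rdp(np.column_stack([t, x]), eps=0.5)
        print(len(kept), "of", len(t), "points kept; peak preserved:",
              kept[:, 1].max() == x.max())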

  12. The effect of on/off indicator design on state confusion, preference, and response time performance, executive summary

    NASA Technical Reports Server (NTRS)

    Donner, Kimberly A.; Holden, Kritina L.; Manahan, Meera K.

    1991-01-01

    Investigated are five designs of software-based ON/OFF indicators in a hypothetical Space Station Power System monitoring task. The hardware equivalent of the indicators used in the present study is the traditional indicator light that illuminates an ON label or an OFF label. Coding methods used to represent the active state were reverse video, color, frame, check, or reverse video with check. Display background color was also varied. Subjects made judgments concerning the state of indicators that resulted in very low error rates and high percentages of agreement across indicator designs. Response time measures for each of the five indicator designs did not differ significantly, although subjects reported that color was the best communicator. The impact of these results on indicator design is discussed.

  13. A Global R&D Program on Liquid Ar Time Projection Chambers Under Execution at the University of Bern

    NASA Astrophysics Data System (ADS)

    Badhrees, I.; Ereditato, A.; Janos, S.; Kreslo, I.; Messina, M.; Haug, S.; Rossi, B.; Rohr, C. Rudolf von; Weber, M.; Zeller, M.

    A comprehensive R&D program on LAr Time Projection Chambers (LAr TPC) is presently being carried out at the University of Bern. Many aspects of this technology are under investigation: HV, purity, calibration, readout, etc. Furthermore, multi-photon interaction of UV-laser beams with LAr has successfully been measured. Possible applications of the LAr TPC technology in the field of homeland security are also being studied. In this paper, the main aspects of the program will be reviewed and the achievements underlined. Emphasis will be given to the largest device in Bern, i.e. the 5 m long ARGONTUBE TPC, meant to prove the feasibility of very long drifts in view of future large scale applications of the technique.

  14. A combined Event-Driven/Time-Driven molecular dynamics algorithm for the simulation of shock waves in rarefied gases

    SciTech Connect

    Valentini, Paolo; Schwartzentruber, Thomas E.

    2009-12-10

    A novel combined Event-Driven/Time-Driven (ED/TD) algorithm to speed-up the Molecular Dynamics simulation of rarefied gases using realistic spherically symmetric soft potentials is presented. Due to the low density regime, the proposed method correctly identifies the time that must elapse before the next interaction occurs, similarly to Event-Driven Molecular Dynamics. However, each interaction is treated using Time-Driven Molecular Dynamics, thereby integrating Newton's Second Law using the sufficiently small time step needed to correctly resolve the atomic motion. Although infrequent, many-body interactions are also accounted for with a small approximation. The combined ED/TD method is shown to correctly reproduce translational relaxation in argon, described using the Lennard-Jones potential. For densities between ρ = 10^-4 kg/m^3 and ρ = 10^-1 kg/m^3, comparisons with kinetic theory, Direct Simulation Monte Carlo, and pure Time-Driven Molecular Dynamics demonstrate that the ED/TD algorithm correctly reproduces the proper collision rates and the evolution toward thermal equilibrium. Finally, the combined ED/TD algorithm is applied to the simulation of a Mach 9 shock wave in rarefied argon. Density and temperature profiles as well as molecular velocity distributions accurately match DSMC results, and the shock thickness is within the experimental uncertainty. For the problems considered, the ED/TD algorithm ranged from several hundred to several thousand times faster than conventional Time-Driven MD. Moreover, the force calculation to integrate the molecular trajectories is found to contribute a negligible amount to the overall ED/TD simulation time. Therefore, this method could pave the way for the application of much more refined and expensive interatomic potentials, either classical or first-principles, to Molecular Dynamics simulations of shock waves in rarefied gases, involving vibrational nonequilibrium and chemical reactivity.

  15. An algorithm on distributed mining association rules

    NASA Astrophysics Data System (ADS)

    Xu, Fan

    2005-12-01

    With the rapid development of the Internet/Intranet, distributed databases have become a broadly used environment in various areas. It is a critical task to mine association rules in distributed databases. The algorithms for distributed mining of association rules can be divided into two classes: DD algorithms and CD algorithms. A DD algorithm focuses on data partition optimization so as to enhance efficiency. A CD algorithm, on the other hand, considers a setting where the data is arbitrarily partitioned horizontally among the parties to begin with, and focuses on parallelizing the communication. A DD algorithm is not always applicable, however: at the time the data is generated, it is often already partitioned, and in many cases it cannot be gathered and repartitioned for reasons of security and secrecy, transmission cost, or sheer efficiency. A CD algorithm may be a more appealing solution for systems which are naturally distributed over large expanses, such as stock exchange and credit card systems. The FDM algorithm provides an enhancement of the CD algorithm. However, CD and FDM algorithms are both based on a net structure and execute on non-shareable resources. In practical applications, however, distributed databases are often star-structured. This paper proposes an algorithm based on star-structured networks, which are more practical in application, have lower maintenance costs, and are easier to construct. In addition, the algorithm provides high efficiency in communication and good extension to parallel computation.

  16. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given a high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and its variants, including Bharat's improved HITS (abbreviated to BHITS) proposed by Bharat and Henzinger, can no longer be used to find related pages on today's Web, due to an increase of spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS, using the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms take no more time and memory than the original HITS algorithm requires, and can be executed on a PC with a small amount of main memory.
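
    For reference, the base iteration that BHITS and the trust-score variants modify is the standard HITS power iteration, sketched below; the tiny link matrix is illustrative, and spam demotion would enter by down-weighting links flagged as linkfarm members.

        import numpy as np

        def hits(adj, iters=50):
            n = adj.shape[0]
            hubs, auths = np.ones(n), np.ones(n)
            for _ in range(iters):
                auths = adj.T @ hubs              # good authorities are pointed
                auths /= np.linalg.norm(auths)    # to by good hubs...
                hubs = adj @ auths                # ...and good hubs point to
                hubs /= np.linalg.norm(hubs)      # good authorities
            return hubs, auths

        adj = np.array([[0, 1, 1, 0],             # page 0 links to 1 and 2, etc.
                        [0, 0, 1, 0],
                        [0, 0, 0, 1],
                        [1, 0, 0, 0]], float)
        print(hits(adj))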

  17. Time-based and event-based prospective memory in autism spectrum disorder: the roles of executive function and theory of mind, and time-estimation.

    PubMed

    Williams, David; Boucher, Jill; Lind, Sophie; Jarrold, Christopher

    2013-07-01

    Prospective memory (remembering to carry out an action in the future) has been studied relatively little in ASD. We explored time-based (carry out an action at a pre-specified time) and event-based (carry out an action upon the occurrence of a pre-specified event) prospective memory, as well as possible cognitive correlates, among 21 intellectually high-functioning children with ASD, and 21 age- and IQ-matched neurotypical comparison children. We found impaired time-based, but undiminished event-based, prospective memory among children with ASD. In the ASD group, time-based prospective memory performance was associated significantly with diminished theory of mind, but not with diminished cognitive flexibility. There was no evidence that time-estimation ability contributed to time-based prospective memory impairment in ASD. PMID:23179340

  18. Preliminary Development and Evaluation of Lightning Jump Algorithms for the Real-Time Detection of Severe Weather

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Previous studies have demonstrated that rapid increases in total lightning activity (intracloud + cloud-to-ground) are often observed tens of minutes in advance of the occurrence of severe weather at the ground. These rapid increases in lightning activity have been termed "lightning jumps." Herein, we document a positive correlation between lightning jumps and the manifestation of severe weather in thunderstorms occurring across the Tennessee Valley and Washington, D.C. A total of 107 thunderstorms were examined in this study, with 69 of the 107 thunderstorms falling into the category of non-severe, and 38 into the category of severe. From the dataset of 69 isolated non-severe thunderstorms, an average peak 1-minute flash rate of 10 flashes/min was determined. A variety of severe thunderstorm types were examined for this study, including an MCS, an MCV, tornadic outer rainbands of tropical remnants, supercells, and pulse severe thunderstorms. Of the 107 thunderstorms, 85 (47 non-severe, 38 severe) from the Tennessee Valley and Washington, D.C. were used to test six lightning jump algorithm configurations (Gatlin, Gatlin 45, 2σ, 3σ, Threshold 10, and Threshold 8). Performance metrics for each algorithm were then calculated, yielding encouraging results from the limited sample of 85 thunderstorms. The 2σ lightning jump algorithm had a high probability of detection (POD; 87%), a modest false alarm rate (FAR; 33%), and a solid Heidke Skill Score (HSS; 0.75). A second and more simplistic lightning jump algorithm named the Threshold 8 lightning jump algorithm also shows promise, with a POD of 81% and a FAR of 41%. Average lead times to severe weather occurrence for these two algorithms were 23 minutes and 20 minutes, respectively. The overall goal of this study is to advance the development of an operationally applicable jump algorithm that can be used with either total lightning observations made from the ground, or in the near future from space using the
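
    A sketch of the 2σ configuration as commonly described in this line of work: flag a jump when the latest time rate of change of the total flash rate exceeds twice the standard deviation of the preceding periods. The window lengths and the flash-rate series below are illustrative assumptions.

        import numpy as np

        def lightning_jumps(flash_rate, n_hist=5, sigma_mult=2.0):
            dfrdt = np.diff(flash_rate)            # rate of change per period
            jumps = []
            for i in range(n_hist, len(dfrdt)):
                hist = dfrdt[i - n_hist:i]         # recent history of changes
                if dfrdt[i] > sigma_mult * hist.std():
                    jumps.append(i + 1)            # index into flash_rate
            return jumps

        # Flash rates (flashes/min) for consecutive 2-minute periods; the storm
        # intensifies sharply around period 10.
        rates = np.array([3, 4, 3, 5, 4, 5, 6, 5, 6, 7, 18, 25, 24, 22, 12, 8])
        print(lightning_jumps(rates))   # -> [10]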

  19. Time-domain analysis of planar microstrip devices using a generalized Yee-algorithm based on unstructured grids

    NASA Technical Reports Server (NTRS)

    Gedney, Stephen D.; Lansing, Faiza

    1993-01-01

    The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has the significant advantage over the traditional Yee-algorithm that it is based on unstructured and irregular grids. The robustness of the generalized Yee-algorithm is that structures that contain curved conductors or complex three-dimensional geometries can be more accurately, and much more conveniently, modeled using standard automatic grid generation techniques. The generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high performance computers in a highly efficient manner.

  20. Development of a Real-Time Pulse Processing Algorithm for TES-Based X-Ray Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Tan, Hui; Hennig, Wolfgang; Warburton, William K.; Doriese, W. Bertrand; Kilbourne, Caroline A.

    2011-01-01

    We report here a real-time pulse processing algorithm for superconducting transition-edge sensor (TES) based x-ray microcalorimeters. TES-based microcalorimeters offer ultra-high energy resolutions, but the small volume of each pixel requires that large arrays of identical microcalorimeter pixels be built to achieve sufficient detection efficiency. That in turn requires that as much pulse processing as possible be performed at the front end of the readout electronics to avoid transferring large amounts of data to a host computer for post-processing. Therefore, a real-time pulse processing algorithm that not only can be implemented in the readout electronics but also achieves satisfactory energy resolutions is desired. We have developed an algorithm that can be easily implemented in hardware. We then tested the algorithm offline using several data sets acquired with an 8 × 8 Goddard TES x-ray calorimeter array and a 2 × 16 NIST time-division SQUID multiplexer. We obtained an average energy resolution of close to 3.0 eV at 6 keV for the multiplexed pixels while preserving over 99% of the events in the data sets.

  1. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    NASA Astrophysics Data System (ADS)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient and high-performance circuits able to process high-definition video in real time. This paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920 × 1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for the segmentation of the background that is, however, computationally intensive and impossible to implement on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performances are also the result of the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that overcome previously proposed implementations. The second circuit is oriented to an ASIC (UMC-90nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.

  2. Delay Analysis of Max-Weight Queue Algorithm for Time-Varying Wireless Ad hoc Networks—Control Theoretical Approach

    NASA Astrophysics Data System (ADS)

    Chen, Junting; Lau, Vincent K. N.

    2013-01-01

    The max-weight queue (MWQ) control policy is a widely used cross-layer control policy that achieves queue stability and reasonable delay performance. In most of the existing literature, it is assumed that the optimal MWQ policy can be obtained instantaneously at every time slot. However, this assumption may be unrealistic in time-varying wireless systems, especially when there is no closed-form MWQ solution and iterative algorithms have to be applied to obtain the optimal solution. This paper investigates the convergence behavior and the queue delay performance of conventional MWQ iterations in which the channel state information (CSI) and queue state information (QSI) change on a timescale similar to that of the algorithm iterations. Our results are established by studying the stochastic stability of an equivalent virtual stochastic dynamic system (VSDS), and an extended Foster-Lyapunov criterion is applied for the stability analysis. We derive a closed-form delay bound of the wireless network in terms of the CSI fading rate and the sensitivity of the MWQ policy to CSI and QSI. Based on the equivalent VSDS, we propose a novel MWQ iterative algorithm with compensation to improve the tracking performance. We demonstrate that under some mild conditions, the proposed modified MWQ algorithm converges to the optimal MWQ control despite the time-varying CSI and QSI.
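
    The MWQ rule itself is one line: serve the link maximizing queue length times service rate. The simulation below assumes the argmax is available instantly, which is exactly the assumption the paper relaxes; arrivals and rates are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        q = np.zeros(4)                            # queue lengths (QSI)
        for t in range(10_000):
            rates = rng.uniform(0, 2, q.size)      # time-varying channel (CSI)
            q += rng.poisson(0.3, q.size)          # random arrivals
            served = int(np.argmax(q * rates))     # the max-weight decision
            q[served] = max(0.0, q[served] - rates[served])
        print("final mean queue length:", q.mean())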

  3. Near real-time expectation-maximization algorithm: computational performance and passive millimeter-wave imaging field test results

    NASA Astrophysics Data System (ADS)

    Reynolds, William R.; Talcott, Denise; Hilgers, John W.

    2002-07-01

    A new iterative algorithm (EMLS) via the expectation-maximization method is derived for extrapolating a non-negative object function from noisy, diffraction-blurred image data. The algorithm has the following desirable attributes: fast convergence is attained for high-frequency object components, sensitivity to constraint parameters is reduced, and randomly missing data are accommodated. Speed and convergence results are presented. Field test imagery was obtained with a passive millimeter-wave imaging sensor having a 30.5 cm aperture. The algorithm was implemented and tested in near real time using the field test imagery. Theoretical results and experimental results using the field test imagery will be compared using an effective aperture measure of resolution increase. The effective aperture measure, based on examination of the edge-spread function, will be detailed.
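
    The EMLS details are not given in the abstract; below is the classical multiplicative EM (Richardson-Lucy) iteration for non-negative deblurring on which this family of algorithms builds, using FFT-based circular convolution and a synthetic 1D scene.

        import numpy as np

        def conv(x, h):
            return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

        def corr(x, h):                          # correlation = adjoint of conv
            return np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(h))))

        n = 256
        x_true = np.zeros(n)
        x_true[100], x_true[140] = 1.0, 0.6      # two point sources
        h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 4.0) ** 2)
        h = np.roll(h / h.sum(), -n // 2)        # centered, unit-sum blur kernel
        y = conv(x_true, h)                      # blurred observation

        x = np.ones(n)                           # flat non-negative start
        for _ in range(200):
            ratio = y / np.maximum(conv(x, h), 1e-12)
            x *= corr(ratio, h)                  # multiplicative EM update
        print("brightest source recovered at:", int(np.argmax(x)))   # ~100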

  4. Algorithm for real-time detection of signal patterns using phase synchrony: an application to an electrode array

    NASA Astrophysics Data System (ADS)

    Sadeghi, Saman; MacKay, William A.; van Dam, R. Michael; Thompson, Michael

    2011-02-01

    Real-time analysis of multi-channel spatio-temporal sensor data presents a considerable technical challenge for a number of applications. For example, in brain-computer interfaces, signal patterns originating on a time-dependent basis from an array of electrodes on the scalp (i.e. electroencephalography) must be analyzed in real time to recognize mental states and translate these to commands which control operations in a machine. In this paper we describe a new technique for recognition of spatio-temporal patterns, based on performing online discrimination of time-resolved events through the use of correlation of phase dynamics between various channels in a multi-channel system. The algorithm extracts unique sensor signature patterns associated with each event during a training period and ranks the importance of sensor pairs in order to distinguish between time-resolved stimuli to which the system may be exposed during real-time operation. We apply the algorithm to electroencephalographic signals obtained from subjects tested in the neurophysiology laboratories at the University of Toronto. The extension of this algorithm for rapid detection of patterns in other sensing applications, including chemical identification via chemical or biochemical sensor arrays, is also discussed.
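
    Phase-dynamics correlation between channels can be sketched with the standard phase-locking value (PLV) computed from Hilbert-transform phases; the pairwise PLV matrix below is a common stand-in and not necessarily the authors' exact statistic. The synthetic channels are illustrative.

        import numpy as np
        from scipy.signal import hilbert

        def plv_matrix(data):
            """data: (channels, samples) -> (channels, channels) PLV matrix."""
            phase = np.angle(hilbert(data, axis=1))
            n = data.shape[0]
            plv = np.ones((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    # magnitude of the mean phase-difference phasor
                    v = np.abs(np.mean(np.exp(1j * (phase[i] - phase[j]))))
                    plv[i, j] = plv[j, i] = v
            return plv

        rng = np.random.default_rng(0)
        t = np.arange(0, 2, 1 / 250)                    # 2 s at 250 Hz
        base = np.sin(2 * np.pi * 10 * t)               # shared 10 Hz rhythm
        chans = np.vstack([base + 0.3 * rng.standard_normal(t.size),
                           base + 0.3 * rng.standard_normal(t.size),
                           rng.standard_normal(t.size)])  # unrelated channel
        print(np.round(plv_matrix(chans), 2))  # pair (0,1) locks; pairs with 2 don't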

  5. A new method of real-time signal extraction for diffuse reflection laser ranging based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Peng; Zhang, Yan; Qian, Weiping

    2015-10-01

    Diffuse reflection laser ranging is one of the feasible ways to realize high-precision measurement of space debris. However, the weak echo of diffuse reflection results in a poor signal-to-noise ratio, so it is difficult to realize real-time signal extraction for diffuse reflection laser ranging when echo signal photons are buried in a large amount of noise photons. The Genetic Algorithm, inspired by the process of natural selection, is a heuristic search algorithm known for its adaptive optimization and global search ability. To the best of our knowledge, this paper is the first to propose a method of real-time signal extraction for diffuse reflection laser ranging based on the Genetic Algorithm. The extraction results are regarded as individuals in the population, and short-term linear fitting degree and data correlation level are used as selection criteria to search for an optimal solution. A fine search in the real-time data part quickly yields suitable new data during real-time signal extraction. A coarse search over both historical data and real-time data after the fine search is also designed; the co-evolution of both parts increases the search accuracy of the real-time data as well as the precision of the historical data. Simulation experiments show that our method has good signal extraction capability in poor signal-to-noise ratio circumstances, especially for data with high correlation.

  6. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree-structured barriers show good performance when synchronizing fixed-length work, while linear self-scheduled barriers show better performance when synchronizing fixed-length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments that support these conclusions, performed on an eighteen-processor Flex/32 shared-memory multiprocessor, are detailed.

  7. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1989-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree-structured barriers show good performance when synchronizing fixed-length work, while linear self-scheduled barriers show better performance when synchronizing fixed-length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments that support these conclusions, performed on an eighteen-processor Flex/32 shared-memory multiprocessor, are detailed.
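
    The simplest of the linear barriers discussed in these two reports is a central-counter barrier with sense reversal, sketched below; a logarithmic tree barrier would replace the single counter with log-depth pairwise stages. Python threading makes this a functional sketch rather than a performance experiment.

        import threading

        class SenseBarrier:
            def __init__(self, n):
                self.n, self.count, self.sense = n, n, True
                self.lock = threading.Condition()

            def wait(self):
                with self.lock:
                    local_sense = self.sense
                    self.count -= 1
                    if self.count == 0:          # last arrival releases everyone
                        self.count = self.n
                        self.sense = not self.sense
                        self.lock.notify_all()
                    else:
                        while self.sense == local_sense:
                            self.lock.wait()

        barrier = SenseBarrier(4)

        def worker(i):
            for step in range(3):
                barrier.wait()                   # all threads sync each step

        threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
        for th in threads: th.start()
        for th in threads: th.join()
        print("all phases completed")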

  8. Algorithm for automated analysis of surface vibrations using time-averaged digital speckle pattern interferometry.

    PubMed

    Krzemien, Leszek; Lukomski, Michal

    2012-07-20

    A fully automated algorithm was developed for the recording and analysis of vibrating objects with the help of digital speckle pattern interferometry (DSPI) utilizing continuous-wave laser light. A series of measurements was performed with increasing vibration-inducing force to allow the spatial distribution of vibration amplitude to be reconstructed on the object's surface. The developed algorithm uses the Hilbert transform for an independent, quantitative evaluation of the Bessel function at every point of the investigated surface. The procedure does not require phase modulation and thus can be implemented within any, even the simplest, DSPI apparatus. The proposed deformation analysis is fast and computationally inexpensive. PMID:22858957
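
    A sketch of the central signal-processing step under stated assumptions (in time-averaged DSPI the fringe contrast follows a zero-order Bessel function of the local vibration amplitude; the magnitude of the analytic signal obtained with the Hilbert transform recovers the envelope of that oscillating profile without any phase-modulation hardware):

```python
import numpy as np
from scipy.signal import hilbert
from scipy.special import j0

# Synthetic fringe profile: visibility ~ J0(amplitude), sampled along a
# series of increasing vibration amplitudes (illustrative forward model).
amplitude = np.linspace(0.0, 20.0, 2048)
profile = j0(amplitude) + 0.02 * np.random.randn(amplitude.size)

# The analytic-signal magnitude gives the envelope of the oscillating
# Bessel profile, from which local fringe contrast is read off quantitatively.
envelope = np.abs(hilbert(profile - profile.mean()))
```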

  9. BRAIN 2.0: Time and Memory Complexity Improvements in the Algorithm for Calculating the Isotope Distribution

    NASA Astrophysics Data System (ADS)

    Dittwald, Piotr; Valkenborg, Dirk

    2014-04-01

    Recently, an elegant iterative algorithm called BRAIN (Baffling Recursive Algorithm for Isotopic distributioN calculations) was presented. The algorithm is based on the classic polynomial method for calculating aggregated isotope distributions, and it introduces algebraic identities using the Newton-Girard and Viète formulae to solve the problem of polynomial expansion. Due to the iterative nature of the BRAIN method, the calculations must start from the lightest isotope variant. As such, the complexity of BRAIN scales quadratically with the mass of the putative molecule, since it depends on the number of aggregated peaks that need to be calculated. In this manuscript, we suggest two improvements to the algorithm that decrease both the time and memory complexity of obtaining the aggregated isotope distribution. We also illustrate a concept for representing the element isotope distribution in a generic manner. This representation allows the root calculation of the element polynomial required in the original BRAIN method to be omitted. A generic formulation for the roots is of special interest for higher-order element polynomials, where root-finding algorithms and their inaccuracies can thus be avoided.
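
    For illustration, the Newton-Girard identities mentioned above turn a polynomial's elementary symmetric functions (its coefficients, up to sign) into power sums of its roots without ever computing the roots; a minimal sketch:

```python
def power_sums(e, kmax):
    """Newton-Girard recurrence: given e[0] = 1 and the elementary symmetric
    polynomials e[1..n] of n roots, return p_1..p_kmax, where
    p_k = sum_i x_i**k. No root is ever computed explicitly."""
    n = len(e) - 1
    p = [0.0] * (kmax + 1)
    for k in range(1, kmax + 1):
        total = (-1) ** (k - 1) * k * e[k] if k <= n else 0.0
        for i in range(1, min(k, n + 1)):
            total += (-1) ** (i - 1) * e[i] * p[k - i]
        p[k] = total
    return p[1:]

# Example: x^2 - 3x + 2 has roots 1 and 2, so e = [1, 3, 2] and the first
# power sums are 1+2 = 3, 1+4 = 5, 1+8 = 9.
print(power_sums([1, 3, 2], 3))
```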

  10. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.

  11. Time-Based and Event-Based Prospective Memory in Autism Spectrum Disorder: The Roles of Executive Function and Theory of Mind, and Time-Estimation

    ERIC Educational Resources Information Center

    Williams, David; Boucher, Jill; Lind, Sophie; Jarrold, Christopher

    2013-01-01

    Prospective memory (remembering to carry out an action in the future) has been studied relatively little in ASD. We explored time-based (carry out an action at a pre-specified time) and event-based (carry out an action upon the occurrence of a pre-specified event) prospective memory, as well as possible cognitive correlates, among 21…

  12. The effects of initial conditions and control time on optimal actuator placement via a max-min Genetic Algorithm

    SciTech Connect

    Redmond, J.; Parker, G.

    1993-07-01

    This paper examines the role of the control objective and the control time in determining fuel-optimal actuator placement for structural vibration suppression. A general theory is developed that can be easily extended to include alternative performance metrics such as energy and time-optimal control. The performance metric defines a convex admissible control set which leads to a max-min optimization problem expressing optimal location as a function of initial conditions and control time. A solution procedure based on a nested Genetic Algorithm is presented and applied to an example problem. Results indicate that the optimal locations vary widely as a function of control time and initial conditions.

  13. Novel algorithm and MATLAB-based program for automated power law analysis of single particle, time-dependent mean-square displacement

    NASA Astrophysics Data System (ADS)

    Umansky, Moti; Weihs, Daphne

    2012-08-01

    should also be backwards compatible. Symbolic Math Toolbox (5.5) is required. The Curve Fitting Toolbox (3.0) is recommended. Computer: Tested on Windows only, yet should work on any computer running MATLAB. In Windows 7, it should be run as administrator; if the user is not the administrator, the program may not be able to save outputs and temporary outputs to all locations. Operating system: Any supporting MATLAB (MathWorks Inc.) v7.11 / 2010b or higher. Supplementary material: Sample output files (approx. 30 MBytes) are available. Classification: 12. External routines: Several MATLAB subfunctions (m-files), freely available on the web, were used as part of, and included in, this code: count, NaN suite, parseArgs, roundsd, subaxis, wcov, wmean, and the executable pdfTK.exe. Nature of problem: In many physical and biophysical areas employing single-particle tracking, having the time-dependent power laws governing the time-averaged mean-square displacement (MSD) of a single particle is crucial. Those power laws determine the mode of motion and hint at the underlying mechanisms driving motion. Accurate determination of the power laws that describe each trajectory allows categorization into groups for further analysis of single trajectories or ensemble analysis, e.g., ensemble- and time-averaged MSD. Solution method: The algorithm in the provided program automatically analyzes and fits time-dependent power laws to single-particle trajectories, then groups particles according to user-defined cutoffs. It accepts time-dependent trajectories of several particles; each trajectory is run through the program, its time-averaged MSD is calculated, and power laws are determined in regions where the MSD is linear on a log-log scale. Our algorithm searches for high-curvature points in experimental data, here the time-dependent MSD. Those serve as anchor points for determining the ranges of the power-law fits. Power-law scaling is then accurately determined and error estimations of the
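
    A minimal sketch of the core quantity being automated (the time-averaged MSD of one 2D trajectory and a log-log power-law fit; the curvature-based anchor-point search described in the record goes beyond this sketch):

```python
import numpy as np

def time_averaged_msd(xy, dt, max_lag=None):
    """Time-averaged MSD of one trajectory; xy is (N, 2), dt the frame time."""
    n = len(xy)
    max_lag = max_lag or n // 4
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag]) ** 2, axis=1))
                    for lag in lags])
    return lags * dt, msd

def powerlaw_exponent(tau, msd):
    """Fit MSD ~ A * tau**alpha on a log-log scale; alpha classifies motion
    (alpha ~ 1 diffusive, < 1 subdiffusive, > 1 superdiffusive)."""
    alpha, log_a = np.polyfit(np.log(tau), np.log(msd), 1)
    return alpha, np.exp(log_a)
```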

  14. [Fractal dimension and histogram method: algorithm and some preliminary results of noise-like time series analysis].

    PubMed

    Pancheliuga, V A; Pancheliuga, M S

    2013-01-01

    In the present work, a methodological background for the histogram method of time series analysis is developed. The connection between the shapes of smoothed histograms constructed from short segments of time series of fluctuations and the fractal dimension of the segments is studied. It is shown that the fractal dimension possesses all the main properties of the histogram method. Based on this, a further development of the fractal dimension determination algorithm is proposed. This algorithm allows a more precise determination of the fractal dimension by using the "all possible combinations" method. The application of the method to noise-like time series analysis leads to results that could previously be obtained only by means of the histogram method based on human expert comparisons of histogram shapes. PMID:23755565

  15. Platform for Real-Time Simulation of Dynamic Systems and Hardware-in-the-Loop for Control Algorithms

    PubMed Central

    de Souza, Isaac D. T.; Silva, Sergio N.; Teles, Rafael M.; Fernandes, Marcelo A. C.

    2014-01-01

    The development of new embedded algorithms for automation and control of industrial equipment usually requires the use of real-time testing. However, the equipment required is often expensive, which means that such tests are often not viable. The objective of this work was therefore to develop an embedded platform for the distributed real-time simulation of dynamic systems. This platform, called the Real-Time Simulator for Dynamic Systems (RTSDS), could be applied in both industrial and academic environments. In industrial applications, the RTSDS could be used to optimize embedded control algorithms. In the academic sphere, it could be used to support research into new embedded solutions for automation and control and could also be used as a tool to assist in undergraduate and postgraduate teaching related to the development of projects concerning on-board control systems. PMID:25320906

  16. Platform for real-time simulation of dynamic systems and hardware-in-the-loop for control algorithms.

    PubMed

    de Souza, Isaac D T; Silva, Sergio N; Teles, Rafael M; Fernandes, Marcelo A C

    2014-01-01

    The development of new embedded algorithms for automation and control of industrial equipment usually requires the use of real-time testing. However, the equipment required is often expensive, which means that such tests are often not viable. The objective of this work was therefore to develop an embedded platform for the distributed real-time simulation of dynamic systems. This platform, called the Real-Time Simulator for Dynamic Systems (RTSDS), could be applied in both industrial and academic environments. In industrial applications, the RTSDS could be used to optimize embedded control algorithms. In the academic sphere, it could be used to support research into new embedded solutions for automation and control and could also be used as a tool to assist in undergraduate and postgraduate teaching related to the development of projects concerning on-board control systems. PMID:25320906

  17. A self-adaptive parameter optimization algorithm in a real-time parallel image processing system.

    PubMed

    Li, Ge; Zhang, Xuehe; Zhao, Jie; Zhang, Hongli; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    To address the stalemate in which precision, speed, robustness, and other parameters constrain one another in a parallel-processing visual servo system, this paper proposes an adaptive load-capacity-balance strategy for the servo parameter optimization algorithm (ALBPO), to improve computing precision and achieve a high detection ratio without lengthening the servo cycle. We use load capacity (LC) functions to estimate the load on each processor and then continuously self-adapt toward a balanced state based on the fluctuating LC results; meanwhile, we pick a proper set of target detection and location parameters according to the LC results. Compared with current load-balance algorithms, the algorithm proposed in this paper proceeds without prior knowledge of the maximum or current load of the processors, which gives it great extensibility. Simulation results showed that the ALBPO algorithm has great merit in load-balance performance, optimizing the QoS of each processor and fulfilling the servo-cycle, precision, and robustness requirements of the parallel-processed vision servo system. PMID:24174920

  18. Combined space and time convergence analysis of a compressible flow algorithm

    SciTech Connect

    Kamm, J. R.; Rider, William; Brock, J. S.

    2002-01-01

    In this study, we quantify both the spatial and temporal convergence behavior simultaneously for various algorithms for the two-dimensional Euler equations of gas dynamics. Such an analysis falls under the rubric of verification, which is the process of determining whether a simulation code accurately represents the code developer's description of the model (e.g., equations, boundary conditions, etc.). The recognition that verification analysis is a necessary and valuable activity continues to increase among computational fluid dynamics practitioners. Using computed results and a known solution, one can estimate the effective convergence rates of a specific software implementation of a given algorithm and gauge those results relative to the design properties of the algorithm. In the aerodynamics community, such analyses are typically performed to evaluate the performance of spatial integrators; analogous convergence analysis can also be performed for temporal integrators. Our approach combines these two usually separate activities into the same analysis framework. To accomplish this task, we outline a procedure in which a known solution, together with a set of computed results obtained for a number of different spatial and temporal discretizations, is employed to determine the complete convergence properties of the combined spatio-temporal algorithm. Such an approach is of particular interest for Lax-Wendroff-type integration schemes, where the specific impact of either the spatial or temporal integrator alone cannot easily be deconvolved from computed results. Unlike the more common spatial convergence analysis, the combined spatial and temporal analysis leads to a set of nonlinear equations that must be solved numerically. The unknowns in this set of equations are various parameters, including the asymptotic convergence rates, that quantify the basic performance of the software implementation of the algorithm.
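
    A sketch of the final numerical step under stated assumptions: given errors measured against the known solution for several grid spacings h and time steps Δt, fit the standard ansatz E = C_x h^p + C_t Δt^q by nonlinear least squares (this particular ansatz and the sample values are illustrative, not the paper's exact parameterization):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, h, dt, err):
    cx, p, ct, q = params
    return cx * h ** p + ct * dt ** q - err

# Errors from runs on a matrix of spatial and temporal resolutions
# (placeholder numbers; real verification runs supply these).
h = np.array([0.04, 0.04, 0.02, 0.02, 0.01, 0.01])
dt = np.array([0.004, 0.002, 0.004, 0.002, 0.002, 0.001])
err = np.array([3.2e-3, 2.9e-3, 1.1e-3, 8.0e-4, 3.1e-4, 2.2e-4])

fit = least_squares(residuals, x0=[1.0, 2.0, 1.0, 2.0], args=(h, dt, err))
cx, p, ct, q = fit.x   # p and q are the observed spatial/temporal rates
```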

  19. Algorithms and Programs for Strong Gravitational Lensing In Kerr Space-time Including Polarization

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie; Maddumage, Prasad

    2015-05-01

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.

  20. A Real-time Spectrum Handoff Algorithm for VoIP based Cognitive Radio Networks: Design and Performance Analysis

    NASA Astrophysics Data System (ADS)

    Chakraborty, Tamal; Saha Misra, Iti

    2016-03-01

    Secondary Users (SUs) in a Cognitive Radio Network (CRN) face unpredictable interruptions in transmission due to the random arrival of Primary Users (PUs), leading to spectrum handoff or dropping instances. An efficient spectrum handoff algorithm thus becomes one of the indispensable components of a CRN, especially for real-time communication like Voice over IP (VoIP). In this regard, this paper investigates the effects of spectrum handoff on the Quality of Service (QoS) of VoIP traffic in a CRN and proposes a real-time spectrum handoff algorithm in two phases. The first phase, VAST (VoIP-based Adaptive Sensing and Transmission), adaptively varies the channel sensing and transmission durations to make intelligent dropping decisions. The second phase, ProReact (Proactive and Reactive Handoff), deploys efficient channel selection mechanisms during spectrum handoff for resuming communication. Extensive performance analysis in analytical and simulation models confirms a decrease in spectrum handoff delay for VoIP SUs of more than 40% and 60% compared to existing proactive and reactive algorithms, respectively, and ensures a minimum 10% reduction in call-dropping probability with respect to previous works in this domain. The effective SU transmission duration is also maximized under the proposed algorithm, making it suitable for successful VoIP communication.

  1. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    NASA Astrophysics Data System (ADS)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical to the performance of the overall concrete structure. For this research, a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron-resolution surface profiling. Optimizations in the control and sensory system allow data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high-resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is a combination of downhill simplex search and geometrical feature templates. By performing downhill simplex search through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time

  2. A real-time guidance algorithm for aerospace plane optimal ascent to low earth orbit

    NASA Astrophysics Data System (ADS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    Problems of onboard trajectory optimization and synthesis of suitable guidance laws for ascent to low Earth orbit of an air-breathing, single-stage-to-orbit vehicle are addressed. A multimode propulsion system is assumed which incorporates turbojet, ramjet, scramjet, and rocket engines. An algorithm for generating fuel-optimal climb profiles is presented. This algorithm results from the application of the minimum principle to a low-order dynamic model that includes angle-of-attack effects and the normal component of thrust. Maximum dynamic pressure and maximum aerodynamic heating rate constraints are considered. Switching conditions are derived which, under appropriate assumptions, govern optimal transition from one propulsion mode to another. A nonlinear transformation technique is employed to derive a feedback controller for tracking the computed trajectory. Numerical results illustrate the nature of the resulting fuel-optimal climb paths.

  3. Quadrature algorithms to the luminosity distance with a time-dependent dark energy model

    SciTech Connect

    Yue, Nan-Nan; Liu, De-Zi; Pei, Xiao-Xing; Zhang, Tong-Jie; Yang, Zhi-Liang; Zhu, Fang-Fang

    2011-11-01

    In our previous work [1], we proposed two methods for computing the luminosity distance $d_L^{\Lambda}$ in the ΛCDM model. In this paper, two effective quadrature algorithms, Romberg integration and composite Gaussian quadrature, are presented to calculate the luminosity distance $d_L^{\mathrm{CPL}}$ in the Chevallier-Polarski-Linder (CPL) parametrization model. By comparing both the efficiency and accuracy of the two algorithms, we find that the second is more promising. Moreover, we develop another strategy adapted for approximating $d_L^{\Lambda}$ in a flat ΛCDM universe. To some extent, our methods can contribute to recent numerical simulations investigating dark energy cosmology.
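
    A sketch of the target integral under stated assumptions (flat universe; illustrative H0, Ωm, and CPL parameters): the luminosity distance is $d_L = (1+z)\,(c/H_0)\int_0^z dz'/E(z')$, evaluated below with fixed-order Gauss-Legendre quadrature and cross-checked against adaptive quadrature:

```python
import numpy as np
from scipy.integrate import fixed_quad, quad

C_KM_S = 299792.458        # speed of light [km/s]
H0, OM = 70.0, 0.3         # illustrative H0 [km/s/Mpc] and Omega_m
W0, WA = -1.0, 0.0         # CPL parameters; (-1, 0) recovers LambdaCDM

def inv_E(z):
    """1/E(z) for a flat universe with CPL dark energy,
    w(z) = w0 + wa * z / (1 + z)."""
    de = (1 - OM) * (1 + z) ** (3 * (1 + W0 + WA)) * np.exp(-3 * WA * z / (1 + z))
    return 1.0 / np.sqrt(OM * (1 + z) ** 3 + de)

def d_L(z, n_gauss=32):
    """Luminosity distance [Mpc] via Gauss-Legendre quadrature."""
    integral, _ = fixed_quad(inv_E, 0.0, z, n=n_gauss)
    return (1 + z) * C_KM_S / H0 * integral

print(d_L(1.0))                                            # ~6600 Mpc
print((1 + 1.0) * C_KM_S / H0 * quad(inv_E, 0.0, 1.0)[0])  # adaptive check
```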

  4. An improved real-time endovascular guidewire position simulation using shortest path algorithm.

    PubMed

    Qiu, Jianpeng; Qu, Zhiyi; Qiu, Haiquan; Zhang, Xiaomin

    2016-09-01

    In this study, we propose a new graph-theoretical method to simulate guidewire paths inside the carotid artery. The minimum-energy guidewire path can be obtained by applying a shortest path algorithm, such as Dijkstra's algorithm for graphs, based on the principle of minimal total energy. Experiments on three phantoms were validated against previous results, revealing that the simulated and real guidewires overlap completely for the first and second phantoms. In addition, 95% of the third phantom overlaps completely, and the remaining 5% closely coincides. The results demonstrate that our method achieves 87% and 80% improvements for the first and third phantoms, respectively, under the same conditions. Furthermore, a 91% improvement was obtained for the second phantom under conditions of reduced graph construction complexity. PMID:26467345
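
    A minimal sketch of the graph machinery the method rests on (Dijkstra's algorithm with a binary heap; the edge weights, which in the paper would encode the guidewire's total bending energy, are left abstract here):

```python
import heapq

def dijkstra(graph, source, target):
    """graph: {node: [(neighbor, weight), ...]} with nonnegative weights,
    e.g. weights modeling the bending energy of candidate path segments.
    Returns (total_cost, path from source to target)."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == target:                  # reconstruct the minimum-cost path
            path = [u]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []
```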

  5. A real-time guidance algorithm for aerospace plane optimal ascent to low earth orbit

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    1989-01-01

    Problems of onboard trajectory optimization and synthesis of suitable guidance laws for ascent to low Earth orbit of an air-breathing, single-stage-to-orbit vehicle are addressed. A multimode propulsion system is assumed which incorporates turbojet, ramjet, scramjet, and rocket engines. An algorithm for generating fuel-optimal climb profiles is presented. This algorithm results from the application of the minimum principle to a low-order dynamic model that includes angle-of-attack effects and the normal component of thrust. Maximum dynamic pressure and maximum aerodynamic heating rate constraints are considered. Switching conditions are derived which, under appropriate assumptions, govern optimal transition from one propulsion mode to another. A nonlinear transformation technique is employed to derive a feedback controller for tracking the computed trajectory. Numerical results illustrate the nature of the resulting fuel-optimal climb paths.

  6. Time domain algorithm for accelerated determination of the first order moment of photo current fluctuations in high speed laser Doppler perfusion imaging.

    PubMed

    Draijer, Matthijs; Hondebrink, Erwin; van Leeuwen, Ton; Steenbergen, Wiendelt

    2009-10-01

    Advances in optical array sensor technology allow for the real-time acquisition of dynamic laser speckle patterns generated by tissue perfusion, which, in principle, allows for real-time laser Doppler perfusion imaging (LDPI). Exploitation of these developments is enhanced with the introduction of faster algorithms to transform photo currents into perfusion estimates using the first moment of the power spectrum. A time domain (TD) algorithm is presented for determining the first-order spectral moment. Experiments are performed to compare this algorithm with the widely used Fast Fourier Transform (FFT). This study shows that the TD algorithm is twice as fast as the FFT algorithm without loss of accuracy. Compared to the FFT, the TD algorithm is efficient in terms of processor time, memory usage and data transport. PMID:19820976
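
    For context, a sketch of the conventional frequency-domain route that the TD algorithm is benchmarked against (in LDPI the perfusion estimate is built from the first moment of the photocurrent power spectrum; the normalization convention here is an assumption):

```python
import numpy as np

def first_moment_fft(i_photo, fs):
    """First spectral moment M1 = sum_f f * P(f) of the AC photocurrent,
    computed via the FFT periodogram."""
    ac = i_photo - np.mean(i_photo)
    power = np.abs(np.fft.rfft(ac)) ** 2 / (fs * len(ac))  # periodogram P(f)
    freqs = np.fft.rfftfreq(len(ac), d=1.0 / fs)
    return np.sum(freqs * power)
```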

  7. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    SciTech Connect

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-12-31

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as "acceptable" or "suspect". Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  8. SNSMIL, a real-time single molecule identification and localization algorithm for super-resolution fluorescence microscopy

    PubMed Central

    Tang, Yunqing; Dai, Luru; Zhang, Xiaoming; Li, Junbai; Hendriks, Johnny; Fan, Xiaoming; Gruteser, Nadine; Meisenberg, Annika; Baumann, Arnd; Katranidis, Alexandros; Gensch, Thomas

    2015-01-01

    Single molecule localization based super-resolution fluorescence microscopy offers significantly higher spatial resolution than predicted by Abbe's resolution limit for far-field optical microscopy. Such super-resolution images are reconstructed from wide-field or total internal reflection single molecule fluorescence recordings. Discrimination between the emission of single fluorescent molecules and background noise fluctuations remains a great challenge in current data analysis. Here we present a real-time and robust single molecule identification and localization algorithm, SNSMIL (Shot Noise based Single Molecule Identification and Localization). This algorithm is based on the intrinsic nature of noise, i.e., its Poisson or shot noise characteristics, and a new identification criterion, QSNSMIL, is defined. SNSMIL improves the identification accuracy of single fluorescent molecules in experimental or simulated datasets with high and inhomogeneous background. The implementation of SNSMIL relies on a graphics processing unit (GPU), making real-time analysis feasible, as shown for real experimental and simulated datasets. PMID:26098742

  9. A Memetic Algorithm for the Location-Based Continuously Operating Reference Stations Placement Problem in Network Real-Time Kinematic.

    PubMed

    Tang, Maolin

    2015-10-01

    Network real-time kinematic (NRTK) is a technology that can provide centimeter-level-accuracy positioning services in real time, enabled by a network of continuously operating reference stations (CORS). The location-oriented CORS placement problem is an important problem in the design of an NRTK, as it directly affects not only the installation and operational cost of the NRTK but also the quality of the positioning services it provides. This paper presents a memetic algorithm (MA) for the location-oriented CORS placement problem, which hybridizes the powerful explorative search capacity of a genetic algorithm with the efficient and effective exploitative search capacity of a local optimization. Experimental results show that the MA performs better than existing approaches. In this paper, we also conduct an empirical study of the scalability of the MA, the effectiveness of the hybridization technique, and the selection of the crossover operator in the MA. PMID:25415999
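
    A minimal sketch of the hybridization pattern described (a genetic algorithm whose offspring are refined by a greedy local search; the bit-vector encoding, in which each bit marks a candidate station site as used or unused, and all parameters are illustrative assumptions):

```python
import random

def memetic(fitness, n_bits, pop_size=30, generations=50, p_mut=0.02):
    """Memetic algorithm: explorative GA operators plus an exploitative
    local refinement (single-bit-flip hill climbing) of every child."""
    def local_search(ind):
        best, best_fit = ind[:], fitness(ind)
        for i in range(n_bits):          # accept improving single-bit flips
            trial = best[:]
            trial[i] ^= 1
            f = fitness(trial)
            if f > best_fit:
                best, best_fit = trial, f
        return best

    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)                  # crossover
            child = [g ^ (random.random() < p_mut) for g in a[:cut] + b[cut:]]
            children.append(local_search(child))               # memetic step
        pop = parents + children
    return max(pop, key=fitness)
```

    Here fitness would score a placement by the positioning quality delivered over the service area net of station costs; any such black-box objective fits this loop.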

  10. Thermosphere-ionosphere-mesosphere energetics and dynamics (TIMED). The TIMED mission and science program report of the science definition team. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A Science Definition Team was established in December 1990 by the Space Physics Division, NASA, to develop a satellite program to conduct research on the energetics, dynamics, and chemistry of the mesosphere and lower thermosphere/ionosphere. This two-volume publication describes the TIMED (Thermosphere-Ionosphere-Mesosphere, Energetics and Dynamics) mission and associated science program. The report outlines the scientific objectives of the mission, the program requirements, and the approach towards meeting these requirements.

  11. Circuit simulation for large-scale MOSFET and lossy coupled transmission line circuits using multi-rate iterated timing analysis algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Jung; Shin, Tien-Hao

    2012-04-01

    In this paper, we propose methods to perform large-scale circuit simulation for MOSFET circuits containing the lossy coupled transmission lines that are increasingly encountered in modern circuit design. We utilize the fast multi-rate ITA (Iterated Timing Analysis) algorithm and a full time-domain transmission line calculation algorithm based on the Method of Characteristics. Various methods to speed up the transmission line calculation algorithm are presented. All proposed methods have been implemented and tested to justify their superior performance.

  12. Ground-based time-guidance algorithm for control of airplanes in a time-metered air traffic control environment: A piloted simulation study

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Imbert, N.

    1986-01-01

    The rapidly increasing costs of flight operations and the requirement for increased fuel conservation have made it necessary to develop more efficient ways to operate airplanes and to control air traffic for arrivals and departures to the terminal area. One concept for controlling arrival traffic through time metering has been jointly studied and evaluated by NASA and ONERA/CERT in piloted simulation tests. From time errors measured at checkpoints, a time-guidance algorithm computed airspeed and heading commands, issued by air traffic control for the pilot to follow, that would cause the airplane to cross a metering fix at a preassigned time. These tests resulted in the simulated airplane crossing a metering fix with a mean time error of 1.0 sec and a standard deviation of 16.7 sec when the time-metering algorithm was used. With mismodeled winds representing the unknowns in wind-aloft forecasts and modeling form, the mean time error attained when crossing the metering fix increased, while the standard deviation remained approximately the same. The subject pilots reported that the airspeed and heading commands computed in the guidance concept were easy to follow and did not increase their workload above normal levels.

  13. A real-time GPU implementation of the SIFT algorithm for large-scale video analysis tasks

    NASA Astrophysics Data System (ADS)

    Fassold, Hannes; Rosner, Jakub

    2015-02-01

    The SIFT algorithm is one of the most popular feature extraction methods and is therefore widely used in all sorts of video analysis tasks, such as instance search and duplicate/near-duplicate detection. We present an efficient GPU implementation of the SIFT descriptor extraction algorithm using CUDA. The major steps of the algorithm are presented, and for each step we describe how to parallelize it massively and efficiently, how to take advantage of unique capabilities of the GPU like shared memory/texture memory, and how to avoid or minimize common GPU performance pitfalls. We compare the GPU implementation with the reference CPU implementation in terms of runtime and quality, achieving a speedup factor of approximately 3-5 for SD and 5-6 for Full HD video with respect to a multi-threaded CPU implementation, which allows us to run the SIFT descriptor extraction algorithm in real time on SD video. Furthermore, quality tests show that the GPU implementation gives the same quality as the reference CPU implementation from the HessSIFT library. We further describe the benefits of GPU-accelerated SIFT descriptor calculation for video analysis applications such as near-duplicate video detection.

  14. Passive Fourier-transform infrared spectroscopy of chemical plumes: an algorithm for quantitative interpretation and real-time background removal

    NASA Astrophysics Data System (ADS)

    Polak, Mark L.; Hall, Jeffrey L.; Herr, Kenneth C.

    1995-08-01

    We present a ratioing algorithm for quantitative analysis of the passive Fourier-transform infrared spectrum of a chemical plume. We show that the transmission of a near-field plume is given by $\tau_{\text{plume}} = (L_{\text{obsd}} - L_{\text{bb-plume}}) / (L_{\text{bkgd}} - L_{\text{bb-plume}})$, where $\tau_{\text{plume}}$ is the frequency-dependent transmission of the plume, $L_{\text{obsd}}$ is the spectral radiance of the scene that contains the plume, $L_{\text{bkgd}}$ is the spectral radiance of the same scene without the plume, and $L_{\text{bb-plume}}$ is the spectral radiance of a blackbody at the plume temperature. The algorithm simultaneously achieves background removal, elimination of the spectrometer internal signature, and quantification of the plume spectral transmission. It has applications to both real-time processing for plume visualization and quantitative measurements of plume column densities. The plume temperature (through $L_{\text{bb-plume}}$), which is not always precisely known, can have a profound effect on the quantitative interpretation of the algorithm and is discussed in detail. Finally, we provide an illustrative example of the use of the algorithm on a trichloroethylene and acetone plume.
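
    A sketch of the ratioing step under stated assumptions (calibrated radiances on a common wavenumber grid in cm^-1; the Planck function supplies $L_{\text{bb-plume}}$ from an assumed plume temperature):

```python
import numpy as np

C1 = 1.191042e-8   # 2*h*c^2 in W / (m^2 sr cm^-1) per (cm^-1)^3
C2 = 1.4387769     # h*c/k_B in cm*K

def planck_radiance(wavenumber, temp_k):
    """Blackbody spectral radiance [W / (m^2 sr cm^-1)] at wavenumber [cm^-1]."""
    return C1 * wavenumber ** 3 / np.expm1(C2 * wavenumber / temp_k)

def plume_transmission(l_obsd, l_bkgd, wavenumber, plume_temp_k):
    """Ratioing algorithm: tau = (L_obsd - L_bb) / (L_bkgd - L_bb).

    One step removes the background scene and the spectrometer's internal
    signature; accuracy hinges on the assumed plume temperature."""
    l_bb = planck_radiance(wavenumber, plume_temp_k)
    return (l_obsd - l_bb) / (l_bkgd - l_bb)
```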

  15. A generalized deconvolution algorithm for image reconstruction in positron emission tomography with time-of-flight information (TOFPET)

    SciTech Connect

    Chen, C.T.; Metz, C.E.

    1984-01-01

    Positron emission tomographic systems capable of time-of-flight measurements open new avenues for image reconstruction. Three algorithms have been proposed previously: the most-likely position method (MLP), the confidence weighting method (CW) and the estimated posterior-density weighting method (EPDW). While MLP suffers from poorer noise properties, both CW and EPDW require substantially more computer processing time. Mathematically, the TOFPET image data at any projection angle represent a 2D image blurred by different TOF and detector spatial resolutions in two perpendicular directions. The integration of TOFPET images over all angles produces a preprocessed 2D image which is the convolution of the true image and a rotationally symmetric point spread function (PSF). Hence the tomographic reconstruction problem for TOFPET can be viewed as nothing more than a 2D image processing task to compensate for a known PSF. A new algorithm based on a generalized iterative deconvolution method and its equivalent filters ("Metz filters") developed earlier for conventional nuclear medicine image processing is proposed for this purpose. The algorithm can be carried out in a single step by an equivalent filter in the frequency domain; therefore, much of the computation time necessary for CW and EPDW is avoided. Results from computer simulation studies show that this new approach provides excellent resolution enhancement at low frequencies, good noise suppression at high frequencies, a reduction of Gibbs' phenomenon due to sharp filter cutoff, and better quantitative measurements than other methods.
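
    A sketch of the single-step equivalent-filter idea under stated assumptions (a Gaussian PSF of a few pixels stands in for the rotationally symmetric TOFPET PSF; the Metz filter of order n acts as a near-inverse filter where the MTF is high and rolls off toward zero where it is low):

```python
import numpy as np

def metz_filter(shape, sigma_psf, n=10):
    """Frequency-domain Metz filter M = (1 - (1 - |H|^2)**n) / H for a
    Gaussian MTF H; equivalent to n passes of iterative deconvolution."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    h = np.exp(-2.0 * (np.pi * sigma_psf) ** 2 * (fx ** 2 + fy ** 2))
    return (1.0 - (1.0 - h ** 2) ** n) / h

def deconvolve(image, sigma_psf, n=10):
    """Single-step frequency-domain compensation for the known PSF."""
    filt = metz_filter(image.shape, sigma_psf, n)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * filt))
```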

  16. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    SciTech Connect

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-06-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected the fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2-3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading the dose quality.
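
    A sketch of the sampling scheme under stated assumptions (boundary voxels are always kept; interior voxels are clustered by their influence-matrix rows and a fixed fraction is drawn from each cluster; the use of scikit-learn k-means is an illustrative choice):

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_voxels(influence, boundary_mask, rate=0.10, n_clusters=50, seed=0):
    """influence: (n_voxels, n_beamlets) dose-influence rows;
    boundary_mask: boolean array marking boundary voxels (kept outright)."""
    rng = np.random.default_rng(seed)
    keep = set(np.flatnonzero(boundary_mask))
    interior = np.flatnonzero(~boundary_mask)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(influence[interior])
    for c in range(n_clusters):
        members = interior[labels == c]
        if len(members) == 0:
            continue
        n_pick = max(1, int(rate * len(members)))      # pre-set sampling rate
        keep.update(rng.choice(members, size=n_pick, replace=False))
    return np.sort(np.fromiter(keep, dtype=int))
```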

  17. A Framework and Algorithms for Multivariate Time Series Analytics (MTSA): Learning, Monitoring, and Recommendation

    ERIC Educational Resources Information Center

    Ngan, Chun-Kit

    2013-01-01

    Making decisions over multivariate time series is an important topic which has gained significant interest in the past decade. A time series is a sequence of data points which are measured and ordered over uniform time intervals. A multivariate time series is a set of multiple, related time series in a particular domain in which domain experts…

  18. So much to do, so little time. To accomplish the mandatory initiatives of ARRA, healthcare organizations will require significant and thoughtful planning, prioritization and execution.

    PubMed

    Klein, Kimberly

    2010-01-01

    The American Recovery and Reinvestment Act of 2009 (ARRA) has set forth legislation for the healthcare community to achieve adoption of electronic health records (EHR), as well as form data standards, health information exchanges (HIE) and compliance with more stringent security and privacy controls under the HITECH Act. While the Office of the National Coordinator for Health Information Technology (ONCHIT) works on the definition of both "meaningful use" and "certification" of information technology systems, providers in particular must move forward with their IT initiatives to achieve the basic requirements for Medicare and Medicaid incentives starting in 2011, and avoid penalties that will reduce reimbursement beginning in 2015. In addition, providers, payors, government and non-government stakeholders will all have to balance the implementation of EHRs, working with HIEs, at the same time that they must upgrade their systems to be in compliance with ICD-10 and HIPAA 5010 code sets. Compliance deadlines for EHRs and HIEs begin in 2011, while ICD-10 diagnosis and procedure code sets compliance is required by October 2013 and HIPAA 5010 transaction sets, with one exception, is required by January 1, 2012. In order to accomplish these strategic and mandatory initiatives successfully and simultaneously, healthcare organizations will require significant and thoughtful planning, prioritization and execution. PMID:20077923

  19. An efficient algorithm based on splitting for the time integration of the Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Blanes, Sergio; Casas, Fernando; Murua, Ander

    2015-12-01

    We present a practical algorithm based on symplectic splitting methods intended for the numerical integration in time of the Schrödinger equation when the Hamiltonian operator is either time-independent or changes slowly with time. In the latter case, the evolution operator can be effectively approximated in a step-by-step manner: first divide the time integration interval in sufficiently short subintervals, and then successively solve a Schrödinger equation with a different time-independent Hamiltonian operator in each of these subintervals. When discretized in space, the Schrödinger equation with the time-independent Hamiltonian operator obtained for each time subinterval can be recast as a classical linear autonomous Hamiltonian system corresponding to a system of coupled harmonic oscillators. The particular structure of this linear system allows us to construct a set of highly efficient schemes optimized for different precision requirements and time intervals. Sharp local error bounds are obtained for the solution of the linear autonomous Hamiltonian system considered in each time subinterval. Our schemes can be considered, in this setting, as polynomial approximations to the matrix exponential in a similar way as methods based on Chebyshev and Taylor polynomials. The theoretical analysis, supported by numerical experiments performed for different time-independent Hamiltonians, indicates that the new methods are more efficient than schemes based on Chebyshev polynomials for all tolerances and time interval lengths. The algorithm we present automatically selects, for each time subinterval, the most efficient splitting scheme (among several new optimized splitting methods) for a prescribed error tolerance and given estimates of the upper and lower bounds of the eigenvalues of the discretized version of the Hamiltonian operator.
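
    A minimal illustration of time integration by splitting in its most familiar form (second-order Strang splitting with the kinetic term applied in Fourier space; the paper's optimized higher-order symplectic schemes and error bounds go well beyond this sketch):

```python
import numpy as np

def strang_step(psi, v, k2, dt, hbar=1.0, m=1.0):
    """One Strang step for i*hbar*dpsi/dt = (T + V) psi: a half kick in V,
    a full kinetic step (diagonal in Fourier space), and a half kick in V."""
    psi = np.exp(-0.5j * v * dt / hbar) * psi
    psi = np.fft.ifft(np.exp(-0.5j * hbar * k2 * dt / m) * np.fft.fft(psi))
    return np.exp(-0.5j * v * dt / hbar) * psi

# Gaussian wave packet in a harmonic trap (illustrative parameters).
n, length = 512, 40.0
x = np.linspace(-length / 2, length / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)
v = 0.5 * x ** 2
psi = np.exp(-(x - 2.0) ** 2 + 0.5j * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (length / n))

for _ in range(1000):                    # evolve to t = 5 with dt = 0.005
    psi = strang_step(psi, v, k ** 2, dt=0.005)
```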

  20. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals

    PubMed Central

    Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.

    2016-01-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors’ previous Letter three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension from linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology University and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show an root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp–p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation on the other hand shows 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat. PMID:27382478