Sample records for detection algorithms implemented

  1. A real time microcomputer implementation of sensor failure detection for turbofan engines

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Merrill, Walter C.

    1989-01-01

    An algorithm was developed which detects, isolates, and accommodates sensor failures using analytical redundancy. The performance of this algorithm was demonstrated on a full-scale F100 turbofan engine. The algorithm was implemented in real time on a microprocessor-based controls computer that includes parallel processing and high-order language programming. Parallel processing was used to achieve the required computational power for the real-time implementation, and high-order language programming was used to reduce the programming and maintenance costs of the algorithm implementation software. The sensor failure algorithm was combined with an existing multivariable control algorithm to give a complete control implementation with sensor analytical redundancy. The real-time microprocessor implementation of the algorithm, which resulted in the successful completion of the algorithm engine demonstration, is described.

  2. Algorithm architecture co-design for ultra low-power image sensor

    NASA Astrophysics Data System (ADS)

    Laforest, T.; Dupret, A.; Verdant, A.; Lattard, D.; Villard, P.

    2012-03-01

    In the context of embedded video surveillance, stand-alone, left-behind image sensors are used to detect events with a high level of confidence but also with very low power consumption. With a steady camera, motion detection algorithms based on background estimation to find regions in movement are simple to implement and computationally efficient. To reduce power consumption, the background is estimated using a down-sampled image formed of macropixels. In order to extend the class of moving objects that can be detected, we propose an original mixed-mode architecture developed through an algorithm-architecture co-design methodology. This programmable architecture is composed of a vector of SIMD processors. A basic RISC architecture was optimized to implement motion detection algorithms with a dedicated set of 42 instructions. Defining delta modulation as a calculation primitive allowed the algorithms to be implemented in a very compact way. The result is a 1920x1080@25fps CMOS image sensor performing integrated motion detection with an estimated power consumption of 1.8 mW.
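
    To make the macropixel background-estimation idea concrete, the sketch below is a minimal, hypothetical Python illustration (not the paper's mixed-mode SIMD hardware): each frame is down-sampled into macropixel means, a background estimate is tracked with a delta-modulation update (one quantization step per frame toward the current value), and macropixels that deviate from the background by more than a threshold are flagged as motion. The block size, step, and threshold are assumptions for illustration.

    ```python
    import numpy as np

    def to_macropixels(frame, block=8):
        """Down-sample a grayscale frame into macropixel means."""
        h, w = frame.shape
        h2, w2 = h - h % block, w - w % block
        blocks = frame[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
        return blocks.mean(axis=(1, 3))

    def delta_modulation_motion(frames, block=8, step=1.0, thresh=10.0):
        """Yield a boolean motion mask per frame using a delta-modulated background."""
        background = None
        for frame in frames:
            macro = to_macropixels(frame.astype(np.float32), block)
            if background is None:
                background = macro.copy()
            # Delta modulation: move the background one quantization step
            # toward the current macropixel value.
            background += step * np.sign(macro - background)
            yield np.abs(macro - background) > thresh

    # Example with synthetic frames: a bright square moving across a dark scene.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        frames = []
        for t in range(20):
            f = rng.normal(50, 2, size=(120, 160))
            f[40:60, 10 + 5 * t:30 + 5 * t] += 80  # moving object
            frames.append(f)
        for t, mask in enumerate(delta_modulation_motion(frames)):
            print(t, int(mask.sum()), "macropixels flagged")
    ```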

  3. Advanced detection, isolation, and accommodation of sensor failures in turbofan engines: Real-time microcomputer implementation

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Merrill, Walter C.

    1990-01-01

    The objective of the Advanced Detection, Isolation, and Accommodation Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, an algorithm was developed which detects, isolates, and accommodates sensor failures by using analytical redundancy. The performance of this algorithm was evaluated on a real time engine simulation and was demonstrated on a full scale F100 turbofan engine. The real time implementation of the algorithm is described. The implementation used state-of-the-art microprocessor hardware and software, including parallel processing and high order language programming.

  4. Real-time implementation of logo detection on open source BeagleBoard

    NASA Astrophysics Data System (ADS)

    George, M.; Kehtarnavaz, N.; Estevez, L.

    2011-03-01

    This paper presents the real-time implementation of our previously developed logo detection and tracking algorithm on the open source BeagleBoard mobile platform. This platform has an OMAP processor that incorporates an ARM Cortex processor. The algorithm combines Scale Invariant Feature Transform (SIFT) with k-means clustering, online color calibration and moment invariants to robustly detect and track logos in video. Various optimization steps that are carried out to allow the real-time execution of the algorithm on BeagleBoard are discussed. The results obtained are compared to the PC real-time implementation results.

  5. A real-time implementation of an advanced sensor failure detection, isolation, and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.; Merrill, W. C.

    1983-01-01

    A sensor failure detection, isolation, and accommodation algorithm was developed which incorporates analytic sensor redundancy through software. This algorithm was implemented in a high level language on a microprocessor based controls computer. Parallel processing and state-of-the-art 16-bit microprocessors are used along with efficient programming practices to achieve real-time operation.

  6. A dual-processor multi-frequency implementation of the FINDS algorithm

    NASA Technical Reports Server (NTRS)

    Godiwala, Pankaj M.; Caglayan, Alper K.

    1987-01-01

    This report presents a parallel processing implementation of the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a dual processor configured target flight computer. First, a filter initialization scheme is presented which allows the no-fail filter (NFF) states to be initialized using the first iteration of the flight data. A modified failure isolation strategy, compatible with the new failure detection strategy reported earlier, is discussed and the performance of the new FDI algorithm is analyzed using flight recorded data from the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. The results show that low level MLS, IMU, and IAS sensor failures are detected and isolated instantaneously, while accelerometer and rate gyro failures continue to take comparatively longer to detect and isolate. The parallel implementation is accomplished by partitioning the FINDS algorithm into two parts: one based on the translational dynamics and the other based on the rotational kinematics. Finally, a multi-rate implementation of the algorithm is presented yielding significantly low execution times with acceptable estimation and FDI performance.

  7. Image based book cover recognition and retrieval

    NASA Astrophysics Data System (ADS)

    Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine

    2017-11-01

    In this work we develop a graphical user interface in MATLAB that lets users retrieve information about books in real time. A photo of the book cover is captured through the GUI; the MSER algorithm then automatically detects features in the input image, after which non-text features are filtered out based on morphological differences between text and non-text regions. A text character alignment algorithm is implemented to improve the accuracy of the original text detection. We also examine the built-in MATLAB OCR algorithm and a commonly used open-source OCR engine to improve detection results; a post-detection step and natural language processing are implemented to perform word correction and suppress false detections. Finally, the detection result is matched against online sources over the internet. The algorithm achieves more than 86% accuracy.

  8. A real-time simulation evaluation of an advanced detection, isolation and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation are presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  9. Hardware-software face detection system based on multi-block local binary patterns

    NASA Astrophysics Data System (ADS)

    Acasandrei, Laurentiu; Barriga, Angel

    2015-03-01

    Face detection is an important aspect of biometrics, video surveillance, and human-computer interaction. Due to the complexity of the detection algorithms, any face detection system requires a huge amount of computational and memory resources. In this communication, an accelerated implementation of the MB-LBP face detection algorithm targeting low-frequency, low-memory, and low-power embedded systems is presented. The resulting implementation is time-deterministic and uses a customizable AMBA IP hardware accelerator. The IP implements the kernel operations of the MB-LBP algorithm and can be used as a universal accelerator for MB-LBP-based applications. The IP employs 8 parallel MB-LBP feature evaluator cores, uses a deterministic bandwidth, has a low area profile, and consumes ~95 mW on a Virtex5 XC5VLX50T. The acceleration gain of the resulting implementation is between 5 and 8 times, while the gain for hardware MB-LBP feature evaluation is between 69 and 139 times.

  10. Infrared small target detection technology based on OpenCV

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Huang, Zhijian

    2013-05-01

    Accurate and fast detection of dim infrared (IR) targets is very important for infrared precision guidance, early warning, video surveillance, etc. In this paper, the basic principles and implementation flow charts of a series of target detection algorithms are described: the traditional two-frame difference method, an improved three-frame difference method, a fusion of background estimation and frame differencing, and background construction using a neighborhood mean. Building on this work, an infrared target detection software platform developed with OpenCV and MFC is introduced; three kinds of tracking algorithms are integrated into this software. The framework and functions of the software are described. Finally, experiments are performed on real-life IR images; the complete algorithm implementation process and results are analyzed, and the detection algorithms are evaluated both subjectively and objectively. The results show that the proposed method has satisfactory detection effectiveness and robustness, as well as high detection efficiency suitable for real-time detection.
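
    As a rough illustration of the frame-difference family of methods listed above, the following sketch implements a simple three-frame difference in Python/NumPy; the paper's OpenCV/MFC platform, its background-estimation fusion, and its tracking stages are not reproduced. The threshold value is an assumption.

    ```python
    import numpy as np

    def three_frame_difference(prev_f, cur_f, next_f, thresh=25):
        """Return a boolean motion mask for the middle frame.

        A pixel is declared moving only if it differs from both the
        previous and the next frame, which suppresses the ghosting that a
        plain two-frame difference leaves behind.
        """
        d1 = np.abs(cur_f.astype(np.int16) - prev_f.astype(np.int16)) > thresh
        d2 = np.abs(next_f.astype(np.int16) - cur_f.astype(np.int16)) > thresh
        return d1 & d2

    # Minimal usage on synthetic 8-bit frames with a moving bright blob.
    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        frames = []
        for t in range(3):
            f = rng.integers(20, 40, size=(64, 64), dtype=np.uint8)
            f[20:30, 10 + 8 * t:20 + 8 * t] = 200
            frames.append(f)
        mask = three_frame_difference(*frames)
        print("moving pixels:", int(mask.sum()))
    ```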

  11. Infrared small target detection technology based on OpenCV

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Huang, Zhijian

    2013-09-01

    Accurate and fast detection of dim infrared (IR) targets is very important for infrared precision guidance, early warning, video surveillance, etc. In this paper, the basic principles and implementation flow charts of a series of target detection algorithms are described: the traditional two-frame difference method, an improved three-frame difference method, a fusion of background estimation and frame differencing, and background construction using a neighborhood mean. Building on this work, an infrared target detection software platform developed with OpenCV and MFC is introduced; three kinds of tracking algorithms are integrated into this software. The framework and functions of the software are described. Finally, experiments are performed on real-life IR images; the complete algorithm implementation process and results are analyzed, and the detection algorithms are evaluated both subjectively and objectively. The results show that the proposed method has satisfactory detection effectiveness and robustness, as well as high detection efficiency suitable for real-time detection.

  12. Lane Detection on the iPhone

    NASA Astrophysics Data System (ADS)

    Ren, Feixiang; Huang, Jinsheng; Terauchi, Mutsuhiro; Jiang, Ruyi; Klette, Reinhard

    A robust and efficient lane detection system is an essential component of Lane Departure Warning Systems, which are commonly used in many vision-based Driver Assistance Systems (DAS) in intelligent transportation. Various computation platforms have been proposed in the past few years for the implementation of driver assistance systems (e.g., PC, laptop, integrated chips, PlayStation, and so on). In this paper, we propose a new platform for the implementation of lane detection, which is based on a mobile phone (the iPhone). Due to physical limitations of the iPhone w.r.t. memory and computing power, a simple and efficient lane detection algorithm using a Hough transform is developed and implemented on the iPhone, as existing algorithms developed based on the PC platform are not suitable for mobile phone devices (currently). Experiments of the lane detection algorithm are made both on PC and on iPhone.
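
    The Canny-plus-Hough pipeline described above can be sketched briefly with OpenCV on a PC; this is not the iPhone port itself, and the thresholds, Gaussian blur size, region-of-interest geometry, and slope filter are illustrative assumptions.

    ```python
    import cv2
    import numpy as np

    def detect_lane_lines(bgr_frame):
        """Detect candidate lane lines with Canny edges + probabilistic Hough.

        Returns a list of (x1, y1, x2, y2) segments whose slope is steep
        enough to plausibly belong to a lane marking.
        """
        gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        edges = cv2.Canny(blurred, 50, 150)

        # Keep only the lower half of the image, where the road usually is.
        h, w = edges.shape
        mask = np.zeros_like(edges)
        mask[h // 2:, :] = 255
        edges = cv2.bitwise_and(edges, mask)

        segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                   threshold=40, minLineLength=40, maxLineGap=20)
        lanes = []
        if segments is not None:
            for x1, y1, x2, y2 in segments[:, 0]:
                slope = (y2 - y1) / (x2 - x1 + 1e-6)
                if abs(slope) > 0.3:          # discard nearly horizontal segments
                    lanes.append((x1, y1, x2, y2))
        return lanes

    # Usage (assumes a test image "road.jpg" is available):
    # frame = cv2.imread("road.jpg")
    # print(detect_lane_lines(frame))
    ```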

  13. A Motion Detection Algorithm Using Local Phase Information

    PubMed Central

    Lazar, Aurel A.; Ukani, Nikul H.; Zhou, Yiyin

    2016-01-01

    Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm by using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures/evaluates the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for implementing the change of the local phase. The second processing building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm on several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm. PMID:26880882

  14. Automated Detection of Craters in Martian Satellite Imagery Using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Norman, C. J.; Paxman, J.; Benedix, G. K.; Tan, T.; Bland, P. A.; Towner, M.

    2018-04-01

    Crater counting is used in determining surface age of planets. We propose improvements to martian Crater Detection Algorithms by implementing an end-to-end detection approach with the possibility of scaling the algorithm planet-wide.

  15. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track system (IRST), despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, in this paper a false alarm aware methodology is presented to reduce the false alarm rate while the detection rate remains undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen such that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, the proposed methodology is expandable to any pair of detection algorithms that have different false alarm sources.
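
    To make the multi-scale average absolute gray difference (AAGD) idea more concrete, the sketch below computes, at several window scales, the difference between the mean gray level of an inner cell and that of a larger surrounding neighborhood, and takes the maximum response over scales. This is a simplified interpretation for illustration only; the exact AAGD and LoPSF formulations, and their combination, are given in the paper, and the scale set here is an assumption.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def multiscale_aagd(image, scales=(3, 5, 7, 9)):
        """Simplified multi-scale average gray difference map for small targets.

        For each scale s, the response is the (positive) difference between
        the local mean over an s x s inner window and the local mean over a
        larger 3s x 3s neighborhood; the final map is the maximum over scales.
        """
        img = image.astype(np.float64)
        response = np.zeros_like(img)
        for s in scales:
            inner = uniform_filter(img, size=s)
            outer = uniform_filter(img, size=3 * s)
            response = np.maximum(response, np.clip(inner - outer, 0, None))
        return response

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        scene = rng.normal(100, 5, size=(128, 128))     # cluttered background
        scene[60:63, 60:63] += 40                       # small dim target
        resp = multiscale_aagd(scene)
        y, x = np.unravel_index(np.argmax(resp), resp.shape)
        print("peak response at", (y, x))
    ```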

  16. Advanced power system protection and incipient fault detection and protection of spaceborne power systems

    NASA Technical Reports Server (NTRS)

    Russell, B. Don

    1989-01-01

    This research concentrated on the application of advanced signal processing, expert system, and digital technologies for the detection and control of low grade, incipient faults on spaceborne power systems. The researchers have considerable experience in the application of advanced digital technologies and the protection of terrestrial power systems. This experience was used in the current contracts to develop new approaches for protecting the electrical distribution system in spaceborne applications. The project was divided into three distinct areas: (1) investigate the applicability of fault detection algorithms developed for terrestrial power systems to the detection of faults in spaceborne systems; (2) investigate the digital hardware and architectures required to monitor and control spaceborne power systems with full capability to implement new detection and diagnostic algorithms; and (3) develop a real-time expert operating system for implementing diagnostic and protection algorithms. Significant progress has been made in each of the above areas. Several terrestrial fault detection algorithms were modified to better adapt to spaceborne power system environments. Several digital architectures were developed and evaluated in light of the fault detection algorithms.

  17. Using advanced computer vision algorithms on small mobile robots

    NASA Astrophysics Data System (ADS)

    Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.

    2006-05-01

    The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted Cascade of classifiers trained with the Adaboost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an Adaboost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real-time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension to this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real-time. Test results are shown for a variety of environments.

  18. RACER: Effective Race Detection Using AspectJ

    NASA Technical Reports Server (NTRS)

    Bodden, Eric; Havelund, Klaus

    2008-01-01

    Programming errors occur frequently in large software systems, and even more so if these systems are concurrent. In the past, researchers have developed specialized programs to aid programmers in detecting concurrent programming errors such as deadlocks, livelocks, starvation, and data races. In this work we propose a language extension to the aspect-oriented programming language AspectJ, in the form of three new built-in pointcuts, lock(), unlock() and maybeShared(), which allow programmers to monitor program events where locks are granted or handed back, and where values are accessed that may be shared amongst multiple Java threads. We decide thread-locality using a static thread-local objects analysis developed by others. Using the three new primitive pointcuts, researchers can directly implement efficient monitoring algorithms to detect concurrent programming errors online. As an example, we present a new algorithm which we call RACER, an adaptation of the well-known ERASER algorithm to the memory model of Java. We implemented the new pointcuts as an extension to the AspectBench Compiler, implemented the RACER algorithm using this language extension, and then applied the algorithm to the NASA K9 Rover Executive. Our experiments show the implementation to be very effective: in the Rover Executive, RACER finds 70 data races, only one of which was previously known. We further applied the algorithm to two other multi-threaded programs written by computer science researchers, in which we found races as well.
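
    The lockset idea behind ERASER (and hence RACER) can be sketched in a few lines: each shared variable keeps a candidate set of locks, the set is intersected with the locks currently held on every access, and a warning is raised when the set becomes empty. The Python sketch below is a language-neutral illustration of that bookkeeping, not the AspectJ pointcut machinery described in the abstract.

    ```python
    class LocksetChecker:
        """Minimal ERASER-style lockset refinement for race warnings."""

        def __init__(self):
            self.candidate_locks = {}   # variable -> locks that so far
                                        # protected every access to it
            self.warned = set()

        def access(self, variable, held_locks):
            """Record an access to `variable` while `held_locks` are held."""
            held = set(held_locks)
            if variable not in self.candidate_locks:
                self.candidate_locks[variable] = held
            else:
                self.candidate_locks[variable] &= held
            if not self.candidate_locks[variable] and variable not in self.warned:
                self.warned.add(variable)
                print(f"potential data race on {variable!r}: no common lock")

    # Example: two threads touch 'counter' under different locks.
    checker = LocksetChecker()
    checker.access("counter", {"lock_a"})   # thread 1
    checker.access("counter", {"lock_b"})   # thread 2 -> warning
    ```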

  19. The design and hardware implementation of a low-power real-time seizure detection algorithm

    NASA Astrophysics Data System (ADS)

    Raghunathan, Shriram; Gupta, Sumeet K.; Ward, Matthew P.; Worth, Robert M.; Roy, Kaushik; Irazoqui, Pedro P.

    2009-10-01

    Epilepsy affects more than 1% of the world's population. Responsive neurostimulation is emerging as an alternative therapy for the 30% of the epileptic patient population that does not benefit from pharmacological treatment. Efficient seizure detection algorithms will enable closed-loop epilepsy prostheses by stimulating the epileptogenic focus within an early onset window. Critically, this is expected to reduce neuronal desensitization over time and lead to longer-term device efficacy. This work presents a novel event-based seizure detection algorithm along with a low-power digital circuit implementation. Hippocampal depth-electrode recordings from six kainate-treated rats are used to validate the algorithm and hardware performance in this preliminary study. The design process illustrates crucial trade-offs in translating mathematical models into hardware implementations and validates statistical optimizations made with empirical data analyses on results obtained using a real-time functioning hardware prototype. Using quantitatively predicted thresholds from the depth-electrode recordings, the auto-updating algorithm performs with an average sensitivity and selectivity of 95.3 ± 0.02% and 88.9 ± 0.01% (mean ± SE, α = 0.05), respectively, on untrained data, with a detection delay of 8.5 s [5.97, 11.04] from electrographic onset. The hardware implementation is shown to be feasible using CMOS circuits consuming under 350 nW of power from a 250 mV supply voltage, based on simulations using the MIT 180 nm SOI process.

  20. ASPeak: an abundance sensitive peak detection algorithm for RIP-Seq.

    PubMed

    Kucukural, Alper; Özadam, Hakan; Singh, Guramrit; Moore, Melissa J; Cenik, Can

    2013-10-01

    Unlike DNA, RNA abundances can vary over several orders of magnitude. Thus, identification of RNA-protein binding sites from high-throughput sequencing data presents unique challenges. Although peak identification in ChIP-Seq data has been extensively explored, there are few bioinformatics tools tailored for peak calling on analogous datasets for RNA-binding proteins. Here we describe ASPeak (abundance sensitive peak detection algorithm), an implementation of an algorithm that we previously applied to detect peaks in exon junction complex RNA immunoprecipitation in tandem experiments. Our peak detection algorithm yields stringent and robust target sets enabling sensitive motif finding and downstream functional analyses. ASPeak is implemented in Perl as a complete pipeline that takes bedGraph files as input. ASPeak implementation is freely available at https://sourceforge.net/projects/as-peak under the GNU General Public License. ASPeak can be run on a personal computer, yet is designed to be easily parallelizable. ASPeak can also run on high performance computing clusters providing efficient speedup. The documentation and user manual can be obtained from http://master.dl.sourceforge.net/project/as-peak/manual.pdf.

  1. An Accurate and Efficient Algorithm for Detection of Radio Bursts with an Unknown Dispersion Measure, for Single-dish Telescopes and Interferometers

    NASA Astrophysics Data System (ADS)

    Zackay, Barak; Ofek, Eran O.

    2017-01-01

    Astronomical radio signals are subjected to phase dispersion while traveling through the interstellar medium. To optimally detect a short-duration signal within a frequency band, we have to precisely compensate for the unknown pulse dispersion, which is a computationally demanding task. We present the “fast dispersion measure transform” algorithm for optimal detection of such signals. Our algorithm has a low theoretical complexity of 2NfNt + NtNΔ log2(Nf), where Nf, Nt, and NΔ are the numbers of frequency bins, time bins, and dispersion measure bins, respectively. Unlike previously suggested fast algorithms, our algorithm conserves the sensitivity of brute-force dedispersion. Our tests indicate that this algorithm, running on a standard desktop computer and implemented in a high-level programming language, is already faster than the state-of-the-art dedispersion codes running on graphical processing units (GPUs). We also present a variant of the algorithm that can be efficiently implemented on GPUs. The latter algorithm’s computation and data-transport requirements are similar to those of a two-dimensional fast Fourier transform, indicating that incoherent dedispersion can now be considered a nonissue while planning future surveys. We further present a fast algorithm for sensitive detection of pulses shorter than the dispersive smearing limits of incoherent dedispersion. In typical cases, this algorithm is orders of magnitude faster than enumerating dispersion measures and coherently dedispersing by convolution. We analyze the computational complexity of pulsed signal searches by radio interferometers. We conclude that, using our suggested algorithms, maximally sensitive blind searches for dispersed pulses are feasible using existing facilities. We provide an implementation of these algorithms in Python and MATLAB.
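
    For context, the sketch below implements the brute-force incoherent dedispersion that the fast dispersion measure transform is designed to outperform: for each trial dispersion measure, every frequency channel is shifted by the expected dispersion delay and the channels are summed. The channel layout, time resolution, and trial grid here are illustrative assumptions; the FDMT recursion itself is described in the paper and its published code.

    ```python
    import numpy as np

    K_DM = 4.148808e3  # dispersion constant in MHz^2 pc^-1 cm^3 s

    def brute_force_dedisperse(dynspec, freqs_mhz, dt_s, dm_trials):
        """Brute-force incoherent dedispersion.

        dynspec   : array (n_freq, n_time) of intensities
        freqs_mhz : channel center frequencies in MHz
        dt_s      : time resolution in seconds
        dm_trials : iterable of trial dispersion measures (pc cm^-3)

        Returns an array (n_dm, n_time) of dedispersed time series.
        """
        f_ref = freqs_mhz.max()
        out = np.zeros((len(dm_trials), dynspec.shape[1]))
        for i, dm in enumerate(dm_trials):
            delays_s = K_DM * dm * (freqs_mhz**-2 - f_ref**-2)
            shifts = np.round(delays_s / dt_s).astype(int)
            for chan, shift in enumerate(shifts):
                out[i] += np.roll(dynspec[chan], -shift)
        return out

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        freqs = np.linspace(1200.0, 1500.0, 64)          # MHz
        dynspec = rng.normal(0, 1, size=(64, 2048))
        # Inject a dispersed pulse at DM = 100 pc cm^-3, t0 = 0.5 s.
        dm_true, t0 = 100.0, 0.5
        for chan, f in enumerate(freqs):
            delay = K_DM * dm_true * (f**-2 - freqs.max()**-2)
            dynspec[chan, int(round((t0 + delay) / 1e-3))] += 6.0
        dm_trials = np.arange(0, 200, 5)
        series = brute_force_dedisperse(dynspec, freqs, 1e-3, dm_trials)
        print("recovered DM:", dm_trials[series.max(axis=1).argmax()])
    ```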

  2. Real-time implementation of a multispectral mine target detection algorithm

    NASA Astrophysics Data System (ADS)

    Samson, Joseph W.; Witter, Lester J.; Kenton, Arthur C.; Holloway, John H., Jr.

    2003-09-01

    Spatial-spectral anomaly detection (the "RX Algorithm") has been exploited on the USMC's Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) and several associated technology base studies, and has been found to be a useful method for the automated detection of surface-emplaced antitank land mines in airborne multispectral imagery. RX is a complex image processing algorithm that involves the direct spatial convolution of a target/background mask template over each multispectral image, coupled with a spatially variant background spectral covariance matrix estimation and inversion. The RX throughput on the ATD was about 38X real time using a single Sun UltraSparc system. An effort to demonstrate RX in real time began in FY01. We now report the development and demonstration of a Field Programmable Gate Array (FPGA) solution that achieves a real-time implementation of the RX algorithm at video rates using COBRA ATD data. The approach uses an Annapolis Microsystems Firebird PMC card containing a Xilinx XCV2000E FPGA with over 2,500,000 logic gates and 18 MBytes of memory. A prototype system was configured using a Tek Microsystems VME board with dual PowerPC G4 processors and two PMC slots. The RX algorithm was translated from its C programming implementation into the VHDL language and synthesized into gates that were loaded into the FPGA. The VHDL/synthesizer approach allows key RX parameters to be quickly changed and a new implementation automatically generated. Reprogramming the FPGA is done rapidly and in-circuit. Implementation of the RX algorithm in a single FPGA is a major first step toward achieving real-time land mine detection.
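
    The core of the RX test is a Mahalanobis-distance comparison of each pixel spectrum against background statistics. The sketch below shows a global-covariance version in NumPy purely for illustration; the COBRA implementation described above uses a spatially variant background covariance estimated under a target/background mask and an FPGA datapath, which are not reproduced here.

    ```python
    import numpy as np

    def rx_anomaly_scores(cube):
        """Global RX anomaly detector for a multispectral cube.

        cube : array (rows, cols, bands)
        Returns an array (rows, cols) of Mahalanobis distances of each
        pixel spectrum from the global background statistics.
        """
        rows, cols, bands = cube.shape
        pixels = cube.reshape(-1, bands).astype(np.float64)
        mean = pixels.mean(axis=0)
        centered = pixels - mean
        cov = np.cov(centered, rowvar=False)
        cov_inv = np.linalg.pinv(cov)          # pseudo-inverse for stability
        scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
        return scores.reshape(rows, cols)

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        cube = rng.normal(0, 1, size=(100, 100, 6))
        cube[50, 50] += 8.0                     # spectrally anomalous pixel
        scores = rx_anomaly_scores(cube)
        print("anomaly at", np.unravel_index(np.argmax(scores), scores.shape))
    ```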

  3. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform.

    PubMed

    Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times on large-scale datasets, demonstrating the clear superiority of the method. The proposed algorithm in this study demonstrates both better edge detection performance and improved time performance.
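
    The serial kernel of this approach, Canny with its dual thresholds derived from Otsu's method, can be sketched as below with OpenCV; the MapReduce/Hadoop partitioning that the paper parallelizes is omitted, and the low-threshold ratio is an illustrative assumption rather than the paper's parameter.

    ```python
    import cv2

    def otsu_canny(gray, low_ratio=0.5):
        """Canny edge detection with the high threshold chosen by Otsu's method.

        gray      : single-channel 8-bit image
        low_ratio : low threshold as a fraction of the Otsu threshold (assumed)
        """
        # cv2.threshold with THRESH_OTSU returns the Otsu threshold as retval.
        otsu_thresh, _ = cv2.threshold(gray, 0, 255,
                                       cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return cv2.Canny(gray, low_ratio * otsu_thresh, otsu_thresh)

    # Usage (assumes a test image "sample.png" is available):
    # gray = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)
    # edges = otsu_canny(gray)
    # cv2.imwrite("edges.png", edges)
    ```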

  4. Implementation of a real-time intersection accident detection system (Phase 1).

    DOT National Transportation Integrated Search

    2004-10-01

    The focus of this research is the feasibility study for the implementation of a real-time accident detection system at intersections. After reviewing accident detection algorithms investigated in the prior phase of the research, we explored schem...

  5. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    NASA Astrophysics Data System (ADS)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.

  6. Obstacle Detection Algorithms for Aircraft Navigation: Performance Characterization of Obstacle Detection Algorithms for Aircraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Coraor, Lee

    2000-01-01

    The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.

  7. KB3D Reference Manual. Version 1.a

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Siminiceanu, Radu; Carreno, Victor A.; Dowek, Gilles

    2005-01-01

    This paper is a reference manual describing the implementation of the KB3D conflict detection and resolution algorithm. The algorithm has been implemented in the Java and C++ programming languages. The reference manual gives a short overview of the detection and resolution functions, the structural implementation of the program, inputs and outputs to the program, and describes how the program is used. Inputs to the program can be rectangular coordinates or geodesic coordinates. The reference manual also gives examples of conflict scenarios and the resolution outputs the program produces.

  8. A Comparative Evaluation of Anomaly Detection Algorithms for Maritime Video Surveillance

    DTIC Science & Technology

    2011-01-01

    of k-means clustering and the k-NN Localized p-value Estimator (KNN-LPE). K-means is a popular distance-based clustering algorithm while KNN-LPE... implemented the sparse cluster identification rule we described in Section 3.1. 2. k-NN Localized p-value Estimator (KNN-LPE): We implemented this using... Average Density (KNN-NAD): This was implemented as described in Section 3.4. Algorithm Parameter Settings: The global and local density-based anomaly

  9. ROS-based ground stereo vision detection: implementation and experiments.

    PubMed

    Hu, Tianjiang; Zhao, Boxin; Tang, Dengqing; Zhang, Daibing; Kong, Weiwei; Shen, Lincheng

    This article concentrates on an open-source implementation of flying object detection in cluttered scenes, which is of significance for ground stereo-aided autonomous landing of unmanned aerial vehicles. The ground stereo vision guidance system is presented with details on system architecture and workflow. The Chan-Vese detection algorithm is further considered and implemented in the Robot Operating System (ROS) environment. A data-driven interactive scheme is developed to collect datasets for parameter tuning and performance evaluation. Outdoor flight experiments capture sequential stereo image datasets and simultaneously record data from the pan-and-tilt unit, onboard sensors, and differential GPS. Experimental results using the collected dataset validate the effectiveness of the published ROS-based detection algorithm.

  10. Advanced detection, isolation and accommodation of sensor failures: Real-time evaluation

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Bruton, William M.

    1987-01-01

    The objective of the Advanced Detection, Isolation, and Accommodation (ADIA) Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines by using analytical redundancy to detect sensor failures. The results of a real-time hybrid computer evaluation of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 engine control system are determined. Also included are details about the microprocessor implementation of the algorithm as well as a description of the algorithm itself.

  11. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform

    PubMed Central

    Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times on large-scale datasets, demonstrating the clear superiority of the method. The proposed algorithm in this study demonstrates both better edge detection performance and improved time performance. PMID:29861711

  12. FPGA based hardware optimized implementation of signal processing system for LFM pulsed radar

    NASA Astrophysics Data System (ADS)

    Azim, Noor ul; Jun, Wang

    2016-11-01

    Signal processing is one of the main parts of any radar system. In radar, different signal processing algorithms are used to extract information about target parameters such as range, speed, and direction. This paper presents LFM (Linear Frequency Modulation) pulsed radar signal processing algorithms that improve target detection and range resolution and estimate target speed. The algorithms are first simulated in MATLAB to verify the concept and theory. After conceptual verification in MATLAB, the simulation is converted into a hardware implementation on a Xilinx FPGA; the chosen device is a Xilinx Virtex-6 (XC6LVX75T). For the hardware implementation, pipelining is adopted and other factors are considered for resource optimization. The core algorithms in this work for improving target detection, range resolution, and speed estimation are hardware-optimized, fast-convolution-based pulse compression and pulse Doppler processing.

  13. An FPGA-Based People Detection System

    NASA Astrophysics Data System (ADS)

    Nair, Vinod; Laprise, Pierre-Olivier; Clark, James J.

    2005-12-01

    This paper presents an FPGA-based system for detecting people from video. The system is designed to use JPEG-compressed frames from a network camera. Unlike previous approaches that use techniques such as background subtraction and motion detection, we use a machine-learning-based approach to train an accurate detector. We address the hardware design challenges involved in implementing such a detector, along with JPEG decompression, on an FPGA. We also present an algorithm that efficiently combines JPEG decompression with the detection process. This algorithm carries out the inverse DCT step of JPEG decompression only partially. Therefore, it is computationally more efficient and simpler to implement, and it takes up less space on the chip than the full inverse DCT algorithm. The system is demonstrated on an automated video surveillance application and the performance of both hardware and software implementations is analyzed. The results show that the system can detect people accurately at a rate of about … frames per second on a Virtex-II 2V1000, using a MicroBlaze processor running at …, communicating with dedicated hardware over FSL links.

  14. Microcontroller-based real-time QRS detection.

    PubMed

    Sun, Y; Suppappola, S; Wrublewski, T A

    1992-01-01

    The authors describe the design of a system for real-time detection of QRS complexes in the electrocardiogram based on a single-chip microcontroller (Motorola 68HC811). A systematic analysis of the instrumentation requirements for QRS detection and of the various design techniques is also given. Detection algorithms using different nonlinear transforms for the enhancement of QRS complexes are evaluated by using the ECG database of the American Heart Association. The results show that the nonlinear transform involving multiplication of three adjacent, sign-consistent differences in the time domain gives a good performance and a quick response. When implemented with an appropriate sampling rate, this algorithm is also capable of rejecting pacemaker spikes. The eight-bit single-chip microcontroller provides sufficient throughput and shows a satisfactory performance. Implementation of multiple detection algorithms in the same system improves flexibility and reliability. The low chip count in the design also favors maintainability and cost-effectiveness.
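
    The nonlinear transform singled out above, the product of three adjacent, sign-consistent first differences, is easy to sketch. The following Python snippet is an illustrative reimplementation of that enhancement step plus a naive threshold detector, not the authors' microcontroller firmware; the sampling rate, threshold ratio, and refractory period are assumptions.

    ```python
    import numpy as np

    def qrs_enhance(ecg):
        """Multiply three adjacent first differences when their signs agree.

        The product is large only during the steep, monotonic slopes of the
        QRS complex, which suppresses P/T waves and baseline wander.
        """
        d = np.diff(ecg)
        y = np.zeros_like(ecg)
        for n in range(3, len(ecg)):
            a, b, c = d[n - 3], d[n - 2], d[n - 1]
            if np.sign(a) == np.sign(b) == np.sign(c):
                y[n] = a * b * c
        return np.abs(y)

    def detect_qrs(ecg, fs=250, thresh_ratio=0.3, refractory_s=0.25):
        """Return sample indices of detected QRS complexes (naive thresholding)."""
        y = qrs_enhance(ecg)
        thresh = thresh_ratio * y.max()
        refractory = int(refractory_s * fs)
        peaks, last = [], -refractory
        for n, v in enumerate(y):
            if v > thresh and n - last > refractory:
                peaks.append(n)
                last = n
        return peaks

    if __name__ == "__main__":
        fs = 250
        t = np.arange(0, 10, 1 / fs)
        ecg = 0.1 * np.sin(2 * np.pi * 1.0 * t)             # baseline wander
        beat = np.exp(-((np.arange(-25, 25) / 5.0) ** 2))   # crude QRS-like bump
        for k in range(10):
            start = k * fs + 100
            ecg[start:start + beat.size] += beat
        print("detected beats:", len(detect_qrs(ecg, fs)))
    ```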

  15. A fuzzy clustering algorithm to detect planar and quadric shapes

    NASA Technical Reports Server (NTRS)

    Krishnapuram, Raghu; Frigui, Hichem; Nasraoui, Olfa

    1992-01-01

    In this paper, we introduce a new fuzzy clustering algorithm to detect an unknown number of planar and quadric shapes in noisy data. The proposed algorithm is computationally and implementationally simple, and it overcomes many of the drawbacks of the existing algorithms that have been proposed for similar tasks. Since the clustering is performed in the original image space, and since no features need to be computed, this approach is particularly suited for sparse data. The algorithm may also be used in pattern recognition applications.

  16. Examining the Return on Investment of a Security Information and Event Management Solution in a Notional Department of Defense Network Environment

    DTIC Science & Technology

    2013-06-01

    collection are the facts that devices lack encryption or compression methods and that the log file must be saved on the host system prior to transfer... time. Statistical correlation utilizes numerical algorithms to detect deviations from normal event levels and other routine activities (Chuvakin... can also assist in detecting low-volume threats. Although easy and logical to implement, the implementation of statistical correlation algorithms

  17. Design Report for Low Power Acoustic Detector

    DTIC Science & Technology

    2013-08-01

    high speed integrated circuit (VHSIC) hardware description language (VHDL) implementation of both the HED and DCD detectors. Figures 4 and 5 show the... the hardware design, target detection algorithm design in both MATLAB and VHDL, and typical performance results.

  18. Soil Moisture Active Passive (SMAP) Project Algorithm Theoretical Basis Document SMAP L1B Radiometer Data Product: L1B_TB

    NASA Technical Reports Server (NTRS)

    Piepmeier, Jeffrey; Mohammed, Priscilla; De Amici, Giovanni; Kim, Edward; Peng, Jinzheng; Ruf, Christopher; Hanna, Maher; Yueh, Simon; Entekhabi, Dara

    2016-01-01

    The purpose of the Soil Moisture Active Passive (SMAP) radiometer calibration algorithm is to convert Level 0 (L0) radiometer digital counts data into calibrated estimates of brightness temperatures referenced to the Earth's surface within the main beam. The algorithm theory in most respects is similar to what has been developed and implemented for decades for other satellite radiometers; however, SMAP includes two key features heretofore absent from most satellite borne radiometers: radio frequency interference (RFI) detection and mitigation, and measurement of the third and fourth Stokes parameters using digital correlation. The purpose of this document is to describe the SMAP radiometer and forward model, explain the SMAP calibration algorithm, including approximations, errors, and biases, provide all necessary equations for implementing the calibration algorithm and detail the RFI detection and mitigation process. Section 2 provides a summary of algorithm objectives and driving requirements. Section 3 is a description of the instrument and Section 4 covers the forward models, upon which the algorithm is based. Section 5 gives the retrieval algorithm and theory. Section 6 describes the orbit simulator, which implements the forward model and is the key for deriving antenna pattern correction coefficients and testing the overall algorithm.

  19. Finding topological center of a geographic space via road network

    NASA Astrophysics Data System (ADS)

    Gao, Liang; Miao, Yanan; Qin, Yuhao; Zhao, Xiaomei; Gao, Zi-You

    2015-02-01

    Previous studies show that the center of a geographic space is of great importance in urban and regional studies, including the study of population distribution, urban growth modeling, and scaling properties of urban systems. But how to well define and how to efficiently extract the center of a geographic space are still largely unknown. Recently, Jiang et al. presented a definition of the topological center via their block detection (BD) algorithm. Although they first introduced the definition and discovered the 'true center' in human minds, their algorithm left several redundancies in its traversal process. Here, we propose an alternative road-cycle detection (RCD) algorithm to find the topological center, which extracts the outermost road-cycle recursively. To foster the application of the topological center in related research fields, we first reproduce the BD algorithm in Python (pyBD), then implement the RCD algorithm in two ways: an ArcPy implementation (arcRCD) and a Python implementation (pyRCD). After experiments on twenty-four typical road networks, we find that the results of our RCD algorithm are consistent with those of Jiang's BD algorithm. We also find that the RCD algorithm is at least seven times more efficient than the BD algorithm on all ten typical road networks.

  20. Implementation of accelerometer sensor module and fall detection monitoring system based on wireless sensor network.

    PubMed

    Lee, Youngbum; Kim, Jinkwon; Son, Muntak; Lee, Myoungho

    2007-01-01

    This research implements a wireless accelerometer sensor module and an algorithm to determine the wearer's posture, activity, and falls. The wireless accelerometer sensor module uses the ADXL202, a 2-axis accelerometer (Analog Devices). Using a wireless RF module, it measures the accelerometer signal and displays it in the 'Acceloger' viewer program on a PC. The ADL algorithm determines posture, activity, and falls: activity is determined from the AC component of the accelerometer signal, and posture from the DC component. The recognized activities and postures include standing, sitting, lying, walking, and running. In an experiment with 30 subjects, the performance of the implemented algorithm was assessed, and the detection rate for postures, motions, and subjects was calculated. Lastly, using a wireless sensor network in an experimental space, a monitoring system for subjects' postures, motions, and falls was implemented. In a simulation experiment with 30 subjects performing 4 kinds of activity 3 times each, the fall detection rate was calculated. In conclusion, this system can be applied to activity monitoring and fall detection for patients and elders, as well as exercise measurement and pattern analysis for athletes, and it could also support exercise training and entertainment applications for the general public.
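
    The AC/DC decomposition mentioned above (activity from the AC component, posture from the DC component) can be sketched as follows in Python. The window length, gravity convention, and thresholds are illustrative assumptions, not the parameters of the implemented module.

    ```python
    import numpy as np

    def ac_dc_split(accel, fs=50, window_s=1.0):
        """Split a 3-axis accelerometer stream into DC (posture) and AC (activity).

        accel : array (n_samples, 3) in g units
        Returns (dc, ac), where dc is a moving average and ac is the residual.
        """
        win = max(1, int(window_s * fs))
        kernel = np.ones(win) / win
        dc = np.column_stack([np.convolve(accel[:, i], kernel, mode="same")
                              for i in range(3)])
        return dc, accel - dc

    def detect_fall(accel, fs=50, impact_g=2.5, lying_z=0.4):
        """Flag a fall: a large AC impact followed by a 'lying' DC orientation."""
        dc, ac = ac_dc_split(accel, fs)
        ac_mag = np.linalg.norm(ac, axis=1)
        for n in np.where(ac_mag > impact_g)[0]:
            after = dc[min(n + 2 * fs, len(dc) - 1)]
            if abs(after[2]) < lying_z:        # gravity no longer on the z axis
                return True
        return False

    if __name__ == "__main__":
        fs = 50
        upright = np.tile([0.0, 0.0, 1.0], (5 * fs, 1))
        impact = np.tile([0.0, 0.0, 1.0], (fs // 2, 1)) + 4.0  # hypothetical impact
        lying = np.tile([1.0, 0.0, 0.0], (5 * fs, 1))
        stream = np.vstack([upright, impact, lying])
        print("fall detected:", detect_fall(stream, fs))
    ```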

  1. Algorithms for the detection of chewing behavior in dietary monitoring applications

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Helal, Abdelsalam; Mendez-Vasquez, Andres

    2009-08-01

    The detection of food consumption is key to the implementation of successful behavior modification in support of dietary monitoring and therapy, for example, during the course of controlling obesity, diabetes, or cardiovascular disease. Since the vast majority of humans consume food via mastication (chewing), we have designed an algorithm that automatically detects chewing behaviors in surveillance video of a person eating. Our algorithm first detects the mouth region, then computes the spatiotemporal frequency spectrum of a small perioral region (including the mouth). Spectral data are analyzed to determine the presence of periodic motion that characterizes chewing. A classifier is then applied to discriminate different types of chewing behaviors. Our algorithm was tested on seven volunteers, whose behaviors included chewing with mouth open, chewing with mouth closed, talking, static face presentation (control case), and moving face presentation. Early test results show that the chewing behaviors induce a temporal frequency peak at 0.5Hz to 2.5Hz, which is readily detected using a distance-based classifier. Computational cost is analyzed for implementation on embedded processing nodes, for example, in a healthcare sensor network. Complexity analysis emphasizes the relationship between the work and space estimates of the algorithm, and its estimated error. It is shown that chewing detection is possible within a computationally efficient, accurate, and subject-independent framework.
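
    The spectral test described above (a temporal-frequency peak between roughly 0.5 Hz and 2.5 Hz over the perioral region) can be sketched as follows; the mouth-region tracking, the classifier, and the embedded-node cost analysis from the paper are not included, and the frame rate and band edges here are assumptions.

    ```python
    import numpy as np

    def chewing_score(roi_frames, fps=15.0, band=(0.5, 2.5)):
        """Return (peak_freq, band_to_total_ratio) for a perioral ROI sequence.

        roi_frames : array (n_frames, h, w) of grayscale perioral crops
        A strong spectral peak inside `band` suggests periodic chewing motion.
        """
        signal = roi_frames.reshape(len(roi_frames), -1).mean(axis=1)
        signal = signal - signal.mean()
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        peak_freq = freqs[in_band][np.argmax(spectrum[in_band])]
        ratio = spectrum[in_band].sum() / (spectrum[1:].sum() + 1e-12)
        return peak_freq, ratio

    if __name__ == "__main__":
        fps, seconds = 15.0, 8
        t = np.arange(int(fps * seconds)) / fps
        # Synthetic ROI whose mean brightness oscillates at ~1.5 Hz (chewing-like).
        frames = 100 + 10 * np.sin(2 * np.pi * 1.5 * t)[:, None, None] * np.ones((1, 16, 16))
        peak, ratio = chewing_score(frames, fps)
        print(f"peak {peak:.2f} Hz, band energy ratio {ratio:.2f}")
    ```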

  2. Improvement and implementation for Canny edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

    Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel intensity similarity judgment is used to smooth the image instead of a Gaussian filter, which preserves edge features and removes noise effectively. To reduce sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively obtains the dual thresholds. The algorithm was simulated with the OpenCV 2.4.0 library in Visual Studio 2010; experimental analysis shows that the improved algorithm detects edge details more effectively and with greater adaptability.

  3. Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Kalinovsky, A.; Liauchuk, V.; Tarasau, A.

    2017-05-01

    In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of deep learning. For training and testing the algorithms, a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. The algorithms, based on deep convolutional networks, were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding-window technique, and direct detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.

  4. Detecting Outliers in Factor Analysis Using the Forward Search Algorithm

    ERIC Educational Resources Information Center

    Mavridis, Dimitris; Moustaki, Irini

    2008-01-01

    In this article we extend and implement the forward search algorithm for identifying atypical subjects/observations in factor analysis models. The forward search has been mainly developed for detecting aberrant observations in regression models (Atkinson, 1994) and in multivariate methods such as cluster and discriminant analysis (Atkinson, Riani,…

  5. A Fast, Automatic Segmentation Algorithm for Locating and Delineating Touching Cell Boundaries in Imaged Histopathology

    PubMed Central

    Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin

    2013-01-01

    Background: Automated analysis of imaged histopathology specimens could potentially provide support for improved reliability in detection and classification in a range of investigative and clinical cancer applications. Automated segmentation of cells in digitized tissue microarrays (TMAs) is often the prerequisite for quantitative analysis, but overlapping cells usually pose significant challenges for traditional segmentation algorithms. Objectives: In this paper, we propose a novel, automatic algorithm to separate overlapping cells in stained histology specimens acquired using bright-field RGB imaging. Methods: It starts by systematically identifying salient regions of interest throughout the image based upon their underlying visual content. The segmentation algorithm subsequently performs a quick, voting-based seed detection. Finally, the contour of each cell is obtained using a repulsive level-set deformable model initialized with the seeds generated in the previous step. We compared the experimental results with the most current literature and measured the pixel-wise agreement between human experts' annotations and those generated by the automatic segmentation algorithm. Results: The method was tested on 100 image patches containing more than 1000 overlapping cells. The overall precision and recall of the developed algorithm are 90% and 78%, respectively. We also implemented the algorithm on a GPU; the parallel implementation is 22 times faster than its C/C++ sequential implementation. Conclusion: The proposed overlapping-cell segmentation algorithm can accurately detect the center of each overlapping cell and effectively separate the overlapping cells. The GPU is shown to be an efficient parallel platform for overlapping-cell segmentation. PMID:22526139

  6. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE PAGES

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; ...

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
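
    A stripped-down reading of the spatial search step can be sketched as follows: a search region is stepped across the latitude-longitude grid with a stride smaller than its radius, the maximum of the detection criterion inside each region is tested against a threshold, and overlapping hits collapse to the same grid point. This is an illustrative interpretation with made-up grid, stride, and threshold values; the published Stride Search handles spherical geometry, circular sectors, and multiple identification criteria.

    ```python
    import numpy as np

    def stride_search(field, lats, lons, radius_deg=4.0, stride_deg=2.0, thresh=30.0):
        """Find local maxima of `field` exceeding `thresh` using overlapping
        search regions (square windows here, for simplicity).

        field : array (n_lat, n_lon), e.g. surface wind speed or vorticity
        Returns a list of (lat, lon, value) candidate storm centers.
        """
        dlat, dlon = abs(lats[1] - lats[0]), abs(lons[1] - lons[0])
        r_i, r_j = int(radius_deg / dlat), int(radius_deg / dlon)
        s_i = max(1, int(stride_deg / dlat))
        s_j = max(1, int(stride_deg / dlon))

        hits = {}
        for i0 in range(0, field.shape[0], s_i):
            for j0 in range(0, field.shape[1], s_j):
                window = field[max(0, i0 - r_i):i0 + r_i + 1,
                               max(0, j0 - r_j):j0 + r_j + 1]
                if window.size == 0 or window.max() < thresh:
                    continue
                di, dj = np.unravel_index(np.argmax(window), window.shape)
                i, j = max(0, i0 - r_i) + di, max(0, j0 - r_j) + dj
                hits[(i, j)] = field[i, j]   # overlapping regions dedupe here
        return [(lats[i], lons[j], v) for (i, j), v in hits.items()]

    if __name__ == "__main__":
        lats = np.arange(-90, 90.25, 0.25)
        lons = np.arange(0, 360, 0.25)
        rng = np.random.default_rng(5)
        wind = rng.normal(10, 3, size=(lats.size, lons.size))
        wind[402, 802] = 60.0                 # synthetic intense storm
        print(stride_search(wind, lats, lons))
    ```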

  7. A real time QRS detection using delay-coordinate mapping for the microcontroller implementation.

    PubMed

    Lee, Jeong-Whan; Kim, Kyeong-Seop; Lee, Bongsoo; Lee, Byungchae; Lee, Myoung-Ho

    2002-01-01

    In this article, we propose a new algorithm for real-time detection of QRS complexes in ECG signals, based on the characteristics of phase portraits reconstructed by delay-coordinate mapping and utilizing lag rotundity. In reconstructing the phase portrait, the mapping parameters, time delay and mapping dimension, play important roles in shaping the portraits drawn in the new dimensional space. Experimentally, the optimal mapping time delay for detection of QRS complexes turned out to be 20 ms. To explore the meaning of this time delay and the proper mapping dimension, we applied fill factor, mutual information, and autocorrelation function algorithms that are generally used to analyze the chaotic characteristics of sampled signals. From these results, we found that the performance of the proposed algorithm relies mainly on geometrical properties such as the area of the reconstructed phase portrait. For a real application, we applied the algorithm to the design of a small cardiac event recorder that records a patient's ECG and R-R intervals for 1 h to investigate the HRV characteristics of patients with vasovagal syncope symptoms. For evaluation, we implemented the algorithm in C and applied it to the MIT/BIH arrhythmia database of 48 subjects. The proposed algorithm achieved a 99.58% detection rate of QRS complexes.
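
    A minimal Python sketch of the delay-coordinate mapping idea, assuming a sampled 1-D ECG array and the roughly 20 ms delay reported above; the area-based score is a simplification of the geometrical criterion, not the authors' detector:

      import numpy as np

      def delay_embed(ecg, fs, delay_ms=20.0, dim=2):
          """Map a 1-D ECG segment into a dim-dimensional phase portrait using a
          fixed delay (about 20 ms, as reported in the abstract)."""
          tau = max(1, int(round(delay_ms * 1e-3 * fs)))      # delay in samples
          n = len(ecg) - (dim - 1) * tau
          return np.column_stack([ecg[i * tau:i * tau + n] for i in range(dim)])

      def portrait_area(points):
          """Crude geometric score: area of the portrait's bounding box.  QRS
          complexes trace much larger loops than the baseline, so thresholding
          this score over a sliding window flags candidate beats."""
          span = points.max(axis=0) - points.min(axis=0)
          return float(np.prod(span))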

  8. Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.

    PubMed

    Shieh, Wann-Yun; Huang, Ju-Chin

    2012-09-01

    For many elderly people, unpredictable falling incidents may occur at the corner of stairs or in a long corridor due to body frailty. If the rescue of a fallen elder, who may have fainted, is delayed, more serious injury may result. Traditional security or video surveillance systems need caregivers to monitor a centralized screen continuously, or require the elder to wear sensors that detect falling incidents, which either consumes considerable human effort or is inconvenient for the elder. In this paper, we propose an automatic falling-detection algorithm and implement it in a multi-camera video surveillance system. The algorithm uses each camera to fetch images from the regions to be monitored and then applies a falling-pattern recognition algorithm to determine whether a falling incident has occurred. If so, the system sends short messages to the designated caregivers. The algorithm has been implemented on a DSP-based hardware acceleration board for functionality proof. Simulation results show that the accuracy of falling detection reaches at least 90% and that the throughput of a four-camera surveillance system can be improved by about 2.1 times. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.

  9. Clustering analysis of moving target signatures

    NASA Astrophysics Data System (ADS)

    Martone, Anthony; Ranney, Kenneth; Innocenti, Roberto

    2010-04-01

    Previously, we developed a moving target indication (MTI) processing approach to detect and track slow-moving targets inside buildings, which successfully detected moving targets (MTs) from data collected by a low-frequency, ultra-wideband radar. Our MTI algorithms include change detection, automatic target detection (ATD), clustering, and tracking. The MTI algorithms can be implemented in a real-time or near-real-time system; however, a person-in-the-loop is needed to select input parameters for the clustering algorithm. Specifically, the number of clusters to input into the cluster algorithm is unknown and requires manual selection. A critical need exists to automate all aspects of the MTI processing formulation. In this paper, we investigate two techniques that automatically determine the number of clusters: the adaptive knee-point (KP) algorithm and the recursive pixel finding (RPF) algorithm. The KP algorithm is based on a well-known heuristic approach for determining the number of clusters. The RPF algorithm is analogous to the image processing, pixel labeling procedure. Both algorithms are used to analyze the false alarm and detection rates of three operational scenarios of personnel walking inside wood and cinderblock buildings.
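
    The knee-point idea can be sketched as follows, assuming scikit-learn's KMeans and a generic elbow heuristic rather than the adaptive KP algorithm evaluated in the paper:

      import numpy as np
      from sklearn.cluster import KMeans

      def knee_point_k(points, k_max=10):
          """Choose the number of clusters at the 'knee' of the k-vs-inertia curve:
          after normalising both axes to [0, 1], pick the k whose point lies
          farthest from the straight line joining the curve's endpoints."""
          ks = np.arange(1, k_max + 1)
          inertia = np.array([KMeans(n_clusters=k, n_init=10).fit(points).inertia_
                              for k in ks])
          x = (ks - ks[0]) / (ks[-1] - ks[0])
          y = (inertia - inertia.min()) / (inertia.max() - inertia.min())
          # the chord runs from (0, y[0]) to (1, y[-1]); distance from it peaks at the knee
          chord = np.array([1.0, y[-1] - y[0]])
          chord /= np.linalg.norm(chord)
          rel = np.column_stack([x, y - y[0]])
          dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])
          return int(ks[np.argmax(dist)])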

  10. Novel Method to Efficiently Create an mHealth App: Implementation of a Real-Time Electrocardiogram R Peak Detector.

    PubMed

    Gliner, Vadim; Behar, Joachim; Yaniv, Yael

    2018-05-22

    In parallel to the introduction of mobile communication devices with high computational power and internet connectivity, high-quality and low-cost health sensors have also become available. However, although the technology does exist, no clinical mobile system has been developed to monitor the R peaks from electrocardiogram recordings in real time with low false positive and low false negative detection. Implementation of a robust electrocardiogram R peak detector for various arrhythmogenic events has been hampered by the lack of an efficient design that will conserve battery power without reducing algorithm complexity or ease of implementation. Our goals in this paper are (1) to evaluate the suitability of the MATLAB Mobile platform for mHealth apps and whether it can run on any phone system, and (2) to embed in the MATLAB Mobile platform a real-time electrocardiogram R peak detector with low false positive and low false negative detection in the presence of the most frequent arrhythmia, atrial fibrillation. We implemented an innovative R peak detection algorithm that deals with motion artifacts, electrical drift, breathing oscillations, electrical spikes, and environmental noise by low-pass filtering. It also fixes the signal polarity and deals with premature beats by heuristic filtering. The algorithm was trained on the annotated non-atrial fibrillation MIT-BIH Arrhythmia Database and tested on the atrial fibrillation MIT-BIH Arrhythmia Database. Finally, the algorithm was implemented on mobile phones connected to a mobile electrocardiogram device using the MATLAB Mobile platform. Our algorithm precisely detected the R peaks with a sensitivity of 99.7% and positive prediction of 99.4%. These results are superior to some state-of-the-art algorithms. The algorithm performs similarly on atrial fibrillation and non-atrial fibrillation patient data. Using MATLAB Mobile, we ran our algorithm in less than an hour on both the iOS and Android system. Our app can accurately analyze 1 minute of real-time electrocardiogram signals in less than 1 second on a mobile phone. Accurate real-time identification of heart rate on a beat-to-beat basis in the presence of noise and atrial fibrillation events using a mobile phone is feasible. ©Vadim Gliner, Joachim Behar, Yael Yaniv. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 22.05.2018.
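
    A hedged sketch of a generic R-peak detector (band-pass filter plus adaptive threshold on slope energy) using SciPy; it is not the MATLAB Mobile implementation described above, and the cut-off frequencies and threshold are assumptions:

      import numpy as np
      from scipy.signal import butter, filtfilt, find_peaks

      def detect_r_peaks(ecg, fs):
          """Generic R-peak detector: band-pass to the QRS band, square the
          derivative to emphasise steep slopes, then keep peaks that clear a
          simple threshold and are at least 250 ms apart (refractory period)."""
          b, a = butter(2, [5.0 / (fs / 2), 20.0 / (fs / 2)], btype="band")
          filtered = filtfilt(b, a, ecg)            # remove drift and high-frequency noise
          energy = np.gradient(filtered) ** 2       # slope energy
          threshold = 0.4 * np.max(energy)          # crude adaptive level
          peaks, _ = find_peaks(energy, height=threshold, distance=int(0.25 * fs))
          return peaks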

  11. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection.

    PubMed

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.
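
    A toy Python version of the randomized-space-tree idea, assuming fixed-depth trees, known attribute ranges, and a simple leaf-mass score; it omits the streaming model-update and attribute-range-estimation strategies of RS-Forest:

      import random

      def build_rs_tree(ranges, depth):
          """Randomly split the attribute ranges down to a fixed depth.  Each node
          keeps a mass counter filled in from a reference window of the stream."""
          node = {"count": 0, "split": None, "left": None, "right": None}
          if depth > 0:
              attr = random.randrange(len(ranges))
              lo, hi = ranges[attr]
              cut = random.uniform(lo, hi)
              left_r, right_r = list(ranges), list(ranges)
              left_r[attr], right_r[attr] = (lo, cut), (cut, hi)
              node["split"] = (attr, cut)
              node["left"] = build_rs_tree(left_r, depth - 1)
              node["right"] = build_rs_tree(right_r, depth - 1)
          return node

      def update(node, x):
          """Record one reference instance in every node along its path."""
          while node is not None:
              node["count"] += 1
              if node["split"] is None:
                  break
              attr, cut = node["split"]
              node = node["left"] if x[attr] <= cut else node["right"]

      def score(node, x):
          """Density proxy for a new instance: mass of the leaf region it falls
          into.  Low scores (sparsely populated regions) suggest anomalies."""
          while node["split"] is not None:
              attr, cut = node["split"]
              node = node["left"] if x[attr] <= cut else node["right"]
          return node["count"]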

  12. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection

    PubMed Central

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.

    2015-01-01

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112

  13. DAIDALUS: Detect and Avoid Alerting Logic for Unmanned Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Narkawicz, Anthony; Hagen, George; Upchurch, Jason; Dutle, Aaron; Consiglio, Maria; Chamberlain, James

    2015-01-01

    This paper presents DAIDALUS (Detect and Avoid Alerting Logic for Unmanned Systems), a reference implementation of a detect and avoid concept intended to support the integration of Unmanned Aircraft Systems into civil airspace. DAIDALUS consists of self-separation and alerting algorithms that provide situational awareness to UAS remote pilots. These algorithms have been formally specified in a mathematical notation and verified for correctness in an interactive theorem prover. The software implementation has been verified against the formal models and validated against multiple stressing cases jointly developed by the US Air Force Research Laboratory, MIT Lincoln Laboratory, and NASA. The DAIDALUS reference implementation is currently under consideration for inclusion in the appendices to the Minimum Operational Performance Standards for Unmanned Aircraft Systems presently being developed by RTCA Special Committee 228.

  14. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays, with efficient fault-tolerant schemes, are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order-degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order-degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

  15. A Performance Evaluation of Lightning-NO Algorithms in CMAQ

    EPA Science Inventory

    In the Community Multiscale Air Quality (CMAQv5.2) model, we have implemented two algorithms for lightning NO production; one algorithm is based on the hourly observed cloud-to-ground lightning strike data from National Lightning Detection Network (NLDN) to replace the previous m...

  16. Revisiting QRS detection methodologies for portable, wearable, battery-operated, and wireless ECG systems.

    PubMed

    Elgendi, Mohamed; Eskofier, Björn; Dokos, Socrates; Abbott, Derek

    2014-01-01

    Cardiovascular diseases are the number one cause of death worldwide. Currently, portable battery-operated systems such as mobile phones with wireless ECG sensors have the potential to be used in continuous cardiac function assessment that can be easily integrated into daily life. These portable point-of-care diagnostic systems can therefore help unveil and treat cardiovascular diseases. The basis for ECG analysis is a robust detection of the prominent QRS complex, as well as other ECG signal characteristics. However, it is not clear from the literature which ECG analysis algorithms are suited for an implementation on a mobile device. We investigate current QRS detection algorithms based on three assessment criteria: 1) robustness to noise, 2) parameter choice, and 3) numerical efficiency, in order to target a universal fast-robust detector. Furthermore, existing QRS detection algorithms may provide an acceptable solution only on small segments of ECG signals, within a certain amplitude range, or amid particular types of arrhythmia and/or noise. These issues are discussed in the context of a comparison with the most conventional algorithms, followed by future recommendations for developing reliable QRS detection schemes suitable for implementation on battery-operated mobile devices.

  17. Revisiting QRS Detection Methodologies for Portable, Wearable, Battery-Operated, and Wireless ECG Systems

    PubMed Central

    Elgendi, Mohamed; Eskofier, Björn; Dokos, Socrates; Abbott, Derek

    2014-01-01

    Cardiovascular diseases are the number one cause of death worldwide. Currently, portable battery-operated systems such as mobile phones with wireless ECG sensors have the potential to be used in continuous cardiac function assessment that can be easily integrated into daily life. These portable point-of-care diagnostic systems can therefore help unveil and treat cardiovascular diseases. The basis for ECG analysis is a robust detection of the prominent QRS complex, as well as other ECG signal characteristics. However, it is not clear from the literature which ECG analysis algorithms are suited for an implementation on a mobile device. We investigate current QRS detection algorithms based on three assessment criteria: 1) robustness to noise, 2) parameter choice, and 3) numerical efficiency, in order to target a universal fast-robust detector. Furthermore, existing QRS detection algorithms may provide an acceptable solution only on small segments of ECG signals, within a certain amplitude range, or amid particular types of arrhythmia and/or noise. These issues are discussed in the context of a comparison with the most conventional algorithms, followed by future recommendations for developing reliable QRS detection schemes suitable for implementation on battery-operated mobile devices. PMID:24409290

  18. Using Deep Learning Algorithm to Enhance Image-review Software for Surveillance Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Yonggang; Thomas, Maikael A.

    We propose the development of proven deep learning algorithms to flag objects and events of interest in Next Generation Surveillance System (NGSS) surveillance to make IAEA image review more efficient. Video surveillance is one of the core monitoring technologies used by the IAEA Department of Safeguards when implementing safeguards at nuclear facilities worldwide. The current image review software GARS has limited automated functions, such as scene-change detection, black image detection and missing scene analysis, but struggles with highly cluttered backgrounds. A cutting-edge algorithm to be developed in this project will enable efficient and effective searches in images and video streams by identifying and tracking safeguards-relevant objects and detecting anomalies in their vicinity. In this project, we will develop the algorithm, test it with the IAEA surveillance cameras and data sets collected at simulated nuclear facilities at BNL and SNL, and implement it in a software program for potential integration into the IAEA’s IRAP (Integrated Review and Analysis Program).

  19. Image processing improvement for optical observations of space debris with the TAROT telescopes

    NASA Astrophysics Data System (ADS)

    Thiebaut, C.; Theron, S.; Richard, P.; Blanchet, G.; Klotz, A.; Boër, M.

    2016-07-01

    CNES is involved in the Inter-Agency Space Debris Coordination Committee (IADC) and is observing space debris with two robotic ground based fully automated telescopes called TAROT and operated by the CNRS. An image processing algorithm devoted to debris detection in geostationary orbit is implemented in the standard pipeline. Nevertheless, this algorithm is unable to deal with debris tracking mode images, this mode being the preferred one for debris detectability. We present an algorithm improvement for this mode and give results in terms of false detection rate.

  20. Efficient Acceleration of the Pair-HMMs Forward Algorithm for GATK HaplotypeCaller on Graphics Processing Units.

    PubMed

    Ren, Shanshan; Bertels, Koen; Al-Ars, Zaid

    2018-01-01

    GATK HaplotypeCaller (HC) is a popular variant caller, which is widely used to identify variants in complex genomes. However, due to its high variants detection accuracy, it suffers from long execution time. In GATK HC, the pair-HMMs forward algorithm accounts for a large percentage of the total execution time. This article proposes to accelerate the pair-HMMs forward algorithm on graphics processing units (GPUs) to improve the performance of GATK HC. This article presents several GPU-based implementations of the pair-HMMs forward algorithm. It also analyzes the performance bottlenecks of the implementations on an NVIDIA Tesla K40 card with various data sets. Based on these results and the characteristics of GATK HC, we are able to identify the GPU-based implementations with the highest performance for the various analyzed data sets. Experimental results show that the GPU-based implementations of the pair-HMMs forward algorithm achieve a speedup of up to 5.47× over existing GPU-based implementations.
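
    The pair-HMM forward recursion that dominates the run time can be sketched as follows; the transition and emission values are illustrative placeholders rather than GATK's quality-derived parameters:

      import numpy as np

      def pair_hmm_forward(read, hap, p_gap_open=0.03, p_gap_ext=0.1, p_err=0.01):
          """Forward probability of 'read' given haplotype 'hap' under a 3-state
          (match / insert / delete) pair HMM, summing over all alignments."""
          n, m = len(read), len(hap)
          M = np.zeros((n + 1, m + 1))
          I = np.zeros((n + 1, m + 1))
          D = np.zeros((n + 1, m + 1))
          D[0, 1:] = 1.0 / m                       # the read may start anywhere on the haplotype
          p_mm = 1.0 - 2.0 * p_gap_open            # stay-in-match probability
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  emit = (1.0 - p_err) if read[i - 1] == hap[j - 1] else p_err / 3.0
                  M[i, j] = emit * (p_mm * M[i - 1, j - 1]
                                    + (1.0 - p_gap_ext) * (I[i - 1, j - 1] + D[i - 1, j - 1]))
                  I[i, j] = 0.25 * (p_gap_open * M[i - 1, j] + p_gap_ext * I[i - 1, j])
                  D[i, j] = p_gap_open * M[i, j - 1] + p_gap_ext * D[i, j - 1]
          return float(M[n, :].sum() + I[n, :].sum())

      # toy usage
      print(pair_hmm_forward("ACGT", "TTACGTTT"))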

  1. Two-Photon Excitation STED Microscopy with Time-Gated Detection

    PubMed Central

    Coto Hernández, Iván; Castello, Marco; Lanzanò, Luca; d’Amora, Marta; Bianchini, Paolo; Diaspro, Alberto; Vicidomini, Giuseppe

    2016-01-01

    We report on a novel two-photon excitation stimulated emission depletion (2PE-STED) microscope based on time-gated detection. The time-gated detection allows for the effective silencing of the fluorophores using moderate stimulated emission beam intensity. This opens the possibility of implementing an efficient 2PE-STED microscope with a stimulated emission beam running in continuous-wave mode. The continuous-wave stimulated emission beam tempers the laser architecture’s complexity and cost, but the time-gated detection degrades the signal-to-noise ratio (SNR) and signal-to-background ratio (SBR) of the image. We recover the SNR and the SBR through a multi-image deconvolution algorithm. Indeed, the algorithm simultaneously reassigns early photons (normally discarded by the time-gated detection) to their original positions and removes the background induced by the stimulated emission beam. We exemplify the benefits of this implementation by imaging sub-cellular structures. Finally, we discuss the extension of this algorithm to future all-pulsed 2PE-STED implementations based on time-gated detection and a nanosecond laser source. PMID:26757892

  2. Use of the Hotelling observer to optimize image reconstruction in digital breast tomosynthesis

    PubMed Central

    Sánchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2015-01-01

    Abstract. We propose an implementation of the Hotelling observer that can be applied to the optimization of linear image reconstruction algorithms in digital breast tomosynthesis. The method is based on considering information within a specific region of interest, and it is applied to the optimization of algorithms for detectability of microcalcifications. Several linear algorithms are considered: simple back-projection, filtered back-projection, back-projection filtration, and Λ-tomography. The optimized algorithms are then evaluated through the reconstruction of phantom data. The method appears robust across algorithms and parameters and leads to the generation of algorithm implementations which subjectively appear optimized for the task of interest. PMID:26702408
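
    A minimal sketch of a region-of-interest Hotelling observer figure of merit, assuming vectorised signal-absent and signal-present ROI samples and a ridge-regularised sample covariance:

      import numpy as np

      def hotelling_snr(signal_absent_rois, signal_present_rois):
          """Hotelling observer detectability for a known signal in a region of
          interest.  Inputs are (n_images, n_pixels) arrays of vectorised ROIs.
          SNR^2 = dS^T K^{-1} dS, with K the average class covariance.  The sample
          covariance is ridge-regularised; a channelised observer would normally
          be used when the ROI is large."""
          mean_absent = signal_absent_rois.mean(axis=0)
          mean_present = signal_present_rois.mean(axis=0)
          ds = mean_present - mean_absent
          k = 0.5 * (np.cov(signal_absent_rois, rowvar=False)
                     + np.cov(signal_present_rois, rowvar=False))
          k += 1e-6 * np.trace(k) / k.shape[0] * np.eye(k.shape[0])   # ridge term
          w = np.linalg.solve(k, ds)          # Hotelling template
          return float(np.sqrt(ds @ w))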

  3. Simple Common Plane contact detection algorithm for FE/FD methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vorobiev, O

    2006-07-19

    The common-plane (CP) algorithm is widely used in the Discrete Element Method (DEM) to model contact forces between interacting particles or blocks. A new, simple contact detection algorithm, similar to the CP algorithm, is proposed to model contacts in FE/FD methods. The CP is defined as a plane separating interacting faces of the FE/FD mesh instead of the blocks or particles of the original CP method. The method does not require iterations. It is very robust and easy to implement in both the 2D and 3D cases.

  4. Review of electronic-nose technologies and algorithms to detect hazardous chemicals in the environment

    Treesearch

    Alphus D. Wilson

    2012-01-01

    Novel mobile electronic-nose (e-nose) devices and algorithms capable of real-time detection of industrial and municipal pollutants released from point sources have recently been developed by scientists worldwide; they are useful for monitoring specific environmental-pollutant levels for the enforcement and implementation of effective pollution-abatement programs. E-nose...

  5. Toward detecting deception in intelligent systems

    NASA Astrophysics Data System (ADS)

    Santos, Eugene, Jr.; Johnson, Gregory, Jr.

    2004-08-01

    Contemporary decision makers often must choose a course of action using knowledge from several sources. Knowledge may be provided from many diverse sources, including electronic sources such as knowledge-based diagnostic or decision support systems, or through data mining techniques. As the decision maker becomes more dependent on these electronic information sources, detecting deceptive information from these sources becomes vital to making a correct, or at least more informed, decision. This applies to unintentional disinformation as well as intentional misinformation. Our ongoing research focuses on applying models of deception and deception detection from the fields of psychology and cognitive science to these systems, as well as implementing deception detection algorithms for probabilistic intelligent systems. The deception detection algorithms are used to detect, classify, and correct attempts at deception. Algorithms for detecting unexpected information rely upon a prediction algorithm from the collaborative filtering domain to predict agent responses in a multi-agent system.

  6. A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Krauthammer, Prof. Michael

    2010-01-01

    There is interest to expand the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating the performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F score of .60. We provide a C++ implementation of our algorithm freely available for academic use.

  7. An efficient parallel termination detection algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traverses as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  8. Investigating prior probabilities in a multiple hypothesis test for use in space domain awareness

    NASA Astrophysics Data System (ADS)

    Hardy, Tyler J.; Cain, Stephen C.

    2016-05-01

    The goal of this research effort is to improve Space Domain Awareness (SDA) capabilities of current telescope systems through improved detection algorithms. Ground-based optical SDA telescopes are often spatially under-sampled, or aliased. This fact negatively impacts the detection performance of traditionally proposed binary and correlation-based detection algorithms. A Multiple Hypothesis Test (MHT) algorithm has been previously developed to mitigate the effects of spatial aliasing. This is done by testing potential Resident Space Objects (RSOs) against several sub-pixel shifted Point Spread Functions (PSFs). An MHT has been shown to increase detection performance for the same false alarm rate. In this paper, the a priori probabilities assumed in an MHT algorithm are investigated. First, an analysis of the pixel decision space is completed to determine alternate hypothesis prior probabilities. These probabilities are then incorporated into an MHT algorithm, which is tested against previous MHT algorithms using simulated RSO data. Results are reported with Receiver Operating Characteristic (ROC) curves and probability of detection, Pd, analysis.

  9. A wavelet transform algorithm for peak detection and application to powder x-ray diffraction data.

    PubMed

    Gregoire, John M; Dale, Darren; van Dover, R Bruce

    2011-01-01

    Peak detection is ubiquitous in the analysis of spectral data. While many noise-filtering algorithms and peak identification algorithms have been developed, recent work [P. Du, W. Kibbe, and S. Lin, Bioinformatics 22, 2059 (2006); A. Wee, D. Grayden, Y. Zhu, K. Petkovic-Duran, and D. Smith, Electrophoresis 29, 4215 (2008)] has demonstrated that both of these tasks are efficiently performed through analysis of the wavelet transform of the data. In this paper, we present a wavelet-based peak detection algorithm with user-defined parameters that can be readily applied to any spectral data. Particular attention is given to the algorithm's resolution of overlapping peaks. The algorithm is implemented for the analysis of powder diffraction data, and successful detection of Bragg peaks is demonstrated for both low signal-to-noise data from theta-theta diffraction of nanoparticles and combinatorial x-ray diffraction data from a composition spread thin film. These datasets have different types of background signals which are effectively removed in the wavelet-based method, and the results demonstrate that the algorithm provides a robust method for automated peak detection.
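
    In the same spirit, a short example using SciPy's continuous-wavelet peak finder with a user-defined width range; the width limits are assumptions and this is not the authors' code:

      import numpy as np
      from scipy.signal import find_peaks_cwt

      def detect_bragg_peaks(two_theta, intensity, min_width_deg=0.05, max_width_deg=0.5):
          """Wavelet-based peak picking for a powder diffraction pattern.  Peaks are
          detected as ridges in the continuous wavelet transform over a user-defined
          range of peak widths, which suppresses both point noise and the slowly
          varying background."""
          step = float(np.mean(np.diff(two_theta)))           # degrees per sample
          widths = np.arange(max(1, int(min_width_deg / step)),
                             max(2, int(max_width_deg / step)))
          idx = np.asarray(find_peaks_cwt(intensity, widths), dtype=int)
          return two_theta[idx], intensity[idx]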

  10. Robust crop and weed segmentation under uncontrolled outdoor illumination.

    PubMed

    Jeon, Hong Y; Tian, Lei F; Zhu, Heping

    2011-01-01

    An image processing algorithm for detecting individual weeds was developed and evaluated. The weed detection processing steps included normalized excessive-green conversion, statistical threshold value estimation, adaptive image segmentation, median filtering, morphological feature calculation, and an Artificial Neural Network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illumination. A field robot implementing machine vision captured field images under outdoor illumination, and the image processing algorithm processed them automatically without manual adjustment. The errors of the algorithm, when processing 666 field images, ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants from the identified plants and considered the rest as weeds. However, the ANN identification rates for crop plants were improved to up to 95.1% by addressing the error sources in the algorithm. The developed weed detection and image processing algorithm provides a novel method to identify plants against a soil background under uncontrolled outdoor illumination, and to differentiate weeds from crop plants. Thus, the proposed machine vision and processing algorithm may be useful for outdoor applications including plant-specific direct applications (PSDA).
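
    A hedged sketch of the vegetation-segmentation front end only (a normalised excess-green index plus a global Otsu threshold via scikit-image); the paper's statistical threshold estimation, median filter, and ANN stages are omitted:

      import numpy as np
      from skimage.filters import threshold_otsu

      def vegetation_mask(rgb):
          """Segment vegetation from soil with a normalised excess-green index.
          rgb: float array in [0, 1], shape (H, W, 3).  The ExG image is thresholded
          with Otsu's method here as a simple stand-in for an adaptive statistical
          threshold."""
          chroma = rgb / np.clip(rgb.sum(axis=2, keepdims=True), 1e-6, None)
          r, g, b = chroma[..., 0], chroma[..., 1], chroma[..., 2]
          exg = 2.0 * g - r - b                     # excess green index
          return exg > threshold_otsu(exg)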

  11. Two stage algorithm vs commonly used approaches for the suspect screening of complex environmental samples analyzed via liquid chromatography high resolution time of flight mass spectroscopy: A test study.

    PubMed

    Samanipour, Saer; Baz-Lomba, Jose A; Alygizakis, Nikiforos A; Reid, Malcolm J; Thomaidis, Nikolaos S; Thomas, Kevin V

    2017-06-09

    LC-HR-QTOF-MS recently has become a commonly used approach for the analysis of complex samples. However, identification of small organic molecules in complex samples with the highest level of confidence is a challenging task. Here we report on the implementation of a two stage algorithm for LC-HR-QTOF-MS datasets. We compared the performances of the two stage algorithm, implemented via NIVA_MZ_Analyzer™, with two commonly used approaches (i.e. feature detection and XIC peak picking, implemented via UNIFI by Waters and TASQ by Bruker, respectively) for the suspect analysis of four influent wastewater samples. We first evaluated the cross platform compatibility of LC-HR-QTOF-MS datasets generated via instruments from two different manufacturers (i.e. Waters and Bruker). Our data showed that with an appropriate spectral weighting function the spectra recorded by the two tested instruments are comparable for our analytes. As a consequence, we were able to perform full spectral comparison between the data generated via the two studied instruments. Four extracts of wastewater influent were analyzed for 89 analytes, thus 356 detection cases. The analytes were divided into 158 detection cases of artificial suspect analytes (i.e. verified by target analysis) and 198 true suspects. The two stage algorithm resulted in a zero rate of false positive detection, based on the artificial suspect analytes while producing a rate of false negative detection of 0.12. For the conventional approaches, the rates of false positive detection varied between 0.06 for UNIFI and 0.15 for TASQ. The rates of false negative detection for these methods ranged between 0.07 for TASQ and 0.09 for UNIFI. The effect of background signal complexity on the two stage algorithm was evaluated through the generation of a synthetic signal. We further discuss the boundaries of applicability of the two stage algorithm. The importance of background knowledge and experience in evaluating the reliability of results during the suspect screening was evaluated. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Community detection in complex networks by using membrane algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Chuang; Fan, Linan; Liu, Zhou; Dai, Xiang; Xu, Jiamei; Chang, Baoren

    Community detection in complex networks is a key problem of network analysis. In this paper, a new membrane algorithm is proposed to solve the community detection in complex networks. The proposed algorithm is based on membrane systems, which consists of objects, reaction rules, and a membrane structure. Each object represents a candidate partition of a complex network, and the quality of objects is evaluated according to network modularity. The reaction rules include evolutionary rules and communication rules. Evolutionary rules are responsible for improving the quality of objects, which employ the differential evolutionary algorithm to evolve objects. Communication rules implement the information exchanged among membranes. Finally, the proposed algorithm is evaluated on synthetic, real-world networks with real partitions known and the large-scaled networks with real partitions unknown. The experimental results indicate the superior performance of the proposed algorithm in comparison with other experimental algorithms.
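
    For illustration, a small snippet that scores one candidate partition (one 'object') by Newman modularity using NetworkX; the membrane-system evolution and communication rules are not shown:

      import networkx as nx
      from networkx.algorithms.community import modularity

      def partition_quality(graph, membership):
          """Score a candidate partition by modularity; 'membership' maps
          node -> community label."""
          labels = set(membership.values())
          communities = [{n for n, c in membership.items() if c == label}
                         for label in labels]
          return modularity(graph, communities)

      # toy usage on the karate-club graph with a two-community guess
      g = nx.karate_club_graph()
      guess = {n: (0 if n < 17 else 1) for n in g.nodes}
      print(partition_quality(g, guess))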

  13. A Real-Time System for Lane Detection Based on FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Xiao, Jing; Li, Shutao; Sun, Bin

    2016-12-01

    This paper presents a real-time lane detection system, including an edge detection and improved Hough Transform based lane detection algorithm and its hardware implementation with a field programmable gate array (FPGA) and a digital signal processor (DSP). First, gradient amplitude and direction information are combined to extract lane edge information. Then, this information is used to determine the region of interest. Finally, the lanes are extracted using the improved Hough Transform. The image processing module of the system consists of the FPGA and the DSP. In particular, the algorithms implemented in the FPGA work in a pipelined and parallel fashion so that the system can run in real time. In addition, the DSP realizes lane line extraction and display functions with the improved Hough Transform. The experimental results show that the proposed system is able to detect lanes under different road conditions efficiently and effectively.
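
    A software analogue of the pipeline, sketched with OpenCV (Canny edges, a lower-half region-of-interest mask, and the probabilistic Hough transform); the thresholds are illustrative and the FPGA/DSP partitioning is not represented:

      import cv2
      import numpy as np

      def detect_lanes(bgr_frame):
          """Edge detection, a region-of-interest mask over the lower half of the
          frame, then a probabilistic Hough transform to extract lane segments."""
          gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
          edges = cv2.Canny(gray, 50, 150)                 # gradient magnitude + hysteresis
          h, w = edges.shape
          roi = np.zeros_like(edges)
          roi[h // 2:, :] = 255                            # keep only the road region
          edges = cv2.bitwise_and(edges, roi)
          lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                                  minLineLength=40, maxLineGap=20)
          return [] if lines is None else [tuple(l[0]) for l in lines]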

  14. Recursive least squares background prediction of univariate syndromic surveillance data

    PubMed Central

    2009-01-01

    Background: Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the Day of the Week effect. Methods: Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-days-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. Results: We present detection results in the form of Receiver Operating Characteristic curve values for four different injected signal-to-noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. Conclusion: The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold detection algorithm from the residuals of the RLS forecasts. We compare the detection performance of this algorithm to the W2 method recently implemented at CDC. The modified RLS method gives consistently better sensitivity at multiple background alert rates, and we recommend that it should be considered for routine application in bio-surveillance systems. PMID:19149886
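
    A minimal sketch of an adaptive RLS one-step-ahead predictor for a daily count series; the forgetting factor and filter order are assumptions, and the day-of-week sorting transformation and W2 comparison described above are omitted:

      import numpy as np

      def rls_forecast(counts, order=7, lam=0.97, delta=100.0):
          """One-step-ahead RLS prediction of a daily count series.  Each day is
          predicted from the previous 'order' days; 'lam' is the forgetting factor
          that lets the filter track non-stationary backgrounds."""
          counts = np.asarray(counts, dtype=float)
          w = np.zeros(order)
          P = np.eye(order) * delta            # large initial covariance: little trust in w = 0
          preds = np.full(len(counts), np.nan)
          for t in range(order, len(counts)):
              x = counts[t - order:t][::-1]    # most recent day first
              preds[t] = w @ x
              err = counts[t] - preds[t]
              k = P @ x / (lam + x @ P @ x)    # gain vector
              w = w + k * err
              P = (P - np.outer(k, x @ P)) / lam
          return preds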

  15. Recursive least squares background prediction of univariate syndromic surveillance data.

    PubMed

    Najmi, Amir-Homayoon; Burkom, Howard

    2009-01-16

    Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the Day of the Week effect. Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-days-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. We present detection results in the form of Receiver Operating Characteristic curve values for four different injected signal-to-noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold detection algorithm from the residuals of the RLS forecasts. We compare the detection performance of this algorithm to the W2 method recently implemented at CDC. The modified RLS method gives consistently better sensitivity at multiple background alert rates, and we recommend that it should be considered for routine application in bio-surveillance systems.

  16. Automated System for Early Breast Cancer Detection in Mammograms

    NASA Technical Reports Server (NTRS)

    Bankman, Isaac N.; Kim, Dong W.; Christens-Barry, William A.; Weinberg, Irving N.; Gatewood, Olga B.; Brody, William R.

    1993-01-01

    The increasing demand on mammographic screening for early breast cancer detection, and the subtlety of early breast cancer signs on mammograms, suggest an automated image processing system that can serve as a diagnostic aid in radiology clinics. We present a fully automated algorithm for detecting clusters of microcalcifications that are the most common signs of early, potentially curable breast cancer. By using the contour map of the mammogram, the algorithm circumvents some of the difficulties encountered with standard image processing methods. The clinical implementation of an automated instrument based on this algorithm is also discussed.

  17. Implementation of several mathematical algorithms to breast tissue density classification

    NASA Astrophysics Data System (ADS)

    Quintana, C.; Redondo, M.; Tirao, G.

    2014-02-01

    The accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics: dense breast tissue can hide lesions, causing cancer to be detected at later stages. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. This paper presents the implementation and the performance of different mathematical algorithms designed to standardize the categorization of mammographic images according to the American College of Radiology classifications. These mathematical techniques are based on intrinsic property calculations and on comparison with an ideal homogeneous image (joint entropy, mutual information, normalized cross correlation, and index Q) as categorization parameters. The evaluation of the algorithms was performed on 100 cases from the mammographic data sets provided by the Ministerio de Salud de la Provincia de Córdoba, Argentina—Programa de Prevención del Cáncer de Mama (Department of Public Health, Córdoba, Argentina, Breast Cancer Prevention Program). The obtained breast classifications were compared with the expert medical diagnoses, showing good performance. The implemented algorithms revealed high potential for classifying breasts into tissue density categories.

  18. User's guide to the Fault Inferring Nonlinear Detection System (FINDS) computer program

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.; Satz, H. S.

    1988-01-01

    Described are the operation and internal structure of the computer program FINDS (Fault Inferring Nonlinear Detection System). The FINDS algorithm is designed to provide reliable estimates for aircraft position, velocity, attitude, and horizontal winds to be used for guidance and control laws in the presence of possible failures in the avionics sensors. The FINDS algorithm was developed with the use of a digital simulation of a commercial transport aircraft and tested with flight recorded data. The algorithm was then modified to meet the size constraints and real-time execution requirements on a flight computer. For the real-time operation, a multi-rate implementation of the FINDS algorithm has been partitioned to execute on a dual parallel processor configuration: one based on the translational dynamics and the other on the rotational kinematics. The report presents an overview of the FINDS algorithm, the implemented equations, the flow charts for the key subprograms, the input and output files, program variable indexing convention, subprogram descriptions, and the common block descriptions used in the program.

  19. An improved non-uniformity correction algorithm and its hardware implementation on FPGA

    NASA Astrophysics Data System (ADS)

    Rong, Shenghui; Zhou, Huixin; Wen, Zhigang; Qin, Hanlin; Qian, Kun; Cheng, Kuanhong

    2017-09-01

    The non-uniformity of infrared focal plane arrays (IRFPA) severely degrades infrared image quality, so an effective non-uniformity correction (NUC) algorithm is necessary for an IRFPA imaging and application system. However, traditional scene-based NUC algorithms suffer from image blurring and artificial ghosting, and few effective hardware platforms have been proposed to implement the corresponding NUC algorithms. This paper therefore proposes an improved neural-network based NUC algorithm that uses a guided image filter and a projection-based motion detection algorithm. First, the guided image filter is utilized to obtain an accurate desired image and thereby reduce artificial ghosting. Then the projection-based motion detection algorithm is utilized to determine whether the correction coefficients should be updated or not; in this way the problem of image blurring can be overcome. Finally, an FPGA-based hardware design is introduced to realize the proposed NUC algorithm. A real and a simulated infrared image sequence are utilized to verify the performance of the proposed algorithm. Experimental results indicate that the proposed NUC algorithm can effectively eliminate the fixed-pattern noise with less image blurring and artificial ghosting. The proposed hardware design uses fewer logic elements in the FPGA and spends fewer clock cycles to process one frame of the image.
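
    A simplified stand-in for the scene-based update loop: an LMS-style correction toward a spatially smoothed desired image with a crude frame-difference motion gate. The box filter and mean-difference test replace the paper's guided filter and projection-based detector, and the learning rate and threshold are assumptions:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def nuc_update(raw, gain, offset, prev_corrected, lr=0.05, motion_thresh=2.0):
          """One iteration of a scene-based non-uniformity correction.  The corrected
          frame is pushed toward a spatially smoothed 'desired' image, and the
          coefficients are only updated when a frame-difference test says the scene
          is moving, which limits ghosting on static scenes."""
          corrected = gain * raw + offset
          desired = uniform_filter(corrected, size=5)
          moving = np.mean(np.abs(corrected - prev_corrected)) > motion_thresh
          if moving:
              err = corrected - desired                 # per-pixel LMS error
              gain = gain - lr * err * raw
              offset = offset - lr * err
          return corrected, gain, offset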

  20. Fast detection of vascular plaque in optical coherence tomography images using a reduced feature set

    NASA Astrophysics Data System (ADS)

    Prakash, Ammu; Ocana Macias, Mariano; Hewko, Mark; Sowa, Michael; Sherif, Sherif

    2018-03-01

    Optical coherence tomography (OCT) images are capable of detecting vascular plaque by using the full set of 26 Haralick textural features and a standard K-means clustering algorithm. However, the use of the full set of 26 textural features is computationally expensive and may not be feasible for real-time implementation. In this work, we identified a reduced set of 3 textural features that characterize vascular plaque and used a generalized Fuzzy C-means clustering algorithm. Our work involves three steps: 1) the reduction of the full set of 26 textural features to a reduced set of 3 textural features using a genetic algorithm (GA) optimization method, 2) the implementation of an unsupervised generalized clustering algorithm (Fuzzy C-means) on the reduced feature space, and 3) the validation of our results using histology and actual photographic images of vascular plaque. Our results show an excellent match with histology and actual photographic images of vascular tissue. Therefore, our results could provide an efficient pre-clinical tool for the detection of vascular plaque in real-time OCT imaging.

  1. Performance Analysis of a Hardware Implemented Complex Signal Kurtosis Radio-Frequency Interference Detector

    NASA Technical Reports Server (NTRS)

    Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark

    2016-01-01

    Radio-frequency interference (RFI) is a known problem for passive remote sensing as evidenced in the L-band radiometers SMOS, Aquarius and more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements. This was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies larger bandwidths are also desirable for lower measurement noise further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also consists of various test equipment used to reproduce typical signals that a radiometer may see including those with and without RFI. The testing environment permits quick evaluations of RFI mitigation algorithms as well as show that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, performance of the complex signal kurtosis and the real signal kurtosis are compared. Performance evaluations and comparisons in both simulation as well as experimental hardware implementations were done with the use of receiver operating characteristic (ROC) curves. The complex kurtosis algorithm has the potential to reduce data rate due to onboard processing in addition to improving RFI detection performance.
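
    A hedged sketch of block-wise complex-kurtosis RFI flagging: for circularly symmetric Gaussian noise the normalised fourth moment of the complex samples equals 2, so blocks that deviate are flagged. The block length and tolerance are illustrative, not the flight thresholds:

      import numpy as np

      def flag_rfi_complex_kurtosis(iq, block=1024, tol=0.15):
          """Flag RFI in blocks of complex baseband samples.  For Gaussian thermal
          noise E|z|^4 / (E|z|^2)^2 = 2; blocks whose sample value deviates from 2
          by more than 'tol' are flagged as contaminated."""
          n_blocks = len(iq) // block
          flags = np.zeros(n_blocks, dtype=bool)
          kurt = np.zeros(n_blocks)
          for i in range(n_blocks):
              z = iq[i * block:(i + 1) * block]
              p = np.mean(np.abs(z) ** 2)              # second moment (power)
              m4 = np.mean(np.abs(z) ** 4)             # fourth moment
              kurt[i] = m4 / p ** 2
              flags[i] = abs(kurt[i] - 2.0) > tol
          return flags, kurt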

  2. Colour based fire detection method with temporal intensity variation filtration

    NASA Astrophysics Data System (ADS)

    Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.

    2015-02-01

    The development of video and computing technologies and of computer vision makes automatic fire detection from video information possible. Within that project, different algorithms were implemented to find a more efficient way of detecting fire. In this article, a colour-based fire detection algorithm is described. However, colour information alone is not enough to detect fire reliably, mainly because the scene may contain many objects whose colour is similar to that of fire. The temporal intensity variation of pixels, averaged over a series of several frames, is therefore used to separate such objects from fire. The algorithm performs robustly and was realised as a computer program using the OpenCV library.
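
    A minimal sketch combining an RGB colour rule with a temporal intensity-variation filter over a short frame series; the colour constants and variance threshold are assumptions, not the values used in the project:

      import numpy as np

      def fire_mask(frames_bgr, var_thresh=50.0):
          """frames_bgr: list of consecutive BGR frames (uint8).  A pixel is kept
          only if it looks fire-coloured in the latest frame AND its intensity
          fluctuates across the frame series, which rejects static fire-coloured
          objects."""
          stack = np.stack([f.astype(np.float32) for f in frames_bgr])   # (T, H, W, 3)
          b, g, r = stack[-1, ..., 0], stack[-1, ..., 1], stack[-1, ..., 2]
          colour_rule = (r > 170) & (r >= g) & (g > b)                   # red-dominant, warm hue
          intensity = stack.mean(axis=3)                                 # (T, H, W)
          flicker = intensity.var(axis=0) > var_thresh                   # temporal variation
          return colour_rule & flicker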

  3. Using Runtime Analysis to Guide Model Checking of Java Programs

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper describes how two runtime analysis algorithms, an existing data race detection algorithm and a new deadlock detection algorithm, have been implemented to analyze Java programs. Runtime analysis is based on the idea of executing the program once and observing the generated run to extract various kinds of information. This information can then be used to predict whether other, different runs may violate some properties of interest, in addition, of course, to demonstrating whether the generated run itself violates such properties. These runtime analyses can be performed stand-alone to generate a set of warnings. It is furthermore demonstrated how these warnings can be used to guide a model checker, thereby reducing the search space. The described techniques have been implemented in the home-grown Java model checker called PathFinder.

  4. Using Block-local Atomicity to Detect Stale-value Concurrency Errors

    NASA Technical Reports Server (NTRS)

    Artho, Cyrille; Havelund, Klaus; Biere, Armin

    2004-01-01

    Data races do not cover all kinds of concurrency errors. This paper presents a data-flow-based technique to find stale-value errors, which are not found by low-level and high-level data race algorithms. Stale values denote copies of shared data where the copy is no longer synchronized. The algorithm to detect such values works as a consistency check that does not require any assumptions or annotations of the program. It has been implemented as a static analysis in JNuke. The analysis is sound and requires only a single execution trace if implemented as a run-time checking algorithm. Being based on an analysis of Java bytecode, it encompasses the full program semantics, including arbitrarily complex expressions. Related techniques are more complex and more prone to over-reporting.

  5. A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images

    PubMed Central

    Xu, Songhua; Krauthammer, Michael

    2010-01-01

    There is interest to expand the reach of literature mining to include the analysis of biomedical images, which often contain a paper’s key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating the performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. In this paper, we demonstrate that a projection histogram-based text detection approach is well suited for text detection in biomedical images, achieving an F score of .60. The approach performs better than comparable approaches for text detection. Further, we show that the iterative application of the algorithm boosts overall detection performance. A C++ implementation of our algorithm is freely available through email request for academic use. PMID:20887803

  6. Parallel Processing of Broad-Band PPM Signals

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement

    2010-01-01

    A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for timeslot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively-low speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).

  7. Development of an Algorithm for Satellite Remote Sensing of Sea and Lake Ice

    NASA Astrophysics Data System (ADS)

    Dorofy, Peter T.

    Satellite remote sensing of snow and ice has a long history. The traditional method for many snow and ice detection algorithms has been the use of the Normalized Difference Snow Index (NDSI). This manuscript is composed of two parts. Chapter 1, Development of a Mid-Infrared Sea and Lake Ice Index (MISI) using the GOES Imager, discusses the desirability, development, and implementation of an alternative index for an ice detection algorithm, application of the algorithm to the detection of lake ice, and qualitative validation against other ice mapping products, such as the Ice Mapping System (IMS). Chapter 2, Application of Dynamic Threshold in a Lake Ice Detection Algorithm, continues with a discussion of the development of a method that considers the variable viewing and illumination geometry of observations throughout the day. The method is an alternative to Bidirectional Reflectance Distribution Function (BRDF) models. The performance of the algorithm is evaluated by aggregating classified pixels within geometrical boundaries designated by IMS and obtaining sensitivity and specificity statistical measures.
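    The traditional NDSI mentioned above is a standard index; a minimal sketch of computing it and applying a threshold follows. The band inputs and the 0.4 cutoff are common conventions rather than values from this work, and the MISI formulation is not reproduced because it is not given in the abstract.

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index: (Green - SWIR) / (Green + SWIR)."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (green - swir) / np.clip(green + swir, 1e-6, None)

def snow_ice_mask(green, swir, threshold=0.4):
    """Classify pixels as snow/ice when NDSI exceeds the chosen threshold."""
    return ndsi(green, swir) > threshold
```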

  8. Memetic algorithms for de novo motif-finding in biomedical sequences.

    PubMed

    Bi, Chengpeng

    2012-09-01

    The objectives of this study are to design and implement a new memetic algorithm for de novo motif discovery, which is then applied to detect important signals hidden in various biomedical molecular sequences. In this paper, memetic algorithms are developed and tested in de novo motif-finding problems. Several strategies are employed in the algorithm design, not only to efficiently explore the multiple sequence local alignment space, but also to effectively uncover the molecular signals. As a result, there are a number of key features in the implementation of the memetic motif-finding algorithm (MaMotif), including a chromosome replacement operator, a chromosome alteration-aware local search operator, a truncated local search strategy, and a stochastic operation of local search imposed on individual learning. To test the new algorithm, we compare MaMotif with a few other similar algorithms using simulated and experimental data including genomic DNA, primary microRNA sequences (let-7 family), and transmembrane protein sequences. The new memetic motif-finding algorithm is successfully implemented in C++, and exhaustively tested with various simulated and real biological sequences. In the simulation, MaMotif is the most time-efficient algorithm compared with the others, that is, it runs 2 times faster than the expectation maximization (EM) method and 16 times faster than the genetic algorithm-based EM hybrid. In both simulated and experimental testing, results show that the new algorithm compares favorably with, or is superior to, other algorithms. Notably, MaMotif is able to successfully discover the transcription factors' binding sites in chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) data, correctly uncover the RNA splicing signals in gene expression, precisely find the highly conserved helix motif in the transmembrane protein sequences, and correctly detect the palindromic segments in the primary microRNA sequences. The memetic motif-finding algorithm is effectively designed and implemented, and its applications demonstrate that it is not only time-efficient, but also exhibits excellent performance when compared with other popular algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.
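    The following skeleton sketches the generic structure of a memetic algorithm, i.e., a genetic algorithm in which selected offspring additionally undergo local search (individual learning). The operator names are placeholders for problem-specific operators such as the alignment-space operators described above; this is not the MaMotif code.

```python
import random

def memetic_search(init_population, fitness, crossover, mutate, local_search,
                   generations=100, local_search_prob=0.2):
    """Generic memetic-algorithm loop: selection, variation, stochastic local search."""
    population = list(init_population)
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: max(2, len(scored) // 2)]        # truncation selection
        children = []
        while len(children) < len(population):
            a, b = random.sample(parents, 2)
            child = mutate(crossover(a, b))
            # stochastic individual learning: occasionally refine a child by local search
            if random.random() < local_search_prob:
                child = local_search(child)
            children.append(child)
        population = children
    return max(population, key=fitness)
```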

  9. A new approach to optic disc detection in human retinal images using the firefly algorithm.

    PubMed

    Rahebi, Javad; Hardalaç, Fırat

    2016-03-01

    There are various methods and algorithms to detect the optic discs in retinal images. In recent years, much attention has been given to the utilization of intelligent algorithms. In this paper, we present a new automated method of optic disc detection in human retinal images using the firefly algorithm. The firefly algorithm is an emerging intelligent algorithm that was inspired by the social behavior of fireflies. The population in this algorithm consists of fireflies, each of which has a specific rate of lighting or fitness. In this method, the insects are compared two by two, and the less attractive insects move toward the more attractive insects. Finally, one of the insects is selected as the most attractive, and this insect presents the optimum response to the problem in question. Here, we used the light intensity of the retinal image pixels in place of the firefly lightings. The movement of these insects due to local fluctuations produces different light intensity values in the images. Because the optic disc is the brightest area in the retinal images, all of the insects move toward the brightest area and thus specify the location of the optic disc in the image. The results of implementation show that the proposed algorithm achieved an accuracy rate of 100% on the DRIVE dataset, 95% on the STARE dataset, and 94.38% on the DiaRetDB1 dataset. These results reveal the high capability and accuracy of the proposed algorithm in detecting the optic disc in retinal images. The average time required to detect the optic disc is 2.13 s for the DRIVE dataset, 2.81 s for the STARE dataset, and 3.52 s for the DiaRetDB1 dataset.
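    A minimal sketch of letting firefly-like agents converge on the brightest image region (the optic disc) is given below. All parameter values are illustrative assumptions, and pre-processing steps such as vessel removal are omitted; this is the general idea rather than the authors' implementation.

```python
import numpy as np

def firefly_brightest_region(image, n_fireflies=30, iterations=50,
                             beta=0.5, alpha=2.0, seed=0):
    """Return an (row, col) estimate of the brightest region of a 2D grayscale image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    pos = np.column_stack([rng.uniform(0, h, n_fireflies),
                           rng.uniform(0, w, n_fireflies)])

    def brightness(p):
        r = np.clip(p[:, 0].astype(int), 0, h - 1)
        c = np.clip(p[:, 1].astype(int), 0, w - 1)
        return image[r, c].astype(float)

    for _ in range(iterations):
        light = brightness(pos)
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] > light[i]:      # dimmer firefly moves toward brighter one
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.normal(size=2)
        pos[:, 0] = np.clip(pos[:, 0], 0, h - 1)
        pos[:, 1] = np.clip(pos[:, 1], 0, w - 1)
    best = pos[np.argmax(brightness(pos))]
    return int(best[0]), int(best[1])
```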

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
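    A compact sketch of the Stride Search idea follows: sector centers are placed with approximately uniform physical spacing, widening the longitude stride toward the poles, instead of visiting every grid point. The radius handling is simplified and the storm identification criteria themselves are not reproduced here.

```python
import math

def stride_search_centers(sector_radius_km, earth_radius_km=6371.0):
    """Yield (lat, lon) sector centers covering the sphere with ~uniform spacing."""
    dlat = math.degrees(sector_radius_km / earth_radius_km)   # latitude stride in degrees
    centers = []
    lat = -90.0
    while lat <= 90.0:
        # widen the longitude stride as the circle of latitude shrinks toward the poles
        circ_factor = max(math.cos(math.radians(lat)), 1e-6)
        dlon = dlat / circ_factor
        lon = 0.0
        while lon < 360.0:
            centers.append((lat, lon))
            lon += dlon
        lat += dlat
    return centers

# Each sector would then be searched for points exceeding the storm criteria
# (e.g., vorticity and warm-core thresholds), followed by temporal correlation.
```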

  11. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    PubMed

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Implementation of a near real-time burned area detection algorithm calibrated for VIIRS imagery

    Treesearch

    Brenna Schwert; Carl Albury; Jess Clark; Abigail Schaaf; Shawn Urbanski; Bryce Nordgren

    2016-01-01

    There is a need to implement methods for rapid burned area detection using a suitable replacement for Moderate Resolution Imaging Spectroradiometer (MODIS) imagery to meet future mapping and monitoring needs (Roy and Boschetti 2009, Tucker and Yager 2011). The Visible Infrared Imaging Radiometer Suite (VIIRS) sensor onboard the Suomi-National Polar-orbiting Partnership...

  13. A new pivoting and iterative text detection algorithm for biomedical images.

    PubMed

    Xu, Songhua; Krauthammer, Michael

    2010-12-01

    There is interest to expand the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating the performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F score of .60. We provide a C++ implementation of our algorithm freely available for academic use. Copyright © 2010 Elsevier Inc. All rights reserved.

  14. Implementation of an Algorithm for Prosthetic Joint Infection: Deviations and Problems.

    PubMed

    Mühlhofer, Heinrich M L; Kanz, Karl-Georg; Pohlig, Florian; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; von Eisenhart-Rothe, Ruediger; Schauwecker, Johannes

    The outcome of revision surgery in arthroplasty is based on a precise diagnosis. In addition, the treatment varies based on whether the prosthetic failure is caused by aseptic or septic loosening. Algorithms can help to identify periprosthetic joint infections (PJI) and standardize diagnostic steps; however, algorithms tend to oversimplify the treatment of complex cases. We conducted a process analysis during the implementation of a PJI algorithm to determine problems and deviations associated with the implementation of this algorithm. Fifty patients who were treated after implementing a standardized algorithm were monitored retrospectively. Their treatment plans and diagnostic cascades were analyzed for deviations from the implemented algorithm. Each diagnostic procedure was recorded, compared with the algorithm, and evaluated statistically. We detected 52 deviations while treating 50 patients. In 25 cases, no discrepancy was observed. Synovial fluid aspiration was not performed in 31.8% of patients (95% confidence interval [CI], 18.1%-45.6%), while white blood cell counts (WBCs) and neutrophil differentiation were assessed in 54.5% of patients (95% CI, 39.8%-69.3%). We also observed that the prolonged incubation of cultures was not requested in 13.6% of patients (95% CI, 3.5%-23.8%). In seven of 13 cases (63.6%; 95% CI, 35.2%-92.1%), arthroscopic biopsy was performed; 6 arthroscopies were performed in discordance with the algorithm (12%; 95% CI, 3%-21%). Self-critical analysis of diagnostic processes and monitoring of deviations using algorithms are important and could increase the quality of treatment by revealing recurring faults.
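    The confidence intervals quoted above are consistent with a normal-approximation (Wald) interval for a proportion; a minimal sketch of that calculation is shown below. Whether the authors used exactly this formula, and the example counts used here, are assumptions.

```python
import math

def wald_ci(successes, n, z=1.96):
    """Return (lower, upper) bounds, in percent, of the normal-approximation 95% CI."""
    p = successes / n
    half_width = z * math.sqrt(p * (1.0 - p) / n)
    return 100.0 * (p - half_width), 100.0 * (p + half_width)

# Hypothetical example: a deviation observed in 14 of 44 patients gives
# roughly 31.8% with a CI of about 18% to 46%.
print(wald_ci(14, 44))
```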

  15. Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor)

    2015-01-01

    A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation.

  16. Epidemic failure detection and consensus for extreme parallelism

    DOE PAGES

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...

    2017-02-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI Comm shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI Comm shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
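    The logarithmic scaling of gossip cycles with system size can be illustrated with a toy push-gossip simulation like the one below. It is only a sketch of the dissemination idea, not the three algorithms or the Extreme-scale Simulator experiments described above.

```python
import random

def gossip_cycles_to_consensus(n_processes, failed, seed=0):
    """Count push-gossip cycles until every alive process knows the full failed set."""
    rng = random.Random(seed)
    alive = [p for p in range(n_processes) if p not in failed]
    # only the first alive process starts with knowledge of the failures
    known = {p: set(failed) if p == alive[0] else set() for p in alive}
    cycles = 0
    while any(known[p] != set(failed) for p in alive):
        cycles += 1
        for p in alive:
            q = rng.choice(alive)            # push local knowledge to one random peer
            known[q] |= known[p]
    return cycles

for size in (64, 256, 1024, 4096):
    print(size, gossip_cycles_to_consensus(size, failed={0}))
```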

  17. Robust Crop and Weed Segmentation under Uncontrolled Outdoor Illumination

    PubMed Central

    Jeon, Hong Y.; Tian, Lei F.; Zhu, Heping

    2011-01-01

    An image processing algorithm for detecting individual weeds was developed and evaluated. The weed detection processes included normalized excessive green conversion, statistical threshold value estimation, adaptive image segmentation, median filtering, morphological feature calculation, and an Artificial Neural Network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illumination. A field robot implementing machine vision captured field images under outdoor illumination, and the image processing algorithm processed them automatically without manual adjustment. The errors of the algorithm, when processing 666 field images, ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants among the identified plants, and considered the rest as weeds. However, the ANN identification rates for crop plants were improved up to 95.1% by addressing the error sources in the algorithm. The developed weed detection and image processing algorithm provides a novel method to identify plants against a soil background under uncontrolled outdoor illumination, and to differentiate weeds from crop plants. Thus, the proposed new machine vision and processing algorithm may be useful for outdoor applications including plant specific direct applications (PSDA). PMID:22163954
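    The first two steps of the pipeline above (excess-green conversion and a statistical threshold) can be sketched as follows; the adaptive segmentation, morphological features and ANN classification are not reproduced, and the threshold rule here is a simple stand-in for the paper's estimator.

```python
import numpy as np

def excess_green(rgb):
    """Excess-green index ExG = 2g - r - b computed on chromatic coordinates."""
    rgb = rgb.astype(float)
    total = np.clip(rgb.sum(axis=2), 1e-6, None)
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2.0 * g - r - b

def vegetation_mask(rgb, k=1.0):
    """Threshold ExG at mean + k*std to separate vegetation from soil background."""
    exg = excess_green(rgb)
    return exg > (exg.mean() + k * exg.std())
```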

  18. Optimized feature-detection for on-board vision-based surveillance

    NASA Astrophysics Data System (ADS)

    Gond, Laetitia; Monnin, David; Schneider, Armin

    2012-06-01

    The detection and matching of robust features in images is an important step in many computer vision applications. In this paper, the importance of keypoint detection algorithms and their inherent parameters is studied in the particular context of an image-based change detection system for IED detection. Through extensive application-oriented experiments, we present an evaluation and comparison of the most popular feature detectors proposed by the computer vision community. We analyze how to automatically adjust these algorithms to changing imaging conditions and suggest improvements in order to achieve more flexibility and robustness in their practical implementation.

  19. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is employed: the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. The algorithm can be applied to a range of path-based problems under different computational constraints. Implementation of the algorithm can be treated as a shortest path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems, and it can be extended to a variety of shortest path problems.

  20. Algorithm Summary and Evaluation: Automatic Implementation of Ringdown Analysis for Electromechanical Mode Identification from Phasor Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.

    2010-02-28

    Small signal stability problems are one of the major threats to grid stability and reliability. Prony analysis has been successfully applied to ringdown data to monitor electromechanical modes of a power system using phasor measurement unit (PMU) data. To facilitate on-line application of mode estimation, this paper develops a recursive algorithm for implementing Prony analysis and proposes an oscillation detection method to detect ringdown data in real time. By automatically detecting ringdown data, the proposed method helps guarantee that Prony analysis is applied properly and in a timely manner to the ringdown data, so that mode estimation can be performed reliably and promptly. The proposed method is tested using Monte Carlo simulations based on a 17-machine model and is shown to properly identify the oscillation data for on-line application of Prony analysis. In addition, the proposed method is applied to field measurement data from WECC to show the performance of the proposed algorithm.
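    For readers unfamiliar with Prony analysis, the sketch below shows the basic batch formulation: fit a linear-prediction model to the ringdown samples, take the roots of the characteristic polynomial, and convert them to modal frequency and damping. The recursive, on-line formulation and the oscillation detection trigger developed in the paper are not reproduced, and the model order is illustrative.

```python
import numpy as np

def prony_modes(y, dt, order=4):
    """Return (frequency_hz, damping_1_per_s) pairs of a fitted exponential model."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # linear-prediction step: y[k] ~ a1*y[k-1] + ... + a_p*y[k-p]
    rows = np.column_stack([y[order - 1 - j: n - 1 - j] for j in range(order)])
    rhs = y[order:]
    a, *_ = np.linalg.lstsq(rows, rhs, rcond=None)
    # roots of the characteristic polynomial are the discrete-time modes z_i
    z = np.roots(np.concatenate(([1.0], -a))).astype(complex)
    s = np.log(z) / dt                     # continuous-time eigenvalues sigma + j*omega
    return [(mode.imag / (2.0 * np.pi), mode.real) for mode in s]
```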

  1. Road detection and buried object detection in elevated EO/IR imagery

    NASA Astrophysics Data System (ADS)

    Kennedy, Levi; Kolba, Mark P.; Walters, Joshua R.

    2012-06-01

    To assist the warfighter in visually identifying potentially dangerous roadside objects, the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has developed an elevated video sensor system testbed for data collection. This system provides color and mid-wave infrared (MWIR) imagery. Signal Innovations Group (SIG) has developed an automated processing capability that detects the road within the sensor field of view and identifies potentially threatening buried objects within the detected road. The road detection algorithm leverages system metadata to project the collected imagery onto a flat ground plane, allowing for more accurate detection of the road as well as the direct specification of realistic physical constraints in the shape of the detected road. Once the road has been detected in an image frame, a buried object detection algorithm is applied to search for threatening objects within the detected road space. The buried object detection algorithm leverages textural and pixel intensity-based features to detect potential anomalies and then classifies them as threatening or non-threatening objects. Both the road detection and the buried object detection algorithms have been developed to facilitate their implementation in real-time in the NVESD system.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI Comm shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI Comm shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.

  3. Multiscale Anomaly Detection and Image Registration Algorithms for Airborne Landmine Detection

    DTIC Science & Technology

    2008-05-01

    with the sensed image. The two-dimensional correlation coefficient r for two matrices A and B, both of size M×N, is given by r = Σ_m Σ_n (A_mn ... correlation-based method by matching features in a high-dimensional feature space. The current implementation of the SIFT algorithm uses a brute-force ... by repeatedly convolving the image with a Gaussian kernel. Each plane of the scale

  4. Implementation in an FPGA circuit of Edge detection algorithm based on the Discrete Wavelet Transforms

    NASA Astrophysics Data System (ADS)

    Bouganssa, Issam; Sbihi, Mohamed; Zaim, Mounia

    2017-07-01

    The 2D Discrete Wavelet Transform (DWT) is a computationally intensive task that is usually implemented on dedicated architectures in many real-time imaging systems. In this paper, a high-throughput edge (contour) detection algorithm based on the discrete wavelet transform is proposed. A technique that applies the filters along the three directions of the image (horizontal, vertical and diagonal) is used to capture the maximum number of existing contours. The proposed architectures were designed in VHDL and mapped to a Xilinx Spartan-6 FPGA. The synthesis results show that the proposed architecture has a low area cost and can operate at up to 100 MHz, performing 2D wavelet analysis on a sequence of images while maintaining the flexibility of the system to support an adaptive algorithm.
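    A software sketch of the underlying edge-detection idea is given below: one level of a 2D Haar wavelet transform, with the horizontal, vertical and diagonal detail sub-bands combined into an edge magnitude map. It illustrates the algorithm only and says nothing about the VHDL architecture; the threshold rule is an assumption.

```python
import numpy as np

def haar_dwt2(image):
    """One-level 2D Haar transform: approximation plus three detail sub-bands."""
    img = image[: image.shape[0] // 2 * 2, : image.shape[1] // 2 * 2].astype(float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0          # approximation
    lh = (a + b - c - d) / 4.0          # horizontal detail
    hl = (a - b + c - d) / 4.0          # vertical detail
    hh = (a - b - c + d) / 4.0          # diagonal detail
    return ll, lh, hl, hh

def wavelet_edges(image, threshold=None):
    """Combine the three detail sub-bands into a thresholded edge map."""
    _, lh, hl, hh = haar_dwt2(image)
    magnitude = np.sqrt(lh**2 + hl**2 + hh**2)
    if threshold is None:
        threshold = magnitude.mean() + 2.0 * magnitude.std()
    return magnitude > threshold
```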

  5. Effective channel estimation and efficient symbol detection for multi-input multi-output underwater acoustic communications

    NASA Astrophysics Data System (ADS)

    Ling, Jun

    Achieving reliable underwater acoustic communications (UAC) has long been recognized as a challenging problem owing to the scarce bandwidth available and the reverberant spread in both time and frequency domains. To pursue high data rates, we consider a multi-input multi-output (MIMO) UAC system, and our focus is placed on two main issues regarding a MIMO UAC system: (1) channel estimation, which involves the design of the training sequences and the development of a reliable channel estimation algorithm, and (2) symbol detection, which requires interference cancelation schemes due to simultaneous transmission from multiple transducers. To enhance channel estimation performance, we present a cyclic approach for designing training sequences with good auto- and cross-correlation properties, and a channel estimation algorithm called the iterative adaptive approach (IAA). Sparse channel estimates can be obtained by combining IAA with the Bayesian information criterion (BIC). Moreover, we present sparse learning via iterative minimization (SLIM) and demonstrate that SLIM gives similar performance to IAA but at a much lower computational cost. Furthermore, an extension of the SLIM algorithm is introduced to estimate the sparse and frequency modulated acoustic channels. The extended algorithm is referred to as generalization of SLIM (GoSLIM). Regarding symbol detection, a linear minimum mean-squared error based detection scheme, called RELAX-BLAST, which is a combination of vertical Bell Labs layered space-time (V-BLAST) algorithm and the cyclic principle of the RELAX algorithm, is presented and it is shown that RELAX-BLAST outperforms V-BLAST. We show that RELAX-BLAST can be implemented efficiently by making use of the conjugate gradient method and diagonalization properties of circulant matrices. This fast implementation approach requires only simple fast Fourier transform operations and facilitates parallel implementations. The effectiveness of the proposed MIMO schemes is verified by both computer simulations and experimental results obtained by analyzing the measurements acquired in multiple in-water experiments.

  6. A CCTV system with SMS alert (CMDSA): An implementation of pixel processing algorithm for motion detection

    NASA Astrophysics Data System (ADS)

    Rahman, Nurul Hidayah Ab; Abdullah, Nurul Azma; Hamid, Isredza Rahmi A.; Wen, Chuah Chai; Jelani, Mohamad Shafiqur Rahman Mohd

    2017-10-01

    A Closed-Circuit TV (CCTV) system is one of the technologies in the surveillance field that addresses detection and monitoring by providing extra features such as email alerts or motion detection. However, detecting motion and alerting the admin in a CCTV system can be complicated by the need to integrate the main program with an external Application Programming Interface (API). In this study, a pixel processing algorithm is applied because of its efficiency, and SMS alerting is added as an alternative for users who opt out of the email alert system or have no Internet connection. A CCTV system with SMS alert (CMDSA) was developed using an evolutionary prototyping methodology. The system interface was implemented using Microsoft Visual Studio, while the backend components, namely the database and the code, were implemented on an SQLite database and in the C# programming language, respectively. The main modules of CMDSA are motion detection, video capture and saving, image processing, and Short Message Service (SMS) alert functions. The system reduces processing time, making the detection process faster, reduces the space and memory needed to run the program, and alerts the system admin instantly.
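    The core pixel-processing step can be sketched as simple frame differencing with a changed-pixel count trigger, as below. The SMS alerting, video capture and database modules are omitted, and both thresholds are illustrative assumptions rather than the system's actual settings.

```python
import numpy as np

def motion_detected(prev_gray, curr_gray, pixel_threshold=25, area_fraction=0.01):
    """Return True when enough pixels changed between two grayscale frames."""
    diff = np.abs(curr_gray.astype(int) - prev_gray.astype(int))
    changed = diff > pixel_threshold            # per-pixel change test
    return changed.mean() > area_fraction       # trigger on the changed-pixel fraction
```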

  7. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors.

    PubMed

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-05-28

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers; while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on a low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements; and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.

  8. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    NASA Astrophysics Data System (ADS)

    Afik, Eldad

    2015-09-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, namely fluid dynamics in microfluidic devices, bacteria taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when they are close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based only on open-source and cross-platform software packages, making it easy to distribute and modify. It is used in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% with only 1% false detections.
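    The circle Hough transform that the algorithm builds on can be sketched as follows: edge pixels vote for possible ring centres at a given radius, and accumulator peaks mark candidate rings. The classification step, the handling of ring clusters and the performance optimizations described above are not reproduced; the vote threshold is illustrative.

```python
import numpy as np

def hough_circle_centers(edge_mask, radius, n_angles=64, min_votes=20):
    """Return (row, col, votes) of accumulator peaks for circles of a given radius."""
    h, w = edge_mask.shape
    accumulator = np.zeros((h, w), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for theta in angles:
        # each edge pixel votes for the centre lying `radius` away along -theta
        cy = np.round(ys - radius * np.sin(theta)).astype(int)
        cx = np.round(xs - radius * np.cos(theta)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(accumulator, (cy[ok], cx[ok]), 1)
    peaks = np.argwhere(accumulator >= min_votes)
    return [(int(y), int(x), int(accumulator[y, x])) for y, x in peaks]
```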

  9. Processing LiDAR Data to Predict Natural Hazards

    NASA Technical Reports Server (NTRS)

    Fairweather, Ian; Crabtree, Robert; Hager, Stacey

    2008-01-01

    ELF-Base and ELF-Hazards (wherein 'ELF' signifies 'Extract LiDAR Features' and 'LiDAR' signifies 'light detection and ranging') are developmental software modules for processing remote-sensing LiDAR data to identify past natural hazards (principally, landslides) and predict future ones. ELF-Base processes raw LiDAR data, including LiDAR intensity data that are often ignored in other software, to create digital terrain models (DTMs) and digital feature models (DFMs) with sub-meter accuracy. ELF-Hazards fuses raw LiDAR data, data from multispectral and hyperspectral optical images, and DTMs and DFMs generated by ELF-Base to generate hazard risk maps. Advanced algorithms in these software modules include line-enhancement and edge-detection algorithms, surface-characterization algorithms, and algorithms that implement innovative data-fusion techniques. The line-extraction and edge-detection algorithms enable users to locate such features as faults and landslide headwall scarps. Also implemented in this software are improved methodologies for identification and mapping of past landslide events by use of (1) accurate, ELF-derived surface characterizations and (2) three LiDAR/optical-data-fusion techniques: post-classification data fusion, maximum-likelihood estimation modeling, and hierarchical within-class discrimination. This software is expected to enable faster, more accurate forecasting of natural hazards than has previously been possible.

  10. Using Machine Learning for Advanced Anomaly Detection and Classification

    NASA Astrophysics Data System (ADS)

    Lane, B.; Poole, M.; Camp, M.; Murray-Krezan, J.

    2016-09-01

    Machine Learning (ML) techniques have successfully been used in a wide variety of applications to automatically detect and potentially classify changes in activity, or a series of activities, by utilizing large amounts of data, sometimes even seemingly unrelated data. The amount of data being collected, processed, and stored in the Space Situational Awareness (SSA) domain has grown at an exponential rate and is now better suited for ML. This paper describes the development of advanced algorithms to deliver significant improvements in the characterization of deep space objects and indication and warning (I&W) using a global network of telescopes that collect photometric data on a multitude of space-based objects. The Phase II Air Force Research Laboratory (AFRL) Small Business Innovative Research (SBIR) project Autonomous Characterization Algorithms for Change Detection and Characterization (ACDC), contracted to ExoAnalytic Solutions Inc., provides the ability to detect and identify photometric signature changes due to potential space object changes (e.g. stability, tumble rate, aspect ratio), and to correlate observed changes to potential behavioral changes using a variety of techniques, including supervised learning. Furthermore, these algorithms run in real time on data being collected and processed by the ExoAnalytic Space Operations Center (EspOC), providing timely alerts and warnings while dynamically creating collection requirements for the EspOC for the algorithms that generate higher fidelity I&W. This paper discusses the recently implemented ACDC algorithms, including the general design approach and results to date. The usage of supervised algorithms, such as Support Vector Machines, Neural Networks, and k-Nearest Neighbors, and of unsupervised algorithms, for example k-means, Principal Component Analysis, and Hierarchical Clustering, and the implementations of these algorithms are explored. Results of applying these algorithms to EspOC data, both in an off-line "pattern of life" analysis and on-line in real time as data is collected, are presented. Finally, future work in applying ML for SSA is discussed.

  11. A novel track-before-detect algorithm based on optimal nonlinear filtering for detecting and tracking infrared dim target

    NASA Astrophysics Data System (ADS)

    Tian, Yuexin; Gao, Kun; Liu, Ying; Han, Lu

    2015-08-01

    To address the nonlinear and non-Gaussian features of real infrared scenes, an algorithm based on optimal nonlinear filtering is proposed for track-before-detect processing of dim infrared targets. It uses nonlinear theory to construct the state and observation models and uses the Wiener chaos expansion method with a spectral separation scheme to solve the stochastic differential equation of the constructed models. To improve computational efficiency, the most time-consuming operations, which are independent of the observation data, are carried out in advance of the observation stage; the remaining, faster computations that depend on the observation data are performed subsequently. Simulation results show that the algorithm possesses excellent detection performance and is well suited for real-time processing.

  12. Robust Segmentation of Overlapping Cells in Histopathology Specimens Using Parallel Seed Detection and Repulsive Level Set

    PubMed Central

    Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin

    2013-01-01

    Automated image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of breast cancer. Automated segmentation of the cells comprising imaged tissue microarrays (TMA) is a prerequisite for any subsequent quantitative analysis. Unfortunately, crowding and overlapping of cells present significant challenges for most traditional segmentation algorithms. In this paper, we propose a novel algorithm which can reliably separate touching cells in hematoxylin stained breast TMA specimens which have been acquired using a standard RGB camera. The algorithm is composed of two steps. It begins with a fast, reliable object center localization approach which utilizes single-path voting followed by mean-shift clustering. Next, the contour of each cell is obtained using a level set algorithm based on an interactive model. We compared the experimental results with those reported in the most current literature. Finally, performance was evaluated by comparing the pixel-wise accuracy provided by human experts with that produced by the new automated segmentation algorithm. The method was systematically tested on 234 image patches exhibiting dense overlap and containing more than 2200 cells. It was also tested on whole slide images including blood smears and tissue microarrays containing thousands of cells. Since the voting step of the seed detection algorithm is well suited for parallelization, a parallel version of the algorithm was implemented using graphic processing units (GPU) which resulted in significant speed-up over the C/C++ implementation. PMID:22167559

  13. Detection with Enhanced Energy Windowing Phase I Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bass, David A.; Enders, Alexander L.

    2016-12-01

    This document reviews the progress of Phase I of the Detection with Enhanced Energy Windowing (DEEW) project. The DEEW project is the implementation of software incorporating an algorithm which reviews data generated by radiation portal monitors and utilizes advanced and novel techniques for detecting radiological and fissile material while not alarming on Naturally Occurring Radioactive Material. Independent testing indicated that the Enhanced Energy Windowing algorithm showed promise at reducing the probability of alarm in the stream of commerce compared to existing algorithms and other developmental algorithms, while still maintaining adequate sensitivity to threats. This document contains a brief description of the project, instructions for setting up and running the applications, and guidance to help make reviewing the output files and source code easier.

  14. A novel fast phase correlation algorithm for peak wavelength detection of Fiber Bragg Grating sensors.

    PubMed

    Lamberti, A; Vanlanduit, S; De Pauw, B; Berghmans, F

    2014-03-24

    Fiber Bragg Gratings (FBGs) can be used as sensors for strain, temperature and pressure measurements. For this purpose, the ability to determine the Bragg peak wavelength with adequate wavelength resolution and accuracy is essential. However, conventional peak detection techniques, such as the maximum detection algorithm, can yield inaccurate and imprecise results, especially when the Signal to Noise Ratio (SNR) and the wavelength resolution are poor. Other techniques, such as the cross-correlation demodulation algorithm, are more precise and accurate but require a considerably higher computational effort. To overcome these problems, we developed a novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of an FBG sensor. This paper analyzes the performance of the FPC algorithm for different values of the SNR and wavelength resolution. Using simulations and experiments, we compared the FPC with the maximum detection and cross-correlation algorithms. The FPC method demonstrated a detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique. Additionally, FPC was shown to be about 50 times faster than cross-correlation. It is therefore a promising tool for future implementation in real-time systems or in embedded hardware intended for FBG sensor interrogation.
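    The general phase-correlation principle behind such a peak-detection scheme can be sketched in a few lines: the shift between a reference spectrum and a measured spectrum appears as the peak location of the inverse FFT of the normalized cross-power spectrum. The paper's FPC algorithm may differ in detail (for example, in sub-sample interpolation), so this is only an illustration.

```python
import numpy as np

def phase_correlation_shift(reference, measured):
    """Estimate the integer-sample shift that maps `reference` onto `measured`."""
    ref = np.asarray(reference, dtype=float)
    mea = np.asarray(measured, dtype=float)
    cross_power = np.fft.fft(mea) * np.conj(np.fft.fft(ref))
    cross_power /= np.maximum(np.abs(cross_power), 1e-12)   # keep phase information only
    correlation = np.fft.ifft(cross_power).real
    shift = int(np.argmax(correlation))
    if shift > len(ref) // 2:                                # wrap negative shifts
        shift -= len(ref)
    return shift

# Multiplying the sample shift by the interrogator's wavelength step converts it
# into a Bragg-wavelength shift.
```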

  15. An application of artificial neural networks to experimental data approximation

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J., Jr.

    1993-01-01

    As an initial step in the evaluation of networks, a feedforward architecture is trained to approximate experimental data by the backpropagation algorithm. Several drawbacks were detected and an alternative learning algorithm was then developed to partially address the drawbacks. This noniterative algorithm has a number of advantages over the backpropagation method and is easily implemented on existing hardware.

  16. Real-time portable system for fabric defect detection using an ARM processor

    NASA Astrophysics Data System (ADS)

    Fernandez-Gallego, J. A.; Yañez-Puentes, J. P.; Ortiz-Jaramillo, B.; Alvarez, J.; Orjuela-Vargas, S. A.; Philips, W.

    2012-06-01

    The modern textile industry seeks to produce textiles with as few defects as possible, since the presence of defects can decrease the final price of products by 45% to 65%. Automated visual inspection (AVI) systems, based on image analysis, have become an important alternative for replacing traditional inspection methods that involve human tasks. An AVI system offers the advantage of repeatability when implemented within defined constraints, providing more objective and reliable results for particular tasks than human inspection. The cost of developing automated inspection systems can be reduced by using modular solutions with embedded systems, an important advantage of which is low energy consumption. Among the possibilities for developing embedded systems, the ARM processor has been explored for acquisition, monitoring and simple signal processing tasks. In a recent approach we explored the use of the ARM processor for defect detection by implementing the wavelet transform. However, the computation speed of the preprocessing was not yet sufficient for real-time applications. In this approach we significantly improve the preprocessing speed of the algorithm by optimizing matrix operations, such that it is adequate for a real-time application. The system was tested for defect detection using different defect types. The paper focuses on giving a detailed description of the basis of the algorithm implementation, such that other algorithms may make use of the ARM operations for fast implementations.

  17. Demonstration of the use of ADAPT to derive predictive maintenance algorithms for the KSC central heat plant

    NASA Technical Reports Server (NTRS)

    Hunter, H. E.

    1972-01-01

    The Avco Data Analysis and Prediction Techniques (ADAPT) were employed to determine laws capable of detecting failures in a heat plant up to three days in advance of the occurrence of the failure. The projected performance of the algorithms yielded a detection probability of 90% with false alarm rates of the order of 1 per year for a sample rate of 1 per day, with each detection followed by 3 hourly samplings. This performance was verified on 173 independent test cases. The program also demonstrated diagnostic algorithms and the ability to predict the time of failure to approximately plus or minus 8 hours up to three days in advance of the failure. The ADAPT programs produce simple algorithms which offer the unique possibility of a relatively low-cost updating procedure. The algorithms were implemented on general purpose computers at Kennedy Space Center and tested against current data.

  18. Improvement in detection of small wildfires

    NASA Astrophysics Data System (ADS)

    Sleigh, William J.

    1991-12-01

    Detecting and imaging small wildfires with an Airborne Scanner is done against generally high background levels. The Airborne Scanner System used is a two-channel thermal IR scanner, with one channel selected for imaging the terrain and the other channel sensitive to hotter targets. If a relationship can be determined between the two channels that quantifies the background signal for hotter targets, then an algorithm can be determined that removes the background signal in that channel leaving only the fire signal. The relationship can be determined anywhere between various points in the signal processing of the radiometric data from the radiometric input to the quantized output of the system. As long as only linear operations are performed on the signal, the relationship will only depend on the system gain and offsets within the range of interest. The algorithm can be implemented either by using a look-up table or performing the calculation in the system computer. The current presentation will describe the algorithm, its derivation, and its implementation in the Firefly Wildfire Detection System by means of an off-the-shelf commercial scanner. Improvement over the previous algorithm used and the margin gained for improving the imaging of the terrain will be demonstrated.
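    A hedged sketch of the background-removal idea follows: estimate the (gain/offset) relationship between the terrain-imaging channel and the hot-target channel on fire-free pixels, predict the hot-channel background from the imaging channel, and keep the residual as the fire signal. The linear fit and the background-mask input are assumptions for illustration; as noted above, the operational algorithm can also be implemented with a look-up table.

```python
import numpy as np

def fire_signal(imaging_channel, hot_channel, background_mask):
    """Remove the background predicted from the imaging channel, leaving the fire signal."""
    x = imaging_channel[background_mask].ravel()
    y = hot_channel[background_mask].ravel()
    gain, offset = np.polyfit(x, y, 1)             # linear channel-to-channel relationship
    predicted_background = gain * imaging_channel + offset
    return hot_channel - predicted_background      # residual ~ fire signal
```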

  19. Improvement in detection of small wildfires

    NASA Technical Reports Server (NTRS)

    Sleigh, William J.

    1991-01-01

    Detecting and imaging small wildfires with an Airborne Scanner is done against generally high background levels. The Airborne Scanner System used is a two-channel thermal IR scanner, with one channel selected for imaging the terrain and the other channel sensitive to hotter targets. If a relationship can be determined between the two channels that quantifies the background signal for hotter targets, then an algorithm can be determined that removes the background signal in that channel leaving only the fire signal. The relationship can be determined anywhere between various points in the signal processing of the radiometric data from the radiometric input to the quantized output of the system. As long as only linear operations are performed on the signal, the relationship will only depend on the system gain and offsets within the range of interest. The algorithm can be implemented either by using a look-up table or performing the calculation in the system computer. The current presentation will describe the algorithm, its derivation, and its implementation in the Firefly Wildfire Detection System by means of an off-the-shelf commercial scanner. Improvement over the previous algorithm used and the margin gained for improving the imaging of the terrain will be demonstrated.

  20. On effectiveness of network sensor-based defense framework

    NASA Astrophysics Data System (ADS)

    Zhang, Difan; Zhang, Hanlin; Ge, Linqiang; Yu, Wei; Lu, Chao; Chen, Genshe; Pham, Khanh

    2012-06-01

    Cyber attacks are increasing in frequency, impact, and complexity, which demonstrate extensive network vulnerabilities with the potential for serious damage. Defending against cyber attacks calls for the distributed collaborative monitoring, detection, and mitigation. To this end, we develop a network sensor-based defense framework, with the aim of handling network security awareness, mitigation, and prediction. We implement the prototypical system and show its effectiveness on detecting known attacks, such as port-scanning and distributed denial-of-service (DDoS). Based on this framework, we also implement the statistical-based detection and sequential testing-based detection techniques and compare their respective detection performance. The future implementation of defensive algorithms can be provisioned in our proposed framework for combating cyber attacks.

  1. Detection of convective initiation using Meteosat SEVIRI: implementation in and verification with the tracking and nowcasting algorithm Cb-TRAM

    NASA Astrophysics Data System (ADS)

    Merk, D.; Zinner, T.

    2013-02-01

    In this paper a new detection scheme for convective initiation (CI) under day and night conditions is presented. The new algorithm combines the strengths of two existing methods for detecting convective initiation with geostationary satellite data and uses the channels of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG). For the new algorithm, five infrared criteria from the Satellite Convection Analysis and Tracking algorithm (SATCAST) and one criterion based on the High Resolution Visible (HRV) channel from Cb-TRAM were adapted. This set of criteria aims to identify the typical development of quickly developing convective cells in an early stage. The different criteria include time trends of the 10.8 IR channel, IR channel differences, and the time trends of those differences. To provide the trend fields, an optical-flow-based method is used: the pyramidal matching algorithm, which is part of Cb-TRAM. The new detection scheme is implemented in Cb-TRAM and is verified for seven days comprising different weather situations in Central Europe. Contrasted with the original early-stage detection scheme of Cb-TRAM, skill scores are provided. From the comparison against detections of later thunderstorm stages, which are also provided by Cb-TRAM, a decrease in false prior warnings (false alarm ratio) from 91 to 81%, an increase of the critical success index from 7.4 to 12.7%, and a decrease of the BIAS from 320 to 146% are reported for normal scan mode. Similar trends are found for rapid scan mode. Most obvious is the decline of false alarms found for synoptic conditions with upper cold air masses triggering convection.

  2. Detection of convective initiation using Meteosat SEVIRI: implementation in and verification with the tracking and nowcasting algorithm Cb-TRAM

    NASA Astrophysics Data System (ADS)

    Merk, D.; Zinner, T.

    2013-08-01

    In this paper a new detection scheme for convective initiation (CI) under day and night conditions is presented. The new algorithm combines the strengths of two existing methods for detecting CI with geostationary satellite data. It uses the channels of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG). For the new algorithm five infrared (IR) criteria from the Satellite Convection Analysis and Tracking algorithm (SATCAST) and one high-resolution visible channel (HRV) criterion from Cb-TRAM were adapted. This set of criteria aims to identify the typical development of quickly developing convective cells in an early stage. The different criteria include time trends of the 10.8 IR channel, and IR channel differences, as well as their time trends. To provide the trend fields an optical-flow-based method is used: the pyramidal matching algorithm, which is part of Cb-TRAM. The new detection scheme is implemented in Cb-TRAM, and is verified for seven days which comprise different weather situations in central Europe. Contrasted with the original early-stage detection scheme of Cb-TRAM, skill scores are provided. From the comparison against detections of later thunderstorm stages, which are also provided by Cb-TRAM, a decrease in false prior warnings (false alarm ratio) from 91 to 81% is presented, an increase of the critical success index from 7.4 to 12.7%, and a decrease of the BIAS from 320 to 146% for normal scan mode. Similar trends are found for rapid scan mode. Most obvious is the decline of false alarms found for the synoptic class "cold air" masses.

  3. Tracks detection from high-orbit space objects

    NASA Astrophysics Data System (ADS)

    Shumilov, Yu. P.; Vygon, V. G.; Grishin, E. A.; Konoplev, A. O.; Semichev, O. P.; Shargorodskii, V. D.

    2017-05-01

    The paper presents the results of studies of a combined algorithm for the detection of high-orbit space objects. Before the algorithm is applied, a series of frames containing weak tracks of space objects, which can be discrete, is recorded. The algorithm includes pre-processing, matched filtering of each frame (classical in astronomy) and its threshold processing, a shear transformation, median filtering of the transformed series of frames, repeated threshold processing, and detection decision making. Weak tracks of space objects were simulated on real frames of the night starry sky obtained with a stationary telescope. It is shown that the penetrating power (limiting magnitude) of the optoelectronic device is increased by almost 2 magnitudes.

  4. Stereo-vision-based terrain mapping for off-road autonomous navigation

    NASA Astrophysics Data System (ADS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-05-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as nogo regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  5. Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-01-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  6. Ship detection in panchromatic images: a new method and its DSP implementation

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Jiang, Zhiguo; Zhang, Haopeng; Wang, Mengfei; Meng, Gang

    2016-03-01

    In this paper, a new ship detection method is proposed after analyzing the characteristics of panchromatic remote sensing images and ship targets. First, AdaBoost (Adaptive Boosting) classifiers trained with Haar features are used for coarse detection of ship targets. Then the LSD (Line Segment Detector) is adopted to extract line features in the target slices for fine detection. Experimental results on a dataset of panchromatic remote sensing images with a spatial resolution of 2 m show that the proposed algorithm achieves a high detection rate and a low false alarm rate. Meanwhile, the algorithm can meet the needs of practical applications on a DSP (Digital Signal Processor).

  7. Implementation and testing of a sensor-netting algorithm for early warning and high confidence C/B threat detection

    NASA Astrophysics Data System (ADS)

    Gruber, Thomas; Grim, Larry; Fauth, Ryan; Tercha, Brian; Powell, Chris; Steinhardt, Kristin

    2011-05-01

    Large networks of disparate chemical/biological (C/B) sensors, MET sensors, and intelligence, surveillance, and reconnaissance (ISR) sensors reporting to various command/display locations can lead to conflicting threat information, questions of alarm confidence, and a confused situational awareness. Sensor netting algorithms (SNA) are being developed to resolve these conflicts and to report high confidence consensus threat map data products on a common operating picture (COP) display. A data fusion algorithm design was completed in a Phase I SBIR effort and development continues in the Phase II SBIR effort. The initial implementation and testing of the algorithm has produced some performance results. The algorithm accepts point and/or standoff sensor data, and event detection data (e.g., the location of an explosion) from various ISR sensors (e.g., acoustic, infrared cameras, etc.). These input data are preprocessed to assign estimated uncertainty to each incoming piece of data. The data are then sent to a weighted tomography process to obtain a consensus threat map, including estimated threat concentration level uncertainty. The threat map is then tested for consistency and the overall confidence for the map result is estimated. The map and confidence results are displayed on a COP. The benefits of a modular implementation of the algorithm and comparisons of fused / un-fused data results will be presented. The metrics for judging the sensor-netting algorithm performance are warning time, threat map accuracy (as compared to ground truth), false alarm rate, and false alarm rate v. reported threat confidence level.

  8. Algorithms for classification of astronomical object spectra

    NASA Astrophysics Data System (ADS)

    Wasiewicz, P.; Szuppe, J.; Hryniewicz, K.

    2015-09-01

    Obtaining interesting celestial objects from tens of thousands or even millions of recorded optical-ultraviolet spectra depends not only on the data quality but also on the accuracy of spectral decomposition. Additionally, rapidly growing data volumes demand higher computing power and/or more efficient algorithm implementations. In this paper we speed up the process of subtracting iron transitions and fitting Gaussian functions to emission peaks using C++ and OpenCL together with a NoSQL database. We also implement typical astronomical peak-detection methods and compare them to our previous hybrid methods implemented with CUDA.
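
    For the peak-fitting step, a minimal single-peak version can be sketched with SciPy on made-up data; the paper's pipeline runs many such fits in C++/OpenCL after removing iron transitions, so this is only a small-scale illustration and the initial-guess heuristics are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def fit_emission_peak(wavelength, flux, guess_mu):
    """Fit one Gaussian to an emission peak near guess_mu (illustrative only)."""
    p0 = [flux.max() - np.median(flux), guess_mu, 5.0, np.median(flux)]
    params, _ = curve_fit(gaussian, wavelength, flux, p0=p0)
    return params  # amp, mu, sigma, offset

# Synthetic example spectrum with one emission line plus noise
wl = np.linspace(4800, 5100, 600)
spec = gaussian(wl, 3.0, 4959.0, 4.0, 1.0) + 0.05 * np.random.randn(wl.size)
print(fit_emission_peak(wl, spec, guess_mu=4960.0))
```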

  9. An Intelligent Architecture Based on Field Programmable Gate Arrays Designed to Detect Moving Objects by Using Principal Component Analysis

    PubMed Central

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406
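
    The core idea, learning a PCA background subspace and thresholding the reconstruction error of each incoming frame, can be sketched on a CPU with scikit-learn. The component count and threshold below are illustrative assumptions; the paper's contribution is the parallel FPGA realization of these stages (correlation matrix, Jacobi diagonalization, projections), not this high-level logic.

```python
import numpy as np
from sklearn.decomposition import PCA

class PCABackgroundModel:
    """CPU sketch of PCA-based motion detection: learn a linear subspace from
    background frames, reconstruct each new frame in that subspace, and
    threshold the per-pixel reconstruction error."""
    def __init__(self, n_components=8, threshold=30.0):
        self.pca = PCA(n_components=n_components)
        self.threshold = threshold

    def fit(self, background_frames):
        # background_frames: (N, H, W) array with N >= n_components
        flat = background_frames.reshape(len(background_frames), -1).astype(float)
        self.pca.fit(flat)
        return self

    def foreground_mask(self, frame):
        flat = frame.reshape(1, -1).astype(float)
        recon = self.pca.inverse_transform(self.pca.transform(flat))
        diff = np.abs(flat - recon).reshape(frame.shape)
        return diff > self.threshold
```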

  10. An intelligent architecture based on Field Programmable Gate Arrays designed to detect moving objects by using Principal Component Analysis.

    PubMed

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices.

  11. Implementation Issues of Adaptive Energy Detection in Heterogeneous Wireless Networks

    PubMed Central

    Sobron, Iker; Eizmendi, Iñaki; Martins, Wallace A.; Diniz, Paulo S. R.; Ordiales, Juan Luis; Velez, Manuel

    2017-01-01

    Spectrum sensing (SS) enables the coexistence of non-coordinated heterogeneous wireless systems operating in the same band. Due to its computational simplicity, the energy detection (ED) technique has been widely employed in SS applications; nonetheless, the conventional ED may be unreliable under environmental impairments, justifying the use of ED-based variants. Assessing ED algorithms from theoretical and simulation viewpoints relies on several assumptions and simplifications which, eventually, lead to conclusions that do not necessarily meet the requirements imposed by real propagation environments. This work addresses those problems by dealing with practical implementation issues of adaptive least mean square (LMS)-based ED algorithms. The paper proposes a new adaptive ED algorithm that uses a variable step-size guaranteeing the LMS convergence in time-varying environments. Several implementation guidelines are provided and, additionally, an empirical assessment and validation with a software defined radio-based hardware is carried out. Experimental results show good performance in terms of probabilities of detection (Pd>0.9) and false alarm (Pf∼0.05) in a range of low signal-to-noise ratios around [-4,1] dB, in both single-node and cooperative modes. The proposed sensing methodology enables a seamless monitoring of the radio electromagnetic spectrum in order to provide band occupancy information for an efficient usage among several wireless communications systems. PMID:28441751
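
    As a loose illustration of the idea (not the authors' algorithm), the sketch below tracks the background energy with an LMS-style recursion whose step size grows when the tracking error is large, and declares detection when a window's energy exceeds the tracked level by a margin. The window length, step-size bounds, and margin are arbitrary assumptions.

```python
import numpy as np

def vss_lms_energy_detector(x, win=256, mu_min=1e-3, mu_max=0.1, margin_db=3.0):
    """Variable step-size (VSS) LMS-style tracking applied to energy detection:
    adapt the noise-level estimate quickly when the error is large, slowly
    otherwise, and flag windows whose energy exceeds the estimate by a margin."""
    n_win = len(x) // win
    energies = np.array([np.mean(np.abs(x[i * win:(i + 1) * win]) ** 2)
                         for i in range(n_win)])
    noise_est, decisions = energies[0], []
    for e in energies:
        err = e - noise_est
        mu = np.clip(abs(err) / (e + 1e-12), mu_min, mu_max)  # variable step size
        detected = e > noise_est * 10 ** (margin_db / 10)
        decisions.append(detected)
        if not detected:              # adapt only on (assumed) noise-only windows
            noise_est += mu * err
    return np.array(decisions)
```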

  12. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with image segmentation methods based on color space conversion, which allow efficient detection of a single color against a complex background and varying lighting, as well as detection of objects on a homogeneous background. The results of an analysis of segmentation algorithms of this type and the possibility of implementing them in software are presented. The implemented algorithm is computationally expensive, which limits its applicability to video analysis; however, it makes it possible to analyze objects in an image when no image dictionary or knowledge base is available, and to choose optimal frame quantization parameters for video analysis.

  13. A bioinspired collision detection algorithm for VLSI implementation

    NASA Astrophysics Data System (ADS)

    Cuadri, J.; Linan, G.; Stafford, R.; Keil, M. S.; Roca, E.

    2005-06-01

    In this paper a bioinspired algorithm for collision detection is proposed, based on previous models of the locust (Locusta migratoria) visual system reported by F. C. Rind and her group at the University of Newcastle upon Tyne. The algorithm is suitable for VLSI implementation in standard CMOS technologies as a system-on-chip for automotive applications. The working principle of the algorithm is to process a video stream that represents the current scenario, and to fire an alarm whenever an object approaches on a collision course. Moreover, it establishes a scale of warning states, from no danger to collision alarm, depending on the activity detected in the current scenario. In the worst case, the minimum time before collision at which the model fires the collision alarm is 40 msec (1 frame before, at 25 frames per second). Since the average time to successfully fire an airbag system is 2 msec, even in the worst case, this algorithm would be very helpful to more efficiently arm the airbag system, or even take some kind of collision avoidance countermeasures. Furthermore, two additional modules have been included: a "Topological Feature Estimator" and an "Attention Focusing Algorithm". The former takes into account the shape of the approaching object to decide whether it is a person, a road line or a car. This helps to take more adequate countermeasures and to filter false alarms. The latter centres the processing power on the most active zones of the input frame, thus saving memory and processing time resources.

  14. Integrated System Health Management (ISHM) for Test Stand and J-2X Engine: Core Implementation

    NASA Technical Reports Server (NTRS)

    Figueroa, Jorge F.; Schmalzel, John L.; Aguilar, Robert; Shwabacher, Mark; Morris, Jon

    2008-01-01

    ISHM capability enables a system to detect anomalies, determine causes and effects, predict future anomalies, and provides an integrated awareness of the health of the system to users (operators, customers, management, etc.). NASA Stennis Space Center, NASA Ames Research Center, and Pratt & Whitney Rocketdyne have implemented a core ISHM capability that encompasses the A1 Test Stand and the J-2X Engine. The implementation incorporates all aspects of ISHM; from anomaly detection (e.g. leaks) to root-cause-analysis based on failure mode and effects analysis (FMEA), to a user interface for an integrated visualization of the health of the system (Test Stand and Engine). The implementation provides a low functional capability level (FCL) in that it is populated with few algorithms and approaches for anomaly detection, and root-cause trees from a limited FMEA effort. However, it is a demonstration of a credible ISHM capability, and it is inherently designed for continuous and systematic augmentation of the capability. The ISHM capability is grounded on an integrating software environment used to create an ISHM model of the system. The ISHM model follows an object-oriented approach: includes all elements of the system (from schematics) and provides for compartmentalized storage of information associated with each element. For instance, a sensor object contains a transducer electronic data sheet (TEDS) with information that might be used by algorithms and approaches for anomaly detection, diagnostics, etc. Similarly, a component, such as a tank, contains a Component Electronic Data Sheet (CEDS). Each element also includes a Health Electronic Data Sheet (HEDS) that contains health-related information such as anomalies and health state. Some practical aspects of the implementation include: (1) near real-time data flow from the test stand data acquisition system through the ISHM model, for near real-time detection of anomalies and diagnostics, (2) insertion of the J-2X predictive model providing predicted sensor values for comparison with measured values and use in anomaly detection and diagnostics, and (3) insertion of third-party anomaly detection algorithms into the integrated ISHM model.

  15. UGS video target detection and discrimination

    NASA Astrophysics Data System (ADS)

    Roberts, G. Marlon; Fitzgerald, James; McCormack, Michael; Steadman, Robert; Vitale, Joseph D.

    2007-04-01

    This project focuses on developing electro-optic algorithms which rank images by their likelihood of containing vehicles and people. These algorithms have been applied to images obtained from Textron's Terrain Commander 2 (TC2) Unattended Ground Sensor system. The TC2 is a multi-sensor surveillance system used in military applications. It combines infrared, acoustic, seismic, magnetic, and electro-optic sensors to detect nearby targets. When targets are detected by the seismic and acoustic sensors, the system is triggered and images are taken in the visible and infrared spectrum. The original Terrain Commander system occasionally captured and transmitted an excessive number of images, sometimes triggered by undesirable targets such as swaying trees. This wasted communications bandwidth, increased power consumption, and resulted in a large amount of end-user time being spent evaluating unimportant images. The algorithms discussed here help alleviate these problems. These algorithms are currently optimized for infra-red images, which give the best visibility in a wide range of environments, but could be adapted to visible imagery as well. It is important that the algorithms be robust, with minimal dependency on user input. They should be effective when tracking varying numbers of targets of different sizes and orientations, despite the low resolutions of the images used. Most importantly, the algorithms must be appropriate for implementation on a low-power processor in real time. This would enable us to maintain frame rates of 2 Hz for effective surveillance operations. Throughout our project we have implemented several algorithms, and used an appropriate methodology to quantitatively compare their performance. They are discussed in this paper.

  16. Multi-site evaluation of a computer aided detection (CAD) algorithm for small acute intra-cranial hemorrhage and development of a stand-alone CAD system ready for deployment in a clinical environment

    NASA Astrophysics Data System (ADS)

    Deshpande, Ruchi R.; Fernandez, James; Lee, Joon K.; Chan, Tao; Liu, Brent J.; Huang, H. K.

    2010-03-01

    Timely detection of Acute Intra-cranial Hemorrhage (AIH) in an emergency environment is essential for the triage of patients suffering from Traumatic Brain Injury. Moreover, the small size of lesions and lack of experience on the reader's part could lead to difficulties in the detection of AIH. A CT based CAD algorithm for the detection of AIH has been developed in order to improve upon the current standard of identification and treatment of AIH. A retrospective analysis of the algorithm has already been carried out with 135 AIH CT studies and 135 matched normal head CT studies from the Los Angeles County General Hospital/University of Southern California Hospital System (LAC/USC). In the next step, AIH studies have been collected from Walter Reed Army Medical Center, and are currently being processed using the AIH CAD system as part of implementing a multi-site assessment and evaluation of the performance of the algorithm. The sensitivity and specificity numbers from the Walter Reed study will be compared with the numbers from the LAC/USC study to determine if there are differences in the presentation and detection due to the difference in the nature of trauma between the two sites. Simultaneously, a stand-alone system with a user friendly GUI has been developed to facilitate implementation in a clinical setting.

  17. Spectral implementation of some quantum algorithms by one- and two-dimensional nuclear magnetic resonance

    NASA Astrophysics Data System (ADS)

    Das, Ranabir; Kumar, Anil

    2004-10-01

    Quantum information processing has been effectively demonstrated on a small number of qubits by nuclear magnetic resonance. An important subroutine in any computing is the readout of the output. "Spectral implementation," originally suggested by Z. L. Madi, R. Bruschweiler, and R. R. Ernst [J. Chem. Phys. 109, 10603 (1999)], provides an elegant method of readout with the use of an extra "observer" qubit. At the end of computation, detection of the observer qubit provides the output via the multiplet structure of its spectrum. In spectral implementation by a two-dimensional experiment the observer qubit retains the memory of the input state during computation, thereby providing correlated information on input and output in the same spectrum. Spectral implementation of Grover's search algorithm, approximate quantum counting, a modified version of the Bernstein-Vazirani problem, and Hogg's algorithm are demonstrated here in three- and four-qubit systems.

  18. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors

    PubMed Central

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-01-01

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers; while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on a low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements; and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms. PMID:27240382
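
    For readers unfamiliar with labeling, a compact (if memory-hungry) two-pass union-find version of connected-component labeling is sketched below; the paper's single-scan, linked-list design achieves the same labeling with far less memory, which is the point of the work.

```python
import numpy as np

def label_blobs(binary):
    """Plain two-pass connected-component labeling (4-connectivity) using
    union-find; a reference point of comparison, not the paper's method."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]                       # parent[0] unused (background)

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y else 0
            left = labels[y, x - 1] if x else 0
            neighbors = [l for l in (up, left) if l]
            if not neighbors:
                parent.append(next_label)
                labels[y, x] = next_label
                next_label += 1
            else:
                m = min(find(l) for l in neighbors)
                labels[y, x] = m
                for l in neighbors:
                    parent[find(l)] = m
    # Second pass: resolve label equivalences.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```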

  19. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques and the OSGLR algorithm in particular is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implentation of the algorithm was briefly investigated.

  20. Towards the run and walk activity classification through step detection--an android application.

    PubMed

    Oner, Melis; Pulcifer-Stump, Jeffry A; Seeling, Patrick; Kaya, Tolga

    2012-01-01

    Falling is one of the most common accidents with potentially irreversible consequences, especially considering special groups, such as the elderly or disabled. One approach to solve this issue would be an early detection of the falling event. Towards reaching the goal of early fall detection, we have worked on distinguishing and monitoring some basic human activities such as walking and running. Since we plan to implement the system mostly for seniors and the disabled, simplicity of the usage becomes very important. We have successfully implemented an algorithm that would not require the acceleration sensor to be fixed in a specific position (the smart phone itself in our application), whereas most of the previous research dictates the sensor to be fixed in a certain direction. This algorithm reviews data from the accelerometer to determine if a user has taken a step or not and keeps track of the total amount of steps. After testing, the algorithm was more accurate than a commercial pedometer in terms of comparing outputs to the actual number of steps taken by the user.

  1. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.

  2. Testing and Validating Machine Learning Classifiers by Metamorphic Testing☆

    PubMed Central

    Xie, Xiaoyuan; Ho, Joshua W. K.; Murphy, Christian; Kaiser, Gail; Xu, Baowen; Chen, Tsong Yueh

    2011-01-01

    Machine Learning algorithms have provided core functionality to many application domains - such as bioinformatics, computational linguistics, etc. However, it is difficult to detect faults in such applications because often there is no “test oracle” to verify the correctness of the computed outputs. To help address software quality, in this paper we present a technique for testing the implementations of machine learning classification algorithms which support such applications. Our approach is based on the technique “metamorphic testing”, which has been shown to be effective in alleviating the oracle problem. Also presented are a case study on a real-world machine learning application framework and a discussion of how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also conduct mutation analysis and cross-validation, which reveal that our method has high effectiveness in killing mutants, and that observing expected cross-validation results alone is not sufficiently effective to detect faults in a supervised classification program. The effectiveness of metamorphic testing is further confirmed by the detection of real faults in a popular open-source classification program. PMID:21532969
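
    As a concrete illustration of a metamorphic relation (a generic example, not one of the paper's specific case studies), the test below checks that uniformly scaling all feature values leaves a k-nearest-neighbor classifier's predictions unchanged, since the ordering of Euclidean distances is preserved; a failed assertion would indicate a fault without needing an exact expected output.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

def test_scaling_invariance():
    """Metamorphic test: multiplying every feature (training and test) by the
    same positive constant must not change any kNN prediction."""
    X, y = load_iris(return_X_y=True)
    base = KNeighborsClassifier(n_neighbors=5).fit(X, y).predict(X)
    follow = KNeighborsClassifier(n_neighbors=5).fit(2.0 * X, y).predict(2.0 * X)
    assert np.array_equal(base, follow), "metamorphic relation violated"

test_scaling_invariance()
```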

  3. Integration of a Self-Coherence Algorithm into DISAT for Forced Oscillation Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Follum, James D.; Tuffner, Francis K.; Amidan, Brett G.

    2015-03-03

    With the increasing number of phasor measurement units on the power system, behaviors typically not observable on the power system are becoming more apparent. Oscillatory behavior on the power system, notably forced oscillations, is one such behavior. However, the large amounts of data coming from the PMUs make manually detecting and locating these oscillations difficult. To automate portions of the process, an oscillation detection routine was coded into the Data Integrity and Situational Awareness Tool (DISAT) framework. Integration into the DISAT framework allows forced oscillations to be detected and information about the event provided to operational engineers. The oscillation detection algorithm integrates with the data handling and atypical data detecting capabilities of DISAT, building off of a standard library of functions. This report details that integration with information on the algorithm, some implementation issues, and some sample results from the western United States’ power grid.

  4. eqMAXEL: A new automatic earthquake location algorithm implementation for Earthworm

    NASA Astrophysics Data System (ADS)

    Lisowski, S.; Friberg, P. A.; Sheen, D. H.

    2017-12-01

    A common problem with automated earthquake location systems for a local to regional scale seismic network is false triggering and false locations inside the network caused by larger regional to teleseismic distance earthquakes. This false location issue also presents a problem for earthquake early warning systems where societal impacts of false alarms can be very expensive. Towards solving this issue, Sheen et al. (2016) implemented a robust maximum-likelihood earthquake location algorithm known as MAXEL. It was shown, with both synthetic and real data for a small number of arrivals, that large regional events were easily identifiable through metrics in the MAXEL algorithm. In the summer of 2017, we collaboratively implemented the MAXEL algorithm into a fully functional Earthworm module and tested it in regions of the USA where false detections and alarming are observed. We show robust improvement in the ability of the Earthworm system to filter out regional and teleseismic events that would have falsely located inside the network using the traditional Earthworm hypoinverse solution. We also explore using different grid sizes in the implementation of the MAXEL algorithm, which was originally designed with South Korea as the target network size.

  5. Alarm systems detect volcanic tremor and earthquake swarms during Redoubt eruption, 2009

    NASA Astrophysics Data System (ADS)

    Thompson, G.; West, M. E.

    2009-12-01

    We ran two alarm algorithms on real-time data from Redoubt volcano during the 2009 crisis. The first algorithm was designed to detect escalations in continuous seismicity (tremor). This is implemented within an application called IceWeb which computes reduced displacement, and produces plots of reduced displacement and spectrograms linked to the Alaska Volcano Observatory internal webpage every 10 minutes. Reduced displacement is a measure of the amplitude of volcanic tremor, and is computed by applying a geometrical spreading correction to a displacement seismogram. When the reduced displacement at multiple stations exceeds pre-defined thresholds and there has been a factor of 3 increase in reduced displacement over the previous hour, a tremor alarm is declared. The second algorithm was designed to detect earthquake swarms. The mean and median event rates are computed every 5 minutes based on the last hour of data from a real-time event catalog. By comparing these with thresholds, three swarm alarm conditions can be declared: a new swarm, an escalation in a swarm, and the end of a swarm. The end of swarm alarm is important as it may mark a transition from swarm to continuous tremor. Alarms from both systems were dispatched using a generic alarm management system which implements a call-down list, allowing observatory scientists to be called in sequence until someone acknowledged the alarm via a confirmation web page. The results of this simple approach are encouraging. The tremor alarm algorithm detected 26 of the 27 explosive eruptions that occurred from 23 March - 4 April. The swarm alarm algorithm detected all five of the main volcanic earthquake swarm episodes which occurred during the Redoubt crisis on 26-27 February, 21-23 March, 26 March, 2-4 April and 3-7 May. The end-of-swarm alarms on 23 March and 4 April were particularly helpful as they were caused by transitions from swarm to tremor shortly preceding explosive eruptions; transitions which were detected much earlier by the swarm algorithm than they were by the tremor algorithm.
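
    The tremor-alarm logic described above reduces to a small decision rule. The sketch below assumes each station reports its reduced displacement now and one hour ago, and uses an illustrative two-station requirement and per-station thresholds; the operational thresholds were pre-defined per station at the observatory and are not reproduced here.

```python
def tremor_alarm(dr_history, thresholds, escalation_factor=3.0, min_stations=2):
    """Declare a tremor alarm when the current reduced displacement at enough
    stations exceeds its threshold AND has grown by ~3x over the past hour.
    dr_history maps station -> (dr_one_hour_ago, dr_now); units are cm^2."""
    triggered = [
        sta for sta, (dr_old, dr_now) in dr_history.items()
        if dr_now >= thresholds[sta]
        and dr_old > 0
        and dr_now / dr_old >= escalation_factor
    ]
    return len(triggered) >= min_stations, triggered

# Example with made-up station codes and values
history = {"ST1": (0.4, 1.8), "ST2": (0.5, 2.1), "ST3": (0.3, 0.5)}
alarm, stations = tremor_alarm(history, {"ST1": 1.0, "ST2": 1.0, "ST3": 1.0})
```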

  6. Algorithm for Automatic Segmentation of Nuclear Boundaries in Cancer Cells in Three-Channel Luminescent Images

    NASA Astrophysics Data System (ADS)

    Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.

    2015-09-01

    We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.

  7. Automatic cardiac cycle determination directly from EEG-fMRI data by multi-scale peak detection method.

    PubMed

    Wong, Chung-Ki; Luo, Qingfei; Zotev, Vadim; Phillips, Raquel; Chan, Kam Wai Clifford; Bodurka, Jerzy

    2018-03-31

    In simultaneous EEG-fMRI, identification of the period of the cardioballistic artifact (BCG) in EEG is required for the artifact removal. Recording the electrocardiogram (ECG) waveform during fMRI is difficult, often causing inaccurate period detection. Since the waveform of the BCG extracted by independent component analysis (ICA) is relatively invariable compared to the ECG waveform, we propose a multiple-scale peak-detection algorithm to determine the BCG cycle directly from the EEG data. The algorithm first extracts the high contrast BCG component from the EEG data by ICA. The BCG cycle is then estimated by band-pass filtering the component around the fundamental frequency identified from its energy spectral density, and the peak of BCG artifact occurrence is selected from each of the estimated cycles. The algorithm is shown to achieve a high accuracy on a large EEG-fMRI dataset. It is also adaptive to various heart rates without the need to adjust the threshold parameters. The cycle detection remains accurate with the scan duration reduced to half a minute. Additionally, the algorithm gives a figure of merit to evaluate the reliability of the detection accuracy. The algorithm is shown to give a higher detection accuracy than the commonly used cycle detection algorithm fmrib_qrsdetect implemented in EEGLAB. The achieved high cycle detection accuracy of our algorithm without using the ECG waveforms makes it possible to create and automate pipelines for processing large EEG-fMRI datasets, and virtually eliminates the need for ECG recordings for BCG artifact removal. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
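
    A much-simplified version of the cycle-detection idea can be sketched with SciPy: estimate the fundamental frequency of the ICA-extracted BCG component from its spectrum, band-pass around it, and pick roughly one peak per cardiac cycle. The band width, heart-rate range, and peak spacing below are assumptions, and the paper's multi-scale procedure and reliability figure of merit are not reproduced.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def bcg_peak_times(bcg, fs, band_halfwidth=0.5):
    """Estimate the fundamental frequency of the BCG component, band-pass
    around it, and pick approximately one peak per estimated cycle."""
    freqs = np.fft.rfftfreq(len(bcg), 1.0 / fs)
    psd = np.abs(np.fft.rfft(bcg - np.mean(bcg))) ** 2
    mask = (freqs > 0.5) & (freqs < 3.0)          # assume 30-180 bpm
    f0 = freqs[mask][np.argmax(psd[mask])]
    low = max(f0 - band_halfwidth, 0.3) / (fs / 2.0)
    high = (f0 + band_halfwidth) / (fs / 2.0)
    b, a = butter(3, [low, high], btype="band")
    smooth = filtfilt(b, a, bcg)
    peaks, _ = find_peaks(smooth, distance=int(0.6 * fs / f0))
    return peaks / fs, f0
```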

  8. Aquatic Debris Detection Using Embedded Camera Sensors

    PubMed Central

    Wang, Yong; Wang, Dianhong; Lu, Qian; Luo, Dapeng; Fang, Wu

    2015-01-01

    Aquatic debris monitoring is of great importance to human health, aquatic habitats and water transport. In this paper, we first introduce the prototype of an aquatic sensor node equipped with an embedded camera sensor. Based on this sensing platform, we propose a fast and accurate debris detection algorithm. Our method is specifically designed based on compressive sensing theory to give full consideration to the unique challenges in aquatic environments, such as waves, swaying reflections, and tight energy budget. To upload debris images, we use an efficient sparse recovery algorithm in which only a few linear measurements need to be transmitted for image reconstruction. Besides, we implement the host software and test the debris detection algorithm on realistically deployed aquatic sensor nodes. The experimental results demonstrate that our approach is reliable and feasible for debris detection using camera sensors in aquatic environments. PMID:25647741

  9. Toward an Objective Enhanced-V Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Brunner, Jason; Feltz, Wayne; Moses, John; Rabin, Robert; Ackerman, Steven

    2007-01-01

    The area of coldest cloud tops above thunderstorms sometimes has a distinct V or U shape. This pattern, often referred to as an "enhanced-V" signature, has been observed to occur during and preceding severe weather in previous studies. This study describes an algorithmic approach to objectively detect enhanced-V features with observations from the Geostationary Operational Environmental Satellite and Low Earth Orbit data. The methodology consists of cross correlation statistics of pixels and thresholds of enhanced-V quantitative parameters. The effectiveness of the enhanced-V detection method will be examined using Geostationary Operational Environmental Satellite, MODerate-resolution Imaging Spectroradiometer, and Advanced Very High Resolution Radiometer image data from case studies in the 2003-2006 seasons. The main goal of this study is to develop an objective enhanced-V detection algorithm for future implementation into operations with future sensors, such as GOES-R.

  10. Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks

    PubMed Central

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through analyzing the detection process, the parameter relationship of the CNN is mapped as an optimization problem, in which an improved particle swarm optimization algorithm based on bubble sort is used to solve the problem. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best fit heuristic algorithm, this approach reduces the processing time, and emerging evidence has indicated that this new approach is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing the VM migration detection to be performed better. PMID:24959631

  11. Global detection of live virtual machine migration based on cellular neural networks.

    PubMed

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through analyzing the detection process, the parameter relationship of the CNN is mapped as an optimization problem, in which an improved particle swarm optimization algorithm based on bubble sort is used to solve the problem. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best fit heuristic algorithm, this approach reduces the processing time, and emerging evidence has indicated that this new approach is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing the VM migration detection to be performed better.

  12. Accurate derivation of heart rate variability signal for detection of sleep disordered breathing in children.

    PubMed

    Chatlapalli, S; Nazeran, H; Melarkod, V; Krishnam, R; Estrada, E; Pamula, Y; Cabrera, S

    2004-01-01

    The electrocardiogram (ECG) signal is used extensively as a low cost diagnostic tool to provide information concerning the heart's state of health. Accurate determination of the QRS complex, in particular, reliable detection of the R wave peak, is essential in computer based ECG analysis. ECG data from Physionet's Sleep-Apnea database were used to develop, test, and validate a robust heart rate variability (HRV) signal derivation algorithm. The HRV signal was derived from pre-processed ECG signals by developing an enhanced Hilbert transform (EHT) algorithm with built-in missing beat detection capability for reliable QRS detection. The performance of the EHT algorithm was then compared against that of a popular Hilbert transform-based (HT) QRS detection algorithm. Autoregressive (AR) modeling of the HRV power spectrum for both EHT- and HT-derived HRV signals was achieved and different parameters from their power spectra as well as approximate entropy were derived for comparison. Poincare plots were then used as a visualization tool to highlight the detection of the missing beats in the EHT method. After validation of the EHT algorithm on ECG data from Physionet, the algorithm was further tested and validated on a dataset obtained from children undergoing polysomnography for detection of sleep disordered breathing (SDB). Sensitive measures of accurate HRV signals were then derived to be used in detecting and diagnosing sleep disordered breathing in children. All signal processing algorithms were implemented in MATLAB. We present a description of the EHT algorithm and analyze pilot data for eight children undergoing nocturnal polysomnography. The pilot data demonstrated that the EHT method provides an accurate way of deriving the HRV signal and plays an important role in extraction of reliable measures to distinguish between periods of normal and sleep disordered breathing in children.
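
    A plain Hilbert-transform R-peak detector (without the EHT algorithm's missing-beat logic) can be sketched as follows; the band edges, envelope threshold, and refractory period are illustrative choices, and the RR-interval series it returns is the HRV signal referred to above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def rr_intervals_hilbert(ecg, fs):
    """Band-pass to emphasize the QRS, differentiate, take the Hilbert
    envelope, and find R peaks; successive R-R differences form the HRV
    signal. A generic HT-style detector, not the paper's EHT algorithm."""
    b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    qrs_band = filtfilt(b, a, ecg)
    envelope = np.abs(hilbert(np.gradient(qrs_band)))
    peaks, _ = find_peaks(envelope,
                          height=3 * np.median(envelope),
                          distance=int(0.3 * fs))   # ~300 ms refractory period
    r_times = peaks / fs
    return np.diff(r_times)                          # RR intervals in seconds
```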

  13. QRS detection based ECG quality assessment.

    PubMed

    Hayn, Dieter; Jammerbund, Bernhard; Schreier, Günter

    2012-09-01

    Although immediate feedback concerning ECG signal quality during recording is useful, up to now not much literature describing quality measures is available. We have implemented and evaluated four ECG quality measures. Empty lead criterion (A), spike detection criterion (B) and lead crossing point criterion (C) were calculated from basic signal properties. Measure D quantified the robustness of QRS detection when applied to the signal. An advanced Matlab-based algorithm combining all four measures and a simplified algorithm for Android platforms, excluding measure D, were developed. Both algorithms were evaluated by taking part in the Computing in Cardiology Challenge 2011. Each measure's accuracy and computing time was evaluated separately. During the challenge, the advanced algorithm correctly classified 93.3% of the ECGs in the training-set and 91.6 % in the test-set. Scores for the simplified algorithm were 0.834 in event 2 and 0.873 in event 3. Computing time for measure D was almost five times higher than for other measures. Required accuracy levels depend on the application and are related to computing time. While our simplified algorithm may be accurate for real-time feedback during ECG self-recordings, QRS detection based measures can further increase the performance if sufficient computing power is available.
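
    Simplified interpretations of three of these measures (not the authors' implementations) might look like the following; the flatness, slope, and stability thresholds are assumptions, and measure D is parameterized by whatever QRS detector is available.

```python
import numpy as np

def empty_lead(lead, flat_fraction=0.9):
    """Measure A (sketch): a lead is 'empty' if most samples barely change."""
    return np.mean(np.abs(np.diff(lead)) < 1e-3) > flat_fraction

def has_spikes(lead, fs, max_slope_mv_per_ms=10.0):
    """Measure B (sketch): flag implausibly steep sample-to-sample jumps."""
    slope = np.abs(np.diff(lead)) * fs / 1000.0      # mV per ms, lead in mV
    return bool(np.any(slope > max_slope_mv_per_ms))

def qrs_robustness(lead, detector, n_trials=5, noise_std=0.05, tol=2):
    """Measure D (sketch): run a QRS detector on noisy copies of the lead and
    check that the beat count stays stable. `detector` is any callable that
    returns R-peak indices."""
    counts = [len(detector(lead + np.random.randn(len(lead)) * noise_std))
              for _ in range(n_trials)]
    return max(counts) - min(counts) <= tol
```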

  14. Sideband Algorithm for Automatic Wind Turbine Gearbox Fault Detection and Diagnosis: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zappala, D.; Tavner, P.; Crabtree, C.

    2013-01-01

    Improving the availability of wind turbines (WT) is critical to minimize the cost of wind energy, especially for offshore installations. As gearbox downtime has a significant impact on WT availabilities, the development of reliable and cost-effective gearbox condition monitoring systems (CMS) is of great concern to the wind industry. Timely detection and diagnosis of developing gear defects within a gearbox is an essential part of minimizing unplanned downtime of wind turbines. Monitoring signals from WT gearboxes are highly non-stationary as turbine load and speed vary continuously with time. Time-consuming and costly manual handling of large amounts of monitoring data represents one of the main limitations of most current CMSs, so automated algorithms are required. This paper presents a fault detection algorithm for incorporation into a commercial CMS for automatic gear fault detection and diagnosis. The algorithm allowed the assessment of gear fault severity by tracking progressive tooth gear damage during variable speed and load operating conditions of the test rig. Results show that the proposed technique proves efficient and reliable for detecting gear damage. Once implemented into WT CMSs, this algorithm can automate data interpretation, reducing the quantity of information that WT operators must handle.

  15. Human Movement Recognition Based on the Stochastic Characterisation of Acceleration Data

    PubMed Central

    Munoz-Organero, Mario; Lotfi, Ahmad

    2016-01-01

    Human activity recognition algorithms based on information obtained from wearable sensors are successfully applied in detecting many basic activities. Identified activities with time-stationary features are characterised inside a predefined temporal window by using different machine learning algorithms on extracted features from the measured data. Better accuracy, precision and recall levels could be achieved by combining the information from different sensors. However, detecting short and sporadic human movements, gestures and actions is still a challenging task. In this paper, a novel algorithm to detect human basic movements from wearable measured data is proposed and evaluated. The proposed algorithm is designed to minimise computational requirements while achieving acceptable accuracy levels based on characterising some particular points in the temporal series obtained from a single sensor. The underlying idea is that this algorithm would be implemented in the sensor device in order to pre-process the sensed data stream before sending the information to a central point combining the information from different sensors to improve accuracy levels. Intra- and inter-person validation is used for two particular cases: single step detection and fall detection and classification using a single tri-axial accelerometer. Relevant results for the above cases and pertinent conclusions are also presented. PMID:27618063

  16. Community Detection on the GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naim, Md; Manne, Fredrik; Halappanavar, Mahantesh

    We present and evaluate a new GPU algorithm based on the Louvain method for community detection. Our algorithm is the first for this problem that parallelizes the access to individual edges. In this way we can fine tune the load balance when processing networks with nodes of highly varying degrees. This is achieved by scaling the number of threads assigned to each node according to its degree. Extensive experiments show that we obtain speedups up to a factor of 270 compared to the sequential algorithm. The algorithm consistently outperforms other recent shared memory implementations and is only one order of magnitude slower than the current fastest parallel Louvain method running on a Blue Gene/Q supercomputer using more than 500K threads.

  17. Robust and unobtrusive algorithm based on position independence for step detection

    NASA Astrophysics Data System (ADS)

    Qiu, KeCheng; Li, MengYang; Luo, YiHan

    2018-04-01

    Running is becoming one of the most popular forms of exercise, and monitoring steps can help users better understand their running and improve exercise efficiency. In this paper, we design and implement a robust and unobtrusive position-independent algorithm for step detection in real environments. It applies a Butterworth filter to suppress high-frequency interference and then employs a mathematical projection to transform the coordinate system, solving the problem of the unknown position of the smartphone. Finally, a sliding window is used to suppress false peaks. The algorithm was tested with eight participants on the Android 7.0 platform. In our experiments, the results show that the proposed algorithm achieves the desired accuracy regardless of device pose.
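
    The filtering-plus-peak-picking idea can be sketched with SciPy. Note that instead of the paper's projection-based coordinate transform, this sketch sidesteps the unknown device pose by using the acceleration magnitude, and the cutoff frequency, window length, and peak height are illustrative values.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def count_steps(acc_xyz, fs, cutoff_hz=3.0, window_s=0.35):
    """Orientation-free step counting sketch: low-pass the acceleration
    magnitude with a Butterworth filter, then keep at most one peak per
    sliding window to suppress false peaks."""
    mag = np.linalg.norm(acc_xyz, axis=1)            # (N, 3) -> (N,)
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    smooth = filtfilt(b, a, mag - mag.mean())
    peaks, _ = find_peaks(smooth,
                          height=0.3 * smooth.std(),
                          distance=int(window_s * fs))
    return len(peaks)
```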

  18. Alignment-free detection of horizontal gene transfer between closely related bacterial genomes.

    PubMed

    Domazet-Lošo, Mirjana; Haubold, Bernhard

    2011-09-01

    Bacterial epidemics are often caused by strains that have acquired their increased virulence through horizontal gene transfer. Due to this association with disease, the detection of horizontal gene transfer continues to receive attention from microbiologists and bioinformaticians alike. Most software for detecting transfer events is based on alignments of sets of genes or of entire genomes. But despite great advances in the design of algorithms and computer programs, genome alignment remains computationally challenging. We have therefore developed an alignment-free algorithm for rapidly detecting horizontal gene transfer between closely related bacterial genomes. Our implementation of this algorithm is called alfy for "ALignment Free local homologY" and is freely available from http://guanine.evolbio.mpg.de/alfy/. In this comment we demonstrate the application of alfy to the genomes of Staphylococcus aureus. We also argue that, contrary to popular belief and in spite of increasing computer speed, algorithmic optimization is becoming more, not less, important if genome data continues to accumulate at the present rate.

  19. The algorithm for automatic detection of the calibration object

    NASA Astrophysics Data System (ADS)

    Artem, Kruglov; Irina, Ugfeld

    2017-06-01

    The problem of automatic image calibration is considered in this paper. The most challenging task in automatic calibration is proper detection of the calibration object. Solving this problem required applying digital image processing methods and algorithms such as morphology, filtering, edge detection, and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. The automatic calibration module was tested under the production conditions of a logging enterprise. In these tests the calibration object was isolated automatically in 86.1% of cases on average, with no type I errors. The algorithm was implemented in the automatic calibration module within mobile software for log deck volume measurement.

  20. System for Anomaly and Failure Detection (SAFD) system development

    NASA Technical Reports Server (NTRS)

    Oreilly, D.

    1992-01-01

    This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units: one unit to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one unit to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot fire tests demonstrated that the SAFD algorithm could detect engine failure as much as tens of seconds before the redline system recognized the failure. Although the current algorithm only operates during steady state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions.

  1. Face detection assisted auto exposure: supporting evidence from a psychophysical study

    NASA Astrophysics Data System (ADS)

    Jin, Elaine W.; Lin, Sheng; Dharumalingam, Dhandapani

    2010-01-01

    Face detection has been implemented in many digital still cameras and camera phones with the promise of enhancing existing camera functions (e.g. auto exposure) and adding new features to cameras (e.g. blink detection). In this study we examined the use of face detection algorithms in assisting auto exposure (AE). The set of 706 images, used in this study, was captured using Canon Digital Single Lens Reflex cameras and subsequently processed with an image processing pipeline. A psychophysical study was performed to obtain optimal exposure along with the upper and lower bounds of exposure for all 706 images. Three methods of marking faces were utilized: manual marking, face detection algorithm A (FD-A), and face detection algorithm B (FD-B). The manual marking method found 751 faces in 426 images, which served as the ground-truth for face regions of interest. The remaining images do not have any faces or the faces are too small to be considered detectable. The two face detection algorithms are different in resource requirements and in performance. FD-A uses less memory and gate counts compared to FD-B, but FD-B detects more faces and has less false positives. A face detection assisted auto exposure algorithm was developed and tested against the evaluation results from the psychophysical study. The AE test results showed noticeable improvement when faces were detected and used in auto exposure. However, the presence of false positives would negatively impact the added benefit.
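
    One way such face-assisted metering is commonly arranged (a generic sketch, not the algorithm evaluated in this study) is to blend the mean luminance of the detected face regions with the global frame mean and drive exposure toward a mid-tone target; the weight and target value below are assumptions.

```python
import numpy as np

def face_weighted_exposure_error(gray, faces, target=118.0, face_weight=0.7):
    """Blend face-region luminance with the global mean and report how far
    the metered value is from a mid-tone target; a positive error suggests
    increasing exposure. `faces` holds (x, y, w, h) boxes from any detector."""
    frame_mean = float(gray.mean())
    if len(faces) == 0:
        return target - frame_mean
    face_mean = float(np.mean([gray[y:y + h, x:x + w].mean()
                               for (x, y, w, h) in faces]))
    metered = face_weight * face_mean + (1.0 - face_weight) * frame_mean
    return target - metered
```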

  2. Bio-ALIRT biosurveillance detection algorithm evaluation.

    PubMed

    Siegrist, David; Pavlin, J

    2004-09-24

    Early detection of disease outbreaks by a medical biosurveillance system relies on two major components: 1) the contribution of early and reliable data sources and 2) the sensitivity, specificity, and timeliness of biosurveillance detection algorithms. This paper describes an effort to assess leading detection algorithms by arranging a common challenge problem and providing a common data set. The objectives of this study were to determine whether automated detection algorithms can reliably and quickly identify the onset of natural disease outbreaks that are surrogates for possible terrorist pathogen releases, and do so at acceptable false-alert rates (e.g., once every 2-6 weeks). Historic de-identified data were obtained from five metropolitan areas over 23 months; these data included International Classification of Diseases, Ninth Revision (ICD-9) codes related to respiratory and gastrointestinal illness syndromes. An outbreak detection group identified and labeled two natural disease outbreaks in these data and provided them to analysts for training of detection algorithms. All outbreaks in the remaining test data were identified but not revealed to the detection groups until after their analyses. The algorithms established a probability of outbreak for each day's counts. The probability of outbreak was assessed as an "actual" alert for different false-alert rates. The best algorithms were able to detect all of the outbreaks at false-alert rates of one every 2-6 weeks. They were often able to detect an outbreak on the same day that human investigators had identified as its true start. Because minimal data exists for an actual biologic attack, determining how quickly an algorithm might detect such an attack is difficult. However, application of these algorithms in combination with other data-analysis methods to historic outbreak data indicates that biosurveillance techniques for analyzing syndrome counts can rapidly detect seasonal respiratory and gastrointestinal illness outbreaks. Further research is needed to assess the value of electronic data sources for predictive detection. In addition, simulations need to be developed and implemented to better characterize the size and type of biologic attack that can be detected by current methods by challenging them under different projected operational conditions.

  3. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to the next generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data was collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using the nonlinear least squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering the hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.

  4. Digitally balanced detection for optical tomography.

    PubMed

    Hafiz, Rehan; Ozanyan, Krikor B

    2007-10-01

    Analog balanced photodetection has found extensive use for sensing a weak absorption signal buried in laser intensity noise. This paper proposes schemes for a compact, affordable, and flexible digital implementation of the already established analog balanced detection, as part of a multichannel digital tomography system. Variants of digitally balanced detection (DBD) schemes, suitable for weak signals on a largely varying background or weakly varying envelopes of high frequency carrier waves, are introduced analytically and elaborated in terms of algorithmic and hardware flow. The DBD algorithms are implemented on low-cost general purpose reconfigurable hardware (a field-programmable gate array), utilizing less than half of its resources. The performance of the DBD schemes compares favorably with that of their analog counterpart: a common mode rejection ratio of 50 dB was observed over a bandwidth of 300 kHz, limited mainly by the host digital hardware. The close relationship between the DBD outputs and those of known analog balancing circuits is discussed in principle and shown experimentally in the example case of propane gas detection.

  5. Lidar detection algorithm for time and range anomalies.

    PubMed

    Ben-David, Avishai; Davidson, Charles E; Vanderbeek, Richard G

    2007-10-10

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t(1) to t(2)" is addressed, and for range anomaly where the question "is a target present at time t within ranges R(1) and R(2)" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO(2) lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.
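
    The following minimal sketch illustrates the Gaussian-mixture / expectation-maximization step on a set of synthetic detection scores using scikit-learn. It is not the paper's CO(2) lidar implementation; the score distributions and the 0.5 posterior threshold are assumptions.

```python
# Sketch: two-hypothesis Gaussian mixture fitted by EM (scikit-learn) on
# synthetic detection scores, then a posterior-probability decision.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, 900)             # scores with no target present
anomaly = rng.normal(4.0, 1.0, 100)                # scores when a cloud is present
scores = np.concatenate([background, anomaly]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
target_comp = int(np.argmax(gmm.means_))           # higher-mean component = "anomaly"
posterior = gmm.predict_proba(scores)[:, target_comp]

detections = posterior > 0.5                       # assumed decision threshold
print("flagged", int(detections.sum()), "of", scores.shape[0], "measurements")
```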

  6. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms

    PubMed Central

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications like those in medical care and industrial processes. This technique in medical care has the ability to display important patient information graphically, which can supplement and help the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one of the ways of real-time image processing. Edge detection is an early stage in most of the image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts’ Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified further to run in a fully parallel manner. This has been achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2–100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms. PMID:28487831

  7. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms.

    PubMed

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications like those in medical care and industrial processes. This technique in medical care has the ability to display important patient information graphically, which can supplement and help the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one of the ways of real-time image processing. Edge detection is an early stage in most of the image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts' Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified further to run in a fully parallel manner. This has been achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2-100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms.
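
    For reference, the sketch below runs the edge detectors named above on the CPU with standard OpenCV calls; it does not reproduce the paper's CUDA implementation, and the input file name and thresholds are placeholders.

```python
# Sketch: the named edge detectors via standard OpenCV calls on the CPU.
# The input file name and thresholds are placeholders.
import cv2
import numpy as np

img = cv2.imread("oct_slice.png", cv2.IMREAD_GRAYSCALE)      # placeholder input
if img is None:
    img = (np.random.rand(512, 512) * 255).astype(np.uint8)  # fallback test image

canny = cv2.Canny(img, 50, 150)                    # hysteresis thresholds assumed

sx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
sy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel = cv2.convertScaleAbs(np.hypot(sx, sy))

kx = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=np.float32)
px = cv2.filter2D(img, cv2.CV_64F, kx)
py = cv2.filter2D(img, cv2.CV_64F, kx.T)
prewitt = cv2.convertScaleAbs(np.hypot(px, py))

for name, edges in [("canny", canny), ("sobel", sobel), ("prewitt", prewitt)]:
    cv2.imwrite(f"edges_{name}.png", edges)
```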

  8. Adaptive thresholding with inverted triangular area for real-time detection of the heart rate from photoplethysmogram traces on a smartphone.

    PubMed

    Jiang, Wen Jun; Wittek, Peter; Zhao, Li; Gao, Shi Chao

    2014-01-01

    Photoplethysmogram (PPG) signals acquired by smartphone cameras are weaker than those acquired by dedicated pulse oximeters. Furthermore, the signals have lower sampling rates, have notches in the waveform and are more severely affected by baseline drift, leading to specific morphological characteristics. This paper introduces a new feature, the inverted triangular area, to address these specific characteristics. The new feature enables real-time adaptive waveform detection using an algorithm of linear time complexity. It can also recognize notches in the waveform and it is inherently robust to baseline drift. An implementation of the algorithm on Android is available for free download. We collected data from 24 volunteers and compared our algorithm in peak detection with two competing algorithms designed for PPG signals, Incremental-Merge Segmentation (IMS) and Adaptive Thresholding (ADT). A sensitivity of 98.0% and a positive predictive value of 98.8% were obtained, which were 7.7% higher than the IMS algorithm in sensitivity, and 8.3% higher than the ADT algorithm in positive predictive value. The experimental results confirmed the applicability of the proposed method.
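
    A minimal sketch of a triangular-area style feature follows: each sample forms a triangle with its neighbours w samples away, and the signed (shoelace) area is large in magnitude at sharp pulses while remaining near zero on slow baseline drift. The exact feature definition, window size, and threshold used in the paper may differ.

```python
# Sketch: a triangular-area style feature for PPG peak candidates. Window size
# and threshold are assumptions, not the paper's exact definition.
import numpy as np

def triangular_area(sig, w=10):
    sig = np.asarray(sig, dtype=float)
    area = np.zeros_like(sig)
    for i in range(w, len(sig) - w):
        x1, y1 = i - w, sig[i - w]
        x2, y2 = i, sig[i]
        x3, y3 = i + w, sig[i + w]
        # Shoelace formula for the signed triangle area.
        area[i] = 0.5 * ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    return area

fs = 100                                           # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
beats = sum(np.exp(-((t - b) ** 2) / (2 * 0.05 ** 2)) for b in np.arange(0.5, 10, 0.8))
ppg = beats + 0.2 * np.sin(0.5 * np.pi * t) + 0.02 * np.random.randn(t.size)

a = triangular_area(ppg, w=10)
candidates = np.where(-a > 3 * np.std(a))[0]       # upward peaks give negative signed area
print("candidate peak times (s):", np.round(t[candidates], 2))
```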

  9. A targeted change-detection procedure by combining change vector analysis and post-classification approach

    NASA Astrophysics Data System (ADS)

    Ye, Su; Chen, Dongmei; Yu, Jie

    2016-04-01

    In remote sensing, conventional supervised change-detection methods usually require effective training data for multiple change types. This paper introduces a more flexible and efficient procedure that seeks to identify only the changes that users are interested in, hereafter referred to as "targeted change detection". Based on a one-class classifier, "Support Vector Domain Description (SVDD)", a novel algorithm named "Three-layer SVDD Fusion (TLSF)" is developed specifically for targeted change detection. The proposed algorithm combines one-class classifications generated from change vector maps as well as from the before- and after-change images in order to obtain a more reliable detection result. In addition, this paper introduces a detailed workflow for implementing this algorithm. This workflow has been applied to two case studies with different practical monitoring objectives: urban expansion and forest fire assessment. The experimental results of these two case studies show that the overall accuracy of our proposed algorithm is superior (Kappa statistics are 86.3% and 87.8% for Cases 1 and 2, respectively) compared to applying SVDD to change vector analysis and post-classification comparison.

  10. LP-LPA: A link influence-based label propagation algorithm for discovering community structures in networks

    NASA Astrophysics Data System (ADS)

    Berahmand, Kamal; Bouyer, Asgarali

    2018-03-01

    Community detection is an essential approach for analyzing the structural and functional properties of complex networks. Although many community detection algorithms have been recently presented, most of them are weak and limited in different ways. The Label Propagation Algorithm (LPA) is a well-known and efficient community detection technique which is characterized by the merits of nearly-linear running time and easy implementation. However, LPA has some significant problems such as instability, randomness, and monster community detection. In this paper, an algorithm, namely the node's label influence policy for the label propagation algorithm (LP-LPA), was proposed for detecting efficient community structures. LP-LPA measures a link strength value for edges and a label influence value for nodes, and uses them in a new label propagation strategy that gives preference to link strength for initial node selection, avoids random behavior in tie-break states, and applies an efficient update order and update rule. These procedures can resolve the randomness issue in the original LPA and stabilize the discovered communities across all runs on the same network. Experiments on synthetic networks and a wide range of real-world social networks indicated that the proposed method achieves significant accuracy and high stability. Indeed, it can effectively solve the monster community problem when detecting communities in networks.
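
    For orientation, the sketch below runs the baseline label propagation community detection that LP-LPA improves upon, using NetworkX's built-in implementation on a standard test graph; LP-LPA itself is not part of NetworkX.

```python
# Sketch: baseline label propagation community detection with NetworkX
# (LP-LPA is not included in NetworkX; this is only the LPA starting point).
import networkx as nx
from networkx.algorithms.community import asyn_lpa_communities

G = nx.karate_club_graph()                         # standard small test network
communities = list(asyn_lpa_communities(G, seed=42))
print(f"found {len(communities)} communities")
for i, members in enumerate(communities):
    print(i, sorted(members))
```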

  11. Automatic Whistler Detector and Analyzer system: Implementation of the analyzer algorithm

    NASA Astrophysics Data System (ADS)

    Lichtenberger, JáNos; Ferencz, Csaba; Hamar, Daniel; Steinbach, Peter; Rodger, Craig J.; Clilverd, Mark A.; Collier, Andrew B.

    2010-12-01

    The full potential of whistlers for monitoring plasmaspheric electron density variations has not yet been realized. The primary reason is the vast human effort required for the analysis of whistler traces. Recently, the first part of a complete whistler analysis procedure was successfully automated, i.e., the automatic detection of whistler traces from the raw broadband VLF signal was achieved. This study describes a new algorithm developed to determine plasmaspheric electron density measurements from whistler traces, based on a Virtual (Whistler) Trace Transformation using a 2-D fast Fourier transform. This algorithm can be automated and can thus form the final step to complete an Automatic Whistler Detector and Analyzer (AWDA) system. In this second AWDA paper, the practical implementation of the Automatic Whistler Analyzer (AWA) algorithm is discussed and a feasible solution is presented. The practical implementation of the algorithm is able to track the variations of the plasmasphere in quasi real time on a PC cluster with 100 CPU cores. The electron densities obtained by the AWA method can be used in investigations such as plasmasphere dynamics, ionosphere-plasmasphere coupling, or in space weather models.

  12. Improving serum calcium test ordering according to a decision algorithm.

    PubMed

    Faria, Daniel K; Taniguchi, Leandro U; Fonseca, Luiz A M; Ferreira-Junior, Mario; Aguiar, Francisco J B; Lichtenstein, Arnaldo; Sumita, Nairo M; Duarte, Alberto J S; Sales, Maria M

    2018-05-18

    To detect differences in the pattern of serum calcium test ordering before and after the implementation of a decision algorithm. We studied patients admitted to an internal medicine ward of a university hospital in April 2013 and April 2016. Patients were classified as critical or non-critical on the day when each test was performed. Adequacy of ordering was defined according to adherence to a decision algorithm implemented in 2014. Total and ionised calcium tests per patient-day of hospitalisation significantly decreased after the algorithm implementation, and duplication of tests (total and ionised calcium measured in the same blood sample) was reduced by 49%. Overall adequacy of ionised calcium determinations increased by 23% (P=0.0001) due to the increase in the adequacy of ionised calcium ordering in non-critical conditions. A decision algorithm can be a useful educational tool to improve adequacy of the process of ordering serum calcium tests. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  13. From Pacemaker to Wearable: Techniques for ECG Detection Systems.

    PubMed

    Kumar, Ashish; Komaragiri, Rama; Kumar, Manjeet

    2018-01-11

    With the alarming rise in deaths due to cardiovascular diseases (CVD), current medical research places notable importance on techniques and methods to detect CVDs. As noted by the World Health Organization, technological advances in the field of cardiac function assessment have become central to leading research studies on CVDs, in which electrocardiogram (ECG) analysis is the most functional and convenient tool used to test for the range of heart-related irregularities. Most of the approaches in the literature on ECG signal analysis consider noise removal, rhythm-based analysis, and heartbeat detection to improve the performance of a cardiac pacemaker. Advancements achieved in the field of ECG segment detection and beat classification have had limited evaluation and still require clinical approval. In this paper, techniques to implement an on-chip ECG detector for a cardiac pacemaker system are discussed. Moreover, the different challenges regarding ECG signal morphology analysis identified in the medical literature are extensively reviewed. It is found that robustness to noise, wavelet parameter choice, numerical efficiency, and detection performance are essential performance indicators required by a state-of-the-art ECG detector. Furthermore, many algorithms described in the existing literature are not verified using ECG data from the standard databases. Some ECG detection algorithms show very high detection performance with the total number of detected QRS complexes. However, the high detection performance of the algorithm is verified using only a few datasets. Finally, gaps in current advancements and testing are identified, and the primary remaining challenge is to implement a bullseye test for morphology analysis evaluation.

  14. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    PubMed

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detection and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia GTX 650 GPU, which is based on the Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experimental results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  15. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    PubMed Central

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detection and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia GTX 650 GPU, which is based on the Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experimental results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures. PMID:24723812

  16. Stride search: A general algorithm for storm detection in high-resolution climate data

    DOE PAGES

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; ...

    2016-04-13

    This study discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. The wall clock time required for Stride Search is shown to be smaller than that of a grid point search of the same data, and the relative speedup associated with Stride Search increases as resolution increases.

  17. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar.

    PubMed

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun; Huang, Yuan-Hao

    2018-04-05

    Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in the home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the utilization of matched filters and reducing the analog-to-digital converter bandwidth requirement. Orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, the complexity is still very high because the high resolution of human respiration leads to high-dimension signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only reduces complexity by 75% relative to the OMP algorithm but also achieves better positioning performance, especially in noisy environments. This study also designed and implemented the algorithm on a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support a 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.
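
    The sketch below illustrates the OMP baseline referred to above, recovering a sparse scene from random compressive measurements with scikit-learn; the paper's two-stage FPGA algorithm is not reproduced, and the problem sizes, sparsity, and noise level are assumptions.

```python
# Sketch: sparse recovery with orthogonal matching pursuit (scikit-learn).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m, k = 256, 64, 3                               # scene bins, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(0.5, 1.0, k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # random sensing matrix
y = Phi @ x_true + 0.01 * rng.standard_normal(m)   # noisy compressive measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(Phi, y)
print("recovered support:", np.flatnonzero(omp.coef_), "true:", np.flatnonzero(x_true))
```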

  18. Vision-based vehicle detection and tracking algorithm design

    NASA Astrophysics Data System (ADS)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

    The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.

  19. Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography

    NASA Astrophysics Data System (ADS)

    Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.

    2014-11-01

    Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is a complicated algorithm to implement and computationally intensive and there is currently no published, fast and easily-implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al, presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
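
    As background for the discussion above, the sketch below computes the Point of Closest Approach for a single muon: given a point and direction on the incoming and outgoing tracks, it returns the midpoint between the two lines' closest points and the scattering angle. This is the generic PoCA step, not the algorithm proposed in the paper, and the example track values are arbitrary.

```python
# Sketch: generic Point of Closest Approach (PoCA) between two muon track lines.
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """p_*: a point on each track; d_*: direction vectors (any length)."""
    u = d_in / np.linalg.norm(d_in)
    v = d_out / np.linalg.norm(d_out)
    w0 = p_in - p_out
    b = u @ v                                      # with unit vectors, a = c = 1
    d, e = u @ w0, v @ w0
    denom = 1.0 - b * b
    if abs(denom) < 1e-12:                         # near-parallel tracks
        s, t = 0.0, e
    else:
        s, t = (b * e - d) / denom, (e - b * d) / denom
    closest_in, closest_out = p_in + s * u, p_out + t * v
    scatter_angle = np.arccos(np.clip(u @ v, -1.0, 1.0))
    return 0.5 * (closest_in + closest_out), scatter_angle

point, theta = poca(np.array([0.0, 0.0, 10.0]), np.array([0.1, 0.0, -1.0]),
                    np.array([1.0, 0.0, -10.0]), np.array([0.3, 0.0, -1.0]))
print("PoCA:", point.round(2), " scattering angle (rad): %.3f" % theta)
```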

  20. An algorithm to detect and communicate the differences in computational models describing biological systems.

    PubMed

    Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar

    2016-02-15

    Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories, such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model's development over time. Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models' encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model's history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and to the challenge of model provenance. The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. © The Author 2015. Published by Oxford University Press.

  1. Discovering biclusters in gene expression data based on high-dimensional linear geometries

    PubMed Central

    Gan, Xiangchao; Liew, Alan Wee-Chung; Yan, Hong

    2008-01-01

    Background: In DNA microarray experiments, discovering groups of genes that share similar transcriptional characteristics is instrumental in functional annotation, tissue classification and motif identification. However, in many situations a subset of genes only exhibits consistent pattern over a subset of conditions. Conventional clustering algorithms that deal with the entire row or column in an expression matrix would therefore fail to detect these useful patterns in the data. Recently, biclustering has been proposed to detect a subset of genes exhibiting consistent pattern over a subset of conditions. However, most existing biclustering algorithms are based on searching for sub-matrices within a data matrix by optimizing certain heuristically defined merit functions. Moreover, most of these algorithms can only detect a restricted set of bicluster patterns. Results: In this paper, we present a novel geometric perspective for the biclustering problem. The biclustering process is interpreted as the detection of linear geometries in a high dimensional data space. Such a new perspective views biclusters with different patterns as hyperplanes in a high dimensional space, and allows us to handle different types of linear patterns simultaneously by matching a specific set of linear geometries. This geometric viewpoint also inspires us to propose a generic bicluster pattern, i.e. the linear coherent model that unifies the seemingly incompatible additive and multiplicative bicluster models. As a particular realization of our framework, we have implemented a Hough transform-based hyperplane detection algorithm. The experimental results on human lymphoma gene expression dataset show that our algorithm can find biologically significant subsets of genes. Conclusion: We have proposed a novel geometric interpretation of the biclustering problem. We have shown that many common types of bicluster are just different spatial arrangements of hyperplanes in a high dimensional data space. An implementation of the geometric framework using the Fast Hough transform for hyperplane detection can be used to discover biologically significant subsets of genes under subsets of conditions for microarray data analysis. PMID:18433477

  2. Syndromic Surveillance Using Veterinary Laboratory Data: Algorithm Combination and Customization of Alerts

    PubMed Central

    Dórea, Fernanda C.; McEwen, Beverly J.; McNab, W. Bruce; Sanchez, Javier; Revie, Crawford W.

    2013-01-01

    Background: Syndromic surveillance research has focused on two main themes: the search for data sources that can provide early disease detection; and the development of efficient algorithms that can detect potential outbreak signals. Methods: This work combines three algorithms that have demonstrated solid performance in detecting simulated outbreak signals of varying shapes in time series of laboratory submissions counts. These are: the Shewhart control charts designed to detect sudden spikes in counts; the EWMA control charts developed to detect slow increasing outbreaks; and the Holt-Winters exponential smoothing, which can explicitly account for temporal effects in the data stream monitored. A scoring system to detect and report alarms using these algorithms in a complementary way is proposed. Results: The use of multiple algorithms in parallel resulted in increased system sensitivity. Specificity was decreased in simulated data, but the number of false alarms per year when the approach was applied to real data was considered manageable (between 1 and 3 per year for each of ten syndromic groups monitored). The automated implementation of this approach, including a method for on-line filtering of potential outbreak signals is described. Conclusion: The developed system provides high sensitivity for detection of potential outbreak signals while also providing robustness and flexibility in establishing what signals constitute an alarm. This flexibility allows an analyst to customize the system for different syndromes. PMID:24349216

  3. Syndromic surveillance using veterinary laboratory data: algorithm combination and customization of alerts.

    PubMed

    Dórea, Fernanda C; McEwen, Beverly J; McNab, W Bruce; Sanchez, Javier; Revie, Crawford W

    2013-01-01

    Syndromic surveillance research has focused on two main themes: the search for data sources that can provide early disease detection; and the development of efficient algorithms that can detect potential outbreak signals. This work combines three algorithms that have demonstrated solid performance in detecting simulated outbreak signals of varying shapes in time series of laboratory submissions counts. These are: the Shewhart control charts designed to detect sudden spikes in counts; the EWMA control charts developed to detect slow increasing outbreaks; and the Holt-Winters exponential smoothing, which can explicitly account for temporal effects in the data stream monitored. A scoring system to detect and report alarms using these algorithms in a complementary way is proposed. The use of multiple algorithms in parallel resulted in increased system sensitivity. Specificity was decreased in simulated data, but the number of false alarms per year when the approach was applied to real data was considered manageable (between 1 and 3 per year for each of ten syndromic groups monitored). The automated implementation of this approach, including a method for on-line filtering of potential outbreak signals is described. The developed system provides high sensitivity for detection of potential outbreak signals while also providing robustness and flexibility in establishing what signals constitute an alarm. This flexibility allows an analyst to customize the system for different syndromes.
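
    A minimal sketch of two of the detectors combined in this work, a Shewhart chart and an EWMA chart, applied to a simulated daily count series, is shown below; the control limits, smoothing constant, and injected outbreak are assumed values rather than those used by the authors.

```python
# Sketch: Shewhart and EWMA control charts on a simulated daily count series.
import numpy as np

def shewhart_alarms(counts, k=3.0):
    mu, sigma = np.mean(counts), np.std(counts)
    return np.flatnonzero(counts > mu + k * sigma)

def ewma_alarms(counts, lam=0.2, k=2.5):
    mu, sigma = np.mean(counts), np.std(counts)
    z, alarms = mu, []
    for i, x in enumerate(counts, start=1):
        z = lam * x + (1 - lam) * z                # exponentially weighted moving average
        limit = mu + k * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        if z > limit:
            alarms.append(i - 1)
    return alarms

rng = np.random.default_rng(3)
counts = rng.poisson(20, 365).astype(float)        # one year of baseline submissions
counts[200:210] += np.linspace(2, 15, 10)          # simulated slow-onset outbreak
print("Shewhart alarm days:", shewhart_alarms(counts))
print("EWMA alarm days:", ewma_alarms(counts))
```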

  4. Development of an algorithm for an EEG-based driver fatigue countermeasure.

    PubMed

    Lal, Saroj K L; Craig, Ashley; Boord, Peter; Kirkup, Les; Nguyen, Hung

    2003-01-01

    Fatigue affects a driver's ability to proceed safely. Driver-related fatigue and/or sleepiness are a significant cause of traffic accidents, which makes this an area of great socioeconomic concern. Monitoring physiological signals while driving provides the possibility of detecting and warning of fatigue. The aim of this paper is to describe an EEG-based fatigue countermeasure algorithm and to report its reliability. Changes in all major EEG bands during fatigue were used to develop the algorithm for detecting different levels of fatigue. The software was shown to be capable of detecting fatigue accurately in the 10 subjects tested. The percentage of time the subjects were detected to be in a fatigue state was significantly different from that in the alert phase (P<.01). This is the first countermeasure software described that has been shown to detect fatigue based on EEG changes in all frequency bands. Field research is required to evaluate the fatigue software in order to produce a robust and reliable fatigue countermeasure system. The development of the fatigue countermeasure algorithm forms the basis of a future fatigue countermeasure device. Implementation of electronic devices for fatigue detection is crucial for reducing fatigue-related road accidents and their associated costs.

  5. Parallel Monte Carlo Search for Hough Transform

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.

    2017-10-01

    We investigate the problem of line detection in digital image processing, and in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of a Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection, for example, into one of optimization of the peak in a vote counting process for cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, it can have a reduced effectiveness in detection in the presence of noise. Our first contribution consists of an evaluation of the use of a variation of the Radon Transform as a way of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
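
    The sketch below shows the serial Hough baseline that the parallel and Monte Carlo variants discussed here are measured against, using OpenCV's HoughLines on a synthetic noisy binary image; the image content, Canny thresholds, and accumulator threshold are illustrative choices.

```python
# Sketch: serial Hough-transform line detection with OpenCV on synthetic data.
import cv2
import numpy as np

img = np.zeros((400, 400), dtype=np.uint8)
cv2.line(img, (30, 60), (370, 300), 255, 2)        # one true line
img[np.random.rand(400, 400) < 0.01] = 255         # salt noise

edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 120) # rho/theta resolution, vote threshold
if lines is not None:
    for rho, theta in lines[:, 0]:
        print(f"line: rho={rho:.1f}, theta={np.degrees(theta):.1f} deg")
```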

  6. Verification of Numerical Programs: From Real Numbers to Floating Point Numbers

    NASA Technical Reports Server (NTRS)

    Goodloe, Alwyn E.; Munoz, Cesar; Kirchner, Florent; Correnson, Loiec

    2013-01-01

    Numerical algorithms lie at the heart of many safety-critical aerospace systems. The complexity and hybrid nature of these systems often requires the use of interactive theorem provers to verify that these algorithms are logically correct. Usually, proofs involving numerical computations are conducted in the infinitely precise realm of the field of real numbers. However, numerical computations in these algorithms are often implemented using floating point numbers. The use of a finite representation of real numbers introduces uncertainties as to whether the properties verified in the theoretical setting hold in practice. This short paper describes work in progress aimed at addressing these concerns. Given a formally proven algorithm, written in the Program Verification System (PVS), the Frama-C suite of tools is used to identify sufficient conditions and verify that under such conditions the rounding errors arising in a C implementation of the algorithm do not affect its correctness. The technique is illustrated using an algorithm for detecting loss of separation among aircraft.

  7. Connectivity algorithm with depth first search (DFS) on simple graphs

    NASA Astrophysics Data System (ADS)

    Riansanti, O.; Ihsan, M.; Suhaimi, D.

    2018-01-01

    This paper discusses an algorithm to detect the connectivity of a simple graph using Depth First Search (DFS). The DFS implementation in this paper differs from other work in how it counts visited vertices. The algorithm initializes s with the number of vertices and then visits the source vertex, followed by its adjacent vertices, until no further vertices are reachable from the source, decrementing s for each vertex visited. A simple graph is connected if s equals 0 and disconnected if s is greater than 0. The complexity of the algorithm is O(n²).
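
    A minimal sketch of the connectivity test described above follows: run a depth-first search from one vertex, count the vertices left unvisited (s), and report the graph as connected exactly when s equals 0. The adjacency-list representation and example graphs are illustrative.

```python
# Sketch: DFS-based connectivity test; connected iff no vertex is left unvisited.
def is_connected(adj):
    """adj: dict mapping each vertex to an iterable of adjacent vertices."""
    if not adj:
        return True
    visited, stack = set(), [next(iter(adj))]
    while stack:                                   # iterative depth-first search
        v = stack.pop()
        if v not in visited:
            visited.add(v)
            stack.extend(u for u in adj[v] if u not in visited)
    s = len(adj) - len(visited)                    # number of unreached vertices
    return s == 0

g1 = {1: [2], 2: [1, 3], 3: [2]}                   # path graph: connected
g2 = {1: [2], 2: [1], 3: [4], 4: [3]}              # two components: disconnected
print(is_connected(g1), is_connected(g2))          # True False
```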

  8. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    PubMed

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
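
    The sketch below illustrates the peak-split area feature as described in the abstract: the spike waveform is split at its peak and the area of each portion is used as a two-dimensional feature. It is a software illustration only; the segmented, hardware-merged version in the paper differs in detail.

```python
# Sketch: split a spike at its peak and use the two portion areas as features.
import numpy as np

def peak_split_areas(spike):
    spike = np.asarray(spike, dtype=float)
    peak = int(np.argmax(np.abs(spike)))           # index of the peak sample
    left = np.sum(np.abs(spike[:peak + 1]))        # area up to and including the peak
    right = np.sum(np.abs(spike[peak:]))           # area from the peak onward
    return left, right

t = np.linspace(-1.0, 2.0, 48)
spike = np.exp(-((t - 0.2) ** 2) / 0.02) - 0.3 * np.exp(-((t - 0.7) ** 2) / 0.1)
print(peak_split_areas(spike))                     # 2-D feature vector for one spike
```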

  9. Correlation coefficient based supervised locally linear embedding for pulmonary nodule recognition.

    PubMed

    Wu, Panpan; Xia, Kewen; Yu, Hengyong

    2016-11-01

    Dimensionality reduction techniques are developed to suppress the negative effects of the high dimensional feature space of lung CT images on classification performance in computer aided detection (CAD) systems for pulmonary nodule detection. An improved supervised locally linear embedding (SLLE) algorithm is proposed based on the concept of correlation coefficient. The Spearman's rank correlation coefficient is introduced to adjust the distance metric in the SLLE algorithm to ensure that more suitable neighborhood points could be identified, and thus to enhance the discriminating power of embedded data. The proposed Spearman's rank correlation coefficient based SLLE (SC(2)SLLE) is implemented and validated in our pilot CAD system using a clinical dataset collected from the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). Particularly, a representative CAD system for solitary pulmonary nodule detection is designed and implemented. After a sequence of medical image processing steps, 64 nodules and 140 non-nodules are extracted, and 34 representative features are calculated. The SC(2)SLLE, SLLE, and LLE algorithms are applied to reduce the dimensionality. Several quantitative measurements are also used to evaluate and compare the performances. Using a 5-fold cross-validation methodology, the proposed algorithm achieves 87.65% accuracy, 79.23% sensitivity, 91.43% specificity, and an 8.57% false positive rate, on average. Experimental results indicate that the proposed algorithm outperforms the original locally linear embedding and SLLE coupled with the support vector machine (SVM) classifier. Based on the preliminary results from a limited number of nodules in our dataset, this study demonstrates the great potential to improve the performance of a CAD system for nodule detection using the proposed SC(2)SLLE. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Implementation of an algorithm for cylindrical object identification using range data

    NASA Technical Reports Server (NTRS)

    Bozeman, Sylvia T.; Martin, Benjamin J.

    1989-01-01

    One of the problems in 3-D object identification and localization is addressed. In robotic and navigation applications the vision system must be able to distinguish cylindrical or spherical objects as well as those of other geometric shapes. An algorithm was developed to identify cylindrical objects in an image when range data is used. The algorithm incorporates the Hough transform for line detection using edge points which emerge from a Sobel mask. Slices of the data are examined to locate arcs of circles using the normal equations of an over-determined linear system. Current efforts are devoted to testing the computer implementation of the algorithm. Refinements are expected to continue in order to accommodate cylinders in various positions. A technique is sought which is robust in the presence of noise and partial occlusions.

  11. Improved pulse laser ranging algorithm based on high speed sampling

    NASA Astrophysics Data System (ADS)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering, and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm is studied based on high speed sampling. First, theoretical simulation models were built and analyzed, including the laser emission and the pulse laser ranging algorithm. An improved pulse ranging algorithm is developed. This new algorithm combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm. The laser ranging hardware system includes a laser diode, a laser detector, and a high sample rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm, based on the fusion of the matched filter algorithm and the CFD algorithm, is implemented on an FPGA chip. Finally, a laser ranging experiment is carried out on the hardware system to compare the ranging performance of the improved algorithm with that of the matched filter algorithm and the CFD algorithm alone. The test results demonstrate that the laser ranging hardware system achieves high speed processing and high speed sampling data transmission. The analysis shows that the improved algorithm achieves 0.3 m ranging precision, meeting expectations and consistent with the theoretical simulation.
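
    The sketch below illustrates the fused matched filter plus constant fraction discrimination (CFD) idea on a synthetic sampled echo in Python rather than Verilog; the sampling rate, pulse shape, CFD fraction, and delay are assumed values, not those of the FPGA implementation.

```python
# Sketch: matched filtering of a sampled return pulse followed by constant
# fraction discrimination (CFD) for time stamping (assumed parameters).
import numpy as np

fs = 1e9                                           # 1 GS/s sampling (assumed)
t = np.arange(0, 200e-9, 1 / fs)                   # 200 ns acquisition window
tt = np.arange(0, 40e-9, 1 / fs)
template = np.exp(-((tt - 20e-9) ** 2) / (2 * (4e-9) ** 2))   # reference pulse shape

rx = 0.05 * np.random.randn(t.size)                           # receiver noise
rx += 0.4 * np.exp(-((t - 120e-9) ** 2) / (2 * (4e-9) ** 2))  # weak echo near 120 ns

mf = np.correlate(rx, template, mode="same")       # matched-filter output

# CFD: compare an attenuated copy of the signal against a delayed copy; the
# zero crossing near the pulse gives an amplitude-independent time stamp.
frac, d = 0.5, 6                                   # fraction and delay in samples
cfd = frac * mf[d:] - mf[:-d]
crossings = np.where(np.diff(np.sign(cfd)) != 0)[0]
best = crossings[np.argmin(np.abs(crossings - np.argmax(mf)))]
print("matched-filter peak: %.0f ns, CFD crossing: %.0f ns"
      % (t[np.argmax(mf)] * 1e9, t[best] * 1e9))
```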

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blom, Philip Stephen; Marcillo, Omar Eduardo; Euler, Garrett Gene

    InfraPy is a Python-based analysis toolkit being developed at LANL. The algorithms are intended for ground-based nuclear detonation detection applications to detect, locate, and characterize explosive sources using infrasonic observations. The implementation is usable as a stand-alone Python library or as a command line driven tool operating directly on a database. With multiple scientists working on the project, we've begun using a LANL git repository for collaborative development and version control. Current and planned work on InfraPy focuses on the development of new algorithms and propagation models. Collaboration with Southern Methodist University (SMU) has helped identify bugs and limitations of the algorithms. Current usage development is focused on library imports and the CLI.

  13. A Systolic Array-Based FPGA Parallel Architecture for the BLAST Algorithm

    PubMed Central

    Guo, Xinyu; Wang, Hong; Devabhaktuni, Vijay

    2012-01-01

    A design of a systolic array-based Field Programmable Gate Array (FPGA) parallel architecture for the Basic Local Alignment Search Tool (BLAST) algorithm is proposed. BLAST is a heuristic biological sequence alignment algorithm which has been used by bioinformatics experts. In contrast to other designs that detect at most one hit in one clock cycle, our design applies a Multiple Hits Detection Module, which is a pipelining systolic array that searches for multiple hits in a single clock cycle. Further, we designed a Hits Combination Block which combines overlapping hits from the systolic array into one hit. These implementations complete the first and second steps of the BLAST architecture and achieve significant speedup compared with previously published architectures. PMID:25969747

  14. Improved peak detection in mass spectrum by incorporating continuous wavelet transform-based pattern matching.

    PubMed

    Du, Pan; Kibbe, Warren A; Lin, Simon M

    2006-09-01

    A major problem for current peak detection algorithms is that noise in mass spectrometry (MS) spectra gives rise to a high rate of false positives. The false positive rate is especially problematic in detecting peaks with low amplitudes. Usually, various baseline correction algorithms and smoothing methods are applied before attempting peak detection. This approach is very sensitive to the amount of smoothing and aggressiveness of the baseline correction, which contribute to making peak detection results inconsistent between runs, instrumentation and analysis methods. Most peak detection algorithms simply identify peaks based on amplitude, ignoring the additional information present in the shape of the peaks in a spectrum. In our experience, 'true' peaks have characteristic shapes, and providing a shape-matching function that provides a 'goodness of fit' coefficient should provide a more robust peak identification method. Based on these observations, a continuous wavelet transform (CWT)-based peak detection algorithm has been devised that identifies peaks with different scales and amplitudes. By transforming the spectrum into wavelet space, the pattern-matching problem is simplified and in addition provides a powerful technique for identifying and separating the signal from the spike noise and colored noise. This transformation, with the additional information provided by the 2D CWT coefficients can greatly enhance the effective signal-to-noise ratio. Furthermore, with this technique no baseline removal or peak smoothing preprocessing steps are required before peak detection, and this improves the robustness of peak detection under a variety of conditions. The algorithm was evaluated with SELDI-TOF spectra with known polypeptide positions. Comparisons with two other popular algorithms were performed. The results show the CWT-based algorithm can identify both strong and weak peaks while keeping false positive rate low. The algorithm is implemented in R and will be included as an open source module in the Bioconductor project.
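
    For a quick illustration of wavelet-based peak picking of the kind described above, the sketch below uses SciPy's find_peaks_cwt on a synthetic noisy spectrum with baseline drift; the paper's own algorithm is the R/Bioconductor implementation, and the widths searched here are assumptions.

```python
# Sketch: CWT ridge-line peak picking on a synthetic noisy spectrum.
import numpy as np
from scipy.signal import find_peaks_cwt

mz = np.linspace(1000, 2000, 4000)
spectrum = (np.exp(-((mz - 1250) ** 2) / 40) +         # strong peak
            0.15 * np.exp(-((mz - 1600) ** 2) / 60) +  # weak peak
            0.05 * np.random.randn(mz.size) +          # white noise
            0.2 * np.sin(mz / 300))                    # slow baseline drift

peak_idx = find_peaks_cwt(spectrum, widths=np.arange(5, 40))
print("detected peak m/z values:", np.round(mz[peak_idx], 1))
```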

  15. A novel spatter detection algorithm based on typical cellular neural network operations for laser beam welding processes

    NASA Astrophysics Data System (ADS)

    Nicolosi, L.; Abt, F.; Blug, A.; Heider, A.; Tetzlaff, R.; Höfler, H.

    2012-01-01

    Real-time monitoring of laser beam welding (LBW) has increasingly gained importance in several manufacturing processes ranging from automobile production to precision mechanics. In the latter field, a novel algorithm for the real-time detection of spatters was implemented in a camera based on cellular neural networks. This camera can be connected to the optics of commercially available laser machines, enabling real-time monitoring of LBW processes at rates up to 15 kHz. Such high monitoring rates allow the integration of other image evaluation tasks, such as the detection of the full penetration hole, for real-time control of process parameters.

  16. A Novel Segment-Based Approach for Improving Classification Performance of Transport Mode Detection.

    PubMed

    Guvensan, M Amac; Dusun, Burak; Can, Baris; Turkmen, H Irem

    2017-12-30

    Transportation planning and solutions have an enormous impact on city life. To minimize transport duration, urban planners should understand and elaborate the mobility of a city. Thus, researchers look toward monitoring people's daily activities, including transportation types and duration, by taking advantage of individuals' smartphones. This paper introduces a novel segment-based transport mode detection architecture in order to improve the results of traditional classification algorithms in the literature. The proposed post-processing algorithm, namely the Healing algorithm, aims to correct the misclassification results of machine learning-based solutions. Our real-life test results show that the Healing algorithm could achieve up to 40% improvement of the classification results. As a result, the implemented mobile application could predict eight classes, including stationary, walking, car, bus, tram, train, metro and ferry, with a success rate of 95%, thanks to the proposed multi-tier architecture and Healing algorithm.
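
    In the spirit of the segment-based post-processing described above, the sketch below replaces each window's predicted label with the majority label of its segment; the actual Healing rules and segment length in the paper may differ.

```python
# Sketch: segment-based majority-vote smoothing of per-window predictions.
from collections import Counter

def heal(labels, segment_len=5):
    healed = []
    for start in range(0, len(labels), segment_len):
        seg = labels[start:start + segment_len]
        majority = Counter(seg).most_common(1)[0][0]
        healed.extend([majority] * len(seg))
    return healed

raw = ["bus", "bus", "car", "bus", "bus",          # isolated "car" misclassification
       "walk", "walk", "walk", "bus", "walk"]
print(heal(raw))                                   # five 'bus' labels, then five 'walk'
```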

  17. Algorithms for detection of objects in image sequences captured from an airborne imaging system

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak

    1995-01-01

    This research was initiated as part of the effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to the independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model based tracking algorithm. Position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified and possible solutions to build a practical working system are investigated.

  18. Aircraft control surface failure detection and isolation using the OSGLR test. [orthogonal series generalized likelihood ratio

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.

    1986-01-01

    The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.

  19. Unobtrusive Multi-Static Serial LiDAR Imager (UMSLI) First Generation Shape-Matching Based Classifier for 2D Contours

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Zheng; Ouyang, Bing; Principe, Jose

    A multi-static serial LiDAR system prototype was developed under DE-EE0006787 to detect, classify, and record interactions of marine life with marine hydrokinetic generation equipment. This software implements a shape-matching based classifier algorithm for the underwater automated detection of marine life for that system. In addition to applying shape descriptors, the algorithm also adopts information theoretical learning based affine shape registration, improving point correspondences found by shape descriptors as well as the final similarity measure.

  20. Application of the Trend Filtering Algorithm for Photometric Time Series Data

    NASA Astrophysics Data System (ADS)

    Gopalan, Giri; Plavchan, Peter; van Eyken, Julian; Ciardi, David; von Braun, Kaspar; Kane, Stephen R.

    2016-08-01

    Detecting transient light curves (e.g., transiting planets) requires high-precision data, and thus it is important to effectively filter systematic trends affecting ground-based wide-field surveys. We apply an implementation of the Trend Filtering Algorithm (TFA) to the 2MASS calibration catalog and select Palomar Transient Factory (PTF) photometric time series data. TFA is successful at reducing the overall dispersion of light curves; however, it may over-filter intrinsic variables and increase “instantaneous” dispersion when a template set is not judiciously chosen. In an attempt to rectify these issues we modify the original TFA from the literature by including measurement uncertainties in its computation, including ancillary data correlated with noise, and algorithmically selecting a template set using clustering algorithms as suggested by various authors. This approach may be particularly useful for appropriately accounting for variable photometric precision surveys and/or combined data sets. In summary, our contributions are to provide a MATLAB software implementation of TFA and a number of modifications tested on synthetics and real data, summarize the performance of TFA and various modifications on real ground-based data sets (2MASS and PTF), and assess the efficacy of TFA and modifications using synthetic light curve tests consisting of transiting and sinusoidal variables. While the transiting variables test indicates that these modifications confer no advantage to transit detection, the sinusoidal variables test indicates potential improvements in detection accuracy.

  1. High reliability - low noise radionuclide signature identification algorithms for border security applications

    NASA Astrophysics Data System (ADS)

    Lee, Sangkyu

    Illicit trafficking and smuggling of radioactive materials and special nuclear materials (SNM) are considered one of the most important recent global nuclear threats. Monitoring the transport and safety of radioisotopes and SNM is challenging due to their weak signals and easy shielding. Great efforts worldwide are focused on developing and improving detection technologies and algorithms for accurate and reliable detection of radioisotopes of interest, thus better securing the borders against nuclear threats. In general, radiation portal monitors enable detection of gamma and neutron emitting radioisotopes. Passive and active interrogation techniques, existing and/or under development, are all aimed at increasing accuracy and reliability, and at shortening the time of interrogation as well as reducing the cost of the equipment. Equally important efforts are aimed at advancing algorithms to process the imaging data in an efficient manner, providing reliable "readings" of the interiors of the examined volumes of various sizes, ranging from cargos to suitcases. The main objective of this thesis is to develop two synergistic algorithms with the goal of providing highly reliable, low-noise identification of radioisotope signatures. These algorithms combine analysis of a passive radioactive detection technique with active interrogation imaging techniques such as gamma radiography or muon tomography. One algorithm consists of gamma spectroscopy and cosmic muon tomography, and the other algorithm is based on gamma spectroscopy and gamma radiography. The purpose of fusing two detection methodologies per algorithm is to find both heavy-Z radioisotopes and shielding materials, since radionuclides can be identified with gamma spectroscopy, and shielding materials can be detected using muon tomography or gamma radiography. These combined algorithms are created and analyzed based on numerically generated images of various cargo sizes and materials. In summary, the three detection methodologies are fused into two algorithms with mathematical functions providing: reliable identification of radioisotopes in gamma spectroscopy; noise reduction and precision enhancement in muon tomography; and atomic number and density estimation in gamma radiography. It is expected that these new algorithms may be implemented in portal scanning systems with the goal of enhancing the accuracy and reliability of detecting nuclear materials inside cargo containers.

  2. Selecting Power-Efficient Signal Features for a Low-Power Fall Detector.

    PubMed

    Wang, Changhong; Redmond, Stephen J; Lu, Wei; Stevens, Michael C; Lord, Stephen R; Lovell, Nigel H

    2017-11-01

    Falls are a serious threat to the health of older people. A wearable fall detector can automatically detect the occurrence of a fall and alert a caregiver or an emergency response service so they may deliver immediate assistance, improving the chances of recovering from fall-related injuries. One constraint of such a wearable technology is its limited battery life. Thus, minimization of power consumption is an important design concern, all the while maintaining satisfactory accuracy of the fall detection algorithms implemented on the wearable device. This paper proposes an approach for selecting power-efficient signal features such that the minimum desirable fall detection accuracy is assured. Using data collected in simulated falls, simulated activities of daily living, and real free-living trials, all using young volunteers, the proposed approach selects four features from a set of ten commonly used features, providing a power saving of 75.3%, while limiting the error rate of a binary classification decision tree fall detection algorithm to 7.1%.
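    As a rough illustration of trading detection accuracy against feature power cost, the sketch below adds hypothetical features in order of increasing assumed power cost until a decision-tree classifier meets an error budget; the data, costs, and 10% target are invented for the example and do not reproduce the paper's selection procedure.

    ```python
    # Illustrative sketch (not the paper's exact method): add candidate features in order
    # of increasing assumed power cost until a decision-tree fall detector meets a target
    # error rate. Data, per-feature costs, and the 10% target are hypothetical.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(600, 10))                       # 10 candidate signal features
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)        # synthetic fall / no-fall labels
    power = rng.uniform(0.1, 1.0, size=10)               # assumed per-feature power cost

    selected, error = [], 1.0
    for f in np.argsort(power):                          # cheapest features first
        selected.append(int(f))
        acc = cross_val_score(DecisionTreeClassifier(max_depth=4),
                              X[:, selected], y, cv=5).mean()
        error = 1.0 - acc
        if error <= 0.10:                                # stop once the error budget is met
            break

    print("features:", selected, "error: %.3f" % error,
          "power: %.2f" % power[selected].sum())
    ```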

  3. Invariant target detection by a correlation radiometer

    NASA Astrophysics Data System (ADS)

    Murza, L. P.

    1986-12-01

    The paper is concerned with the problem of the optimal detection of a heat-emitting target by a two-channel radiometer with an unstable amplification circuit. An expression is obtained for an asymptotically sufficient detection statistic which is invariant to changes in the amplification coefficients of the channels. The algorithm proposed here can be implemented numerically using a relatively simple program.

  4. Understanding conflict-resolution taskload: Implementing advisory conflict-detection and resolution algorithms in an airspace

    NASA Astrophysics Data System (ADS)

    Vela, Adan Ernesto

    2011-12-01

    From 2010 to 2030, the number of instrument flight rules aircraft operations handled by Federal Aviation Administration en route traffic centers is predicted to increase from approximately 39 million flights to 64 million flights. The projected growth in air transportation demand is likely to result in traffic levels that exceed the abilities of the unaided air traffic controller in managing, separating, and providing services to aircraft. Consequently, the Federal Aviation Administration, and other air navigation service providers around the world, are making several efforts to improve the capacity and throughput of existing airspaces. Ultimately, the stated goal of the Federal Aviation Administration is to triple the available capacity of the National Airspace System by 2025. In an effort to satisfy air traffic demand through the increase of airspace capacity, air navigation service providers are considering the inclusion of advisory conflict-detection and resolution systems. In a human-in-the-loop framework, advisory conflict-detection and resolution decision-support tools identify potential conflicts and propose resolution commands for the air traffic controller to verify and issue to aircraft. A number of researchers and air navigation service providers hypothesize that the inclusion of combined conflict-detection and resolution tools into air traffic control systems will reduce or transform controller workload and enable the required increases in airspace capacity. In an effort to understand the potential workload implications of introducing advisory conflict-detection and resolution tools, this thesis provides a detailed study of the conflict event process and the implementation of conflict-detection and resolution algorithms. Specifically, the research presented here examines a metric of controller taskload: how many resolution commands an air traffic controller issues under the guidance of a conflict-detection and resolution decision-support tool. The goal of the research is to understand how the formulation, capabilities, and implementation of conflict-detection and resolution tools affect the controller taskload (system demands) associated with the conflict-resolution process, and implicitly the controller workload (physical and psychological demands). Furthermore, this thesis seeks to establish best practices for the design of future conflict-detection and resolution systems. To generalize conclusions on the conflict-resolution taskload and best design practices of conflict-detection and resolution systems, this thesis focuses on abstracting and parameterizing the behaviors and capabilities of the advisory tools. Ideally, this abstraction of advisory decision-support tools serves as an alternative to exhaustively designing tools, implementing them in high-fidelity simulations, and analyzing their conflict-resolution taskload. Such an approach of simulating specific conflict-detection and resolution systems limits the type of conclusions that can be drawn concerning the design of more generic algorithms. In the process of understanding conflict-detection and resolution systems, evidence in the thesis reveals that the most effective approach to reducing conflict-resolution taskload is to improve conflict-detection systems. Furthermore, studies in this thesis indicate that there is significant flexibility in the design of conflict-resolution algorithms.

  5. Low-power wearable respiratory sound sensing.

    PubMed

    Oletic, Dinko; Arsenali, Bruno; Bilas, Vedran

    2014-04-09

    Building upon the findings from the field of automated recognition of respiratory sound patterns, we propose a wearable wireless sensor implementing on-board respiratory sound acquisition and classification, to enable continuous monitoring of symptoms, such as asthmatic wheezing. Low-power consumption of such a sensor is required in order to achieve long autonomy. Considering that the power consumption of its radio is kept minimal if transmitting only upon (rare) occurrences of wheezing, we focus on optimizing the power consumption of the digital signal processor (DSP). Based on a comprehensive review of asthmatic wheeze detection algorithms, we analyze the computational complexity of common features drawn from short-time Fourier transform (STFT) and decision tree classification. Four algorithms were implemented on a low-power TMS320C5505 DSP. Their classification accuracies were evaluated on a dataset of prerecorded respiratory sounds in two operating scenarios of different detection fidelities. The execution times of all algorithms were measured. The best classification accuracy of over 92%, while occupying only 2.6% of the DSP's processing time, is obtained for the algorithm featuring the time-frequency tracking of shapes of crests originating from wheezing, with spectral features modeled using energy.
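    A minimal sketch of the general recipe (STFT-based spectral features feeding a decision tree) is shown below; the synthetic "wheeze", the 100-2500 Hz band, and the three features are assumptions for illustration, not the algorithms benchmarked on the DSP.

    ```python
    # Minimal sketch of STFT features + a decision tree for wheeze-like sounds; this is
    # not the paper's DSP implementation. The synthetic "wheeze" is a tone added to noise.
    import numpy as np
    from scipy.signal import stft
    from sklearn.tree import DecisionTreeClassifier

    fs = 8000
    def spectral_features(x):
        f, t, Z = stft(x, fs=fs, nperseg=256)
        P = np.abs(Z) ** 2
        band = (f > 100) & (f < 2500)                 # assumed wheeze band
        ratio = P[band].sum(axis=0) / (P.sum(axis=0) + 1e-12)
        return [ratio.mean(), ratio.max(), P[band].std()]

    rng = np.random.default_rng(2)
    def segment(wheeze):
        x = rng.normal(size=fs)                       # 1 s of breath-like noise
        if wheeze:                                    # sustained tonal crest
            x += 0.5 * np.sin(2 * np.pi * 400 * np.arange(fs) / fs)
        return spectral_features(x)

    X = [segment(w) for w in [0, 1] * 50]
    y = [0, 1] * 50
    clf = DecisionTreeClassifier(max_depth=3).fit(X[:60], y[:60])
    print("held-out accuracy:", clf.score(X[60:], y[60:]))
    ```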

  6. GridMass: a fast two-dimensional feature detection method for LC/MS.

    PubMed

    Treviño, Victor; Yañez-Garza, Irma-Luz; Rodriguez-López, Carlos E; Urrea-López, Rafael; Garza-Rodriguez, Maria-Lourdes; Barrera-Saldaña, Hugo-Alberto; Tamez-Peña, José G; Winkler, Robert; Díaz de-la-Garza, Rocío-Isabel

    2015-01-01

    One of the initial and critical procedures in the analysis of metabolomics data from liquid chromatography and mass spectrometry is feature detection. Feature detection is the process of detecting boundaries of the mass surface from the raw data, which consist of detected abundances arranged in a two-dimensional (2D) matrix of mass/charge and elution time. MZmine 2 is one of the leading software environments that provide a full analysis pipeline for these data. However, the feature detection algorithms provided in MZmine 2 are based mainly on analyzing one dimension at a time. We propose GridMass, an efficient algorithm for 2D feature detection. The algorithm is based on landing probes across the chromatographic space that are moved to find local maxima, providing accurate boundary estimations. We tested GridMass on a controlled marker experiment, on plasma samples, on plant fruits, and on a proteome sample. Compared with other algorithms, GridMass is faster and may achieve comparable or better sensitivity and specificity. As a proof of concept, GridMass has been implemented in Java under the MZmine 2 environment and is available at http://www.bioinformatica.mty.itesm.mx/GridMass and in MASSyPup. It has also been submitted to the MZmine 2 developer community. Copyright © 2015 John Wiley & Sons, Ltd.
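    For illustration only, the sketch below finds local maxima on a synthetic m/z x elution-time intensity matrix with a maximum filter; this stands in for the idea of locating 2D feature centers but is not GridMass's moving-probe scheme.

    ```python
    # Toy sketch of 2D feature (local maximum) detection on an m/z x elution-time
    # intensity matrix, using a maximum filter rather than GridMass's landing probes.
    import numpy as np
    from scipy.ndimage import maximum_filter, gaussian_filter

    rng = np.random.default_rng(12)
    intensity = rng.random((200, 300)) * 5.0                  # noise floor
    for r, c in [(50, 80), (120, 200), (170, 40)]:            # three synthetic features
        intensity[r, c] += 1000.0
    intensity = gaussian_filter(intensity, sigma=2)           # peaks spread over nearby bins

    local_max = (intensity == maximum_filter(intensity, size=9)) & (intensity > 20.0)
    print("feature centers (m/z bin, RT bin):", list(zip(*np.nonzero(local_max))))
    ```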

  7. Automatic detection of erythemato-squamous diseases using k-means clustering.

    PubMed

    Ubeyli, Elif Derya; Doğdu, Erdoğan

    2010-04-01

    A new approach based on k-means clustering is presented for automated detection of erythemato-squamous diseases. The purpose of clustering techniques is to find structure in the given data by identifying similarities according to the data characteristics. The studied domain contained records of patients with known diagnoses. The k-means clustering algorithm's task was to assign the data points, in this case patients described by attribute data, to one of five clusters. The algorithm was used to detect the five erythemato-squamous diseases when 33 features defining the five disease indications were used. The purpose is to determine an optimum classification scheme for this problem. The present research demonstrated that the features represent the erythemato-squamous diseases well and that the k-means clustering approach achieved high classification accuracies for the five erythemato-squamous diseases.
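    A minimal sketch of the clustering step is given below, assuming scikit-learn and synthetic blobs in place of the 33-feature dermatology records; the cluster-to-diagnosis mapping by majority vote is an illustrative evaluation choice, not the paper's protocol.

    ```python
    # Minimal k-means sketch on synthetic 33-dimensional data standing in for the
    # dermatology records; clusters are mapped to classes by majority vote for scoring.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    X, y_true = make_blobs(n_samples=300, centers=5, n_features=33, random_state=0)
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

    # map each cluster to the majority diagnosis among its members, then score agreement
    mapping = {c: np.bincount(y_true[labels == c]).argmax() for c in range(5)}
    accuracy = np.mean([mapping[c] == t for c, t in zip(labels, y_true)])
    print("cluster-to-class agreement: %.2f" % accuracy)
    ```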

  8. Fetal heart rate deceleration detection using a discrete cosine transform implementation of singular spectrum analysis.

    PubMed

    Warrick, P A; Precup, D; Hamilton, E F; Kearney, R E

    2007-01-01

    To develop a singular-spectrum analysis (SSA) based change-point detection algorithm applicable to fetal heart rate (FHR) monitoring to improve the detection of deceleration events. We present a method for decomposing a signal into near-orthogonal components via the discrete cosine transform (DCT) and apply this in a novel online manner to change-point detection based on SSA. The SSA technique forms models of the underlying signal that can be compared over time; models that are sufficiently different indicate signal change points. To adapt the algorithm to deceleration detection where many successive similar change events can occur, we modify the standard SSA algorithm to hold the reference model constant under such conditions, an approach that we term "base-hold SSA". The algorithm is applied to a database of 15 FHR tracings that have been preprocessed to locate candidate decelerations and is compared to the markings of an expert obstetrician. Of the 528 true and 1285 false decelerations presented to the algorithm, the base-hold approach improved on standard SSA, reducing the number of missed decelerations from 64 to 49 (21.9%) while maintaining the same reduction in false-positives (278). The standard SSA assumption that changes are infrequent does not apply to FHR analysis where decelerations can occur successively and in close proximity; our base-hold SSA modification improves detection of these types of event series.
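    The sketch below conveys the underlying idea, comparing a low-order DCT model of a reference window against a test window to score candidate change points; it is a simplified stand-in, not the base-hold SSA algorithm, and the window length, model order, and synthetic FHR trace are assumptions.

    ```python
    # Rough sketch of the idea: fit a smooth (low-order DCT) model to a reference window
    # and score how poorly it explains the following test window. Not base-hold SSA.
    import numpy as np
    from scipy.fft import dct, idct

    def low_order_model(x, k=5):
        c = dct(x, norm="ortho")
        c[k:] = 0.0                                  # keep only the first k DCT components
        return idct(c, norm="ortho")

    def change_score(ref, test, k=5):
        return np.mean((test - low_order_model(ref, k)) ** 2)

    rng = np.random.default_rng(3)
    fhr = 140 + rng.normal(0, 1, 600)                # synthetic FHR baseline (bpm)
    fhr[300:380] -= 20                               # a deceleration-like dip
    w = 60
    scores = [change_score(fhr[i - w:i], fhr[i:i + w]) for i in range(w, len(fhr) - w)]
    print("change point near sample", w + int(np.argmax(scores)))
    ```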

  9. Electronic Sleep Stage Classifiers: A Survey and VLSI Design Methodology.

    PubMed

    Kassiri, Hossein; Chemparathy, Aditi; Salam, M Tariqus; Boyce, Richard; Adamantidis, Antoine; Genov, Roman

    2017-02-01

    First, existing sleep stage classifier sensors and algorithms are reviewed and compared in terms of classification accuracy, level of automation, implementation complexity, invasiveness, and targeted application. Next, the implementation of a miniature microsystem for low-latency automatic sleep stage classification in rodents is presented. The classification algorithm uses one EMG (electromyogram) and two EEG (electroencephalogram) signals as inputs in order to detect REM (rapid eye movement) sleep, and is optimized for low complexity and low power consumption. It is implemented in an on-board low-power FPGA connected to a multi-channel neural recording IC, to achieve low-latency (order of 1 ms or less) classification. Off-line experimental results using pre-recorded signals from nine mice show REM detection sensitivity and specificity of 81.69% and 93.86%, respectively, with a maximum latency of 39 µs. The device is designed to be used in a non-disruptive closed-loop REM sleep suppression microsystem, for future studies of the effects of REM sleep deprivation on memory consolidation.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Satyabrata; Rao, Nageswara S; Wu, Qishi

    There have been increasingly large deployments of radiation detection networks that require computationally fast algorithms to produce prompt results over ad-hoc sub-networks of mobile devices, such as smartphones. These algorithms are in sharp contrast to complex network algorithms that require all measurements to be sent to powerful central servers. In this work, at individual sensors, we employ Wald-statistic-based detection algorithms, which are computationally very fast and are implemented as one of three Z-tests and four chi-square tests. At the fusion center, we apply K-out-of-N fusion to combine the sensors' hard decisions. We characterize the performance of the detection methods by deriving analytical expressions for the distributions of the underlying test statistics, and by analyzing the fusion performance in terms of K, N, and the false-alarm rates of the individual detectors. We experimentally validate our methods using measurements from indoor and outdoor characterization tests of the Intelligence Radiation Sensors Systems (IRSS) program. In particular, utilizing the outdoor measurements, we construct two important real-life scenarios, boundary surveillance and portal monitoring, and present the results of our algorithms.
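    One ingredient of the approach can be sketched compactly: a per-sensor Z-test on Poisson counts followed by K-out-of-N fusion of the hard decisions. The background rate, source strength, alarm threshold, and K below are hypothetical.

    ```python
    # Illustrative sketch of a per-sensor Z-test on Poisson counts plus K-out-of-N fusion
    # of the hard decisions; rates, alpha, and K are assumptions, not the program's values.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    background, source = 100.0, 50.0                   # mean counts per dwell (assumed)
    N, K, alpha = 5, 2, 1e-3                           # 5 sensors, declare if >= 2 alarm

    counts = rng.poisson(background + source, size=N)  # one dwell of measurements
    z = (counts - background) / np.sqrt(background)    # Z-statistic under background-only model
    decisions = z > norm.isf(alpha)                    # per-sensor hard decisions
    print("alarms:", int(decisions.sum()),
          "-> fused detection:", bool(decisions.sum() >= K))
    ```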

  11. Unsupervised algorithms for intrusion detection and identification in wireless ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2009-05-01

    In previous work by the author, parameters across network protocol layers were selected as features in supervised algorithms that detect and identify certain intrusion attacks on wireless ad hoc sensor networks (WSNs) carrying multisensor data. The algorithms improved the residual performance of the intrusion prevention measures provided by any dynamic key-management schemes and trust models implemented among network nodes. The approach of this paper does not train algorithms on the signature of known attack traffic, but, instead, the approach is based on unsupervised anomaly detection techniques that learn the signature of normal network traffic. Unsupervised learning does not require the data to be labeled or to be purely of one type, i.e., normal or attack traffic. The approach can be augmented to add any security attributes and quantified trust levels, established during data exchanges among nodes, to the set of cross-layer features from the WSN protocols. A two-stage framework is introduced for the security algorithms to overcome the problems of input size and resource constraints. The first stage is an unsupervised clustering algorithm which reduces the payload of network data packets to a tractable size. The second stage is a traditional anomaly detection algorithm based on a variation of support vector machines (SVMs), whose efficiency is improved by the availability of data in the packet payload. In the first stage, selected algorithms are adapted to WSN platforms to meet system requirements for simple parallel distributed computation, distributed storage and data robustness. A set of mobile software agents, acting like an ant colony in securing the WSN, are distributed at the nodes to implement the algorithms. The agents move among the layers involved in the network response to the intrusions at each active node and trustworthy neighborhood, collecting parametric values and executing assigned decision tasks. This minimizes the need to move large amounts of audit-log data through resource-limited nodes and locates routines closer to that data. Performance of the unsupervised algorithms is evaluated against the network intrusions of black hole, flooding, Sybil and other denial-of-service attacks in simulations of published scenarios. Results for scenarios with intentionally malfunctioning sensors show the robustness of the two-stage approach to intrusion anomalies.

  12. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar

    PubMed Central

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun

    2018-01-01

    Complementary metal-oxide-semiconductor (CMOS) radar has recently attracted much research attention because small, low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in a home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. A compressive sensing-based detection algorithm can relax the computational costs by avoiding the use of matched filters and reducing the analog-to-digital converter bandwidth requirement. Orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, its complexity is still very high because the high resolution of human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only reduces complexity by 75% relative to the OMP algorithm but also achieves better positioning performance than OMP, especially in noisy environments. This study also designed and implemented the algorithm using a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support a 256×13 real-time radar image display with a throughput of 28.2 frames per second. PMID:29621170
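    For context, the baseline OMP reconstruction that the two-stage processor improves on can be sketched as sparse recovery from compressive measurements; the random Gaussian sensing matrix and scene size below are assumptions, not the radar's actual measurement model.

    ```python
    # Minimal sketch of OMP-based sparse reconstruction (the baseline discussed above),
    # using a random sensing matrix and a synthetic sparse scene.
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(5)
    n, m, k = 256, 64, 3                       # scene bins, compressive measurements, targets
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = 1.0   # sparse target scene
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # assumed i.i.d. Gaussian sensing matrix
    y = A @ x + 0.01 * rng.normal(size=m)      # noisy compressive measurements

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(A, y)
    print("true bins: ", np.flatnonzero(x))
    print("recovered: ", np.flatnonzero(omp.coef_))
    ```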

  13. Runway Safety Monitor Algorithm for Runway Incursion Detection and Alerting

    NASA Technical Reports Server (NTRS)

    Green, David F., Jr.; Jones, Denise R. (Technical Monitor)

    2002-01-01

    The Runway Safety Monitor (RSM) is an algorithm for runway incursion detection and alerting that was developed in support of NASA's Runway Incursion Prevention System (RIPS) research conducted under the NASA Aviation Safety Program's Synthetic Vision System element. The RSM algorithm provides pilots with enhanced situational awareness and warnings of runway incursions in sufficient time to take evasive action and avoid accidents during landings, takeoffs, or taxiing on the runway. The RSM currently runs as a component of the NASA Integrated Display System, an experimental avionics software system for terminal area and surface operations. However, the RSM algorithm can be implemented as a separate program to run on any aircraft with traffic data link capability. The report documents the RSM software and describes in detail how RSM performs runway incursion detection and alerting functions for NASA RIPS. The report also describes the RIPS flight tests conducted at the Dallas-Ft Worth International Airport (DFW) during September and October of 2000, and the RSM performance results and lessons learned from those flight tests.

  14. Evaluation of hybrids algorithms for mass detection in digitalized mammograms

    NASA Astrophysics Data System (ADS)

    Cordero, José; Garzón Reyes, Johnson

    2011-01-01

    Breast cancer remains a significant public health problem; early detection of lesions can increase the chances of successful medical treatment. Mammography is an effective imaging modality for the early diagnosis of abnormalities, in which the medical image of the mammary gland is obtained with low-dose X-rays. It allows a tumor or circumscribed mass to be detected two to three years before it becomes clinically palpable, and it is the only method that has so far been shown to reduce breast cancer mortality. In this paper, three hybrid algorithms for circumscribed mass detection on digitalized mammograms are evaluated. The first stage corresponds to a review of the enhancement and segmentation techniques used in processing the mammographic images. Shape filtering was then applied to the resulting regions, and the surviving regions were processed by means of a Bayesian filter, where the feature vector for the classifier was constructed from a few measurements. Later, the implemented algorithms were evaluated with ROC curves, using 40 test images: 20 normal images and 20 images with circumscribed lesions. Finally, the advantages and disadvantages of each algorithm in correctly detecting a lesion are discussed.

  15. Operational Performance Analysis of Passive Acoustic Monitoring for Killer Whales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzner, Shari; Fu, Tao; Ren, Huiying

    2011-09-30

    For the planned tidal turbine site in Puget Sound, WA, the main concern is to protect Southern Resident Killer Whales (SRKW) due to their Endangered Species Act status. A passive acoustic monitoring system is proposed because the whales emit vocalizations that can be detected by a passive system. The algorithm for detection is implemented in two stages. The first stage is an energy detector designed to detect candidate signals. The second stage is a spectral classifier that is designed to reduce false alarms. The evaluation presented here of the detection algorithm incorporates behavioral models of the species of interest, environmental models of noise levels and potential false alarm sources to provide a realistic characterization of expected operational performance.

  16. Implementation of Multipattern String Matching Accelerated with GPU for Intrusion Detection System

    NASA Astrophysics Data System (ADS)

    Nehemia, Rangga; Lim, Charles; Galinium, Maulahikmah; Rinaldi Widianto, Ahmad

    2017-04-01

    As Internet-related security threats continue to increase in volume and sophistication, existing Intrusion Detection Systems (IDS) are challenged to cope with current Internet development. A multi-pattern string matching algorithm accelerated with a Graphics Processing Unit (GPU) is utilized to improve the packet scanning performance of the IDS. This paper implements a multi-pattern string matching algorithm, Parallel Failureless Aho-Corasick, accelerated with a GPU to improve IDS performance. The OpenCL library is used to allow the IDS to support various GPUs, including the popular NVIDIA and AMD GPUs used in our research. The experimental results show that multi-pattern string matching on a GPU-accelerated platform provides a speedup of up to 141% in terms of throughput compared to the previous research.

  17. Design and implementation of an intelligent belt system using accelerometer.

    PubMed

    Liu, Botong; Wang, Duo; Li, Sha; Nie, Xuhui; Xu, Shan; Jiao, Bingli; Duan, Xiaohui; Huang, Anpeng

    2015-01-01

    Activity monitoring systems are increasingly used. They help athletes and casual users manage physical activity during daily exercise. In this paper, we use a triaxial accelerometer to design and implement an intelligent belt system that can detect the user's steps and flapping motions. In our system, a wearable intelligent belt is worn on the user's waist to collect acceleration signals during activity. We present a step detection algorithm that detects human steps in real time with high accuracy and low complexity. An Android app is developed to manage the intelligent belt, and we also propose a protocol that guarantees effective and efficient data transmission between the smartphone and the wearable belt. In addition, when a user flaps the belt in an emergency, the smartphone receives an alarm signal sent by the belt and notifies the emergency contact person, which can be very helpful for users in danger. Our experimental results show that the system can detect physical activities with high accuracy (the overall accuracy of our algorithm is above 95%) and has an effective alarm subsystem, which is significant for practical use.
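    A toy sketch of accelerometer-based step detection by peak picking on the acceleration magnitude is shown below; the sampling rate, thresholds, and synthetic waveform are illustrative assumptions and do not reproduce the paper's algorithm.

    ```python
    # Toy sketch of step detection as peak picking on acceleration magnitude; the 50 Hz
    # rate, 10.5 m/s^2 height, and 0.3 s refractory spacing are illustrative assumptions.
    import numpy as np
    from scipy.signal import find_peaks

    fs = 50                                              # Hz, assumed wearable sampling rate
    t = np.arange(0, 10, 1 / fs)
    accel_mag = (9.8 + 1.5 * np.maximum(0, np.sin(2 * np.pi * 2.0 * t))
                 + 0.2 * np.random.default_rng(11).normal(size=t.size))

    # each step produces one dominant peak; enforce a minimum height and spacing
    peaks, _ = find_peaks(accel_mag, height=10.5, distance=int(0.3 * fs))
    print("steps detected in 10 s:", len(peaks))         # ~2 Hz cadence -> about 20 steps
    ```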

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.

    This study discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.

  19. Fast T Wave Detection Calibrated by Clinical Knowledge with Annotation of P and T Waves.

    PubMed

    Elgendi, Mohamed; Eskofier, Bjoern; Abbott, Derek

    2015-07-21

    There are limited studies on the automatic detection of T waves in arrhythmic electrocardiogram (ECG) signals. This is perhaps because there is no available arrhythmia dataset with annotated T waves. There is a growing need to develop numerically-efficient algorithms that can accommodate the new trend of battery-driven ECG devices. Moreover, there is also a need to analyze long-term recorded signals in a reliable and time-efficient manner, therefore improving the diagnostic ability of mobile devices and point-of-care technologies. Here, the T wave annotation of the well-known MIT-BIH arrhythmia database is discussed and provided. Moreover, a simple fast method for detecting T waves is introduced. A typical T wave detection method has been reduced to a basic approach consisting of two moving averages and dynamic thresholds. The dynamic thresholds were calibrated using four clinically known types of sinus node response to atrial premature depolarization (compensation, reset, interpolation, and reentry). The determination of T wave peaks is performed and the proposed algorithm is evaluated on two well-known databases, the QT and MIT-BIH Arrhythmia databases. The detector obtained a sensitivity of 97.14% and a positive predictivity of 99.29% over the first lead of the validation databases (total of 221,186 beats). We present a simple yet very reliable T wave detection algorithm that can be potentially implemented on mobile battery-driven devices. In contrast to complex methods, it can be easily implemented in a digital filter design.
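    The two-moving-averages idea can be sketched as follows: a short average tracks wave-scale energy while a longer average acts as a dynamic threshold, and samples where the short average exceeds the long one form candidate wave blocks. The window lengths and synthetic signal below are illustrative, not the clinically calibrated values from the paper.

    ```python
    # Sketch of the two-moving-averages blocking idea on a synthetic ECG-like signal;
    # window lengths (60 ms and 600 ms) are illustrative, not the calibrated settings.
    import numpy as np

    def moving_average(x, w):
        return np.convolve(x, np.ones(w) / w, mode="same")

    fs = 250
    t = np.arange(0, 10, 1 / fs)
    ecg_like = (np.abs(np.sin(2 * np.pi * 1.0 * t)) ** 8          # sharp QRS-like peaks
                + 0.3 * np.abs(np.sin(2 * np.pi * 1.0 * t + 1.2)) ** 4)  # smaller T-like waves

    short = moving_average(ecg_like, int(0.060 * fs))   # ~ wave duration
    long_ = moving_average(ecg_like, int(0.600 * fs))   # ~ cycle duration, dynamic threshold
    blocks = short > long_                              # candidate wave regions
    onsets = np.flatnonzero(np.diff(blocks.astype(int)) == 1)
    print("candidate wave onsets (s):", np.round(onsets / fs, 2))
    ```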

  20. Detection of kinetic change points in piece-wise linear single molecule motion

    NASA Astrophysics Data System (ADS)

    Hill, Flynn R.; van Oijen, Antoine M.; Duderstadt, Karl E.

    2018-03-01

    Single-molecule approaches present a powerful way to obtain detailed kinetic information at the molecular level. However, the identification of small rate changes is often hindered by the considerable noise present in such single-molecule kinetic data. We present a general method to detect such kinetic change points in trajectories of motion of processive single molecules having Gaussian noise, with a minimum number of parameters and without the need of an assumed kinetic model beyond piece-wise linearity of motion. Kinetic change points are detected using a likelihood ratio test in which the probability of no change is compared to the probability of a change occurring, given the experimental noise. A predetermined confidence interval minimizes the occurrence of false detections. Applying the method recursively to all sub-regions of a single molecule trajectory ensures that all kinetic change points are located. The algorithm presented allows rigorous and quantitative determination of kinetic change points in noisy single molecule observations without the need for filtering or binning, which reduce temporal resolution and obscure dynamics. The statistical framework for the approach and implementation details are discussed. The detection power of the algorithm is assessed using simulations with both single kinetic changes and multiple kinetic changes that typically arise in observations of single-molecule DNA-replication reactions. Implementations of the algorithm are provided in ImageJ plugin format written in Java and in the Julia language for numeric computing, with accompanying Jupyter Notebooks to allow reproduction of the analysis presented here.
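    A condensed sketch of the likelihood-ratio flavor of the test is given below for a single slope change under Gaussian noise: the improvement in residual sum of squares from allowing a breakpoint is scored at every candidate position. The full method's confidence threshold and recursive application are omitted, and the simulated trajectory is an assumption.

    ```python
    # Condensed sketch: score each candidate change point by how much splitting the
    # trajectory into two line fits reduces the residual sum of squares.
    import numpy as np

    def sse_line(t, x):
        A = np.vstack([t, np.ones_like(t)]).T
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        return np.sum((x - A @ coef) ** 2)

    rng = np.random.default_rng(6)
    t = np.arange(200.0)
    x = np.where(t < 120, 1.0 * t, 120.0 + 0.2 * (t - 120)) + rng.normal(0, 2.0, t.size)

    sse0 = sse_line(t, x)                                # single-line (no-change) model
    scores = [sse0 - (sse_line(t[:i], x[:i]) + sse_line(t[i:], x[i:]))
              for i in range(10, len(t) - 10)]           # gain from a change at position i
    print("estimated change point:", 10 + int(np.argmax(scores)))
    ```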

  1. Social Circles Detection from Ego Network and Profile Information

    DTIC Science & Technology

    2014-12-19

    The algorithm used to infer k-clique communities is exponential, which makes this technique unfeasible when treating egonets with a large number of users... ...atic when considering RBMs. This inconvenience was solved by implementing a sparsity treatment with the RBM algorithm. (ii) The ground truth was...

  2. An Energy efficient application specific integrated circuit for electrocardiogram feature detection and its potential for ambulatory cardiovascular disease detection

    PubMed Central

    Bhaumik, Basabi

    2016-01-01

    A novel algorithm based on forward search is developed for real-time electrocardiogram (ECG) signal processing and implemented in an application-specific integrated circuit (ASIC) for QRS-complex-related cardiovascular disease diagnosis. The authors evaluated their algorithm using the MIT-BIH database and achieved a sensitivity of 99.86% and a specificity of 99.93% for QRS complex peak detection. In this Letter, the Physionet PTB diagnostic ECG database is used for QRS-complex-related disease detection. An ASIC for cardiovascular disease detection is fabricated using a 130-nm CMOS high-speed process technology. The area of the ASIC is 0.5 mm². The power dissipation is 1.73 μW at an operating frequency of 1 kHz with a supply voltage of 0.6 V. The output from the ASIC is fed to their Android application, which generates a diagnostic report that can be sent to a cardiologist through email. Their ASIC results show an average failed-detection rate of 0.16% for six-lead data from 290 patients in the PTB diagnostic ECG database. They have also implemented a low-leakage version of their ASIC, which dissipates only 45 pJ with a supply voltage of 0.9 V. The proposed ASIC is well suited for an energy-efficient telemetric cardiovascular disease detection system. PMID:27284458

  3. An Energy efficient application specific integrated circuit for electrocardiogram feature detection and its potential for ambulatory cardiovascular disease detection.

    PubMed

    Jain, Sanjeev Kumar; Bhaumik, Basabi

    2016-03-01

    A novel algorithm based on forward search is developed for real-time electrocardiogram (ECG) signal processing and implemented in an application-specific integrated circuit (ASIC) for QRS-complex-related cardiovascular disease diagnosis. The authors evaluated their algorithm using the MIT-BIH database and achieved a sensitivity of 99.86% and a specificity of 99.93% for QRS complex peak detection. In this Letter, the Physionet PTB diagnostic ECG database is used for QRS-complex-related disease detection. An ASIC for cardiovascular disease detection is fabricated using a 130-nm CMOS high-speed process technology. The area of the ASIC is 0.5 mm². The power dissipation is 1.73 μW at an operating frequency of 1 kHz with a supply voltage of 0.6 V. The output from the ASIC is fed to their Android application, which generates a diagnostic report that can be sent to a cardiologist through email. Their ASIC results show an average failed-detection rate of 0.16% for six-lead data from 290 patients in the PTB diagnostic ECG database. They have also implemented a low-leakage version of their ASIC, which dissipates only 45 pJ with a supply voltage of 0.9 V. The proposed ASIC is well suited for an energy-efficient telemetric cardiovascular disease detection system.

  4. An optimization of the FPGA trigger based on the artificial neural network for a detection of neutrino-origin showers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szadkowski, Zbigniew; Glas, Dariusz; Pytel, Krzysztof

    Observations of ultra-high energy neutrinos have become a priority in experimental astro-particle physics. Up to now, the Pierre Auger Observatory has not found any candidate neutrino event. This imposes competitive limits on the diffuse flux of ultra-high energy neutrinos in the EeV range and above. The very low rate of events potentially generated by neutrinos is a significant challenge for any detection technique and requires both sophisticated algorithms and high-resolution hardware. A trigger based on an artificial neural network was implemented in the Cyclone® V E FPGA 5CEFA9F31I7. The prototype Front-End boards for Auger-Beyond-2015 with the Cyclone® V E can test the neural network algorithm in real pampas conditions in 2015. Showers initiated by muon and tau neutrinos at various altitudes, angles, and energies were simulated in the CORSIKA and Offline platforms, giving patterns of ADC traces in the Auger water Cherenkov detectors. The 3-layer 12-10-1 neural network was trained in MATLAB on simulated ADC traces using the Levenberg-Marquardt algorithm. Results show that the probability of generating ADC traces is very low due to the small neutrino cross-section. Nevertheless, the ADC traces that do occur for 1-10 EeV showers are relatively short and can be analyzed by a 16-point input algorithm. For the 100 EeV range, traces are much longer but have significantly higher amplitudes, which can be detected by standard threshold algorithms. We optimized the coefficients from MATLAB to obtain a maximal range of potentially registered events and adapted them for fixed-point FPGA processing to minimize calculation errors. The currently used Front-End boards, based on no-longer-produced ACEX® PLDs and obsolete Cyclone® FPGAs, allow the implementation of only relatively simple threshold algorithms for triggers. A new, sophisticated trigger implemented in Cyclone® V E FPGAs, with a large number of DSP blocks and embedded memory running at 120-160 MHz sampling, may help discover neutrino events in the Pierre Auger Observatory. (authors)

  5. Performance Analysis of a Hardware Implemented Complex Signal Kurtosis Radio-Frequency Interference Detector

    NASA Technical Reports Server (NTRS)

    Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark

    2016-01-01

    Radio-frequency interference (RFI) is a known problem for passive remote sensing, as evidenced in the L-band radiometers SMOS, Aquarius and, more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements. This was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies, larger bandwidths are also desirable for lower measurement noise, further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also includes various test equipment used to reproduce typical signals that a radiometer may see, including those with and without RFI. The testing environment permits quick evaluation of RFI mitigation algorithms as well as showing that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector, which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, the performance of the complex signal kurtosis and the real signal kurtosis detectors is compared. Performance evaluations and comparisons, in both simulation and experimental hardware implementations, were done using receiver operating characteristic (ROC) curves.
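    The kurtosis idea itself is easy to demonstrate numerically: for complex Gaussian (RFI-free) samples a normalized fourth-moment statistic sits near a known value, and sinusoidal interference pulls it away from that value. The sketch below is such a demonstration only; the statistic normalization and the flagging threshold are assumptions, not the SMAP or ROACH implementation.

    ```python
    # Numerical sketch of kurtosis-style RFI flagging on complex baseband samples; the
    # statistic equals ~2 for complex Gaussian noise and drops toward 1 for a sinusoid.
    import numpy as np

    def complex_kurtosis(z):
        p = np.abs(z) ** 2
        return p.size * np.sum(p ** 2) / np.sum(p) ** 2   # ~2 for complex Gaussian noise

    rng = np.random.default_rng(7)
    n = 4096
    noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    rfi = noise + 1.0 * np.exp(2j * np.pi * 0.12 * np.arange(n))   # add a sinusoidal interferer

    for name, z in [("noise only", noise), ("noise + RFI", rfi)]:
        k = complex_kurtosis(z)
        flag = abs(k - 2.0) > 0.1                         # assumed flagging threshold
        print(name, "kurtosis = %.3f ->" % k, "flag" if flag else "clean")
    ```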

  6. Development and validation of a machine learning algorithm and hybrid system to predict the need for life-saving interventions in trauma patients.

    PubMed

    Liu, Nehemiah T; Holcomb, John B; Wade, Charles E; Batchinsky, Andriy I; Cancio, Leopoldo C; Darrah, Mark I; Salinas, José

    2014-02-01

    Accurate and effective diagnosis of actual injury severity can be problematic in trauma patients. Inherent physiologic compensatory mechanisms may prevent accurate diagnosis and mask true severity in many circumstances. The objective of this project was the development and validation of a multiparameter machine learning algorithm and system capable of predicting the need for life-saving interventions (LSIs) in trauma patients. Statistics based on means, slopes, and maxima of various vital sign measurements corresponding to 79 trauma patient records generated over 110,000 feature sets, which were used to develop, train, and implement the system. Comparisons among several machine learning models proved that a multilayer perceptron would best implement the algorithm in a hybrid system consisting of a machine learning component and basic detection rules. Additionally, 295,994 feature sets from 82 h of trauma patient data showed that the system can obtain 89.8 % accuracy within 5 min of recorded LSIs. Use of machine learning technologies combined with basic detection rules provides a potential approach for accurately assessing the need for LSIs in trauma patients. The performance of this system demonstrates that machine learning technology can be implemented in a real-time fashion and potentially used in a critical care environment.

  7. Detection of insect damage in almonds

    NASA Astrophysics Data System (ADS)

    Kim, Soowon; Schatzki, Thomas F.

    1999-01-01

    Pinhole insect damage in natural almonds is very difficult to detect on-line. Further, evidence exists relating insect damage to aflatoxin contamination. Hence, for quality and health reasons, methods to detect and remove such damaged nuts are of great importance. In this study, we explored the possibility of using x-ray imaging to detect pinhole damage in almonds caused by insects. X-ray film images of about 2000 almonds and x-ray linescan images of only 522 pinhole-damaged almonds were obtained. The pinhole-damaged region appeared slightly darker than the non-damaged region in x-ray negative images. A machine recognition algorithm was developed to detect these darker regions. The algorithm used first-order and second-order information to identify the damaged region. To reduce the possibility of false positives due to the germ region in high-resolution images, germ detection and removal routines were also included. With film images, the algorithm showed approximately an 81 percent correct recognition ratio with only 1 percent false positives, whereas on linescan images it correctly recognized 65 percent of pinholes with about 9 percent false positives. The algorithm was very fast and efficient, requiring only minimal computation time. If implemented on-line, the theoretical throughput of this recognition system would be 66 nuts/second.

  8. Comparison of spike-sorting algorithms for future hardware implementation.

    PubMed

    Gibson, Sarah; Judy, Jack W; Markovic, Dejan

    2008-01-01

    Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being most robust over noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
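    A quick sketch of the nonlinear energy operator, psi[n] = x[n]^2 - x[n-1]*x[n+1], applied to synthetic extracellular data is shown below; the spike waveform, smoothing window, and the "8 x mean" threshold are common heuristics assumed for illustration, not the exact settings of the cited comparison.

    ```python
    # Sketch of spike detection with a smoothed nonlinear energy operator (NEO);
    # the injected wavelet-like spikes and the threshold scaling are assumptions.
    import numpy as np

    rng = np.random.default_rng(8)
    fs = 24000
    x = rng.normal(0, 1, fs)                                   # 1 s of background noise
    spike_starts = rng.choice(fs - 20, size=20, replace=False)
    for s in spike_starts:                                     # inject short, sharp spikes
        x[s:s + 10] += 8 * np.hanning(10) * np.sin(np.linspace(0, 2 * np.pi, 10))

    psi = x[1:-1] ** 2 - x[:-2] * x[2:]                        # NEO emphasizes sharp transients
    w = np.hamming(9); w /= w.sum()
    psi_s = np.convolve(psi, w, mode="same")                   # smoothing suppresses noise spikes
    threshold = 8.0 * psi_s.mean()                             # heuristic data-driven threshold

    above = np.flatnonzero(psi_s > threshold)
    events = 0 if above.size == 0 else 1 + int(np.sum(np.diff(above) > 20))
    print("injected spikes:", len(spike_starts), " detected events:", events)
    ```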

  9. Tele-operated search robot for human detection using histogram of oriented objects

    NASA Astrophysics Data System (ADS)

    Cruz, Febus Reidj G.; Avendaño, Glenn O.; Manlises, Cyrel O.; Avellanosa, James Jason G.; Abina, Jyacinth Camille F.; Masaquel, Albert M.; Siapno, Michael Lance O.; Chung, Wen-Yaw

    2017-02-01

    Disasters such as typhoons, tornadoes, and earthquakes are inevitable, and their aftermath often includes missing people. Using robots with human detection capabilities to locate missing people can dramatically reduce the harm and risk to those who work in such circumstances. This study aims to design and build a tele-operated robot; implement in MATLAB an algorithm for the detection of humans; and create a database of human identification based on various positions, angles, light intensities, and distances from which humans can be identified. Different light intensities were produced using Photoshop to simulate smoke, dust, and water droplet conditions. After processing the image, the system indicates whether a human is detected or not. Testing with covered bodies was also conducted to assess the algorithm's robustness. Based on the results, the algorithm can detect humans with the full body shown. For upright and lying positions, detection is possible from 8 feet to 20 feet. For the sitting position, detection is possible from 2 feet to 20 feet, with slight variations in results because of different lighting conditions. At distances greater than 20 feet, no humans can be detected, or false negatives occur. For covered bodies, the algorithm can still detect humans under the given test circumstances. In all three positions, humans can be detected from 0 degrees to 180 degrees under normal, smoke, dust, and water droplet conditions. This study designed and built a tele-operated robot with a MATLAB algorithm that can detect humans with an overall precision of 88.30%, and a database was created for human identification under various conditions.

  10. Fusion of multiple quadratic penalty function support vector machines (QPFSVM) for automated sea mine detection and classification

    NASA Astrophysics Data System (ADS)

    Dobeck, Gerald J.; Cobb, J. Tory

    2002-08-01

    The high-resolution sonar is one of the principal sensors used by the Navy to detect and classify sea mines in minehunting operations. For such sonar systems, substantial effort has been devoted to the development of automated detection and classification (D/C) algorithms. These have been spurred by several factors including (1) aids for operators to reduce work overload, (2) more optimal use of all available data, and (3) the introduction of unmanned minehunting systems. The environments where sea mines are typically laid (harbor areas, shipping lanes, and the littorals) give rise to many false alarms caused by natural, biologic, and man-made clutter. The objective of the automated D/C algorithms is to eliminate most of these false alarms while still maintaining a very high probability of mine detection and classification (PdPc). In recent years, the benefits of fusing the outputs of multiple D/C algorithms have been studied. We refer to this as Algorithm Fusion. The results have been remarkable, including reliable robustness to new environments. The Quadratic Penalty Function Support Vector Machine (QPFSVM) algorithm to aid in the automated detection and classification of sea mines is introduced in this paper. The QPFSVM algorithm is easy to train, simple to implement, and robust to feature space dimension. Outputs of successive SVM algorithms are cascaded in stages (fused) to improve the Probability of Classification (Pc) and reduce the number of false alarms. Even though our experience has been gained in the area of sea mine detection and classification, the principles described herein are general and can be applied to fusion of any D/C problem (e.g., automated medical diagnosis or automatic target recognition for ballistic missile defense).

  11. Evaluating and implementing temporal, spatial, and spatio-temporal methods for outbreak detection in a local syndromic surveillance system

    PubMed Central

    Lall, Ramona; Levin-Rector, Alison; Sell, Jessica; Paladini, Marc; Konty, Kevin J.; Olson, Don; Weiss, Don

    2017-01-01

    The New York City Department of Health and Mental Hygiene has operated an emergency department syndromic surveillance system since 2001, using temporal and spatial scan statistics run on a daily basis for cluster detection. Since the system was originally implemented, a number of new methods have been proposed for use in cluster detection. We evaluated six temporal and four spatial/spatio-temporal detection methods using syndromic surveillance data spiked with simulated injections. The algorithms were compared on several metrics, including sensitivity, specificity, positive predictive value, coherence, and timeliness. We also evaluated each method’s implementation, programming time, run time, and the ease of use. Among the temporal methods, at a set specificity of 95%, a Holt-Winters exponential smoother performed the best, detecting 19% of the simulated injects across all shapes and sizes, followed by an autoregressive moving average model (16%), a generalized linear model (15%), a modified version of the Early Aberration Reporting System’s C2 algorithm (13%), a temporal scan statistic (11%), and a cumulative sum control chart (<2%). Of the spatial/spatio-temporal methods we tested, a spatial scan statistic detected 3% of all injects, a Bayes regression found 2%, and a generalized linear mixed model and a space-time permutation scan statistic detected none at a specificity of 95%. Positive predictive value was low (<7%) for all methods. Overall, the detection methods we tested did not perform well in identifying the temporal and spatial clusters of cases in the inject dataset. The spatial scan statistic, our current method for spatial cluster detection, performed slightly better than the other tested methods across different inject magnitudes and types. Furthermore, we found the scan statistics, as applied in the SaTScan software package, to be the easiest to program and implement for daily data analysis. PMID:28886112
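    The general recipe behind the best-performing temporal method can be sketched as fitting a Holt-Winters smoother to the recent series and flagging days whose counts far exceed the one-step forecast; the statsmodels smoother, the weekly seasonality, and the three-residual-SD alarm rule below are assumptions for illustration, not the evaluated system's configuration.

    ```python
    # Sketch of Holt-Winters-based aberration detection on synthetic daily syndrome counts;
    # the 3-SD alarm rule and weekly seasonality are illustrative choices.
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(10)
    days = 120
    baseline = 50 + 10 * np.sin(2 * np.pi * np.arange(days) / 7)   # weekly pattern
    counts = rng.poisson(baseline).astype(float)
    counts[-1] += 40                                               # simulated outbreak spike

    fit = ExponentialSmoothing(counts[:-1], trend="add",
                               seasonal="add", seasonal_periods=7).fit()
    forecast = fit.forecast(1)[0]
    resid_sd = np.std(fit.resid)
    alarm = counts[-1] > forecast + 3 * resid_sd
    print("observed %.0f, expected %.1f -> alarm: %s" % (counts[-1], forecast, alarm))
    ```

    In practice the alarm threshold would be calibrated to a target specificity, analogous to the 95% specificity operating point used in the evaluation above.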

  12. Evaluating and implementing temporal, spatial, and spatio-temporal methods for outbreak detection in a local syndromic surveillance system.

    PubMed

    Mathes, Robert W; Lall, Ramona; Levin-Rector, Alison; Sell, Jessica; Paladini, Marc; Konty, Kevin J; Olson, Don; Weiss, Don

    2017-01-01

    The New York City Department of Health and Mental Hygiene has operated an emergency department syndromic surveillance system since 2001, using temporal and spatial scan statistics run on a daily basis for cluster detection. Since the system was originally implemented, a number of new methods have been proposed for use in cluster detection. We evaluated six temporal and four spatial/spatio-temporal detection methods using syndromic surveillance data spiked with simulated injections. The algorithms were compared on several metrics, including sensitivity, specificity, positive predictive value, coherence, and timeliness. We also evaluated each method's implementation, programming time, run time, and the ease of use. Among the temporal methods, at a set specificity of 95%, a Holt-Winters exponential smoother performed the best, detecting 19% of the simulated injects across all shapes and sizes, followed by an autoregressive moving average model (16%), a generalized linear model (15%), a modified version of the Early Aberration Reporting System's C2 algorithm (13%), a temporal scan statistic (11%), and a cumulative sum control chart (<2%). Of the spatial/spatio-temporal methods we tested, a spatial scan statistic detected 3% of all injects, a Bayes regression found 2%, and a generalized linear mixed model and a space-time permutation scan statistic detected none at a specificity of 95%. Positive predictive value was low (<7%) for all methods. Overall, the detection methods we tested did not perform well in identifying the temporal and spatial clusters of cases in the inject dataset. The spatial scan statistic, our current method for spatial cluster detection, performed slightly better than the other tested methods across different inject magnitudes and types. Furthermore, we found the scan statistics, as applied in the SaTScan software package, to be the easiest to program and implement for daily data analysis.

  13. Adaptive DFT-Based Interferometer Fringe Tracking

    NASA Astrophysics Data System (ADS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) Observatory at Mount Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier-transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on offline data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse. One example of such an application might be to the field of thin-film measurement by ellipsometry, using a broadband light source and a Fourier-transform spectrometer to detect the resulting fringe patterns.

  14. Adaptive DFT-Based Interferometer Fringe Tracking

    NASA Astrophysics Data System (ADS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2005-12-01

    An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) Observatory at Mount Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier-transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on offline data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse. One example of such an application might be to the field of thin-film measurement by ellipsometry, using a broadband light source and a Fourier-transform spectrometer to detect the resulting fringe patterns.

  15. A synthetic genetic edge detection program.

    PubMed

    Tabor, Jeffrey J; Salis, Howard M; Simpson, Zachary Booth; Chevalier, Aaron A; Levskaya, Anselm; Marcotte, Edward M; Voigt, Christopher A; Ellington, Andrew D

    2009-06-26

    Edge detection is a signal processing algorithm common in artificial intelligence and image recognition programs. We have constructed a genetically encoded edge detection algorithm that programs an isogenic community of E. coli to sense an image of light, communicate to identify the light-dark edges, and visually present the result of the computation. The algorithm is implemented using multiple genetic circuits. An engineered light sensor enables cells to distinguish between light and dark regions. In the dark, cells produce a diffusible chemical signal that diffuses into light regions. Genetic logic gates are used so that only cells that sense light and the diffusible signal produce a positive output. A mathematical model constructed from first principles and parameterized with experimental measurements of the component circuits predicts the performance of the complete program. Quantitatively accurate models will facilitate the engineering of more complex biological behaviors and inform bottom-up studies of natural genetic regulatory networks.

  16. Support vector machine as a binary classifier for automated object detection in remotely sensed data

    NASA Astrophysics Data System (ADS)

    Wardaya, P. D.

    2014-02-01

    In the present paper, the author proposes the application of the Support Vector Machine (SVM) to the analysis of satellite imagery. One of the advantages of SVM is that, with limited training data, it may generate comparable or even better results than other methods. The SVM algorithm is used for automated object detection and characterization. Specifically, the SVM is applied in its basic form as a binary classifier, distinguishing two classes, namely object and background. The algorithm aims at effectively detecting an object against its background with minimal training data. A synthetic image containing noise is used for algorithm testing. Furthermore, the method is applied to remote sensing image analysis tasks such as identification of island vegetation, water bodies, and oil spills from satellite imagery. The results indicate that SVM provides fast and accurate analysis with acceptable results.
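    A minimal sketch of SVM as an object/background binary classifier on per-pixel features is given below, assuming scikit-learn and synthetic three-band pixel values as stand-ins for the satellite imagery.

    ```python
    # Minimal sketch of SVM as a binary object/background classifier on per-pixel features;
    # the three "bands", class means, and kernel settings are synthetic assumptions.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(9)
    background = rng.normal(loc=[0.2, 0.3, 0.4], scale=0.05, size=(200, 3))  # e.g., water pixels
    objects = rng.normal(loc=[0.5, 0.6, 0.3], scale=0.05, size=(40, 3))      # e.g., vegetation
    X = np.vstack([background, objects])
    y = np.r_[np.zeros(200), np.ones(40)]

    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)                 # limited training data
    image = rng.normal(loc=[0.2, 0.3, 0.4], scale=0.05, size=(64 * 64, 3))
    image[:50] = rng.normal(loc=[0.5, 0.6, 0.3], scale=0.05, size=(50, 3))   # embed a small object
    mask = clf.predict(image).reshape(64, 64)
    print("pixels flagged as object:", int(mask.sum()))
    ```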

  17. A Synthetic Genetic Edge Detection Program

    PubMed Central

    Tabor, Jeffrey J.; Salis, Howard; Simpson, Zachary B.; Chevalier, Aaron A.; Levskaya, Anselm; Marcotte, Edward M.; Voigt, Christopher A.; Ellington, Andrew D.

    2009-01-01

    Edge detection is a signal processing algorithm common in artificial intelligence and image recognition programs. We have constructed a genetically encoded edge detection algorithm that programs an isogenic community of E. coli to sense an image of light, communicate to identify the light-dark edges, and visually present the result of the computation. The algorithm is implemented using multiple genetic circuits. An engineered light sensor enables cells to distinguish between light and dark regions. In the dark, cells produce a diffusible chemical signal that diffuses into light regions. Genetic logic gates are used so that only cells that sense light and the diffusible signal produce a positive output. A mathematical model constructed from first principles and parameterized with experimental measurements of the component circuits predicts the performance of the complete program. Quantitatively accurate models will facilitate the engineering of more complex biological behaviors and inform bottom-up studies of natural genetic regulatory networks. PMID:19563759

  18. Automatic identification of epileptic seizures from EEG signals using linear programming boosting.

    PubMed

    Hassan, Ahnaf Rashik; Subasi, Abdulhamit

    2016-11-01

    Computerized epileptic seizure detection is essential for expediting epilepsy diagnosis and research and for assisting medical professionals. Moreover, the implementation of an epilepsy monitoring device that is portable and has low power consumption requires a reliable and successful seizure detection scheme. In this work, the problem of automated epileptic seizure detection using single-channel EEG signals is addressed. First, segments of EEG signals are decomposed using a newly proposed signal processing scheme, namely complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN). Six spectral moments are extracted from the CEEMDAN mode functions, and training and test matrices are formed afterward. These matrices are fed into the classifier to identify epileptic seizures from EEG signal segments. In this work, we implement an ensemble-learning-based machine learning algorithm, namely linear programming boosting (LPBoost), to perform classification. The efficacy of spectral features in the CEEMDAN domain is validated by graphical and statistical analyses. The performance of CEEMDAN is compared to those of its predecessors to further inspect its suitability. The effectiveness and appropriateness of LPBoost are demonstrated against commonly used classification models. Resubstitution and 10-fold cross-validation error analyses confirm the superior performance of the proposed scheme. The algorithmic performance of our epilepsy seizure identification scheme is also evaluated against state-of-the-art works in the literature. Experimental outcomes show that the proposed seizure detection scheme performs better than existing works in terms of accuracy, sensitivity, specificity, and Cohen's kappa coefficient. It can be anticipated that, owing to its use of only one channel of EEG, the proposed method will be suitable for device implementation, relieve clinicians of the burden of manually analyzing large volumes of data, and expedite epilepsy diagnosis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
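
    Neither CEEMDAN nor LPBoost is reproduced here; as a loose illustration of the "spectral moments as features, boosted classifier" pipeline, the snippet below computes the first few spectral moments of a segment and feeds such features to a generic boosting classifier from scikit-learn (gradient boosting standing in for LPBoost, raw segments standing in for CEEMDAN mode functions; all signal parameters are assumptions).

    ```python
    import numpy as np
    from scipy.signal import periodogram
    from sklearn.ensemble import GradientBoostingClassifier

    def spectral_moments(segment, fs=173.61, n_moments=6):
        """Return the first n_moments spectral moments of one signal segment."""
        f, pxx = periodogram(segment, fs=fs)
        pxx = pxx / pxx.sum()                      # normalise to a distribution
        return np.array([np.sum((f ** k) * pxx) for k in range(1, n_moments + 1)])

    rng = np.random.default_rng(1)
    fs, n = 173.61, 1024                           # assumed sampling rate / length

    # Toy surrogate data: "seizure" segments carry a stronger low-frequency rhythm.
    def make_segment(seizure):
        t = np.arange(n) / fs
        rhythm = (3.0 if seizure else 0.5) * np.sin(2 * np.pi * 4.0 * t)
        return rhythm + rng.normal(scale=1.0, size=n)

    X = np.array([spectral_moments(make_segment(s)) for s in [0, 1] * 100])
    y = np.array([0, 1] * 100)

    clf = GradientBoostingClassifier().fit(X[:150], y[:150])
    print("held-out accuracy:", clf.score(X[150:], y[150:]))
    ```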

  19. Astrophysical data mining with GPU. A case study: Genetic classification of globular clusters

    NASA Astrophysics Data System (ADS)

    Cavuoti, S.; Garofalo, M.; Brescia, M.; Paolillo, M.; Pescapè, A.; Longo, G.; Ventre, G.

    2014-01-01

    We present a multi-purpose genetic algorithm, designed and implemented with GPGPU/CUDA parallel computing technology. The model was derived from our CPU serial implementation, named GAME (Genetic Algorithm Model Experiment). It was successfully tested and validated on the detection of candidate Globular Clusters in deep, wide-field, single band HST images. The GPU version of GAME will be made available to the community by integrating it into the web application DAMEWARE (DAta Mining Web Application REsource, http://dame.dsf.unina.it/beta_info.html), a public data mining service specialized on massive astrophysical data. Since genetic algorithms are inherently parallel, the GPGPU computing paradigm leads to a speedup of a factor of 200× in the training phase with respect to the CPU based version.
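
    GAME itself is not reproduced here; the sketch below is a generic, serial genetic algorithm (tournament selection, one-point crossover, Gaussian mutation) minimising a toy objective, just to make explicit the population-evaluation step that is "inherently parallel" and therefore maps well to GPUs. All parameter values are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def fitness(pop):
        # Toy objective: minimise the squared distance to a hidden target vector.
        target = np.linspace(-1.0, 1.0, pop.shape[1])
        return -np.sum((pop - target) ** 2, axis=1)   # higher is better

    def evolve(pop_size=64, genes=16, generations=200, mut_rate=0.05):
        pop = rng.uniform(-2.0, 2.0, size=(pop_size, genes))
        for _ in range(generations):
            fit = fitness(pop)                          # embarrassingly parallel step
            # Tournament selection: keep the better of two random individuals.
            a, b = rng.integers(0, pop_size, (2, pop_size))
            parents = pop[np.where(fit[a] > fit[b], a, b)]
            # One-point crossover between consecutive parents.
            cut = rng.integers(1, genes, pop_size)
            mask = np.arange(genes) < cut[:, None]
            children = np.where(mask, parents, np.roll(parents, 1, axis=0))
            # Mutation: small Gaussian perturbation on a few genes.
            mutate = rng.random(children.shape) < mut_rate
            children = children + mutate * rng.normal(0, 0.1, children.shape)
            pop = children
        return pop[np.argmax(fitness(pop))]

    print("best individual:", np.round(evolve(), 2))
    ```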

  20. Network control processor for a TDMA system

    NASA Astrophysics Data System (ADS)

    Suryadevara, Omkarmurthy; Debettencourt, Thomas J.; Shulman, R. B.

    Two unique aspects of designing a network control processor (NCP) to monitor and control a demand-assigned, time-division multiple-access (TDMA) network are described. The first involves the implementation of redundancy by synchronizing the databases of two geographically remote NCPs. The two sets of databases are kept in synchronization by collecting data on both systems, transferring databases, sending incremental updates, and the parallel updating of databases. A periodic audit compares the checksums of the databases to ensure synchronization. The second aspect involves the use of a tracking algorithm to dynamically reallocate TDMA frame space. This algorithm detects and tracks current and long-term load changes in the network. When some portions of the network are overloaded while others have excess capacity, the algorithm automatically calculates and implements a new burst time plan.

  1. Assessing and validating RST-FIRES on MSG-SEVIRI data by means a Total Validation Approach (TVA).

    NASA Astrophysics Data System (ADS)

    Filizzola, Carolina; Corrado, Rosita; Marchese, Francesco; Mazzeo, Giuseppe; Paciello, Rossana; Pergola, Nicola; Tramutoli, Valerio

    2015-04-01

    Several fire detection methods have been developed over the years for detecting forest fires from space. These algorithms (which may be grouped into single-channel, multichannel, and contextual algorithms) are generally based on the use of fixed thresholds that, being intrinsically exposed to false alarm proliferation, are often used in a conservative way. As a consequence, most satellite-based algorithms for fire detection show low sensitivity, making them unsuitable for operational contexts. In this work, the RST-FIRES algorithm, which is based on an original multi-temporal scheme of satellite data analysis (RST-Robust Satellite Techniques), is presented. The implementation of RST-FIRES on data provided by the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) onboard Meteosat Second Generation (MSG), which offers the best revisit time (i.e., 15 minutes) and can therefore be used to detect fires at an early stage, is described here. Moreover, results of a Total Validation Approach (TVA) carried out in both Northern and Southern Italy, in collaboration with local and regional civil protection agencies, are also reported. In particular, the TVA allowed us to assess RST-FIRES detections by means of ground checks and aerial surveys, demonstrating the good performance offered by RST-FIRES using MSG-SEVIRI data. Indeed, the algorithm was capable of detecting several fires that, because of their features (e.g., small size, short duration), would not have appeared in the official reports, highlighting a significant improvement in sensitivity compared with other established satellite-based fire detection techniques while still preserving a high confidence level of detection.
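
    RST-FIRES itself is not described in enough detail to reproduce; the sketch below only illustrates the general multi-temporal RST idea of flagging a pixel whose current brightness temperature deviates strongly from its historical mean relative to its historical variability. The array shapes, the threshold, and the synthetic values are all assumptions.

    ```python
    import numpy as np

    def rst_like_index(current, history):
        """Change index of the RST family: (value - temporal mean) / temporal std.

        current : 2-D array, current brightness-temperature image
        history : 3-D array (time, rows, cols) of cloud-free historical images
                  for the same location, month and acquisition time
        """
        mu = history.mean(axis=0)
        sigma = history.std(axis=0) + 1e-6          # avoid division by zero
        return (current - mu) / sigma

    # Toy example: a 50x50 scene with one hot anomaly in the current image.
    rng = np.random.default_rng(0)
    history = 300.0 + rng.normal(0, 2.0, size=(60, 50, 50))   # historical time slots
    current = 300.0 + rng.normal(0, 2.0, size=(50, 50))
    current[20, 30] += 25.0                                    # simulated fire pixel

    index = rst_like_index(current, history)
    threshold = 4.0                                            # assumed threshold
    print("candidate fire pixels:", np.argwhere(index > threshold))
    ```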

  2. Algorithm-Based Fault Tolerance for Numerical Subroutines

    NASA Technical Reports Server (NTRS)

    Tumon, Michael; Granat, Robert; Lou, John

    2007-01-01

    A software library implements a new methodology of detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detections independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, this library was shown to enable detection of 99.9 percent of significant faults while generating no false alarms.
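
    As a toy illustration of the checksum idea behind ABFT (not the middleware described above, and without its normalization methods), the snippet below augments a matrix product with row and column checksums and flags a fault when the stored checksums disagree with ones recomputed from the result beyond a tolerance.

    ```python
    import numpy as np

    def matmul_with_checksums(A, B):
        """Return the checksum-extended product used in ABFT matrix multiplication."""
        Ac = np.vstack([A, A.sum(axis=0)])                   # append a column-checksum row
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])    # append a row-checksum column
        return Ac @ Br                                       # checksums propagate through @

    def verify(C_ext, tol=1e-8):
        """Compare stored checksums against ones recomputed from the result block."""
        C = C_ext[:-1, :-1]
        row_err = np.abs(C_ext[-1, :-1] - C.sum(axis=0)).max()
        col_err = np.abs(C_ext[:-1, -1] - C.sum(axis=1)).max()
        return max(row_err, col_err) <= tol * max(np.abs(C).max(), 1.0)

    A, B = np.random.rand(8, 8), np.random.rand(8, 8)
    C_ext = matmul_with_checksums(A, B)
    print("clean result passes:", verify(C_ext))

    C_ext[3, 4] += 1e-3            # simulate a single-event upset in one element
    print("corrupted result passes:", verify(C_ext))
    ```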

  3. Automated High-Speed Video Detection of Small-Scale Explosives Testing

    NASA Astrophysics Data System (ADS)

    Ford, Robert; Guymon, Clint

    2013-06-01

    Small-scale explosives sensitivity test data is used to evaluate hazards of processing, handling, transportation, and storage of energetic materials. Accurate test data is critical to implementation of engineering and administrative controls for personnel safety and asset protection. Operator mischaracterization of reactions during testing contributes to either excessive or inadequate safety protocols. Use of equipment and associated algorithms to aid the operator in reaction determination can significantly reduce operator error. Safety Management Services, Inc. has developed an algorithm to evaluate high-speed video images of sparks from an ESD (Electrostatic Discharge) machine to automatically determine whether or not a reaction has taken place. The algorithm with the high-speed camera is termed GoDetect (patent pending). An operator assisted version for friction and impact testing has also been developed where software is used to quickly process and store video of sensitivity testing. We have used this method for sensitivity testing with multiple pieces of equipment. We present the fundamentals of GoDetect and compare it to other methods used for reaction detection.

  4. A Study on the Model of Detecting the Variation of Geomagnetic Intensity Based on an Adapted Motion Strategy.

    PubMed

    Li, Hong; Liu, Mingyong; Liu, Kun; Zhang, Feihu

    2017-12-25

    By simulating the geomagnetic fields and analyzing the variation of intensities, this paper presents a model for calculating the objective function of an Autonomous Underwater Vehicle (AUV) geomagnetic navigation task. By investigating biologically inspired strategies, the AUV successfully reaches the destination during geomagnetic navigation without using an a priori geomagnetic map. Similar to the pattern of a flatworm, the proposed algorithm relies on a motion pattern to trigger a local searching strategy by detecting the real-time geomagnetic intensity. An adapted strategy is then implemented, which is biased toward the specific target. The results show the reliability and effectiveness of the proposed algorithm.

  5. B-spline based image tracking by detection

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman

    2016-05-01

    Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-spline based image tracker is implemented. The novel method models the background and foreground using the B-spline method, followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.

  6. Improving staff response to seizures on the epilepsy monitoring unit with online EEG seizure detection algorithms.

    PubMed

    Rommens, Nicole; Geertsema, Evelien; Jansen Holleboom, Lisanne; Cox, Fieke; Visser, Gerhard

    2018-05-11

    User safety and the quality of diagnostics on the epilepsy monitoring unit (EMU) depend on the reaction to seizures. Online seizure detection might improve this. While good sensitivity and specificity are reported, the added value above staff response is unclear. We ascertained the added value of two electroencephalograph (EEG) seizure detection algorithms in terms of additional detected seizures or faster detection time. EEG-video seizure recordings of people admitted to an EMU over one year were included, with a maximum of two seizures per subject. All recordings were retrospectively analyzed using Encevis EpiScan and BESA Epilepsy. Detection sensitivity and latency of the algorithms were compared to staff responses. False positive rates were estimated on 30 uninterrupted recordings (roughly 24 h per subject) of consecutive subjects admitted to the EMU. The EEG-video recordings included 188 seizures. The response rate of staff was 67%, of Encevis 67%, and of BESA Epilepsy 65%. Of the 62 seizures missed by staff, 66% were recognized by Encevis and 39% by BESA Epilepsy. The median latency was 31 s (staff), 10 s (Encevis), and 14 s (BESA Epilepsy). After correcting for walking time from the observation room to the subject, both algorithms detected faster than staff in 65% of detected seizures. The full recordings included 617 h of EEG. Encevis had a median false positive rate of 4.9 per 24 h and BESA Epilepsy of 2.1 per 24 h. EEG-video seizure detection algorithms may improve the reaction to seizures by increasing the total number of seizures detected and the speed of detection. The false positive rate is feasible for use in a clinical setting. Implementation of these algorithms might result in faster diagnostic testing and better observation during seizures. Copyright © 2018. Published by Elsevier Inc.

  7. A Robust Step Detection Algorithm and Walking Distance Estimation Based on Daily Wrist Activity Recognition Using a Smart Band.

    PubMed

    Trong Bui, Duong; Nguyen, Nhan Duc; Jeong, Gu-Min

    2018-06-25

    Human activity recognition and pedestrian dead reckoning are interesting fields because of their important utility in daily healthcare. Currently, these fields face many challenges, one of which is the lack of a robust algorithm with high performance. This paper proposes a new method to implement a robust step detection and adaptive distance estimation algorithm based on the classification of five daily wrist activities during walking at various speeds using a smart band. The key idea is that the non-parametric adaptive distance estimator is performed after two activity classifiers and a robust step detector. In this study, two classifiers perform two phases of recognizing five wrist activities during walking. Then, a robust step detection algorithm, which is integrated with an adaptive threshold and a peak and valley correction algorithm, is applied to the classified activities to detect the walking steps. In addition, the misclassified activities are fed back to the previous layer. Finally, three adaptive distance estimators, which are based on a non-parametric model of the average walking speed, calculate the length of each stride. The experimental results show that the average classification accuracy is about 99%, and the accuracy of the step detection is 98.7%. The error of the estimated distance is 2.2-4.2% depending on the type of wrist activity.
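
    The paper's full pipeline (activity classifiers, peak/valley correction, non-parametric distance model) is not reproduced; as a minimal sketch of the adaptive-threshold step detector component only, the snippet below counts peaks in the acceleration magnitude whose height exceeds a threshold adapted to the recent signal spread. All parameter values are assumptions.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def count_steps(acc_mag, fs=50.0):
        """Count walking steps from an accelerometer magnitude signal.

        acc_mag : 1-D array of |acceleration| samples (gravity included)
        fs      : sample rate in Hz (assumed value)
        """
        signal = acc_mag - np.mean(acc_mag)              # remove the gravity offset
        # Adaptive threshold: a fraction of the recent signal spread.
        threshold = 0.5 * np.std(signal)
        # Steps during normal walking are at most ~3 Hz, so enforce a minimum gap.
        peaks, _ = find_peaks(signal, height=threshold, distance=int(fs / 3))
        return len(peaks)

    # Synthetic walk: ~2 steps per second for 10 s plus wrist-motion noise.
    fs = 50.0
    t = np.arange(0, 10, 1 / fs)
    acc = 9.81 + 2.0 * np.maximum(np.sin(2 * np.pi * 2.0 * t), 0) \
          + 0.3 * np.random.randn(t.size)
    print("steps detected:", count_steps(acc, fs))
    ```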

  8. Syndromic surveillance using veterinary laboratory data: data pre-processing and algorithm performance evaluation

    PubMed Central

    Dórea, Fernanda C.; McEwen, Beverly J.; McNab, W. Bruce; Revie, Crawford W.; Sanchez, Javier

    2013-01-01

    Diagnostic test orders to an animal laboratory were explored as a data source for monitoring trends in the incidence of clinical syndromes in cattle. Four years of real data and over 200 simulated outbreak signals were used to compare pre-processing methods that could remove temporal effects in the data, as well as temporal aberration detection algorithms that provided high sensitivity and specificity. Weekly differencing demonstrated solid performance in removing day-of-week effects, even in series with low daily counts. For aberration detection, the results indicated that no single algorithm showed performance superior to all others across the range of outbreak scenarios simulated. Exponentially weighted moving average charts and Holt–Winters exponential smoothing demonstrated complementary performance, with the latter offering an automated method to adjust to changes in the time series that will likely occur in the future. Shewhart charts provided lower sensitivity but earlier detection in some scenarios. Cumulative sum charts did not appear to add value to the system; however, the poor performance of this algorithm was attributed to characteristics of the data monitored. These findings indicate that automated monitoring aimed at early detection of temporal aberrations will likely be most effective when a range of algorithms are implemented in parallel. PMID:23576782

  9. Syndromic surveillance using veterinary laboratory data: data pre-processing and algorithm performance evaluation.

    PubMed

    Dórea, Fernanda C; McEwen, Beverly J; McNab, W Bruce; Revie, Crawford W; Sanchez, Javier

    2013-06-06

    Diagnostic test orders to an animal laboratory were explored as a data source for monitoring trends in the incidence of clinical syndromes in cattle. Four years of real data and over 200 simulated outbreak signals were used to compare pre-processing methods that could remove temporal effects in the data, as well as temporal aberration detection algorithms that provided high sensitivity and specificity. Weekly differencing demonstrated solid performance in removing day-of-week effects, even in series with low daily counts. For aberration detection, the results indicated that no single algorithm showed performance superior to all others across the range of outbreak scenarios simulated. Exponentially weighted moving average charts and Holt-Winters exponential smoothing demonstrated complementary performance, with the latter offering an automated method to adjust to changes in the time series that will likely occur in the future. Shewhart charts provided lower sensitivity but earlier detection in some scenarios. Cumulative sum charts did not appear to add value to the system; however, the poor performance of this algorithm was attributed to characteristics of the data monitored. These findings indicate that automated monitoring aimed at early detection of temporal aberrations will likely be most effective when a range of algorithms are implemented in parallel.
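
    The abstract does not give the parameterisation of any of the charts it compares; as a generic sketch of one of them, the snippet below applies an exponentially weighted moving average (EWMA) control chart to a weekly-differenced count series and flags days on which the EWMA statistic exceeds its control limit. The smoothing constant, limit width, and injected outbreak are illustrative assumptions.

    ```python
    import numpy as np

    def ewma_alarms(counts, lam=0.3, L=3.0):
        """Flag aberrations in a daily count series with an EWMA control chart.

        counts : 1-D array of pre-processed daily counts
        lam    : EWMA smoothing constant in (0, 1] (assumed)
        L      : control-limit width in standard deviations (assumed)
        """
        counts = np.asarray(counts, dtype=float)
        mu, sigma = counts.mean(), counts.std()
        z = mu
        alarms = []
        for t, x in enumerate(counts):
            z = lam * x + (1 - lam) * z
            # Steady-state EWMA control limit.
            limit = mu + L * sigma * np.sqrt(lam / (2 - lam))
            if z > limit:
                alarms.append(t)
        return alarms

    # Weekly differencing removes day-of-week effects before charting.
    rng = np.random.default_rng(3)
    baseline = rng.poisson(lam=20, size=200).astype(float)
    baseline[150:160] += 15                      # injected outbreak signal
    diffed = baseline[7:] - baseline[:-7]        # 7-day differencing
    print("alarm days (index into differenced series):", ewma_alarms(diffed))
    ```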

  10. Automatic detection of ECG cable interchange by analyzing both morphology and interlead relations.

    PubMed

    Han, Chengzong; Gregg, Richard E; Feild, Dirk Q; Babaeizadeh, Saeed

    2014-01-01

    ECG cable interchange can generate erroneous diagnoses. For algorithms detecting ECG cable interchange, high specificity is required to maintain a low total false positive rate because the prevalence of interchange is low. In this study, we propose and evaluate an improved algorithm for automatic detection and classification of ECG cable interchange. The algorithm was developed by using both ECG morphology information and redundancy information. ECG morphology features included QRS-T and P-wave amplitude, frontal axis and clockwise vector loop rotation. The redundancy features were derived based on the EASI™ lead system transformation. The classification was implemented using linear support vector machine. The development database came from multiple sources including both normal subjects and cardiac patients. An independent database was used to test the algorithm performance. Common cable interchanges were simulated by swapping either limb cables or precordial cables. For the whole validation database, the overall sensitivity and specificity for detecting precordial cable interchange were 56.5% and 99.9%, and the sensitivity and specificity for detecting limb cable interchange (excluding left arm-left leg interchange) were 93.8% and 99.9%. Defining precordial cable interchange or limb cable interchange as a single positive event, the total false positive rate was 0.7%. When the algorithm was designed for higher sensitivity, the sensitivity for detecting precordial cable interchange increased to 74.6% and the total false positive rate increased to 2.7%, while the sensitivity for detecting limb cable interchange was maintained at 93.8%. The low total false positive rate was maintained at 0.6% for the more abnormal subset of the validation database including only hypertrophy and infarction patients. The proposed algorithm can detect and classify ECG cable interchanges with high specificity and low total false positive rate, at the cost of decreased sensitivity for certain precordial cable interchanges. The algorithm could also be configured for higher sensitivity for different applications where a lower specificity can be tolerated. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Detection of small surface defects using DCT based enhancement approach in machine vision systems

    NASA Astrophysics Data System (ADS)

    He, Fuqiang; Wang, Wen; Chen, Zichen

    2005-12-01

    Utilizing a DCT-based enhancement approach, an improved small-defect detection algorithm for real-time leather surface inspection was developed. A two-stage decomposition procedure was proposed to extract an odd-odd frequency matrix after a digital image has been transformed to the DCT domain. Then, a reverse cumulative sum algorithm was proposed to detect the transition points of the gentle curves plotted from the odd-odd frequency matrix. The best radius of the cutting sector was computed in terms of the transition points, and the high-pass filtering operation was implemented. The filtered image was then inverse-transformed back to the spatial domain. Finally, the restored image was segmented by an entropy method and some defect features were calculated. Experimental results show the proposed small-defect detection method achieves a detection rate of 94%.
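
    The exact two-stage decomposition, odd-odd frequency matrix, and automatically computed cutting radius are specific to the cited work; the sketch below only illustrates the general idea of suppressing low-frequency background texture in the DCT domain so that small high-frequency defects stand out. The cut-off radius, the synthetic texture, and the final threshold are assumptions.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_highpass(image, cutoff_radius=12):
        """Suppress low-frequency texture by zeroing a sector of DCT coefficients."""
        coeffs = dctn(image, norm="ortho")
        rows, cols = np.indices(image.shape)
        # Zero the low-frequency corner (small row+col indices) of the DCT plane.
        coeffs[np.sqrt(rows**2 + cols**2) < cutoff_radius] = 0.0
        return idctn(coeffs, norm="ortho")

    # Toy leather-like texture with one small dark defect.
    rng = np.random.default_rng(7)
    x = np.linspace(0, 4 * np.pi, 128)
    texture = 0.2 * np.sin(x)[None, :] + 0.2 * np.sin(x)[:, None]   # slow background
    image = 0.5 + texture + 0.02 * rng.standard_normal((128, 128))
    image[60:63, 70:73] -= 0.3                                       # small defect

    restored = dct_highpass(image)
    # Crude stand-in for the paper's entropy segmentation: threshold the filtered
    # image a few standard deviations away from its mean.
    mask = np.abs(restored - restored.mean()) > 4 * restored.std()
    print("defect pixels found:", int(mask.sum()))
    ```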

  12. Optoelectronic hit/miss transform for screening cervical smear slides

    NASA Astrophysics Data System (ADS)

    Narayanswamy, R.; Turner, R. M.; McKnight, D. J.; Johnson, K. M.; Sharpe, J. P.

    1995-06-01

    An optoelectronic morphological processor for detecting regions of interest (abnormal cells) on a cervical smear slide using the hit/miss transform is presented. Computer simulation of the algorithm tested on 184 Pap-smear images provided 95% detection and 5% false alarm. An optoelectronic implementation of the hit/miss transform is presented, along with preliminary experimental results.
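
    The optoelectronic implementation is hardware-specific, but the underlying hit/miss transform is standard mathematical morphology; a minimal digital equivalent using SciPy might look like the following. The structuring elements here are illustrative (isolated 3x3 blobs), not the templates used for abnormal cells.

    ```python
    import numpy as np
    from scipy.ndimage import binary_hit_or_miss

    # Binary test image containing a few isolated 3x3 blobs and a lone speck.
    image = np.zeros((32, 32), dtype=bool)
    image[5:8, 5:8] = True
    image[20:23, 12:15] = True
    image[10, 25] = True                     # single-pixel speck, should not match

    # "Hit" template: a solid 3x3 core; "miss" template: an empty one-pixel border.
    hit = np.zeros((5, 5), dtype=bool)
    hit[1:4, 1:4] = True
    miss = np.zeros((5, 5), dtype=bool)
    miss[0, :] = miss[-1, :] = miss[:, 0] = miss[:, -1] = True

    # The transform marks pixels where the hit pattern fits the foreground and
    # the miss pattern fits the background, i.e. isolated 3x3 objects.
    matches = binary_hit_or_miss(image, structure1=hit, structure2=miss)
    print("object centres:", np.argwhere(matches))
    ```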

  13. A new event detector designed for the Seismic Research Observatories

    USGS Publications Warehouse

    Murdock, James N.; Hutt, Charles R.

    1983-01-01

    A new short-period event detector has been implemented on the Seismic Research Observatories. For each signal detected, a printed output gives estimates of the time of onset of the signal, direction of the first break, quality of onset, period and maximum amplitude of the signal, and an estimate of the variability of the background noise. On the SRO system, the new algorithm runs ~2.5x faster than the former (power level) detector. This increase in speed is due to the design of the algorithm: all operations can be performed by simple shifts, additions, and comparisons (floating point operations are not required). Even though a narrow-band recursive filter is not used, the algorithm appears to detect events competitively with those algorithms that employ such filters. Tests at Albuquerque Seismological Laboratory on data supplied by Blandford suggest performance commensurate with the on-line detector of the Seismic Data Analysis Center, Alexandria, Virginia.

  14. Development of an embedded instrument for autofocus and polarization alignment of polarization maintaining fiber

    NASA Astrophysics Data System (ADS)

    Feng, Di; Fang, Qimeng; Huang, Huaibo; Zhao, Zhengqi; Song, Ningfang

    2017-12-01

    The development and implementation of a practical instrument based on an embedded technique for autofocus and polarization alignment of polarization-maintaining fiber is presented. For focusing efficiency and stability, an image-based focusing algorithm that fully considers image-definition evaluation and the focusing search strategy was used to accomplish autofocus. To improve the alignment accuracy, various image-based alignment-detection algorithms were developed with high calculation speed and strong robustness. The instrument can be operated as a standalone device with real-time processing and convenient operation. The hardware construction, software interface, and image-based algorithms of the main modules are described. Additionally, several image simulation experiments were carried out to analyze the accuracy of the above alignment-detection algorithms. Both the simulation results and the experiment results indicate that the instrument can achieve a polarization alignment accuracy of better than ±0.1 deg.

  15. Novel lidar algorithms for atmospheric slant-range visibility, planetary boundary layer height, meteorological phenomena and atmospheric layering measurements

    NASA Astrophysics Data System (ADS)

    Pantazis, Alexandros; Papayannis, Alexandros; Georgoussis, Georgios

    2018-04-01

    In this paper we present the development of novel algorithms and techniques implemented within the Laser Remote Sensing Laboratory (LRSL) of the National Technical University of Athens (NTUA), in collaboration with Raymetrics S.A., in order to incorporate them into a 3-Dimensional (3D) lidar. The lidar transmits at 355 nm, in the eye-safe region, and the measurements are then transposed to the visual range at 550 nm, according to the World Meteorological Organization (WMO) and International Civil Aviation Organization (ICAO) rules for daytime visibility. These algorithms are able to provide horizontal, slant and vertical visibility for air traffic controllers in the tower and for meteorologists, as well as from the pilot's point of view. Other algorithms are also provided for the detection of atmospheric layering in any given direction and vertical angle, along with the detection of the Planetary Boundary Layer Height (PBLH).
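
    The NTUA/Raymetrics algorithms themselves are not described in enough detail to reproduce, but the final step of turning a retrieved extinction coefficient into a visibility figure is standard. A sketch of Koschmieder's relation, with the contrast threshold left explicit because the WMO and classical conventions differ, is shown below; the example extinction value is invented.

    ```python
    import numpy as np

    def visibility_km(extinction_per_km, contrast_threshold=0.05):
        """Koschmieder's law: V = -ln(eps) / sigma.

        extinction_per_km  : atmospheric extinction coefficient at 550 nm (1/km)
        contrast_threshold : eps = 0.05 gives the WMO meteorological optical range;
                             eps = 0.02 gives the classical 3.912/sigma visual range.
        """
        return -np.log(contrast_threshold) / extinction_per_km

    # Example: a retrieved slant-path extinction of 0.3 km^-1.
    sigma = 0.3
    print("MOR (eps=0.05): %.1f km" % visibility_km(sigma, 0.05))   # ~10.0 km
    print("Vis (eps=0.02): %.1f km" % visibility_km(sigma, 0.02))   # ~13.0 km
    ```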

  16. Hardware accelerator design for change detection in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Chaudhury, Santanu; Vohra, Anil

    2011-10-01

    Smart cameras are important components in Human Computer Interaction. In any remote surveillance scenario, smart cameras have to take intelligent decisions to select frames of significant change to minimize communication and processing overhead. Among the many algorithms for change detection, one based on a clustering scheme was proposed for smart camera systems. However, such an algorithm achieves a low frame rate, far from real-time requirements, on the general-purpose processors (such as the PowerPC) available on FPGAs. This paper proposes a hardware accelerator capable of detecting changes in a scene in real time, using the clustering-based change detection scheme. The system was designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA board. The resulting frame rate is 30 frames per second for QVGA resolution in gray scale.

  17. Fast uncooled module 32×32 array of polycrystalline PbSe used for muzzle flash detection

    NASA Astrophysics Data System (ADS)

    Kastek, Mariusz; Dulski, Rafał; Trzaskawka, Piotr; Bieszczad, Grzegorz

    2011-06-01

    The paper presents some aspects of muzzle flash detection using a low-resolution polycrystalline PbSe uncooled 32×32 detector array. This system for muzzle flash detection works in the MWIR (3-5 micron) region and is based on VPD (Vapor Phase Deposition) technology. The low-density uncooled 32×32 array is suitable for use in low-cost IR imagers sensitive in the MWIR band with frame rates exceeding 1,000 Hz. The FPA detector, read-out electronics and processing electronics (allowing the implementation of some algorithms for muzzle flash detection) are presented. The system has been tested at a field test ground. Results of detection range measurements with two types of optical systems (wide and narrow field of view) are shown. Initial results of testing some algorithms for muzzle flash detection are also presented.

  18. Design and Implementation of Real-Time Off-Grid Detection Tool Based on FNET/GridEye

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Jiahui; Zhang, Ye; Liu, Yilu

    2014-01-01

    Real-time situational awareness tools are of critical importance to power system operators, especially during emergencies. The availability of electric power has become a linchpin of most post-disaster response efforts, as it is the primary dependency for public and private sector services, as well as individuals. Knowledge of the scope and extent of facilities impacted, as well as the duration of their dependence on backup power, enables emergency response officials to plan for contingencies and provide better overall response. Based on real-time data acquired by Frequency Disturbance Recorders (FDRs) deployed in the North American power grid, a real-time detection method is proposed. This method monitors critical electrical loads and detects the transition of these loads from an on-grid state, where the loads are fed by the power grid, to an off-grid state, where the loads are fed by an Uninterrupted Power Supply (UPS) or a backup generation system. The details of the proposed detection algorithm are presented, and some case studies and off-grid detection scenarios are also provided to verify the effectiveness and robustness. Meanwhile, the algorithm has already been implemented based on the Grid Solutions Framework (GSF) and has effectively detected several off-grid situations.

  19. Detection and Classification of Objects in Synthetic Aperture Radar Imagery

    DTIC Science & Technology

    2006-02-01

    a higher False Alarm Rate (FAR). Currently, a standard edge detector is the Canny algorithm, which is available with the mathematics package MATLAB ... the algorithm used to calculate the Radon transform. The MATLAB implementation uses the built-in Radon transform procedure, which is extremely ... MATLAB code for a faster forward-backward selection process has also been provided. In both cases, the feature selection was accomplished by using

  20. A Novel and Simple Spike Sorting Implementation.

    PubMed

    Petrantonakis, Panagiotis C; Poirazi, Panayiota

    2017-04-01

    Monitoring the activity of multiple individual neurons that fire spikes in the vicinity of an electrode, namely performing a Spike Sorting (SS) procedure, comprises one of the most important tools contemporary neuroscience has for reverse-engineering the brain. As recording-electrode technology rapidly evolves by integrating thousands of electrodes in a confined spatial setting, the algorithms that are used to monitor individual neurons from recorded signals have to become even more reliable and computationally efficient. In this work, we propose a novel framework for the SS approach in which a single-step processing of the raw (unfiltered) extracellular signal is sufficient for both the detection and sorting of the activity of individual neurons. Despite its simplicity, the proposed approach exhibits performance comparable with state-of-the-art approaches, especially for spike detection in noisy signals, and paves the way for a new family of SS algorithms with the potential for multi-recording, fast, on-chip implementations.

  1. Wavelet-based de-noising algorithm for images acquired with parallel magnetic resonance imaging (MRI).

    PubMed

    Delakis, Ioannis; Hammad, Omer; Kitney, Richard I

    2007-07-07

    Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise displays uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied on phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied on final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting.

  2. Real-time road detection in infrared imagery

    NASA Astrophysics Data System (ADS)

    Andre, Haritini E.; McCoy, Keith

    1990-09-01

    Automatic road detection is an important part of many scene recognition applications. The extraction of roads provides a means of navigation and position update for remotely piloted or autonomous vehicles. Roads supply strong contextual information which can be used to improve the performance of automatic target recognition (ATR) systems by directing the search for targets and adjusting target classification confidences. This paper describes algorithmic techniques for labeling roads in high-resolution infrared imagery. In addition, real-time implementation of this structural approach using a processor array based on the Martin Marietta Geometric Arithmetic Parallel Processor (GAPP) chip is addressed. The algorithm described is based on the hypothesis that a road consists of pairs of line segments separated by a distance "d" with opposite (antiparallel) gradient directions. The general nature of the algorithm, in addition to its parallel implementation on a single instruction, multiple data (SIMD) machine, are improvements over existing work. The algorithm seeks to identify line segments meeting the road hypothesis in a manner that performs well even when the side of the road is fragmented due to occlusion or intersections. The use of geometrical relationships between line segments is a powerful yet flexible method of road classification which is independent of orientation. In addition, this approach can be used to nominate other types of objects with minor parametric changes.

  3. Implementation of Tree and Butterfly Barriers with Optimistic Time Management Algorithms for Discrete Event Simulation

    NASA Astrophysics Data System (ADS)

    Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia

    The Time Warp algorithm [3] offers a run-time recovery mechanism that deals with causality errors. This recovery mechanism consists of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute GVT, which is used in discrete-event simulation to reclaim memory, commit output, detect termination, and handle errors. However, the computation of GVT requires dealing with the transient message problem and the simultaneous reporting problem. These problems can be handled efficiently by Samadi's algorithm [8], which works well in the presence of causality errors. However, the performance of both the Time Warp and Samadi's algorithms depends on the latency involved in the GVT computation. Both algorithms give poor latency for large simulation systems, especially in the presence of causality errors. To improve the latency and reduce processor idle time, we implement tree and butterfly barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as the tree and butterfly with the optimistic algorithm not only minimizes the GVT latency but also minimizes processor idle time.

  4. Detection of Multiple Stationary Humans Using UWB MIMO Radar.

    PubMed

    Liang, Fulai; Qi, Fugui; An, Qiang; Lv, Hao; Chen, Fuming; Li, Zhao; Wang, Jianqi

    2016-11-16

    Remarkable progress has been achieved in the detection of a single stationary human. However, restricted by the mutual interference of multiple humans (e.g., strong sidelobes of the torsos and the shadow effect), detection and localization of multiple stationary humans remains a huge challenge. In this paper, ultra-wideband (UWB) multiple-input and multiple-output (MIMO) radar is exploited to improve the detection performance for multiple stationary humans, owing to its multiple viewing angles and high-resolution two-dimensional imaging capacity. A signal model of the vital sign considering both the bi-static angles and the attitude angle of the human body is first developed, and then a novel detection method is proposed to detect and localize multiple stationary humans. In this method, preprocessing is first implemented to improve the signal-to-noise ratio (SNR) of the vital signs, and then a vital-sign-enhanced imaging algorithm is presented to suppress the environmental clutter and the mutual interference of multiple humans. Finally, an automatic detection algorithm including constant false alarm rate (CFAR) detection, morphological filtering and clustering is implemented to improve the detection performance for weak human targets affected by heavy clutter and the shadow effect. The simulation and experimental results show that the proposed method can obtain a high-quality image of multiple humans and use it to discriminate and localize multiple adjacent human targets behind brick walls.
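
    The paper's full chain (vital-sign-enhanced imaging plus CFAR, morphological filtering and clustering) is not reproduced; the fragment below sketches only a one-dimensional cell-averaging CFAR step on a range profile, with the guard/training window sizes and the false-alarm rate chosen arbitrarily.

    ```python
    import numpy as np

    def ca_cfar(power, guard=2, train=8, pfa=1e-3):
        """1-D cell-averaging CFAR detector over a range profile of power values."""
        n = 2 * train                                  # number of training cells
        alpha = n * (pfa ** (-1.0 / n) - 1.0)          # CA-CFAR threshold factor
        detections = []
        for i in range(train + guard, len(power) - train - guard):
            # Training cells on both sides of the cell under test, skipping guards.
            window = np.r_[power[i - train - guard:i - guard],
                           power[i + guard + 1:i + guard + 1 + train]]
            if power[i] > alpha * window.mean():
                detections.append(i)
        return detections

    # Toy range profile: exponential noise floor with two human-like returns.
    rng = np.random.default_rng(5)
    profile = rng.exponential(scale=1.0, size=200)
    profile[60] += 30.0
    profile[110] += 20.0
    print("detected range cells:", ca_cfar(profile))
    ```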

  5. Detection of Multiple Stationary Humans Using UWB MIMO Radar

    PubMed Central

    Liang, Fulai; Qi, Fugui; An, Qiang; Lv, Hao; Chen, Fuming; Li, Zhao; Wang, Jianqi

    2016-01-01

    Remarkable progress has been achieved in the detection of a single stationary human. However, restricted by the mutual interference of multiple humans (e.g., strong sidelobes of the torsos and the shadow effect), detection and localization of multiple stationary humans remains a huge challenge. In this paper, ultra-wideband (UWB) multiple-input and multiple-output (MIMO) radar is exploited to improve the detection performance for multiple stationary humans, owing to its multiple viewing angles and high-resolution two-dimensional imaging capacity. A signal model of the vital sign considering both the bi-static angles and the attitude angle of the human body is first developed, and then a novel detection method is proposed to detect and localize multiple stationary humans. In this method, preprocessing is first implemented to improve the signal-to-noise ratio (SNR) of the vital signs, and then a vital-sign-enhanced imaging algorithm is presented to suppress the environmental clutter and the mutual interference of multiple humans. Finally, an automatic detection algorithm including constant false alarm rate (CFAR) detection, morphological filtering and clustering is implemented to improve the detection performance for weak human targets affected by heavy clutter and the shadow effect. The simulation and experimental results show that the proposed method can obtain a high-quality image of multiple humans and use it to discriminate and localize multiple adjacent human targets behind brick walls. PMID:27854356

  6. Photon Counting Using Edge-Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Gin, Jonathan W.; Nguyen, Danh H.; Farr, William H.

    2010-01-01

    New applications such as high-data-rate, photon-starved, free-space optical communications require photon counting at flux rates into gigaphoton-per-second regimes coupled with subnanosecond timing accuracy. Current single-photon detectors that are capable of handling such operating conditions are designed in an array format and produce output pulses that span multiple sample times. In order to discern one pulse from another and not to overcount the number of incoming photons, a detection algorithm must be applied to the sampled detector output pulses. As flux rates increase, the ability to implement such a detection algorithm becomes difficult within a digital processor that may reside within a field-programmable gate array (FPGA). Systems have been developed and implemented both to characterize gigahertz-bandwidth single-photon detectors and to process photon count signals at rates into gigaphotons per second in order to implement communications links at SCPPM (serial concatenated pulse position modulation) encoded data rates exceeding 100 megabits per second with efficiencies greater than two bits per detected photon. A hardware edge-detection algorithm and corresponding signal combining and deserialization hardware were developed to meet these requirements at sample rates up to 10 GHz. The photon discriminator deserializer hardware board accepts four inputs, which allows it to take inputs from a quad photon-counting detector, to support requirements for optical tracking with a reduced number of hardware components. The four inputs are hardware leading-edge detected independently. After leading-edge detection, the resultant samples are ORed together prior to deserialization. The deserialization is performed to reduce the rate at which data is passed to a digital signal processor, perhaps residing within an FPGA. The hardware implements four separate analog inputs that are connected through RF connectors. Each analog input is fed to a high-speed 1-bit comparator, which digitizes the input referenced to an adjustable threshold value. This results in four independent serial sample streams of binary 1s and 0s, which are ORed together at rates up to 10 GHz. This single serial stream is then deserialized by a factor of 16 to create 16 signal lines at a rate of 622.5 MHz or lower for input to a high-speed digital processor assembly. The new design and corresponding hardware can be employed with a quad photon-counting detector capable of handling photon rates on the order of multiple gigaphotons per second, whereas prior art was only capable of handling a single input at 1/4 the flux rate. Additionally, the hardware edge-detection algorithm has provided the ability to process 3-10× higher photon flux rates than previously possible by removing the requirement that photon-counting detector output pulses on multiple channels being ORed not overlap. Now, only the leading edges of the pulses are required to not overlap. This new photon-counting digitizer hardware architecture supports a universal front end for an optical communications receiver operating at data rates from kilobits to over one gigabit per second to meet increased mission data volume requirements.
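
    As a software analogue of the hardware scheme described above (per-channel leading-edge detection followed by OR combining and deserialisation), the snippet below detects rising edges on four binary sample streams and ORs them, so that only leading edges, rather than full pulse widths, need to be non-overlapping. All waveforms are synthetic and the pulse stretch factor is an assumption.

    ```python
    import numpy as np

    def leading_edges(bits):
        """Return a binary stream that is 1 only at 0->1 transitions of `bits`."""
        bits = np.asarray(bits, dtype=np.uint8)
        prev = np.concatenate(([0], bits[:-1]))
        return (bits & ~prev) & 1

    # Four comparator outputs (1-bit samples) whose pulse bodies may overlap.
    rng = np.random.default_rng(9)
    channels = (rng.random((4, 64)) < 0.15).astype(np.uint8)
    # Stretch each pulse to 3 samples to mimic the detector output pulse width.
    stretched = np.clip(channels + np.roll(channels, 1, axis=1)
                        + np.roll(channels, 2, axis=1), 0, 1).astype(np.uint8)

    combined = np.zeros(64, dtype=np.uint8)
    for ch in stretched:
        combined |= leading_edges(ch)          # OR after per-channel edge detection

    print("photon count in this block:", int(combined.sum()))
    # A real implementation would now deserialise `combined` 16 samples at a time.
    ```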

  7. Prefiltering Model for Homology Detection Algorithms on GPU.

    PubMed

    Retamosa, Germán; de Pedro, Luis; González, Ivan; Tamames, Javier

    2016-01-01

    Homology detection has evolved over time from heavy algorithms based on dynamic programming approaches to lightweight alternatives based on different heuristic models. However, the main problem with these algorithms is that they use complex statistical models, which makes it difficult to achieve a relevant speedup and find exact matches with the original results. Thus, their acceleration is essential. The aim of this article was to prefilter a sequence database. To make this work, we have implemented a groundbreaking heuristic model based on NVIDIA's graphics processing units (GPUs) and multicore processors. Depending on the sensitivity settings, this makes it possible to quickly reduce the sequence database by factors between 50% and 95%, while rejecting no significant sequences. Furthermore, this prefiltering application can be used together with multiple homology detection algorithms as part of a next-generation sequencing system. Extensive performance and accuracy tests have been carried out at the Spanish National Centre for Biotechnology (NCB). The results show that GPU hardware can accelerate the execution times of established homology detection applications, such as the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool for Proteins (BLASTP), by up to a factor of 4.

  8. Physics-Based Image Segmentation Using First Order Statistical Properties and Genetic Algorithm for Inductive Thermography Imaging.

    PubMed

    Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun

    2018-05-01

    Thermographic inspection has been widely applied to non-destructive testing and evaluation with the capabilities of rapid, contactless, and large surface area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defects generation and enable the precise extraction of target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns from unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold to render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography will be implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index F-score has been adopted to objectively evaluate the performance of different segmentation algorithms.

  9. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model-Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
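
    The NASA algorithms used Extended Kalman Filter residuals with multiple-hypothesis tests and neural networks; the toy sketch below keeps only the common core of that idea: compare a sensor measurement against a model prediction and declare a fault when the normalised residual stays outside a threshold for several consecutive samples. The model, noise level, threshold, and persistence count are all assumptions.

    ```python
    import numpy as np

    def residual_fault_flags(measured, predicted, noise_std, thresh=3.0, persist=5):
        """Flag samples where |residual| / noise_std > thresh for `persist` steps."""
        r = np.abs(measured - predicted) / noise_std
        over = r > thresh
        flags = np.zeros_like(over)
        run = 0
        for i, o in enumerate(over):
            run = run + 1 if o else 0
            flags[i] = run >= persist            # persistence counter rejects spikes
        return flags

    # Toy engine sensor: the model predicts a slow ramp, the sensor develops a bias.
    t = np.arange(500)
    predicted = 500 + 0.05 * t                   # model output (e.g. a temperature in K)
    measured = predicted + np.random.normal(0, 2.0, t.size)
    measured[300:] += 15.0                       # in-range bias fault at sample 300
    flags = residual_fault_flags(measured, predicted, noise_std=2.0)
    print("fault first declared at sample:", int(np.argmax(flags)))
    ```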

  10. On the robustness of EC-PC spike detection method for online neural recording.

    PubMed

    Zhou, Yin; Wu, Tong; Rastegarnia, Amir; Guan, Cuntai; Keefer, Edward; Yang, Zhi

    2014-09-30

    Online spike detection is an important step to compress neural data and perform real-time neural information decoding. An unsupervised, automatic, yet robust signal processing method is strongly desired, so that it can support a wide range of applications. We have developed a novel spike detection algorithm called "exponential component-polynomial component" (EC-PC) spike detection. We first evaluate the robustness of the EC-PC spike detector under different firing rates and SNRs. Second, we show that the detection precision can be quantitatively derived without requiring additional user input parameters. We have realized the algorithm (including training) in a 0.13 μm CMOS chip, where an unsupervised, nonparametric operation has been demonstrated. Both simulated data and real data are used to evaluate the method under different firing rates (FRs) and SNRs. The results show that the EC-PC spike detector is the most robust in comparison with some popular detectors. Moreover, the EC-PC detector can track changes in the background noise due to its ability to re-estimate the neural data distribution. Both real and synthesized data have been used for testing the proposed algorithm in comparison with other methods, including the absolute thresholding detector (AT), the median absolute deviation detector (MAD), the nonlinear energy operator detector (NEO), and the continuous wavelet detector (CWD). Comparative testing results reveal that the EC-PC detection algorithm performs better than the other algorithms regardless of recording conditions. The EC-PC spike detector can be considered an unsupervised and robust online spike detection method. It is also suitable for hardware implementation. Copyright © 2014 Elsevier B.V. All rights reserved.
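
    The EC-PC detector itself involves fitting the exponential and polynomial components of the data distribution, which is beyond a short sketch; the snippet below shows only the median-absolute-deviation (MAD) thresholding baseline mentioned in the comparison, a common simple automatic spike detector. The threshold multiplier and refractory period are typical but assumed values.

    ```python
    import numpy as np

    def mad_spike_detect(x, k=4.0, refractory=30):
        """Detect spikes where |x| crosses k times an estimated noise level (MAD-based).

        x          : 1-D extracellular recording
        k          : threshold multiplier (4-5 is a common choice)
        refractory : minimum spacing between detections, in samples
        """
        sigma = np.median(np.abs(x)) / 0.6745      # robust noise std estimate
        crossings = np.where(np.abs(x) > k * sigma)[0]
        spikes, last = [], -refractory
        for idx in crossings:
            if idx - last >= refractory:           # enforce a refractory period
                spikes.append(idx)
                last = idx
        return spikes

    # Synthetic trace: Gaussian background noise plus a few large spikes.
    rng = np.random.default_rng(2)
    trace = rng.normal(0, 1.0, 10_000)
    for pos in (1_000, 4_000, 7_500):
        trace[pos:pos + 10] += 8.0 * np.exp(-np.arange(10) / 3.0)
    print("spike times (samples):", mad_spike_detect(trace))
    ```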

  11. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to form the foundation of the regression. Furthermore, a manifold regularization term, which exploits the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is applied to the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves better ROC curves, AUC values, and background-anomaly separation than some other state-of-the-art anomaly detection methods, and is easy to implement in practice.

  12. Decision support methods for the detection of adverse events in post-marketing data.

    PubMed

    Hauben, M; Bate, A

    2009-04-01

    Spontaneous reporting is a crucial component of post-marketing drug safety surveillance despite its significant limitations. The size and complexity of some spontaneous reporting system databases represent a challenge for drug safety professionals who traditionally have relied heavily on the scientific and clinical acumen of the prepared mind. Computer algorithms that calculate statistical measures of reporting frequency for huge numbers of drug-event combinations are increasingly used to support pharmacovigilance analysts screening large spontaneous reporting system databases. After an overview of pharmacovigilance and spontaneous reporting systems, we discuss the theory and application of contemporary computer algorithms in regular use, those under development, and the practical considerations involved in the implementation of computer algorithms within a comprehensive and holistic drug safety signal detection program.
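
    The review does not commit to a single measure, but one of the simplest "statistical measures of reporting frequency" in routine use is the proportional reporting ratio (PRR) computed from the 2x2 table of a drug-event combination. A bare-bones calculation with invented counts is sketched below.

    ```python
    def proportional_reporting_ratio(a, b, c, d):
        """PRR for one drug-event combination from a spontaneous-report database.

        a : reports with the drug of interest AND the event of interest
        b : reports with the drug of interest, other events
        c : reports with other drugs AND the event of interest
        d : reports with other drugs, other events
        """
        rate_drug = a / (a + b)          # event rate among reports for this drug
        rate_other = c / (c + d)         # event rate among all other reports
        return rate_drug / rate_other

    # Hypothetical counts: 30 reports of the event with the drug, 970 without,
    # 150 with other drugs, 49,850 other reports.
    prr = proportional_reporting_ratio(30, 970, 150, 49_850)
    print("PRR = %.1f" % prr)            # values well above 1 suggest a signal
    ```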

  13. Challenges and Recent Developments in Hearing Aids: Part I. Speech Understanding in Noise, Microphone Technologies and Noise Reduction Algorithms

    PubMed Central

    Chung, King

    2004-01-01

    This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized. PMID:15678225

  14. Early Breast Cancer Diagnosis Using Microwave Imaging via Space-Frequency Algorithm

    NASA Astrophysics Data System (ADS)

    Vemulapalli, Spandana

    Conventional breast cancer detection methods have limitations ranging from ionizing radiation and low specificity to high cost. These limitations make way for a suitable alternative, microwave imaging, as a screening technique for the detection of breast cancer. The discernible differences between benign, malignant and healthy breast tissues, and the ability to avoid the harmful effects of ionizing radiation, make microwave imaging a feasible breast cancer detection technique. Earlier studies have shown the variation of the electrical properties of healthy and malignant tissues as a function of frequency, which motivates a high bandwidth requirement. Ultrawideband, wideband and narrowband arrays have been designed, simulated and optimized for high (44%), medium (33%) and low (7%) bandwidths, respectively, using the electromagnetic (EM) simulation software FEKO. These arrays are then used to illuminate the breast model (phantom), and the backscattered signals are obtained in the near field for each case. The Microwave Imaging via Space-Time (MIST) beamforming algorithm in the frequency domain is next applied to these near-field monostatic frequency-response signals for image reconstruction of the breast model. The main purpose of this investigation is to assess the impact of bandwidth and to implement a novel imaging technique for use in the early detection of breast cancer. Earlier studies show the implementation of the MIST imaging algorithm on time-domain signals via a frequency-domain beamformer; here, the performance of the imaging algorithm on frequency-response signals is evaluated in the frequency domain. The energy profile of the breast in the spatial domain is created via the frequency-domain Parseval's theorem. The beamformer weights (not including the effect of the skin) are calculated using the MIST algorithm for the ultrawideband, wideband and narrowband arrays, respectively. Quality metrics such as dynamic range and radiometric resolution are also evaluated for all three types of arrays.

  15. Detecting livestock production zones.

    PubMed

    Grisi-Filho, J H H; Amaku, M; Ferreira, F; Dias, R A; Neto, J S Ferreira; Negreiros, R L; Ossada, R

    2013-07-01

    Communities are sets of nodes that are related in an important way, most likely sharing common properties and/or playing similar roles within a network. Unraveling a network structure, and hence the trade preferences and pathways, could be useful to a researcher or a decision maker. We implemented a community detection algorithm to find livestock communities, which is consistent with the definition of a livestock production zone, assuming that a community is a group of farm premises in which an animal is more likely to stay during its lifetime than expected by chance. We applied this algorithm to the network of animal movements within the state of Mato Grosso for 2007. This database holds information concerning 87,899 premises and 521,431 movements throughout the year, totaling 15,844,779 animals moved. The community detection algorithm achieved a network partition that shows a clear geographical and commercial pattern, two crucial features for preventive veterinary medicine applications; this algorithm provides also a meaningful interpretation to trade networks where links emerge based on trader node choices. Copyright © 2013 Elsevier B.V. All rights reserved.
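
    The abstract does not specify which community detection method was used, so the following is an illustrative sketch only: it builds a toy premise-to-premise movement network and partitions it with networkx's modularity-based grouping as a stand-in for a production-zone partition. All premise names and animal counts below are invented.

```python
# Illustrative sketch (not the authors' implementation): group farm premises
# into candidate "production zones" by community detection on a toy movement
# network. Premise names and animal counts are invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Directed movements: (origin premise, destination premise, animals moved)
movements = [
    ("farm_A", "farm_B", 120), ("farm_B", "farm_C", 80), ("farm_C", "farm_A", 60),
    ("farm_D", "farm_E", 200), ("farm_E", "farm_F", 150), ("farm_F", "farm_D", 90),
    ("farm_C", "farm_D", 5),   # weak link between the two groups
]

G = nx.Graph()  # undirected view; edge weight = total animals exchanged
for src, dst, n in movements:
    w = G[src][dst]["weight"] + n if G.has_edge(src, dst) else n
    G.add_edge(src, dst, weight=w)

# Weighted modularity maximisation; each community approximates a production zone
zones = greedy_modularity_communities(G, weight="weight")
for i, zone in enumerate(zones, start=1):
    print(f"zone {i}: {sorted(zone)}")
```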

  16. Automatic detection and classification of artifacts in single-channel EEG.

    PubMed

    Olund, Thomas; Duun-Henriksen, Jonas; Kjaer, Troels W; Sorensen, Helge B D

    2014-01-01

    Ambulatory EEG monitoring can provide medical doctors with important diagnostic information without hospitalizing the patient. These recordings are, however, more exposed to noise and artifacts than clinically recorded EEG. An automatic artifact detection and classification algorithm for single-channel EEG is proposed to help identify these artifacts. Features are extracted from the EEG signal and its wavelet subbands. Subsequently, a selection algorithm is applied in order to identify the best discriminating features. A non-linear support vector machine is used to discriminate among different artifact classes using the selected features. Single-channel (Fp1-F7) EEG recordings were obtained from experiments with 12 healthy subjects performing artifact-inducing movements. The dataset was used to construct and validate the model. Both subject-specific and generic implementations are investigated. The detection algorithm yields an average sensitivity and specificity above 95% for both the subject-specific and generic models. The classification algorithm shows a mean accuracy of 78% and 64% for the subject-specific and generic model, respectively. The classification model was additionally validated on a reference dataset with similar results.
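
    As a rough illustration of this kind of pipeline (not the published implementation), the sketch below extracts wavelet-subband features from single-channel epochs with PyWavelets and trains a non-linear SVM. The wavelet, decomposition level, feature set and toy data are all assumptions.

```python
# Hedged sketch (not the published pipeline): wavelet-subband features from a
# single EEG channel fed to a non-linear SVM artifact classifier.
import numpy as np
import pywt
from sklearn.svm import SVC

def epoch_features(epoch, wavelet="db4", level=4):
    """Simple features: log-variance of the raw epoch and of each wavelet subband."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    feats = [np.log(np.var(epoch) + 1e-12)]
    feats += [np.log(np.var(c) + 1e-12) for c in coeffs]
    return np.array(feats)

# Toy data: 1-second epochs at 256 Hz, labels 0 = clean EEG, 1 = artifact
rng = np.random.default_rng(0)
clean = rng.normal(0, 1, (50, 256))
artifact = rng.normal(0, 1, (50, 256)) + 5 * np.sin(np.linspace(0, 4 * np.pi, 256))
X = np.array([epoch_features(e) for e in np.vstack([clean, artifact])])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))
```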

  17. Artificial Neural Network applied to lightning flashes

    NASA Astrophysics Data System (ADS)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with the OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms evaluate both the shape and brightness of the event, removing irrelevant events such as birds, and detect the exact position of relevant events, allowing the system to track them over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images, and calculates its number of discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can contain more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the number of discharges per event was correctly computed. The neural network used in this project achieved a success rate of 90%. The videos used in this experiment were acquired by seven video cameras installed in São Bernardo do Campo, Brazil, that continuously recorded lightning events during the summer. The cameras were arranged to cover 360°, recording in a loop at a time resolution of 33 ms. During this period, several convective storms were recorded.
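
    The detection step described above can be approximated with standard OpenCV frame differencing. The sketch below is a generic illustration, not the authors' code; the video file name, difference threshold and minimum blob area are placeholders.

```python
# Illustrative sketch only: frame differencing with brightness/shape gating in
# OpenCV, in the spirit of the detection module described above. Thresholds,
# the video path and the minimum blob area are placeholders, not the authors' values.
import cv2

cap = cv2.VideoCapture("storm.avi")  # hypothetical input file
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read the input video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # brightness gate
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:  # shape gate: ignore small blobs (birds, noise)
            x, y, w, h = cv2.boundingRect(c)
            print("candidate event at", (x, y, w, h))
    prev_gray = gray
cap.release()
```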

  18. An Algorithm for Real-Time Pulse Waveform Segmentation and Artifact Detection in Photoplethysmograms.

    PubMed

    Fischer, Christoph; Domer, Benno; Wibmer, Thomas; Penzel, Thomas

    2017-03-01

    Photoplethysmography has been used in a wide range of medical devices for measuring oxygen saturation, cardiac output, assessing autonomic function, and detecting peripheral vascular disease. Artifacts can render the photoplethysmogram (PPG) useless. Thus, algorithms capable of identifying artifacts are critically important. However, the published PPG algorithms are limited in algorithm and study design. Therefore, the authors developed a novel embedded algorithm for real-time pulse waveform (PWF) segmentation and artifact detection based on a contour analysis in the time domain. This paper provides an overview about PWF and artifact classifications, presents the developed PWF analysis, and demonstrates the implementation on a 32-bit ARM core microcontroller. The PWF analysis was validated with data records from 63 subjects acquired in a sleep laboratory, ergometry laboratory, and intensive care unit in equal parts. The output of the algorithm was compared with harmonized experts' annotations of the PPG with a total duration of 31.5 h. The algorithm achieved a beat-to-beat comparison sensitivity of 99.6%, specificity of 90.5%, precision of 98.5%, and accuracy of 98.3%. The interrater agreement expressed as Cohen's kappa coefficient was 0.927 and as F-measure was 0.990. In conclusion, the PWF analysis seems to be a suitable method for PPG signal quality determination, real-time annotation, data compression, and calculation of additional pulse wave metrics such as amplitude, duration, and rise time.

  19. A chest-shape target automatic detection method based on Deformable Part Models

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Jin, Weiqi; Li, Li

    2016-10-01

    Automatic weapon platforms are an important research direction both domestically and overseas; they need to accomplish fast searching for the object to be shot against a complex background. Therefore, fast detection of a given target is the foundation of further tasks. Considering that the chest-shape target is a common target in shooting practice, this paper treats the chest-shape target as the object of interest and studies an automatic target detection method based on Deformable Part Models. The algorithm computes Histogram of Oriented Gradients (HOG) features of the target and trains a model using a latent-variable Support Vector Machine (SVM); in this model, the target image is divided into several parts, yielding a root filter and part filters. Finally, the algorithm detects the target in the HOG feature pyramid with a sliding-window method. The running time of extracting the HOG pyramid can be shortened by 36% with a lookup table. The results indicate that this algorithm can detect the chest-shape target in natural environments, indoors or outdoors. The true positive rate of detection reaches 76% with many hard samples, and the false positive rate approaches 0. Running on a PC (Intel(R) Core(TM) i5-4200H CPU) with C++ code, the detection time for images with a resolution of 640 × 480 is 2.093 s. Given TI's runtime library for image pyramids and convolution on the DM642 and other hardware, our detection algorithm is expected to be implementable on a hardware platform, and it has application prospects in actual systems.
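
    To make the sliding-window scoring concrete, the following sketch evaluates HOG features of each window against a linear model. It illustrates only the detection loop, not DPM's root/part filters or latent SVM training, and the window size, step and (random) weight vector are placeholders.

```python
# Hedged sketch: sliding-window scoring of HOG features with a linear model,
# illustrating the detection loop only (not DPM's part filters or latent SVM
# training). The weight vector here is random, standing in for a trained model.
import numpy as np
from skimage.feature import hog

def sliding_window_scores(image, win=(96, 64), step=16, w=None):
    scores = []
    for y in range(0, image.shape[0] - win[0] + 1, step):
        for x in range(0, image.shape[1] - win[1] + 1, step):
            patch = image[y:y + win[0], x:x + win[1]]
            feat = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2))
            scores.append(((y, x), float(feat @ w)))
    return scores

rng = np.random.default_rng(0)
img = rng.random((240, 320))                   # stand-in grayscale frame
feat_dim = hog(np.zeros((96, 64)), orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2)).size
w = rng.normal(size=feat_dim)                  # placeholder for trained weights
best = max(sliding_window_scores(img, w=w), key=lambda s: s[1])
print("highest-scoring window at", best[0])
```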

  20. Artificial intelligence techniques for ground test monitoring of rocket engines

    NASA Technical Reports Server (NTRS)

    Ali, Moonis; Gupta, U. K.

    1990-01-01

    An expert system is being developed which can detect anomalies in Space Shuttle Main Engine (SSME) sensor data significantly earlier than the redline algorithm currently in use. The training of such an expert system focuses on two approaches which are based on low frequency and high frequency analyses of sensor data. Both approaches are being tested on data from SSME tests and their results compared with the findings of NASA and Rocketdyne experts. Prototype implementations have detected the presence of anomalies earlier than the redline algorithms currently in use. It therefore appears that these approaches have the potential of detecting anomalies early enough to shut down the engine or take other corrective action before severe damage to the engine occurs.

  1. Fast T Wave Detection Calibrated by Clinical Knowledge with Annotation of P and T Waves

    PubMed Central

    Elgendi, Mohamed; Eskofier, Bjoern; Abbott, Derek

    2015-01-01

    Background There are limited studies on the automatic detection of T waves in arrhythmic electrocardiogram (ECG) signals. This is perhaps because there is no available arrhythmia dataset with annotated T waves. There is a growing need to develop numerically-efficient algorithms that can accommodate the new trend of battery-driven ECG devices. Moreover, there is also a need to analyze long-term recorded signals in a reliable and time-efficient manner, therefore improving the diagnostic ability of mobile devices and point-of-care technologies. Methods Here, the T wave annotation of the well-known MIT-BIH arrhythmia database is discussed and provided. Moreover, a simple fast method for detecting T waves is introduced. A typical T wave detection method has been reduced to a basic approach consisting of two moving averages and dynamic thresholds. The dynamic thresholds were calibrated using four clinically known types of sinus node response to atrial premature depolarization (compensation, reset, interpolation, and reentry). Results The determination of T wave peaks is performed and the proposed algorithm is evaluated on two well-known databases, the QT and MIT-BIH Arrhythmia databases. The detector obtained a sensitivity of 97.14% and a positive predictivity of 99.29% over the first lead of the validation databases (total of 221,186 beats). Conclusions We present a simple yet very reliable T wave detection algorithm that can be potentially implemented on mobile battery-driven devices. In contrast to complex methods, it can be easily implemented in a digital filter design. PMID:26197321
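
    A minimal sketch of the two-moving-averages idea with a dynamic threshold is given below; the window lengths, offset factor and toy signal are illustrative choices and are not the clinically calibrated values from the paper.

```python
# Hedged sketch of a two-moving-averages detector with a dynamic threshold,
# in the spirit of the approach described above; window lengths and the offset
# factor are illustrative, not the calibrated values from the paper.
import numpy as np

def moving_average(x, n):
    return np.convolve(x, np.ones(n) / n, mode="same")

def detect_blocks(sig, fs, w_event=0.070, w_cycle=0.600, beta=0.08):
    feature = np.abs(sig)                       # simple emphasis of wave energy
    ma_event = moving_average(feature, int(w_event * fs))   # ~ wave duration
    ma_cycle = moving_average(feature, int(w_cycle * fs))   # ~ heartbeat duration
    thr = ma_cycle + beta * np.mean(feature)    # dynamic threshold
    active = ma_event > thr
    # Each contiguous run of "active" samples is a block of interest; its
    # maximum is taken as the candidate wave peak.
    edges = np.flatnonzero(np.diff(active.astype(int)))
    peaks = []
    for start, stop in zip(edges[::2], edges[1::2]):
        peaks.append(start + int(np.argmax(sig[start:stop + 1])))
    return peaks

fs = 360
t = np.arange(0, 5, 1 / fs)
toy = np.sin(2 * np.pi * 1.2 * t) ** 20         # toy signal with sharp periodic peaks
print("detected peaks:", detect_blocks(toy, fs))
```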

  2. iTrack: instrumented mobile electrooculography (EOG) eye-tracking in older adults and Parkinson's disease.

    PubMed

    Stuart, Samuel; Hickey, Aodhán; Galna, Brook; Lord, Sue; Rochester, Lynn; Godfrey, Alan

    2017-01-01

    Detection of saccades (fast eye-movements) within raw mobile electrooculography (EOG) data involves complex algorithms which typically process data acquired during seated static tasks only. Processing of data during dynamic tasks such as walking is relatively rare and complex, particularly in older adults or people with Parkinson's disease (PD). Development of algorithms that can be easily implemented to detect saccades is required. This study aimed to develop an algorithm for the detection and measurement of saccades in EOG data during static (sitting) and dynamic (walking) tasks, in older adults and PD. Eye-tracking via mobile EOG and infra-red (IR) eye-tracker (with video) was performed with a group of older adults (n = 10) and PD participants (n = 10) (⩾50 years). Horizontal saccades made between targets set 5°, 10° and 15° apart were first measured while seated. Horizontal saccades were then measured while a participant walked and executed a 40° turn left and right. The EOG algorithm was evaluated by comparing the number of correct saccade detections and agreement (ICC(2,1)) between output from visual inspection of eye-tracker videos and the IR eye-tracker. The EOG algorithm detected 75-92% of saccades compared to video inspection and IR output during static testing, with fair to excellent agreement (ICC(2,1) 0.49-0.93). However, during walking, EOG saccade detection reduced to 42-88% compared to video inspection or IR output, with poor to excellent (ICC(2,1) 0.13-0.88) agreement between methodologies. The algorithm was robust during seated testing but less so during walking, which was likely due to increased measurement and analysis error with a dynamic task. Future studies may consider a combination of EOG and IR for comprehensive measurement.

  3. Bridging the gap between real-life data and simulated data by providing a highly realistic fall dataset for evaluating camera-based fall detection algorithms.

    PubMed

    Baldewijns, Greet; Debard, Glen; Mertes, Gert; Vanrumste, Bart; Croonenborghs, Tom

    2016-03-01

    Fall incidents are an important health hazard for older adults. Automatic fall detection systems can reduce the consequences of a fall incident by assuring that timely aid is given. The development of these systems is therefore getting a lot of research attention. Real-life data which can help evaluate the results of this research is however sparse. Moreover, research groups that have this type of data are not at liberty to share it. Most research groups thus use simulated datasets. These simulation datasets, however, often do not incorporate the challenges the fall detection system will face when implemented in real-life. In this Letter, a more realistic simulation dataset is presented to fill this gap between real-life data and currently available datasets. It was recorded while re-enacting real-life falls recorded during previous studies. It incorporates the challenges faced by fall detection algorithms in real life. A fall detection algorithm from Debard et al. was evaluated on this dataset. This evaluation showed that the dataset possesses extra challenges compared with other publicly available datasets. In this Letter, the dataset is discussed as well as the results of this preliminary evaluation of the fall detection algorithm. The dataset can be downloaded from www.kuleuven.be/advise/datasets.

  4. Morphological feature detection for cervical cancer screening

    NASA Astrophysics Data System (ADS)

    Narayanswamy, Ramkumar; Sharpe, John P.; Duke, Heather J.; Stewart, Rosemary J.; Johnson, Kristina M.

    1995-03-01

    An optoelectronic system has been designed to pre-screen pap-smear slides and detect the suspicious cells using the hit/miss transform. Computer simulation of the algorithm, tested on 184 pap-smear images, detected 95% of the suspicious regions as suspect while tagging just 5% of the normal regions as suspect. An optoelectronic implementation of the hit/miss transform using a 4f Vander-Lugt correlator architecture is proposed and demonstrated with experimental results.
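
    For readers unfamiliar with the hit/miss transform, the following digital sketch applies SciPy's binary hit-or-miss operation to a toy binary image. The structuring elements are invented and bear no relation to the optical correlator filters used in the paper.

```python
# Minimal sketch of the hit/miss transform idea on a binary image, using
# scipy's binary_hit_or_miss; the structuring elements below are toy shapes,
# not the correlator filters used in the optoelectronic system described above.
import numpy as np
from scipy.ndimage import binary_hit_or_miss

image = np.zeros((12, 12), dtype=bool)
image[4:7, 4:7] = True                 # a 3x3 "suspicious" blob

hit = np.ones((3, 3), dtype=bool)      # pattern that must be present (foreground)
miss = np.zeros((5, 5), dtype=bool)    # pattern that must be absent (background ring)
miss[0, :] = miss[-1, :] = miss[:, 0] = miss[:, -1] = True

matches = binary_hit_or_miss(image, structure1=hit, structure2=miss)
print("match locations:", np.argwhere(matches))
```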

  5. Design-for-Hardware-Trust Techniques, Detection Strategies and Metrics for Hardware Trojans

    DTIC Science & Technology

    2015-12-14

    down both rising and falling transitions. For Trojan detection, one fault, slow-to-rise or slow-to...in Jan. 2016. Through the course of this project we developed novel hardware Trojan detection techniques based on clock sweeping. The technique takes...algorithms to detect minor changes due to Trojans and compared them with those changes made by process variations. This technique was implemented on

  6. DCL System Using Deep Learning Approaches for Land-Based or Ship-Based Real-Time Recognition and Localization of Marine Mammals

    DTIC Science & Technology

    2013-09-30

    method has been successfully implemented to automatically detect and recognize pulse trains from minke whales (songs) and sperm whales (Physeter...workshops, conferences and data challenges 2. Enhancements of the ASR algorithm for frequency-modulated sounds: Right Whale Study 3...Enhancements of the ASR algorithm for pulse trains: Minke Whale Study 4. Mining Big Data Sound Archives using High Performance Computing software and hardware

  7. Implementation of Wi-Fi Signal Sampling on an Android Smartphone for Indoor Positioning Systems.

    PubMed

    Liu, Hung-Huan; Liu, Chun

    2017-12-21

    Collecting and maintaining radio fingerprint for wireless indoor positioning systems involves considerable time and labor. We have proposed the quick radio fingerprint collection (QRFC) algorithm which employed the built-in accelerometer of Android smartphones to implement step detection in order to assist in collecting radio fingerprints. In the present study, we divided the algorithm into moving sampling (MS) and stepped MS (SMS), and describe the implementation of both algorithms and their comparison. Technical details and common errors concerning the use of Android smartphones to collect Wi-Fi radio beacons were surveyed and discussed. The results of signal sampling experiments performed in a hallway measuring 54 m in length showed that in terms of the amount of time required to complete collection of access point (AP) signals, static sampling (SS; a traditional procedure for collecting Wi-Fi signals) took at least 2 h, whereas MS and SMS took approximately 150 and 300 s, respectively. Notably, AP signals obtained through MS and SMS were comparable to those obtained through SS in terms of the distribution of received signal strength indicator (RSSI) and positioning accuracy. Therefore, MS and SMS are recommended instead of SS as signal sampling procedures for indoor positioning algorithms.

  8. Implementation of Wi-Fi Signal Sampling on an Android Smartphone for Indoor Positioning Systems

    PubMed Central

    Liu, Chun

    2017-01-01

    Collecting and maintaining radio fingerprint for wireless indoor positioning systems involves considerable time and labor. We have proposed the quick radio fingerprint collection (QRFC) algorithm which employed the built-in accelerometer of Android smartphones to implement step detection in order to assist in collecting radio fingerprints. In the present study, we divided the algorithm into moving sampling (MS) and stepped MS (SMS), and describe the implementation of both algorithms and their comparison. Technical details and common errors concerning the use of Android smartphones to collect Wi-Fi radio beacons were surveyed and discussed. The results of signal sampling experiments performed in a hallway measuring 54 m in length showed that in terms of the amount of time required to complete collection of access point (AP) signals, static sampling (SS; a traditional procedure for collecting Wi-Fi signals) took at least 2 h, whereas MS and SMS took approximately 150 and 300 s, respectively. Notably, AP signals obtained through MS and SMS were comparable to those obtained through SS in terms of the distribution of received signal strength indicator (RSSI) and positioning accuracy. Therefore, MS and SMS are recommended instead of SS as signal sampling procedures for indoor positioning algorithms. PMID:29267234

  9. Delamination Detection Using Guided Wave Phased Arrays

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Yu, Lingyu; Leckey, Cara

    2016-01-01

    This paper presents a method for detecting multiple delaminations in composite laminates using non-contact phased arrays. The phased arrays are implemented with a non-contact scanning laser Doppler vibrometer (SLDV). The array imaging algorithm is performed in the frequency domain where both the guided wave dispersion effect and direction dependent wave properties are considered. By using the non-contact SLDV array with a frequency domain imaging algorithm, an intensity image of the composite plate can be generated for delamination detection. For the proof of concept, a laboratory test is performed using a non-contact phased array to detect two delaminations (created through quasi-static impact test) at different locations in a composite plate. Using the non-contact phased array and frequency domain imaging, the two impact-induced delaminations are successfully detected. This study shows that the non-contact phased array method is a potentially effective method for rapid delamination inspection in large composite structures.

  10. Adaptive Gaussian mixture models for pre-screening in GPR data

    NASA Astrophysics Data System (ADS)

    Torrione, Peter; Morton, Kenneth, Jr.; Besaw, Lance E.

    2011-06-01

    Due to the large amount of data generated by vehicle-mounted ground penetrating radar (GPR) antenna arrays, advanced feature extraction and classification can only be performed on a small subset of data during real-time operation. As a result, most GPR-based landmine detection systems implement "pre-screening" algorithms to process all of the data generated by the antenna array and identify locations with anomalous signatures for more advanced processing. These pre-screening algorithms must be computationally efficient and obtain a high probability of detection, but can permit a false alarm rate which might be higher than the total system requirements. Many approaches to pre-screening have previously been proposed, including linear prediction coefficients, the LMS algorithm, and CFAR-based approaches. Similar pre-screening techniques have also been developed in the field of video processing to identify anomalous behavior or anomalous objects. One such algorithm, an online k-means approximation to an adaptive Gaussian mixture model (GMM), is particularly well-suited to pre-screening in GPR data due to its computational efficiency, non-linear nature, and the relevance of the logic underlying the algorithm to GPR processing. In this work we explore the application of an adaptive GMM-based approach for anomaly detection from the video processing literature to pre-screening in GPR data. Results with the ARA Nemesis landmine detection system demonstrate significant pre-screening performance improvements compared to alternative approaches, and indicate that the proposed algorithm is a complementary technique to existing methods.
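
    A minimal sketch of the online k-means approximation to an adaptive GMM (in the Stauffer-Grimson style), applied here to a one-dimensional stream of energy values, is given below. The number of components, learning rate and match threshold are illustrative, not the values used in the Nemesis system.

```python
# Hedged sketch of an online k-means approximation to an adaptive Gaussian
# mixture, applied to a 1-D stream of GPR-like energy values; all constants
# (K, learning rate, match threshold) are illustrative.
import numpy as np

class OnlineGMM1D:
    def __init__(self, k=3, alpha=0.05, match_sigma=2.5):
        self.w = np.full(k, 1.0 / k)      # component weights
        self.mu = np.zeros(k)             # component means
        self.var = np.full(k, 10.0)       # component variances
        self.alpha, self.match_sigma = alpha, match_sigma

    def update(self, x):
        """Return True if x is anomalous (matches no existing component)."""
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        i = int(np.argmin(d))
        if d[i] < self.match_sigma:                   # matched -> adapt component i
            self.w = (1 - self.alpha) * self.w
            self.w[i] += self.alpha
            self.mu[i] += self.alpha * (x - self.mu[i])
            self.var[i] += self.alpha * ((x - self.mu[i]) ** 2 - self.var[i])
            return False
        j = int(np.argmin(self.w))                    # no match -> replace weakest
        self.mu[j], self.var[j], self.w[j] = x, 10.0, self.alpha
        self.w /= self.w.sum()
        return True

rng = np.random.default_rng(1)
stream = rng.normal(0.0, 1.0, 500)
stream[300:305] += 8.0                                # injected "anomaly"
model = OnlineGMM1D()
flags = [model.update(x) for x in stream]
print("flagged sample indices:", [i for i, f in enumerate(flags) if f][:10])
```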

  11. A new methodology for automatic detection of reference points in 3D cephalometry: A pilot study.

    PubMed

    Ed-Dhahraouy, Mohammed; Riri, Hicham; Ezzahmouly, Manal; Bourzgui, Farid; El Moutaoukkil, Abdelmajid

    2018-04-05

    The aim of this study was to develop a new method for the automatic detection of reference points in 3D cephalometry to overcome the limits of 2D cephalometric analyses. A specific application was designed using the C++ language for automatic and manual identification of 21 (reference) points on the craniofacial structures. Our algorithm is based on the implementation of an anatomical and geometrical network adapted to the craniofacial structure. This network was constructed based on the anatomical knowledge of the 3D cephalometric (reference) points. The proposed algorithm was tested on five CBCT images. The proposed approach for the automatic 3D cephalometric identification was able to detect 21 points with a mean error of 2.32 mm. In this pilot study, we propose an automated methodology for the identification of the 3D cephalometric (reference) points. A larger sample will be implemented in the future to assess the method validity and reliability. Copyright © 2018 CEO. Published by Elsevier Masson SAS. All rights reserved.

  12. Impact of MPEG-4 3D mesh coding on watermarking algorithms for polygonal 3D meshes

    NASA Astrophysics Data System (ADS)

    Funk, Wolfgang

    2004-06-01

    The MPEG-4 multimedia standard addresses the scene-based composition of audiovisual objects. Natural and synthetic multimedia content can be mixed and transmitted over narrow and broadband communication channels. Synthetic natural hybrid coding (SNHC) within MPEG-4 provides tools for 3D mesh coding (3DMC). We investigate the robustness of two different 3D watermarking algorithms for polygonal meshes with respect to 3DMC. The first algorithm is a blind detection scheme designed for labelling applications that require high bandwidth and low robustness. The second algorithm is a robust non-blind one-bit watermarking scheme intended for copyright protection applications. Both algorithms have been proposed by Benedens. We expect 3DMC to have an impact on the watermarked 3D meshes, as the algorithms used for our simulations work on vertex coordinates to encode the watermark. We use the 3DMC implementation provided with the MPEG-4 reference software and the Princeton Shape Benchmark model database for our simulations. The watermarked models are sent through the 3DMC encoder and decoder, and the watermark decoding process is performed. For each algorithm under consideration we examine the detection properties as a function of the quantization of the vertex coordinates.

  13. Improved detection of soma location and morphology in fluorescence microscopy images of neurons.

    PubMed

    Kayasandik, Cihan Bilge; Labate, Demetrio

    2016-12-01

    Automated detection and segmentation of somas in fluorescent images of neurons is a major goal in quantitative studies of neuronal networks, including applications of high-content-screenings where it is required to quantify multiple morphological properties of neurons. Despite recent advances in image processing targeted to neurobiological applications, existing algorithms of soma detection are often unreliable, especially when processing fluorescence image stacks of neuronal cultures. In this paper, we introduce an innovative algorithm for the detection and extraction of somas in fluorescent images of networks of cultured neurons where somas and other structures exist in the same fluorescent channel. Our method relies on a new geometrical descriptor called Directional Ratio and a collection of multiscale orientable filters to quantify the level of local isotropy in an image. To optimize the application of this approach, we introduce a new construction of multiscale anisotropic filters that is implemented by separable convolution. Extensive numerical experiments using 2D and 3D confocal images show that our automated algorithm reliably detects somas, accurately segments them, and separates contiguous ones. We include a detailed comparison with state-of-the-art existing methods to demonstrate that our algorithm is extremely competitive in terms of accuracy, reliability and computational efficiency. Our algorithm will facilitate the development of automated platforms for high content neuron image processing. A Matlab code is released open-source and freely available to the scientific community. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Software-Implemented Fault Tolerance in Communications Systems

    NASA Technical Reports Server (NTRS)

    Gantenbein, Rex E.

    1994-01-01

    Software-implemented fault tolerance (SIFT) is used in many computer-based command, control, and communications (C(3)) systems to provide the nearly continuous availability that they require. In the communications subsystem of Space Station Alpha, SIFT algorithms are used to detect and recover from failures in the data and command link between the Station and its ground support. The paper presents a review of these algorithms and discusses how such techniques can be applied to similar systems found in applications such as manufacturing control, military communications, and programmable devices such as pacemakers. With support from the Tracking and Communication Division of NASA's Johnson Space Center, researchers at the University of Wyoming are developing a testbed for evaluating the effectiveness of these algorithms prior to their deployment. This testbed will be capable of simulating a variety of C(3) system failures and recording the response of the Space Station SIFT algorithms to these failures. The design of this testbed and the applicability of the approach in other environments is described.

  15. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor

    PubMed Central

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-01-01

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in FPGA. Coplanar PosIT algorithm was implemented on the Nios II soft-core processor supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. Inverse square root method has been implemented for approximating square root computations. Real time results have been achieved and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation. PMID:27983714

  16. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor.

    PubMed

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-12-15

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in FPGA. Coplanar PosIT algorithm was implemented on the Nios II soft-core processor supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. Inverse square root method has been implemented for approximating square root computations. Real time results have been achieved and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation.
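
    The kinds of lightweight approximations mentioned above (Taylor-series trigonometry and an iterative inverse square root) can be sketched on the host side as follows; the term and iteration counts are arbitrary, and this is not the Nios II/FPGA code.

```python
# Illustrative sketch (host-side Python, not the FPGA/Nios II code): Taylor-series
# sine and a Newton-Raphson inverse square root, the kind of lightweight
# approximations mentioned above; term counts and iteration counts are arbitrary.
import math

def sin_taylor(x, terms=6):
    """Taylor series around 0: sin(x) = x - x^3/3! + x^5/5! - ..."""
    x = math.fmod(x, 2 * math.pi)        # crude range reduction
    s, term = 0.0, x
    for n in range(terms):
        s += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return s

def inv_sqrt(y, iters=5):
    """Newton-Raphson refinement of an initial guess for 1/sqrt(y)."""
    g = 1.0 / y if y > 1.0 else 1.0      # crude starting guess
    for _ in range(iters):
        g = g * (1.5 - 0.5 * y * g * g)  # standard inverse-sqrt Newton step
    return g

print(sin_taylor(1.0), math.sin(1.0))
print(inv_sqrt(4.0), 1.0 / math.sqrt(4.0))
```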

  17. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    NASA Astrophysics Data System (ADS)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays always suffer from undesired Ripple Fixed-Pattern Noise (FPN) when observing sky scenes. Ripple FPN seriously affects the imaging quality of a thermal imager, especially for small-target detection and tracking. It is hard to eliminate this FPN with calibration-based techniques or with current scene-based nonuniformity correction algorithms. In this paper, we present a modified space low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared sequences in comparison to several previously published methods. This algorithm not only effectively corrects common FPN such as stripes, but also has a clear advantage over current methods in terms of detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we present our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm on the FPGA has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
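
    The space low-pass / temporal high-pass idea with an update threshold can be sketched as follows; the filter size, time constant and threshold are illustrative, not the THP&GM parameters from the paper.

```python
# Hedged sketch of a space low-pass / temporal high-pass non-uniformity
# correction with a simple update threshold to limit ghosting; constants and
# the filter shape are illustrative, not the values used in the paper.
import numpy as np
from scipy.ndimage import uniform_filter

def thp_nuc(frames, time_const=32, thresh=10.0):
    offset = np.zeros_like(frames[0], dtype=np.float64)
    corrected = []
    for f in frames:
        f = f.astype(np.float64)
        y = f - offset                              # remove current FPN estimate
        err = y - uniform_filter(y, size=7)         # high-spatial-frequency residue
        update = np.abs(err) < thresh               # freeze updates on strong detail
        offset[update] += err[update] / time_const  # slow temporal high-pass update
        corrected.append(y)
    return corrected

rng = np.random.default_rng(0)
fpn = rng.normal(0, 3, (64, 64))                    # fixed pattern shared by frames
frames = [100 + fpn + rng.normal(0, 1, (64, 64)) for _ in range(200)]
out = thp_nuc(frames)
print("residual FPN std, first vs last frame:",
      round(np.std(out[0] - 100), 2), round(np.std(out[-1] - 100), 2))
```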

  18. Towards a Systematic Screening Tool for Quality Assurance and Semiautomatic Fraud Detection for Images in the Life Sciences.

    PubMed

    Koppers, Lars; Wormer, Holger; Ickstadt, Katja

    2017-08-01

    The quality and authenticity of images is essential for data presentation, especially in the life sciences. Questionable images may often be a first indicator for questionable results, too. Therefore, a tool that uses mathematical methods to detect suspicious images in large image archives can be a helpful instrument to improve quality assurance in publications. As a first step towards a systematic screening tool, especially for journal editors and other staff members who are responsible for quality assurance, such as laboratory supervisors, we propose a basic classification of image manipulation. Based on this classification, we developed and explored some simple algorithms to detect copied areas in images. Using an artificial image and two examples of previously published modified images, we apply quantitative methods such as pixel-wise comparison, a nearest neighbor and a variance algorithm to detect copied-and-pasted areas or duplicated images. We show that our algorithms are able to detect some simple types of image alteration, such as copying and pasting background areas. The variance algorithm detects not only identical, but also very similar areas that differ only by brightness. Further types could, in principle, be implemented in a standardized scanning routine. We detected the copied areas in a proven case of image manipulation in Germany and showed the similarity of two images in a retracted paper from the Kato labs, which has been widely discussed on sites such as pubpeer and retraction watch.
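
    A simplified sketch of copy-and-paste detection by block comparison is shown below. It matches blocks exactly, whereas the variance algorithm described above also tolerates brightness changes; the block size and test image are arbitrary.

```python
# Simplified sketch of copy-paste detection by exhaustive block comparison;
# real tools add tolerance for brightness changes (cf. the variance algorithm
# in the text). Block size and the test image are arbitrary.
import numpy as np

def find_duplicate_blocks(img, block=8):
    seen = {}
    pairs = []
    for y in range(0, img.shape[0] - block + 1, block):
        for x in range(0, img.shape[1] - block + 1, block):
            patch = img[y:y + block, x:x + block]
            if patch.std() == 0:          # skip flat background blocks
                continue
            key = patch.tobytes()
            if key in seen:
                pairs.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return pairs

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
img[40:48, 40:48] = img[8:16, 8:16]       # simulate a copied-and-pasted region
print("duplicated block pairs:", find_duplicate_blocks(img))
```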

  19. On detection and visualization techniques for cyber security situation awareness

    NASA Astrophysics Data System (ADS)

    Yu, Wei; Wei, Shixiao; Shen, Dan; Blowers, Misty; Blasch, Erik P.; Pham, Khanh D.; Chen, Genshe; Zhang, Hanlin; Lu, Chao

    2013-05-01

    Networking technologies are growing exponentially to meet worldwide communication requirements. The rapid growth of network technologies and the pervasiveness of communications pose serious security issues. In this paper, we aim to develop an integrated network defense system with situation awareness capabilities to present useful information to human analysts. In particular, we implement a prototypical system that includes both distributed passive and active network sensors and traffic visualization features, such as 1D, 2D and 3D network traffic displays. To effectively detect attacks, we also implement algorithms to transform real-world IP address data into images and study the patterns of attacks, and we use both a discrete wavelet transform (DWT) based scheme and a statistical scheme to detect attacks. Through an extensive simulation study, our data validate the effectiveness of our implemented defense system.

  20. Electric machine differential for vehicle traction control and stability control

    NASA Astrophysics Data System (ADS)

    Kuruppu, Sandun Shivantha

    Evolving requirements in energy efficiency and tightening regulations for reliable electric drivetrains drive the advancement of hybrid electric vehicle (HEV) and full electric vehicle (EV) technology. Different configurations of EV and HEV architectures are evaluated for their performance. Future technology is trending towards utilizing distinctive properties of electric machines not only to improve efficiency but also to realize advanced road adhesion controls and vehicle stability controls. The electric machine differential (EMD) is such a concept under current investigation for applications in the near future. Reliability of a power train is critical; therefore, sophisticated fault detection schemes are essential in guaranteeing reliable operation of a complex system such as an EMD. The research presented here emphasizes the implementation of a 4 kW electric machine differential, a novel single open-phase (SPO) fault diagnostic scheme, a real-time slip optimization algorithm, and an EMD-based yaw stability improvement study. The proposed d-q current signature based SPO fault diagnostic algorithm detects the fault within one electrical cycle. The EMD-based extremum-seeking slip optimization algorithm reduces stopping distance by 30% compared to hydraulic-braking-based ABS.

  1. Ensembles of radial basis function networks for spectroscopic detection of cervical precancer

    NASA Technical Reports Server (NTRS)

    Tumer, K.; Ramanujam, N.; Ghosh, J.; Richards-Kortum, R.

    1998-01-01

    The mortality related to cervical cancer can be substantially reduced through early detection and treatment. However, current detection techniques, such as Pap smear and colposcopy, fail to achieve a concurrently high sensitivity and specificity. In vivo fluorescence spectroscopy is a technique which quickly, noninvasively and quantitatively probes the biochemical and morphological changes that occur in precancerous tissue. A multivariate statistical algorithm was used to extract clinically useful information from tissue spectra acquired from 361 cervical sites from 95 patients at 337-, 380-, and 460-nm excitation wavelengths. The multivariate statistical analysis was also employed to reduce the number of fluorescence excitation-emission wavelength pairs required to discriminate healthy tissue samples from precancerous tissue samples. The use of connectionist methods such as multilayered perceptrons, radial basis function (RBF) networks, and ensembles of such networks was investigated. RBF ensemble algorithms based on fluorescence spectra potentially provide automated and near real-time implementation of precancer detection in the hands of nonexperts. The results are more reliable, direct, and accurate than those achieved by either human experts or multivariate statistical algorithms.

  2. Implementation of a Real-Time Stacking Algorithm in a Photogrammetric Digital Camera for Uavs

    NASA Astrophysics Data System (ADS)

    Audi, A.; Pierrot-Deseilligny, M.; Meynard, C.; Thom, C.

    2017-08-01

    In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (such as cloudy-sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality with an equivalent long exposure time, using several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the light-weight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points on the other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing time, resulting images, as well as block diagrams of the described architecture. The resulting stacked image obtained on real surveys does not appear visually degraded. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the time needed to write an image to the storage device. An interesting by-product of this algorithm is the 3D rotation between poses, estimated by a photogrammetric method, which can be used to recalibrate the gyrometers of the IMU in real time.
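
    A generic OpenCV sketch of feature-aided alignment and stacking is given below (FAST corners, pyramidal Lucas-Kanade tracking, a RANSAC homography, and accumulation). It is not the IGN camera's SoC/FPGA implementation, and the synthetic image and simulated shift are placeholders.

```python
# Hedged sketch of feature-aided stacking: FAST corners in a reference frame,
# tracked into a second frame with pyramidal Lucas-Kanade, a homography fit,
# and accumulation of the warped frame. Generic OpenCV pipeline, not the IGN
# camera's implementation; the synthetic image and shift are arbitrary.
import cv2
import numpy as np

rng = np.random.default_rng(0)
ref = (rng.random((240, 320)) * 255).astype(np.uint8)
M = np.float32([[1, 0, 3.5], [0, 1, -2.0]])                 # simulated camera motion
moved = cv2.warpAffine(ref, M, (320, 240))

fast = cv2.FastFeatureDetector_create(threshold=25)
pts = cv2.KeyPoint_convert(fast.detect(ref, None)).reshape(-1, 1, 2).astype(np.float32)
next_pts, status, _ = cv2.calcOpticalFlowPyrLK(ref, moved, pts, None)
good_ref, good_mov = pts[status.ravel() == 1], next_pts[status.ravel() == 1]

H, _ = cv2.findHomography(good_mov, good_ref, cv2.RANSAC, 3.0)  # moved -> ref frame
aligned = cv2.warpPerspective(moved, H, (320, 240))
stack = (ref.astype(np.float32) + aligned.astype(np.float32)) / 2.0
print("estimated translation (px):", round(float(H[0, 2]), 2), round(float(H[1, 2]), 2))
```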

  3. Simulation of an enhanced TCAS 2 system in operation

    NASA Technical Reports Server (NTRS)

    Rojas, R. G.; Law, P.; Burnside, W. D.

    1987-01-01

    Described is a computer simulation of a Boeing 737 aircraft equipped with an enhanced Traffic and Collision Avoidance System (TCAS II). In particular, an algorithm is developed which permits the computer simulation of the tracking of a target airplane by a Boeing 737 which has a TCAS II array mounted on top of its fuselage. This algorithm has four main components: namely, the target path, the noise source, the alpha-beta filter, and threat detection. The implementation of each of these four components is described. Furthermore, the areas where the present algorithm needs to be improved are also mentioned.

  4. A new clustering strategy

    NASA Astrophysics Data System (ADS)

    Feng, Jian-xin; Tang, Jia-fu; Wang, Guang-xing

    2007-04-01

    On the basis of an analysis of clustering algorithms that had been proposed for MANETs, a novel clustering strategy was proposed in this paper. With trust defined by statistical hypothesis testing in probability theory and the cluster head selected by node trust and node mobility, this strategy can realize malicious-node detection, which was neglected by other clustering algorithms. It also overcomes a deficiency of the MOBIC algorithm, which cannot implement the relative mobility metric of corresponding nodes because the receiving power of two consecutive HELLO packets cannot be measured. It is an effective solution for clustering a MANET securely.

  5. Implementation of a Smart Phone for Motion Analysis.

    PubMed

    Yodpijit, Nantakrit; Songwongamarit, Chalida; Tavichaiyuth, Nicha

    2015-01-01

    In today's information-rich environment, one of the most popular devices is the smartphone. Research has shown significant growth in the use of smartphones and apps all over the world. The accelerometer within a smartphone is a motion sensor that can be used to detect human movements. As with other major vital signs, gait characteristics reflect general health status, and they can be determined using smartphones. The objective of the current study is to design and develop an alternative technology that can potentially predict health status and reduce healthcare costs. This study uses a smartphone as a wireless accelerometer for quantifying human motion characteristics through four steps of system design and development (data acquisition, a feature extraction algorithm, classifier design, and a decision-making strategy). Findings indicate that it is possible to extract features from a smartphone's accelerometer using a peak detection algorithm. Gait characteristics obtained from the peak detection algorithm include stride time, stance time, swing time and cadence. Applications and limitations of this study are also discussed.
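
    A minimal sketch of peak-detection-based step extraction from an accelerometer magnitude signal is shown below; the sampling rate, filter cut-off and minimum peak spacing are illustrative choices, not the study's parameters.

```python
# Minimal sketch (not the study's implementation): peak detection on a toy
# accelerometer signal to derive step timing and cadence; all constants are
# illustrative choices.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                       # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Toy signal standing in for vertical acceleration during walking (~1.8 steps/s)
acc = 9.81 + 2.0 * np.sin(2 * np.pi * 1.8 * t) \
      + np.random.default_rng(0).normal(0, 0.3, t.size)

b, a = butter(2, 3.0 / (fs / 2), btype="low")    # smooth out sensor noise
smooth = filtfilt(b, a, acc)
peaks, _ = find_peaks(smooth, height=smooth.mean(), distance=int(0.4 * fs))

step_times = np.diff(peaks) / fs
print("steps detected:", len(peaks))
print("mean step time: %.2f s, cadence: %.0f steps/min"
      % (step_times.mean(), 60.0 / step_times.mean()))
```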

  6. Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF

    NASA Astrophysics Data System (ADS)

    Saegusa, Shozo; Yasuda, Yuya; Uratani, Yoshitaka; Tanaka, Eiichirou; Makino, Toshiaki; Chang, Jen-Yuan (James)

    A conceptual Guide-Dog Robot prototype to lead and to recognize a visually-handicapped person is developed and discussed in this paper. Key design features of the robot include a movable platform, a human-machine interface, and the capability of avoiding obstacles. A novel algorithm enabling the robot to recognize its follower's locomotion as well as to detect the center of a corridor is proposed and implemented in the robot's human-machine interface. It is demonstrated that, using the proposed leading and detecting algorithm along with a rapid scanning laser range finder (LRF) sensor, the robot is able to successfully and effectively lead a human walking in a corridor without running into obstacles such as trash boxes or adjacent walking persons. The position and trajectory of the robot leading a human maneuvering in a common corridor environment are measured by an independent LRF observer. The measured data suggest that the proposed algorithms are effective in enabling the robot to detect the center of the corridor and the position of its follower correctly.

  7. Robust approximation of image illumination direction in a segmentation-based crater detection algorithm for spacecraft navigation

    NASA Astrophysics Data System (ADS)

    Maass, Bolko

    2016-12-01

    This paper describes an efficient and easily implemented algorithmic approach to extracting an approximation to an image's dominant projected illumination direction, based on intermediary results from a segmentation-based crater detection algorithm (CDA), at a computational cost that is negligible in comparison to that of the prior stages of the CDA. Most contemporary CDAs built for spacecraft navigation use this illumination direction as a means of improving performance or even require it to function at all. Deducing the illumination vector from the image alone reduces the reliance on external information such as the accurate knowledge of the spacecraft inertial state, accurate time base and solar system ephemerides. Therefore, a method such as the one described in this paper is a prerequisite for true "Lost in Space" operation of a purely segmentation-based crater detecting and matching method for spacecraft navigation. The proposed method is verified using ray-traced lunar elevation model data, asteroid image data, and in a laboratory setting with a camera in the loop.

  8. Automatic detection of muscle activity from mechanomyogram signals: a comparison of amplitude and wavelet-based methods.

    PubMed

    Alves, Natasha; Chau, Tom

    2010-04-01

    Knowledge of muscle activity timing is critical to many clinical applications, such as the assessment of muscle coordination and the prescription of muscle-activated switches for individuals with disabilities. In this study, we introduce a continuous wavelet transform (CWT) algorithm for the detection of muscle activity via mechanomyogram (MMG) signals. CWT coefficients of the MMG signal were compared to scale-specific thresholds derived from the baseline signal to estimate the timing of muscle activity. Test signals were recorded from the flexor carpi radialis muscles of 15 able-bodied participants as they squeezed and released a hand dynamometer. Using the dynamometer signal as a reference, the proposed CWT detection algorithm was compared against a global-threshold CWT detector as well as amplitude-based event detection for sensitivity and specificity to voluntary contractions. The scale-specific CWT-based algorithm exhibited superior detection performance over the other detectors. CWT detection also showed good muscle selectivity during hand movement, particularly when a given muscle was the primary facilitator of the contraction. This may suggest that, during contraction, the compound MMG signal has a recurring morphological pattern that is not prevalent in the baseline signal. The ability of CWT analysis to be implemented in real time makes it a candidate for muscle-activity detection in clinical applications.
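
    A rough sketch of CWT-based onset detection with per-scale baseline thresholds is given below; the wavelet, scales, threshold factor and simulated MMG burst are assumptions, not the study's settings.

```python
# Hedged sketch of CWT-based activity detection: per-scale thresholds estimated
# from a baseline segment, then a sample is flagged active when coefficients at
# a majority of scales exceed their thresholds. Scales, wavelet and factors are
# illustrative, not the values used in the study.
import numpy as np
import pywt

fs = 1000
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
mmg = rng.normal(0, 0.05, t.size)
mmg[1500:2500] += rng.normal(0, 0.4, 1000)         # simulated contraction burst

scales = np.arange(2, 32)
coeffs, _ = pywt.cwt(mmg, scales, "morl")          # shape: (n_scales, n_samples)

baseline = np.abs(coeffs[:, :1000])                # first second assumed quiescent
thr = baseline.mean(axis=1) + 4 * baseline.std(axis=1)
active_scales = np.abs(coeffs) > thr[:, None]
activity = active_scales.mean(axis=0) > 0.5        # majority of scales must agree

onset = np.argmax(activity)
offset = len(activity) - np.argmax(activity[::-1])
print("detected activity from %.2f s to %.2f s" % (onset / fs, offset / fs))
```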

  9. Automatic detection and classification of damage zone(s) for incorporating in digital image correlation technique

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Sudipta; Deb, Debasis

    2016-07-01

    Digital image correlation (DIC) is a technique developed for monitoring surface deformation/displacement of an object under loading conditions. This method is further refined to make it capable of handling discontinuities on the surface of the sample. A damage zone refers to a surface area that fractures and opens in the course of loading. In this study, an algorithm is presented to automatically detect multiple damage zones in the deformed image. The algorithm identifies the pixels located inside these zones and eliminates them from the FEM-DIC processes. The proposed algorithm is successfully applied to several damaged samples to estimate the displacement fields of an object under loading conditions. This study shows that the displacement fields represent the damage conditions reasonably well compared to the regular FEM-DIC technique that does not consider the damage zones.

  10. Design of an FMCW radar baseband signal processing system for automotive application.

    PubMed

    Lin, Jau-Jr; Li, Yuan-Ping; Hsu, Wei-Chiang; Lee, Ta-Sung

    2016-01-01

    For a typical FMCW automotive radar system, a new design of baseband signal processing architecture and algorithms is proposed to overcome the ghost targets and overlapping problems in the multi-target detection scenario. To satisfy the short measurement time constraint without increasing the RF front-end loading, a three-segment waveform with different slopes is utilized. By introducing a new pairing mechanism and a spatial filter design algorithm, the proposed detection architecture not only provides high accuracy and reliability, but also requires low pairing time and computational loading. This proposed baseband signal processing architecture and algorithms balance the performance and complexity, and are suitable to be implemented in a real automotive radar system. Field measurement results demonstrate that the proposed automotive radar signal processing system can perform well in a realistic application scenario.

  11. Multi-site implementation of nutrition screening and diagnosis in medical care units: Success of the More-2-Eat project.

    PubMed

    Keller, Heather H; Valaitis, Renata; Laur, Celia V; McNicholl, Tara; Xu, Yingying; Dubin, Joel A; Curtis, Lori; Obiorah, Suzanne; Ray, Sumantra; Bernier, Paule; Gramlich, Leah; Stickles-White, Marilee; Laporte, Manon; Bell, Jack

    2018-03-22

    Improving the detection and treatment of malnourished patients in hospital is needed to promote recovery. To describe the change in rates of detection and triaging of care for malnourished patients in 5 hospitals that were implementing an evidence-based nutrition care algorithm. To demonstrate that following this algorithm leads to increased detection of malnutrition and increased treatment to mitigate this condition. Sites worked towards implementing the Integrated Nutrition Pathway for Acute Care (INPAC), including screening (Canadian Nutrition Screening Tool) and triage (Subjective Global Assessment; SGA) to detect and diagnose malnourished patients. Implementation occurred over a 24-month period, including developmental (Period 1), implementation (Periods 2-5), and sustainability (Period 6) phases. Audits (n = 36) of patient health records (n = 5030) were conducted to identify nutrition care practices implemented with a variety of strategies and behaviour change techniques. All sites increased nutrition screening from Period 1, with three achieving the goal of 75% of admitted patients being screened by Period 3, and the remainder achieving a rate of 70% by end of implementation. No sites were conducting SGA at Period 1, and sites reached the goal of a 75% completion rate or referral for those identified to be at nutrition risk, by Period 3 or 4. By Period 2, 100% of patients identified as SGA C (severely malnourished) were receiving a comprehensive nutritional assessment. In Period 1, the nutrition diagnosis and documentation by the dietitian of 'malnutrition' was a modest 0.37%, increasing to over 5% of all audited health records. The overall use of any Advanced Nutrition Care practices increased from 31% during Period 1 to 63% during Period 6. The success of this multi-site study demonstrated that implementation of nutrition screening and diagnosis is feasible and leads to appropriate care. INPAC promotes efficiency in nutrition care while minimizing the risk of missing malnourished patients. Retrospectively registered ClinicalTrials.gov Identifier: NCT02800304, June 7, 2016. Copyright © 2018 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.

  12. Multispectra CWT-based algorithm (MCWT) in mass spectra for peak extraction.

    PubMed

    Hsueh, Huey-Miin; Kuo, Hsun-Chih; Tsai, Chen-An

    2008-01-01

    An important objective in mass spectrometry (MS) is to identify a set of biomarkers that can be used to potentially distinguish patients between distinct treatments (or conditions) from tens or hundreds of spectra. A common two-step approach involving peak extraction and quantification is employed to identify the features of scientific interest. The selected features are then used for further investigation to understand the underlying biological mechanism of individual proteins, or for the development of genomic biomarkers for early diagnosis. However, the use of inadequate or ineffective peak detection and peak alignment algorithms in the peak extraction step may lead to a high rate of false positives. Also, it is crucial to reduce the false positive rate when detecting biomarkers from tens or hundreds of spectra. Here a new procedure is introduced for feature extraction in mass spectrometry data that extends the continuous wavelet transform-based (CWT-based) algorithm to multiple spectra. The proposed multispectra CWT-based algorithm (MCWT) not only performs peak detection for multiple spectra but also carries out peak alignment at the same time. The authors' MCWT algorithm constructs a reference, which integrates information from multiple raw spectra, for feature extraction. The algorithm is applied to a SELDI-TOF mass spectra data set provided by CAMDA 2006 with known polypeptide m/z positions. This new approach is easy to implement and it outperforms the existing peak extraction method from the Bioconductor PROcess package.

  13. Topography-Dependent Motion Compensation: Application to UAVSAR Data

    NASA Technical Reports Server (NTRS)

    Jones, Cathleen E.; Hensley, Scott; Michel, Thierry

    2009-01-01

    The UAVSAR L-band synthetic aperture radar system has been designed for repeat track interferometry in support of Earth science applications that require high-precision measurements of small surface deformations over timescales from hours to years. Conventional motion compensation algorithms, which are based upon assumptions of a narrow beam and flat terrain, yield unacceptably large errors in areas with even moderate topographic relief, i.e., in most areas of interest. This often limits the ability to achieve sub-centimeter surface change detection over significant portions of an acquired scene. To reduce this source of error in the interferometric phase, we have implemented an advanced motion compensation algorithm that corrects for the scene topography and radar beam width. Here we discuss the algorithm used, its implementation in the UAVSAR data processor, and the improvement in interferometric phase and correlation achieved in areas with significant topographic relief.

  14. Adaptive DFT-based Interferometer Fringe Tracking

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) observatory at Mt. Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse.
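
    A minimal sketch of a sliding-window DFT that updates a single frequency bin recursively (O(1) per sample, instead of recomputing a full transform) is shown below; the window length, tracked bin and test signal are illustrative and unrelated to the IOTA implementation.

```python
# Minimal sketch of a sliding-window DFT for tracking one frequency bin
# recursively; the signal, window length and tracked bin are illustrative,
# not the IOTA parameters.
import numpy as np

def sliding_dft_bin(x, N, k):
    """Return the magnitude of DFT bin k over a length-N window at each sample."""
    w = np.exp(2j * np.pi * k / N)
    X = 0.0 + 0.0j
    buf = np.zeros(N)
    mags = np.empty(len(x))
    for n, sample in enumerate(x):
        X = (X + sample - buf[n % N]) * w    # recursive sliding-DFT update
        buf[n % N] = sample
        mags[n] = abs(X)
    return mags

fs, N, k = 1000, 64, 8                        # bin 8 of a 64-sample window -> 125 Hz
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * 125 * t) + 0.2 * np.random.default_rng(0).normal(size=t.size)
mag = sliding_dft_bin(sig, N, k)
print("bin magnitude settles near N/2 =", N / 2, "-> last value:", round(mag[-1], 1))
```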

  15. Detection of the Impact of Ice Crystal Accretion in an Aircraft Engine Compression System During Dynamic Operation

    NASA Technical Reports Server (NTRS)

    May, Ryan D.; Simon, Donald L.; Guo, Ten-Huei

    2014-01-01

    The accretion of ice in the compression system of commercial gas turbine engines operating in high ice water content conditions is a safety issue being studied by the aviation community. While most of the research focuses on the underlying physics of ice accretion and the meteorological conditions in which accretion can occur, a systems-level perspective on the topic lends itself to potential near-term operational improvements. Here a detection algorithm is developed that can detect the impact of ice accretion in the Low Pressure Compressor of an aircraft engine during steady flight as well as during changes in altitude. Unfortunately, the algorithm as implemented was not able to distinguish throttle changes from ice accretion, and thus more work remains to be done.

  16. Antenna Allocation in MIMO Radar with Widely Separated Antennas for Multi-Target Detection

    PubMed Central

    Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong

    2014-01-01

    In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination times. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than the branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes. PMID:25350505
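
    A minimal sketch of greedy list scheduling for the minimum-makespan formulation described above, with targets playing the role of "machines" and the antennas' illumination demands the "jobs". This is a standard constant-factor heuristic (the longest-processing-time variant), not the paper's branch-and-bound or enhanced factor 2 algorithm; all names and numbers are illustrative.

    ```python
    import heapq

    def allocate_antennas(antenna_loads, n_targets):
        """Assign each antenna (with an illumination 'load') to the currently least-loaded target."""
        heap = [(0.0, t) for t in range(n_targets)]   # (accumulated load, target id)
        heapq.heapify(heap)
        assignment = {}
        # Processing loads in decreasing order gives the classic LPT approximation.
        for antenna, load in sorted(antenna_loads.items(), key=lambda kv: -kv[1]):
            total, target = heapq.heappop(heap)
            assignment[antenna] = target
            heapq.heappush(heap, (total + load, target))
        return assignment

    print(allocate_antennas({"a1": 3.0, "a2": 2.5, "a3": 2.0, "a4": 1.0}, n_targets=2))
    ```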

  17. Antenna allocation in MIMO radar with widely separated antennas for multi-target detection.

    PubMed

    Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong

    2014-10-27

    In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination times. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than the branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes.

  18. Inter-satellite links for satellite autonomous integrity monitoring

    NASA Astrophysics Data System (ADS)

    Rodríguez-Pérez, Irma; García-Serrano, Cristina; Catalán Catalán, Carlos; García, Alvaro Mozo; Tavella, Patrizia; Galleani, Lorenzo; Amarillo, Francisco

    2011-01-01

    A new integrity monitoring mechanism to be implemented on board a GNSS, taking advantage of inter-satellite links, is introduced. It is based on accurate range and Doppler measurements that are affected neither by atmospheric delays nor by local ground degradation (multipath and interference). By a linear combination of the inter-satellite link observables, appropriate observables for both satellite orbit and clock monitoring are obtained, and the proposed algorithms make it possible to reduce the time-to-alarm and the probability of undetected satellite anomalies. Several test cases have been run to assess the performance of the new orbit and clock monitoring algorithms in a complete scenario (satellite-to-satellite and satellite-to-ground links) and in a satellite-only scenario. The results of this experimentation campaign demonstrate that the Orbit Monitoring Algorithm is able to detect orbital feared events while the position error at the worst user location is still within acceptable limits. For instance, an unplanned manoeuvre in the along-track direction is detected (with a probability of false alarm equal to 5 × 10^-9) when the position error at the worst user location is 18 cm. The experimentation also reveals that the clock monitoring algorithm is able to detect phase jumps, frequency jumps and instability degradation on the clocks, but the latency and performance of detection strongly depend on the noise added by the clock measurement system.

  19. Bladed wheels damage detection through Non-Harmonic Fourier Analysis improved algorithm

    NASA Astrophysics Data System (ADS)

    Neri, P.

    2017-05-01

    Recent papers introduced Non-Harmonic Fourier Analysis for bladed wheel damage detection. This technique showed its potential in estimating the frequency of sinusoidal signals even when the acquisition time is short with respect to the vibration period, provided that some hypotheses are fulfilled. However, previously proposed algorithms showed severe limitations in detecting cracks at an early stage. The present paper proposes an improved algorithm that can detect a blade vibration frequency shift due to a crack whose size is very small compared to the blade width. Such a technique could be implemented for condition-based maintenance, allowing non-contact methods to be used for vibration measurements. A stator-fixed laser sensor could monitor all the blades as they pass in front of the spot, giving valuable information about the wheel health. In this configuration, the acquisition time for each blade becomes shorter as the machine rotational speed increases. In this situation, traditional Discrete Fourier Transform analysis yields poor frequency resolution and is not suitable for detecting small frequency shifts. Non-Harmonic Fourier Analysis, by contrast, showed high reliability in vibration frequency estimation even with data samples collected in a short time range. A description of the improved algorithm is provided in the paper, along with a comparison with the previous one. Finally, a validation of the method is presented, based on finite element simulation results.
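
    A minimal sketch of why non-harmonic (off-grid) frequency estimation can resolve small shifts that plain DFT bin spacing would hide: the short record below is projected onto a fine grid of candidate frequencies rather than the harmonic DFT grid. This is a generic grid-search estimator under illustrative parameters, not the paper's improved algorithm.

    ```python
    import numpy as np

    fs = 10000.0
    t = np.arange(0, 0.01, 1 / fs)          # only 10 ms of data -> DFT spacing of 100 Hz
    true_f = 437.3
    x = np.sin(2 * np.pi * true_f * t) + 0.1 * np.random.randn(t.size)

    freqs = np.arange(300.0, 600.0, 0.1)    # candidate grid far finer than 100 Hz
    # Project the record onto complex exponentials at each candidate frequency and
    # take the magnitude; the maximizer is the frequency estimate.
    proj = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ x)
    print(freqs[np.argmax(proj)])           # close to 437.3 despite the short record
    ```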

  20. Applied algorithm in the liner inspection of solid rocket motors

    NASA Astrophysics Data System (ADS)

    Hoffmann, Luiz Felipe Simões; Bizarria, Francisco Carlos Parquet; Bizarria, José Walter Parquet

    2018-03-01

    In rocket motors, the bonding between the solid propellant and thermal insulation is accomplished by a thin adhesive layer, known as the liner. The liner application method involves a complex sequence of tasks, which includes, in its final stage, surface integrity inspection. Nowadays in Brazil, an expert carries out a thorough visual inspection to detect defects on the liner surface that may compromise the propellant interface bonding. Therefore, this paper proposes an algorithm that uses the photometric stereo technique and the K-nearest neighbor (KNN) classifier to assist the expert in the surface inspection. Photometric stereo allows the surface information of the test images to be recovered, while the KNN method enables image pixels to be classified into two classes: non-defect and defect. Tests performed on a computer vision based prototype validate the algorithm. The positive results suggest that the algorithm is feasible and, when implemented in a real scenario, will be able to help the expert in detecting defective areas on the liner surface.
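
    A minimal sketch of the pixel-level KNN classification step, assuming per-pixel feature vectors (for example, surface quantities recovered by photometric stereo) are already arranged as rows of a feature matrix; the photometric-stereo recovery itself is not shown, and the synthetic features and class separation below are illustrative only.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(1)
    X_train = np.vstack([rng.normal(0.0, 1.0, (200, 4)),     # features of non-defect pixels
                         rng.normal(3.0, 1.0, (200, 4))])    # features of defect pixels
    y_train = np.array([0] * 200 + [1] * 200)

    clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    X_new = rng.normal(3.0, 1.0, (5, 4))                     # pixels from a new test image
    print(clf.predict(X_new))                                # 1 = defect, 0 = non-defect
    ```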

  1. [Purity Detection Model Update of Maize Seeds Based on Active Learning].

    PubMed

    Tang, Jin-ya; Huang, Min; Zhu, Qi-bing

    2015-08-01

    Seed purity reflects the degree to which seed varieties show typical, consistent characteristics, so improving the reliability and accuracy of seed purity detection is of great importance for guaranteeing seed quality. Hyperspectral imaging can reflect the internal and external characteristics of seeds at the same time, and has been widely used in nondestructive detection of agricultural products. The essence of nondestructive detection of agricultural products using hyperspectral imaging is to establish a mathematical model between the spectral information and the quality of the product. Since the spectral information is easily affected by the sample growth environment, the stability and generalization of the model weaken when the test samples are harvested from a different origin or year. An active learning algorithm was investigated to add representative samples to expand the sample space of the original model, so as to implement a rapid update of the model's predictive ability. Random selection (RS) and the Kennard-Stone algorithm (KS) were used to compare the model update effect with the active learning algorithm. The experimental results indicated that, for different sample set splits (1:1, 3:1, 4:1), the updated purity detection model for maize seeds from 2010, to which 40 samples from 2011 selected by the active learning algorithm were added, increased the prediction accuracy for new 2011 samples from 47%, 33.75% and 49% to 98.89%, 98.33% and 98.33%. For the updated purity detection model of 2011, its prediction accuracy for new 2010 samples increased by 50.83%, 54.58%, 53.75% to 94.57%, 94.02%, 94.57% after adding 56 new samples from 2010. Meanwhile, the model updated by the active learning algorithm performed better than those updated by RS and KS. Therefore, updating the purity detection model of maize seeds by active learning is feasible.
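
    A minimal sketch of uncertainty-based active learning for model updating: pick the new-season samples the current model is least confident about, obtain reference labels for them, and retrain. The selection criterion actually used in the paper may differ; the model, function names, and data below are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def select_informative(model, X_unlabeled, n_add):
        """Return indices of the n_add samples with the lowest maximum class probability."""
        confidence = model.predict_proba(X_unlabeled).max(axis=1)
        return np.argsort(confidence)[:n_add]

    rng = np.random.default_rng(2)
    X_old, y_old = rng.normal(size=(100, 6)), rng.integers(0, 2, 100)   # old-season calibration set
    X_new = rng.normal(0.5, 1.0, size=(80, 6))                          # unlabeled new-season samples

    model = LogisticRegression().fit(X_old, y_old)
    idx = select_informative(model, X_new, n_add=10)
    # ...obtain reference labels y_ref for X_new[idx], then retrain on the expanded set:
    # model.fit(np.vstack([X_old, X_new[idx]]), np.concatenate([y_old, y_ref]))
    print(idx)
    ```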

  2. Automated seismic detection of landslides at regional scales: a Random Forest based detection algorithm

    NASA Astrophysics Data System (ADS)

    Hibert, C.; Michéa, D.; Provost, F.; Malet, J. P.; Geertsema, M.

    2017-12-01

    Detection of landslide occurrences and measurement of their dynamic properties during run-out is a high research priority but a logistical and technical challenge. Seismology has started to help in several important ways. Taking advantage of the densification of global, regional and local networks of broadband seismic stations, recent advances now permit the seismic detection and location of landslides in near-real-time. This seismic detection could greatly increase the spatio-temporal resolution at which landslide triggering is studied, which is critical to better understand the influence of external forcings such as rainfall and earthquakes. However, automatically detecting seismic signals generated by landslides still represents a challenge, especially for events with small mass. The low signal-to-noise ratio classically observed for landslide-generated seismic signals and the difficulty of discriminating these signals from those generated by regional earthquakes or anthropogenic and natural noise are some of the obstacles that have to be circumvented. We present a new method for automatically constructing instrumental landslide catalogues from continuous seismic data. We developed a robust and versatile solution, which can be implemented in any context where seismic detection of landslides or other mass movements is relevant. The method is based on a spectral detection of the seismic signals and identification of the sources with a Random Forest machine learning algorithm. The spectral detection allows signals with low signal-to-noise ratio to be detected, while the Random Forest algorithm achieves a high rate of positive identification of the seismic signals generated by landslides and other seismic sources. The processing chain is implemented on a High Performance Computing centre, which permits years of continuous seismic data to be explored rapidly. We present here the preliminary results of applying this processing chain to years of continuous seismic records from the Alaskan permanent seismic network and the Hi-Climb trans-Himalayan seismic network. The processing chain we developed also opens the possibility of near-real-time seismic detection of landslides, in association with remote-sensing automated detection from Sentinel 2 images, for example.
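
    A minimal sketch of the source-identification stage: a Random Forest trained on feature vectors extracted from detected seismic signals (for example, spectral descriptors), separating landslide signals from earthquakes and noise. The features, labels, and feature count below are synthetic placeholders, not the catalogue or attributes used in the study.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 12))               # one row of signal features per detection
    y = rng.choice(["landslide", "earthquake", "noise"], size=300)

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    new_detection = rng.normal(size=(1, 12))
    # Class label plus per-class probabilities for the new detection.
    print(clf.predict(new_detection), clf.predict_proba(new_detection))
    ```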

  3. An Efficient Statistical Computation Technique for Health Care Big Data using R

    NASA Astrophysics Data System (ADS)

    Sushma Rani, N.; Srinivasa Rao, P., Dr; Parimala, P.

    2017-08-01

    Due to changes in living conditions and other factors, many critical health-related problems are arising. Diagnosing a problem at an earlier stage increases the chances of survival and fast recovery, which reduces the recovery time and the cost of treatment. One such medical issue is cancer, and breast cancer has been identified as the second leading cause of cancer death; if detected at an early stage it can be cured. Once a patient is found to have a breast tumor, it should be classified as cancerous or non-cancerous. The paper therefore uses the k-nearest neighbors (KNN) algorithm, one of the simplest instance-based machine learning algorithms, to classify the data. Day to day, new records are added, which leads to an increase in the data to be classified, and this tends toward a big data problem. The algorithm is implemented in R, which is a popular platform for statistical computing and machine learning. Experimentation is conducted using various classification evaluation metrics on various values of k. The results show that the KNN algorithm performs better than existing models.

  4. Mapping the Information Trace in Local Field Potentials by a Computational Method of Two-Dimensional Time-Shifting Synchronization Likelihood Based on Graphic Processing Unit Acceleration.

    PubMed

    Zhao, Zi-Fang; Li, Xue-Zhu; Wan, You

    2017-12-01

    The local field potential (LFP) is a signal reflecting the electrical activity of neurons surrounding the electrode tip. Synchronization between LFP signals provides important details about how neural networks are organized. Synchronization between two distant brain regions is hard to detect using linear synchronization algorithms like correlation and coherence. Synchronization likelihood (SL) is a non-linear synchronization-detecting algorithm widely used in studies of neural signals from two distant brain areas. One drawback of non-linear algorithms is the heavy computational burden. In the present study, we proposed a graphic processing unit (GPU)-accelerated implementation of an SL algorithm with optional 2-dimensional time-shifting. We tested the algorithm with both artificial data and raw LFP data. The results showed that this method revealed detailed information from original data with the synchronization values of two temporal axes, delay time and onset time, and thus can be used to reconstruct the temporal structure of a neural network. Our results suggest that this GPU-accelerated method can be extended to other algorithms for processing time-series signals (like EEG and fMRI) using similar recording techniques.

  5. SeaWiFS technical report series. Volume 28: SeaWiFS algorithms, part 1

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Acker, James G. (Editor); Mcclain, Charles R.; Arrigo, Kevin; Esaias, Wayne E.; Darzi, Michael; Patt, Frederick S.; Evans, Robert H.; Brown, James W.

    1995-01-01

    This document provides five brief reports that address several algorithm investigations sponsored by the Calibration and Validation Team (CVT) within the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Project. This volume, therefore, has been designated as the first in a series of algorithm volumes. Chapter 1 describes the initial suite of masks, used to prevent further processing of contaminated radiometric data, and flags, which are employed to mark data whose quality (due to a variety of factors) may be suspect. In addition to providing the mask and flag algorithms, this chapter also describes the initial strategy for their implementation. Chapter 2 evaluates various strategies for the detection of clouds and ice in high latitude (polar and sub-polar regions) using Coastal Zone Color Scanner (CZCS) data. Chapter 3 presents an algorithm designed for detecting and masking coccolithophore blooms in the open ocean. Chapter 4 outlines a proposed scheme for correcting the out-of-band response when SeaWiFS is in orbit. Chapter 5 gives a detailed description of the algorithm designed to apply sensor calibration data during the processing of level-1b data.

  6. Multi-Angle Implementation of Atmospheric Correction for MODIS (MAIAC). Part 3: Atmospheric Correction

    NASA Technical Reports Server (NTRS)

    Lyapustin, A.; Wang, Y.; Laszlo, I.; Hilker, T.; Hall, F.; Sellers, P.; Tucker, J.; Korkin, S.

    2012-01-01

    This paper describes the atmospheric correction (AC) component of the Multi-Angle Implementation of Atmospheric Correction algorithm (MAIAC) which introduces a new way to compute parameters of the Ross-Thick Li-Sparse (RTLS) Bi-directional reflectance distribution function (BRDF), spectral surface albedo and bidirectional reflectance factors (BRF) from satellite measurements obtained by the Moderate Resolution Imaging Spectroradiometer (MODIS). MAIAC uses a time series and spatial analysis for cloud detection, aerosol retrievals and atmospheric correction. It implements a moving window of up to 16 days of MODIS data gridded to 1 km resolution in a selected projection. The RTLS parameters are computed directly by fitting the cloud-free MODIS top of atmosphere (TOA) reflectance data stored in the processing queue. The RTLS retrieval is applied when the land surface is stable or changes slowly. In case of rapid or large magnitude change (as for instance caused by disturbance), MAIAC follows the MODIS operational BRDF/albedo algorithm and uses a scaling approach where the BRDF shape is assumed stable but its magnitude is adjusted based on the latest single measurement. To assess the stability of the surface, MAIAC features a change detection algorithm which analyzes relative change of reflectance in the Red and NIR bands during the accumulation period. To adjust for the reflectance variability with the sun-observer geometry and allow comparison among different days (view geometries), the BRFs are normalized to the fixed view geometry using the RTLS model. An empirical analysis of MODIS data suggests that the RTLS inversion remains robust when the relative change of geometry-normalized reflectance stays below 15%. This first of two papers introduces the algorithm; a second, companion paper illustrates its potential by analyzing MODIS data over a tropical rainforest and assessing errors and uncertainties of MAIAC compared to conventional MODIS products.

  7. DSP-Based dual-polarity mass spectrum pattern recognition for bio-detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V; Coffee, K; Gard, E

    2006-04-21

    The Bio-Aerosol Mass Spectrometry (BAMS) instrument analyzes single aerosol particles using a dual-polarity time-of-flight mass spectrometer, simultaneously recording spectra of thirty to a hundred thousand points for each polarity. We describe here a real-time pattern recognition algorithm developed at Lawrence Livermore National Laboratory that has been implemented on a nine Digital Signal Processor (DSP) system from Signatec Incorporated. The algorithm first preprocesses the raw time-of-flight data independently through an adaptive baseline removal routine. The next step is a polarity-dependent calibration to a mass-to-charge representation, reducing the data to about five hundred to a thousand channels per polarity. The last step is the identification step, using a pattern recognition algorithm based on a library of known particle signatures including threat agents and background particles. The identification step integrates the two polarities for a final determination using a score-based rule tree. This algorithm, operating on multiple channels per polarity and multiple polarities, is well suited for parallel real-time processing. It has been implemented on the PMP8A from Signatec Incorporated, a computer-based board that can interface directly to the two one-Giga-Sample digitizers (PDA1000 from Signatec Incorporated) used to record the two polarities of time-of-flight data. By using optimized data separation, pipelining, and parallel processing across the nine DSPs, it is possible to achieve a processing speed of up to a thousand particles per second while maintaining the recognition rate observed in a non-real-time implementation. This embedded system has allowed the BAMS technology to improve its throughput, and therefore its sensitivity, while maintaining a large dynamic range (number of channels and two polarities) and thus the system's specificity for bio-detection.

  8. A new approach based on the median filter to T-wave detection in ECG signal.

    PubMed

    Kholkhal, Mourad; Bereksi Reguig, Fethi

    2014-07-01

    The electrocardiogram (ECG) is one of the most used signals in the diagnosis of heart disease. It contains different waves which directly correlate to heart activity. Different methods have been used to detect these waves and consequently support diagnosis of heart activity. This paper focuses in particular on detection of the T-wave, which represents the repolarization phase of cardiac activity. The proposed approach is based on an algorithmic procedure that detects the T-wave using a set of filters, including mean and median filters. The proposed algorithm is implemented and tested on a set of ECG recordings taken from the European ST-T, MIT-BIH and MIT-BIH ST databases, respectively. The results are found to be very satisfactory in terms of sensitivity, predictivity and error compared to other works in the field.
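
    A minimal sketch of median-filter-based preprocessing for T-wave detection: a wide median filter estimates the baseline wander, which is subtracted before searching for a T peak in a window after each QRS complex. The window bounds, filter length, and synthetic ECG below are illustrative assumptions, not the values used in the paper.

    ```python
    import numpy as np
    from scipy.signal import medfilt, find_peaks

    def detect_t_waves(ecg, fs, qrs_locations):
        baseline = medfilt(ecg, kernel_size=int(0.6 * fs) | 1)   # odd kernel of roughly 0.6 s
        clean = ecg - baseline
        t_peaks = []
        for r in qrs_locations:
            lo, hi = r + int(0.12 * fs), r + int(0.40 * fs)      # search window after each QRS
            if hi >= len(clean):
                break
            peaks, _ = find_peaks(clean[lo:hi])
            if peaks.size:
                t_peaks.append(lo + peaks[np.argmax(clean[lo:hi][peaks])])
        return np.array(t_peaks)

    # Toy signal: impulses for R peaks and small bumps for T waves.
    fs = 250.0
    ecg = np.zeros(1000)
    for r in (100, 350, 600):
        ecg[r] = 1.0
        ecg[r + 60:r + 70] += np.hanning(10) * 0.3
    print(detect_t_waves(ecg, fs, qrs_locations=[100, 350, 600]))
    ```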

  9. Development and Validation of a Spike Detection and Classification Algorithm Aimed at Implementation on Hardware Devices

    PubMed Central

    Biffi, E.; Ghezzi, D.; Pedrocchi, A.; Ferrigno, G.

    2010-01-01

    Neurons cultured in vitro on MicroElectrode Array (MEA) devices connect to each other, forming a network. To study electrophysiological activity and long term plasticity effects, long period recording and spike sorter methods are needed. Therefore, on-line and real time analysis, optimization of memory use and data transmission rate improvement become necessary. We developed an algorithm for amplitude-threshold spikes detection, whose performances were verified with (a) statistical analysis on both simulated and real signal and (b) Big O Notation. Moreover, we developed a PCA-hierarchical classifier, evaluated on simulated and real signal. Finally we proposed a spike detection hardware design on FPGA, whose feasibility was verified in terms of CLBs number, memory occupation and temporal requirements; once realized, it will be able to execute on-line detection and real time waveform analysis, reducing data storage problems. PMID:20300592

  10. On the sensitivity of TG-119 and IROC credentialing to TPS commissioning errors.

    PubMed

    McVicker, Drew; Yin, Fang-Fang; Adamson, Justus D

    2016-01-08

    We investigate the sensitivity of IMRT commissioning using the TG-119 C-shape phantom and credentialing with the IROC head and neck phantom to treatment planning system commissioning errors. We introduced errors into the various aspects of the commissioning process for a 6X photon energy modeled using the analytical anisotropic algorithm within a commercial treatment planning system. Errors were implemented into the various components of the dose calculation algorithm including primary photons, secondary photons, electron contamination, and MLC parameters. For each error we evaluated the probability that it could be committed unknowingly during the dose algorithm commissioning stage, and the probability of it being identified during the verification stage. The clinical impact of each commissioning error was evaluated using representative IMRT plans including low and intermediate risk prostate, head and neck, mesothelioma, and scalp; the sensitivity of the TG-119 and IROC phantoms was evaluated by comparing dosimetric changes to the dose planes where film measurements occur and change in point doses where dosimeter measurements occur. No commissioning errors were found to have both a low probability of detection and high clinical severity. When errors do occur, the IROC credentialing and TG 119 commissioning criteria are generally effective at detecting them; however, for the IROC phantom, OAR point-dose measurements are the most sensitive despite being currently excluded from IROC analysis. Point-dose measurements with an absolute dose constraint were the most effective at detecting errors, while film analysis using a gamma comparison and the IROC film distance to agreement criteria were less effective at detecting the specific commissioning errors implemented here.

  11. Disk Crack Detection for Seeded Fault Engine Test

    NASA Technical Reports Server (NTRS)

    Luo, Huageng; Rodriguez, Hector; Hallman, Darren; Corbly, Dennis; Lewicki, David G. (Technical Monitor)

    2004-01-01

    Work was performed to develop and demonstrate vibration diagnostic techniques for the on-line detection of engine rotor disk cracks and other anomalies through a real engine test. An existing single-degree-of-freedom non-resonance-based vibration algorithm was extended to a multi-degree-of-freedom model. In addition, a resonance-based algorithm was also proposed for the case of one or more resonances. The algorithms were integrated into a diagnostic system using state-of-the- art commercial analysis equipment. The system required only non-rotating vibration signals, such as accelerometers and proximity probes, and the rotor shaft 1/rev signal to conduct the health monitoring. Before the engine test, the integrated system was tested in the laboratory by using a small rotor with controlled mass unbalances. The laboratory tests verified the system integration and both the non-resonance and the resonance-based algorithm implementations. In the engine test, the system concluded that after two weeks of cycling, the seeded fan disk flaw did not propagate to a large enough size to be detected by changes in the synchronous vibration. The unbalance induced by mass shifting during the start up and coast down was still the dominant response in the synchronous vibration.

  12. Stochastic resonance investigation of object detection in images

    NASA Astrophysics Data System (ADS)

    Repperger, Daniel W.; Pinkus, Alan R.; Skipper, Julie A.; Schrider, Christina D.

    2007-02-01

    Object detection in images was conducted using a nonlinear means of improving signal to noise ratio termed "stochastic resonance" (SR). In a recent United States patent application, it was shown that arbitrarily large signal to noise ratio gains could be realized when a signal detection problem is cast within the context of a SR filter. Signal-to-noise ratio measures were investigated. For a binary object recognition task (friendly versus hostile), the method was implemented by perturbing the recognition algorithm and subsequently thresholding via a computer simulation. To fairly test the efficacy of the proposed algorithm, a unique database of images has been constructed by modifying two sample library objects by adjusting their brightness, contrast and relative size via commercial software to gradually compromise their saliency to identification. The key to the use of the SR method is to produce a small perturbation in the identification algorithm and then to threshold the results, thus improving the overall system's ability to discern objects. A background discussion of the SR method is presented. A standard test is proposed in which object identification algorithms could be fairly compared against each other with respect to their relative performance.

  13. Data Mining for Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Biswas, Gautam; Mack, Daniel; Mylaraswamy, Dinkar; Bharadwaj, Raj

    2013-01-01

    The Vehicle Integrated Prognostics Reasoner (VIPR) program describes methods for enhanced diagnostics as well as a prognostic extension to the current state-of-the-art Aircraft Diagnostic and Maintenance System (ADMS). VIPR introduced a new anomaly detection function for discovering previously undetected and undocumented situations, where there are clear deviations from nominal behavior. Once a baseline (nominal model of operations) is established, the detection and analysis is split between on-aircraft outlier generation and off-aircraft expert analysis to characterize and classify events that may not have been anticipated by individual system providers. Offline expert analysis is supported by data curation and data mining algorithms that can be applied in the contexts of supervised and unsupervised learning. In this report, we discuss efficient methods to implement the Kolmogorov complexity measure using compression algorithms, and run a systematic empirical analysis to determine the best compression measure. Our experiments established that the combination of the DZIP compression algorithm and CiDM distance measure provides the best results for capturing relevant properties of time series data encountered in aircraft operations. This combination was used as the basis for developing an unsupervised learning algorithm to define "nominal" flight segments using historical flight segments.
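
    A minimal sketch of a compression-based dissimilarity in the spirit of the Kolmogorov complexity approximation discussed above, using zlib in place of the DZIP compressor named in the report and the standard normalized compression distance (NCD) rather than the CiDM measure; the byte strings are synthetic stand-ins for flight data segments.

    ```python
    import os
    import zlib

    def ncd(a: bytes, b: bytes) -> float:
        """Normalized compression distance between two byte strings (smaller = more similar)."""
        ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
        cab = len(zlib.compress(a + b))
        return (cab - min(ca, cb)) / max(ca, cb)

    x = bytes(i % 16 for i in range(1000))      # regular "nominal" segment
    y = bytes(i % 16 for i in range(1000))      # another similar nominal segment
    z = os.urandom(1000)                        # incompressible "anomalous" segment
    print(ncd(x, y), ncd(x, z))                 # small distance vs. large distance
    ```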

  14. Estimating the coordinates of pillars and posts in the parking lots for intelligent parking assist system

    NASA Astrophysics Data System (ADS)

    Choi, Jae Hyung; Kuk, Jung Gap; Kim, Young Il; Cho, Nam Ik

    2012-01-01

    This paper proposes an algorithm for the detection of pillars or posts in video captured by a single camera mounted on the fore side of the room mirror in a car. The main purpose of this algorithm is to complement a weakness of current ultrasonic parking assist systems, which do not find the exact position of pillars well and do not recognize narrow posts. The proposed algorithm consists of three steps: straight line detection, line tracking, and estimation of the 3D position of pillars. In the first step, strong lines are found by the Hough transform. The second step combines detection and tracking, and the third calculates the 3D position of each line from the trajectory of its relative positions and the camera parameters. Experiments on synthetic and real images show that the proposed method successfully locates and tracks the position of pillars, which helps the ultrasonic system to correctly locate the edges of pillars. It is believed that the proposed algorithm can also be employed as a basic element of a vision-based autonomous driving system.
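
    A minimal sketch of the first step (straight-line detection with the Hough transform) on a single frame, using OpenCV; the line tracking and 3D estimation from line trajectories and camera parameters are not shown, and the synthetic image and thresholds are placeholders.

    ```python
    import cv2
    import numpy as np

    # Synthetic grayscale frame with one straight edge standing in for a pillar boundary.
    frame = np.zeros((200, 200), dtype=np.uint8)
    cv2.line(frame, (20, 180), (120, 10), 255, 2)

    edges = cv2.Canny(frame, 50, 150)
    # Each returned line is (rho, theta) in the Hesse normal form.
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)
    if lines is not None:
        for rho, theta in lines[:, 0]:
            print(f"line at rho={rho:.1f}, theta={np.degrees(theta):.1f} deg")
    ```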

  15. Robust and fast characterization of OCT-based optical attenuation using a novel frequency-domain algorithm for brain cancer detection

    NASA Astrophysics Data System (ADS)

    Yuan, Wu; Kut, Carmen; Liang, Wenxuan; Li, Xingde

    2017-03-01

    Cancer is known to alter the local optical properties of tissues. The detection of OCT-based optical attenuation provides a quantitative method to efficiently differentiate cancer from non-cancer tissues. In particular, the intraoperative use of quantitative OCT is able to provide a direct visual guidance in real time for accurate identification of cancer tissues, especially those without any obvious structural layers, such as brain cancer. However, current methods are suboptimal in providing high-speed and accurate OCT attenuation mapping for intraoperative brain cancer detection. In this paper, we report a novel frequency-domain (FD) algorithm to enable robust and fast characterization of optical attenuation as derived from OCT intensity images. The performance of this FD algorithm was compared with traditional fitting methods by analyzing datasets containing images from freshly resected human brain cancer and from a silica phantom acquired by a 1310 nm swept-source OCT (SS-OCT) system. With graphics processing unit (GPU)-based CUDA C/C++ implementation, this new attenuation mapping algorithm can offer robust and accurate quantitative interpretation of OCT images in real time during brain surgery.

  16. Detection of anomalous events

    DOEpatents

    Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.

    2016-06-07

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs.) Additionally, the anomaly detector allows for regulatability, meaning that the algorithm can be user configurable to adjust a number of false alerts. The anomaly detector can be used for a variety of probability density functions, including normal Gaussian distributions, irregular distributions, as well as functions associated with continuous or discrete variables.
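
    A minimal sketch of scoring events by anomalousness as the negative log-probability under a density fitted to historical data, one simple realization of the "low-probability event" idea described above; the Gaussian model and the example values are illustrative assumptions, not the patented detectors.

    ```python
    import numpy as np
    from scipy import stats

    history = np.random.default_rng(4).normal(loc=100.0, scale=15.0, size=5000)
    mu, sigma = history.mean(), history.std()

    def anomaly_score(value):
        """Higher score = less probable under the fitted normal model."""
        return -stats.norm(mu, sigma).logpdf(value)

    for v in (105.0, 160.0, 300.0):
        print(v, round(anomaly_score(v), 2))
    ```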

  17. ARPA surveillance technology for detection of targets hidden in foliage

    NASA Astrophysics Data System (ADS)

    Hoff, Lawrence E.; Stotts, Larry B.

    1994-02-01

    The processing of large quantities of synthetic aperture radar data in real time is a complex problem. Even the image formation process taxes today's most advanced computers. The use of complex algorithms with multiple channels adds another dimension to the computational problem. Advanced Research Projects Agency (ARPA) is currently planning on using the Paragon parallel processor for this task. The Paragon is small enough to allow its use in a sensor aircraft. Candidate algorithms will be implemented on the Paragon for evaluation for real time processing. In this paper ARPA technology developments for detecting targets hidden in foliage are reviewed and examples of signal processing techniques on field collected data are presented.

  18. A particle filter for multi-target tracking in track before detect context

    NASA Astrophysics Data System (ADS)

    Amrouche, Naima; Khenchaf, Ali; Berkani, Daoud

    2016-10-01

    The track-before-detect (TBD) approach can be used to track a single target in a highly noisy radar scene. This is because it makes use of unthresholded observations and incorporates a binary target existence variable into its target state estimation process when implemented as a particle filter (PF). This paper proposes the recursive PF-TBD approach to detect multiple targets at low signal-to-noise ratios (SNRs). The algorithm's successful performance is demonstrated using a simulated two-target example.

  19. Texture analysis with statistical methods for wheat ear extraction

    NASA Astrophysics Data System (ADS)

    Bakhouche, M.; Cointault, F.; Gouton, P.

    2007-01-01

    In the agronomic domain, simplifying crop counting, which is necessary for yield prediction and agronomic studies, is an important project for technical institutes such as Arvalis. Although the main objective of our global project is to design a mobile robot for natural image acquisition directly in the field, Arvalis first asked us to detect, by image processing, the number of wheat ears in images before counting them, which will provide the first component of the yield. In this paper we compare different texture image segmentation techniques based on feature extraction by first- and higher-order statistical methods, applied to our images. The extracted features are used for unsupervised pixel classification to obtain the different classes in the image. The K-means algorithm is therefore implemented before the choice of a threshold to highlight the ears. Three methods have been tested in this feasibility study, with an average error of 6%. Although the quality of the detection is currently evaluated visually, automatic evaluation algorithms are being implemented. Moreover, other higher-order statistical methods will be implemented in the future, jointly with methods based on spatio-frequential transforms and specific filtering.
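
    A minimal sketch of unsupervised pixel classification from first-order texture statistics followed by K-means, in the spirit of the pipeline above; the patch size, number of classes, and synthetic image are illustrative assumptions rather than the study's settings.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    img = rng.normal(0.2, 0.05, (128, 128))
    img[40:90, 30:100] += rng.normal(0.4, 0.15, (50, 70))       # a more "textured" region (ears)

    # First-order local statistics as per-pixel features: local mean and local standard deviation.
    local_mean = uniform_filter(img, size=9)
    local_sqmean = uniform_filter(img ** 2, size=9)
    local_std = np.sqrt(np.clip(local_sqmean - local_mean ** 2, 0, None))

    features = np.stack([local_mean.ravel(), local_std.ravel()], axis=1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    segmentation = labels.reshape(img.shape)                    # class map over the image
    print(np.bincount(labels))
    ```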

  20. An Underlay Communication Channel for 5G Cognitive Mesh Networks: Packet Design, Implementation, Analysis, and Experimental Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarek Haddadin; Stephen Andrew Laraway; Arslan Majid

    This paper proposes and presents the design and implementation of an underlay communication channel (UCC) for 5G cognitive mesh networks. The UCC builds its waveform on filter bank multicarrier spread spectrum (FB-MCSS) signaling. The use of this novel spread spectrum signaling allows the device-to-device (D2D) user equipments (UEs) to communicate at a level well below the noise temperature and hence minimize taxation on macro-cell/small-cell base stations and their UEs in 5G wireless systems. Moreover, the use of filter banks allows us to avoid those portions of the spectrum that are in use by macro-cell and small-cell users. Hence, both D2D-to-cellular and cellular-to-D2D interference will be very close to none. We propose a specific packet for the UCC and develop algorithms for packet detection, timing acquisition and tracking, as well as channel estimation and equalization. We also present the details of an implementation of the proposed transceiver on a software radio platform and compare our experimental results with those from a theoretical analysis of our packet detection algorithm.

  1. Advanced Algorithms and High-Performance Testbed for Large-Scale Site Characterization and Subsurface Target Detecting Using Airborne Ground Penetrating SAR

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1997-01-01

    A team comprising the US Army Corps of Engineers Omaha District and Engineering and Support Center, Huntsville, the Jet Propulsion Laboratory (JPL), Stanford Research Institute (SRI), and Montgomery Watson is currently planning and conducting the largest-ever survey at the Former Buckley Field (60,000 acres) in Colorado, using the SRI airborne, ground-penetrating Synthetic Aperture Radar (SAR). The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, site characterization to identify contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing the massive amount of SAR data expected from this site. Two key requirements of this project are the accuracy (in terms of UXO detection) and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimal need for human perception in the processing, to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized (HH and VV) SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data (i.e., known surface and subsurface UXOs). In this paper, we discuss these algorithms and their successful application to the detection of surface and subsurface anti-tank mines using a data set from the Yuma Proving Ground, AZ, acquired by the SRI SAR.

  2. An Efficient Hardware Circuit for Spike Sorting Based on Competitive Learning Networks.

    PubMed

    Chen, Huan-Yuan; Chen, Chih-Chang; Hwang, Wen-Jyi

    2017-09-28

    This study aims to present an effective VLSI circuit for multi-channel spike sorting. The circuit supports the spike detection, feature extraction and classification operations. The detection circuit is implemented in accordance with the nonlinear energy operator algorithm. Both the peak detection and area computation operations are adopted for the realization of the hardware architecture for feature extraction. The resulting feature vectors are classified by a circuit for competitive learning (CL) neural networks. The CL circuit supports both online training and classification. In the proposed architecture, all the channels share the same detection, feature extraction, learning and classification circuits for a low area cost hardware implementation. The clock-gating technique is also employed for reducing the power dissipation. To evaluate the performance of the architecture, an application-specific integrated circuit (ASIC) implementation is presented. Experimental results demonstrate that the proposed circuit exhibits the advantages of a low chip area, a low power dissipation and a high classification success rate for spike sorting.
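
    A minimal sketch of the nonlinear energy operator (NEO) detection rule referenced above, psi[n] = x[n]^2 - x[n-1]*x[n+1], thresholded at a multiple of its mean; the threshold factor and test signal are illustrative, not the parameters of the ASIC implementation.

    ```python
    import numpy as np

    def neo_detect(x, k=8.0):
        """Return indices where the nonlinear energy operator exceeds k times its mean."""
        psi = np.empty_like(x)
        psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
        psi[0] = psi[-1] = 0.0
        threshold = k * psi.mean()
        return np.flatnonzero(psi > threshold), psi

    rng = np.random.default_rng(6)
    trace = rng.normal(0, 1, 2000)
    trace[500:505] += np.array([0.0, 8.0, 15.0, 8.0, 0.0])      # an embedded "spike"
    idx, _ = neo_detect(trace)
    print(idx[:10])                                             # samples flagged around index 500
    ```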

  3. An Efficient Hardware Circuit for Spike Sorting Based on Competitive Learning Networks

    PubMed Central

    Chen, Huan-Yuan; Chen, Chih-Chang

    2017-01-01

    This study aims to present an effective VLSI circuit for multi-channel spike sorting. The circuit supports the spike detection, feature extraction and classification operations. The detection circuit is implemented in accordance with the nonlinear energy operator algorithm. Both the peak detection and area computation operations are adopted for the realization of the hardware architecture for feature extraction. The resulting feature vectors are classified by a circuit for competitive learning (CL) neural networks. The CL circuit supports both online training and classification. In the proposed architecture, all the channels share the same detection, feature extraction, learning and classification circuits for a low area cost hardware implementation. The clock-gating technique is also employed for reducing the power dissipation. To evaluate the performance of the architecture, an application-specific integrated circuit (ASIC) implementation is presented. Experimental results demonstrate that the proposed circuit exhibits the advantages of a low chip area, a low power dissipation and a high classification success rate for spike sorting. PMID:28956859

  4. Automated detection of inaccurate and imprecise transitions in peptide quantification by multiple reaction monitoring mass spectrometry.

    PubMed

    Abbatiello, Susan E; Mani, D R; Keshishian, Hasmik; Carr, Steven A

    2010-02-01

    Multiple reaction monitoring mass spectrometry (MRM-MS) of peptides with stable isotope-labeled internal standards (SISs) is increasingly being used to develop quantitative assays for proteins in complex biological matrices. These assays can be highly precise and quantitative, but the frequent occurrence of interferences requires that MRM-MS data be manually reviewed, a time-intensive process subject to human error. We developed an algorithm that identifies inaccurate transition data based on the presence of interfering signal or inconsistent recovery among replicate samples. The algorithm objectively evaluates MRM-MS data with 2 orthogonal approaches. First, it compares the relative product ion intensities of the analyte peptide to those of the SIS peptide and uses a t-test to determine if they are significantly different. A CV is then calculated from the ratio of the analyte peak area to the SIS peak area from the sample replicates. The algorithm identified problematic transitions and achieved accuracies of 94%-100%, with a sensitivity and specificity of 83%-100% for correct identification of errant transitions. The algorithm was robust when challenged with multiple types of interferences and problematic transitions. This algorithm for automated detection of inaccurate and imprecise transitions (AuDIT) in MRM-MS data reduces the time required for manual and subjective inspection of data, improves the overall accuracy of data analysis, and is easily implemented into the standard data-analysis work flow. AuDIT currently works with results exported from MRM-MS data-processing software packages and may be implemented as an analysis tool within such software.
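
    A minimal sketch of the two checks described above for a single transition: a t-test comparing analyte versus SIS relative product-ion intensities across replicates, and the CV of the analyte/SIS peak-area ratio. The thresholds and toy numbers are illustrative assumptions, not AuDIT's actual decision rules.

    ```python
    import numpy as np
    from scipy import stats

    analyte_rel = np.array([0.42, 0.44, 0.40, 0.43])   # relative intensity of one transition, analyte
    sis_rel     = np.array([0.45, 0.46, 0.44, 0.45])   # same transition, SIS peptide
    t_stat, p_value = stats.ttest_ind(analyte_rel, sis_rel)

    analyte_area = np.array([1.2e6, 1.1e6, 1.3e6, 1.2e6])
    sis_area     = np.array([2.4e6, 2.3e6, 2.5e6, 2.4e6])
    ratio = analyte_area / sis_area
    cv = ratio.std(ddof=1) / ratio.mean() * 100.0

    # Flag the transition if the intensity pattern differs (possible interference)
    # or the ratio is poorly reproducible across replicates.
    flagged = (p_value < 0.05) or (cv > 20.0)
    print(round(p_value, 4), round(cv, 2), flagged)
    ```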

  5. Automated Detection of Inaccurate and Imprecise Transitions in Peptide Quantification by Multiple Reaction Monitoring Mass Spectrometry

    PubMed Central

    Abbatiello, Susan E.; Mani, D. R.; Keshishian, Hasmik; Carr, Steven A.

    2010-01-01

    BACKGROUND Multiple reaction monitoring mass spectrometry (MRM-MS) of peptides with stable isotope–labeled internal standards (SISs) is increasingly being used to develop quantitative assays for proteins in complex biological matrices. These assays can be highly precise and quantitative, but the frequent occurrence of interferences requires that MRM-MS data be manually reviewed, a time-intensive process subject to human error. We developed an algorithm that identifies inaccurate transition data based on the presence of interfering signal or inconsistent recovery among replicate samples. METHODS The algorithm objectively evaluates MRM-MS data with 2 orthogonal approaches. First, it compares the relative product ion intensities of the analyte peptide to those of the SIS peptide and uses a t-test to determine if they are significantly different. A CV is then calculated from the ratio of the analyte peak area to the SIS peak area from the sample replicates. RESULTS The algorithm identified problematic transitions and achieved accuracies of 94%–100%, with a sensitivity and specificity of 83%–100% for correct identification of errant transitions. The algorithm was robust when challenged with multiple types of interferences and problematic transitions. CONCLUSIONS This algorithm for automated detection of inaccurate and imprecise transitions (AuDIT) in MRM-MS data reduces the time required for manual and subjective inspection of data, improves the overall accuracy of data analysis, and is easily implemented into the standard data-analysis work flow. AuDIT currently works with results exported from MRM-MS data-processing software packages and may be implemented as an analysis tool within such software. PMID:20022980

  6. See-through Detection and 3D Reconstruction Using Terahertz Leaky-Wave Radar Based on Sparse Signal Processing

    NASA Astrophysics Data System (ADS)

    Murata, Koji; Murano, Kosuke; Watanabe, Issei; Kasamatsu, Akifumi; Tanaka, Toshiyuki; Monnai, Yasuaki

    2018-02-01

    We experimentally demonstrate see-through detection and 3D reconstruction using terahertz leaky-wave radar based on sparse signal processing. The application of terahertz waves to radar has received increasing attention in recent years for its potential for high-resolution and see-through detection. Among other approaches, implementation using a leaky-wave antenna is promising for compact system integration with beam steering capability based on frequency sweep. However, the use of a leaky-wave antenna poses a challenge for signal processing. Since a leaky-wave antenna combines the entire signal captured by each part of the aperture into a single output, conventional array signal processing, which assumes access to each antenna element, is not applicable. In this paper, we apply an iterative recovery algorithm, CoSaMP, to signals acquired with terahertz leaky-wave radar for clutter mitigation and aperture synthesis. We first demonstrate see-through detection of target location even when the radar is covered with an opaque screen and the radar signal is therefore disturbed by clutter. Furthermore, leveraging the robustness of the algorithm against noise, we also demonstrate 3D reconstruction of distributed targets by synthesizing signals collected from different orientations. The proposed approach will contribute to the smart implementation of terahertz leaky-wave radar.

  7. FPGA design for constrained energy minimization

    NASA Astrophysics Data System (ADS)

    Wang, Jianwei; Chang, Chein-I.; Cao, Mang

    2004-02-01

    Constrained Energy Minimization (CEM) has been widely used for hyperspectral detection and classification. The feasibility of implementing the CEM as a real-time processing algorithm in systolic arrays has also been demonstrated. The main challenge of realizing the CEM in hardware lies in the computation of the inverse of the data correlation matrix performed in the CEM, which requires a complete set of data samples. To cope with this problem, the data correlation matrix must be calculated in a causal manner, using only the data samples up to the sample being processed. This paper presents a Field Programmable Gate Array (FPGA) design of such a causal CEM. The main feature of the proposed FPGA design is the use of the COordinate Rotation DIgital Computer (CORDIC) algorithm, which converts a Givens rotation of a vector to a set of shift-add operations. As a result, the CORDIC algorithm can be easily implemented in hardware, and therefore in an FPGA. Since the computation of the inverse of the data correlation matrix involves a series of Givens rotations, the use of the CORDIC algorithm allows the causal CEM to perform real-time processing in an FPGA. In this paper, an FPGA implementation of the causal CEM is studied and its detailed architecture is described.
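
    A minimal sketch of the CORDIC idea referenced above: rotating a 2-D vector using only shift-like scalings and additions, which is why Givens rotations map well onto FPGA fabric. This is the textbook rotation-mode iteration in floating point, not the paper's fixed-point FPGA architecture.

    ```python
    import math

    def cordic_rotate(x, y, angle, iterations=16):
        """Rotate (x, y) by `angle` radians using shift-add CORDIC iterations."""
        # Pre-compute the accumulated gain of the iterations; it is divided out at the end.
        K = 1.0
        for i in range(iterations):
            K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
        z = angle
        for i in range(iterations):
            d = 1.0 if z >= 0 else -1.0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i   # micro-rotation (shift + add)
            z -= d * math.atan(2.0 ** -i)
        return x * K, y * K

    print(cordic_rotate(1.0, 0.0, math.pi / 4))   # approximately (0.7071, 0.7071)
    ```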

  8. Evaluation of machine learning algorithms for improved risk assessment for Down's syndrome.

    PubMed

    Koivu, Aki; Korpimäki, Teemu; Kivelä, Petri; Pahikkala, Tapio; Sairanen, Mikko

    2018-05-04

    Prenatal screening generates a great amount of data that is used for predicting risk of various disorders. Prenatal risk assessment is based on multiple clinical variables, and overall performance is defined by how well the risk algorithm is optimized for the population in question. This article evaluates machine learning algorithms to improve the performance of first-trimester screening for Down syndrome. Machine learning algorithms offer an adaptive alternative for developing better risk assessment models using the existing clinical variables. Two real-world data sets were used to experiment with multiple classification algorithms. Implemented models were tested with a third, real-world, data set and performance was compared to a predicate method, a commercial risk assessment software. The best-performing deep neural network model gave an area under the curve of 0.96 and a detection rate of 78% at a 1% false positive rate with the test data. The support vector machine model gave an area under the curve of 0.95 and a detection rate of 61% at a 1% false positive rate with the same test data. When compared with the predicate method, the best support vector machine model was slightly inferior, but an optimized deep neural network model was able to give higher detection rates with the same false positive rate, or a similar detection rate with a markedly lower false positive rate. This finding could further improve the first trimester screening for Down syndrome, by using existing clinical variables and a large training data set derived from a specific population. Copyright © 2018 Elsevier Ltd. All rights reserved.
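
    A minimal sketch of the kind of evaluation used above: train a classifier on screening variables and read the detection rate at a fixed 1% false-positive rate off the ROC curve. The synthetic data, model choice, and feature count are placeholders, not the study's variables or classifiers.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_curve, roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    X = rng.normal(size=(4000, 8))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=4000) > 2.0).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]
    fpr, tpr, _ = roc_curve(y_te, scores)
    dr_at_1pct = np.interp(0.01, fpr, tpr)             # detection rate at 1% false-positive rate
    print(round(roc_auc_score(y_te, scores), 3), round(dr_at_1pct, 3))
    ```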

  9. Designing and Implementing a Retrospective Earthquake Detection Framework at the U.S. Geological Survey National Earthquake Information Center

    NASA Astrophysics Data System (ADS)

    Patton, J.; Yeck, W.; Benz, H.

    2017-12-01

    The U.S. Geological Survey National Earthquake Information Center (USGS NEIC) is implementing and integrating new signal detection methods such as subspace correlation, continuous beamforming, multi-band picking and automatic phase identification into near-real-time monitoring operations. Leveraging the additional information from these techniques helps the NEIC utilize a large and varied network on local to global scales. The NEIC is developing an ordered, rapid, robust, and decentralized framework for distributing seismic detection data as well as a set of formalized formatting standards. These frameworks and standards enable the NEIC to implement a seismic event detection framework that supports basic tasks, including automatic arrival time picking, social media based event detections, and automatic association of different seismic detection data into seismic earthquake events. In addition, this framework enables retrospective detection processing such as automated S-wave arrival time picking given a detected event, discrimination and classification of detected events by type, back-azimuth and slowness calculations, and ensuring aftershock and induced sequence detection completeness. These processes and infrastructure improve the NEIC's capabilities, accuracy, and speed of response. In addition, this same infrastructure provides an improved and convenient structure to support access to automatic detection data for both research and algorithmic development.

  10. Design and Implementation of a Smart Home System Using Multisensor Data Fusion Technology.

    PubMed

    Hsu, Yu-Liang; Chou, Po-Huan; Chang, Hsing-Cheng; Lin, Shyan-Lung; Yang, Shih-Chin; Su, Heng-Yi; Chang, Chih-Chien; Cheng, Yuan-Sheng; Kuo, Yu-Chen

    2017-07-15

    This paper aims to develop a multisensor data fusion technology-based smart home system by integrating wearable intelligent technology, artificial intelligence, and sensor fusion technology. We have developed the following three systems to create an intelligent smart home environment: (1) a wearable motion sensing device to be placed on residents' wrists and its corresponding 3D gesture recognition algorithm to implement a convenient automated household appliance control system; (2) a wearable motion sensing device mounted on a resident's feet and its indoor positioning algorithm to realize an effective indoor pedestrian navigation system for smart energy management; (3) a multisensor circuit module and an intelligent fire detection and alarm algorithm to realize a home safety and fire detection system. In addition, an intelligent monitoring interface is developed to provide real-time information about the smart home system, such as environmental temperatures, CO concentrations, communicative environmental alarms, household appliance status, human motion signals, and the results of gesture recognition and indoor positioning. Furthermore, an experimental testbed for validating the effectiveness and feasibility of the smart home system was built and verified experimentally. The results showed that the 3D gesture recognition algorithm could achieve recognition rates for automated household appliance control of 92.0%, 94.8%, 95.3%, and 87.7% by the 2-fold cross-validation, 5-fold cross-validation, 10-fold cross-validation, and leave-one-subject-out cross-validation strategies. For indoor positioning and smart energy management, the distance accuracy and positioning accuracy were around 0.22% and 3.36% of the total traveled distance in the indoor environment. For home safety and fire detection, the classification rate achieved 98.81% accuracy for determining the conditions of the indoor living environment.

  11. Design and Implementation of a Smart Home System Using Multisensor Data Fusion Technology

    PubMed Central

    Chou, Po-Huan; Chang, Hsing-Cheng; Lin, Shyan-Lung; Yang, Shih-Chin; Su, Heng-Yi; Chang, Chih-Chien; Cheng, Yuan-Sheng; Kuo, Yu-Chen

    2017-01-01

    This paper aims to develop a multisensor data fusion technology-based smart home system by integrating wearable intelligent technology, artificial intelligence, and sensor fusion technology. We have developed the following three systems to create an intelligent smart home environment: (1) a wearable motion sensing device to be placed on residents' wrists and its corresponding 3D gesture recognition algorithm to implement a convenient automated household appliance control system; (2) a wearable motion sensing device mounted on a resident's feet and its indoor positioning algorithm to realize an effective indoor pedestrian navigation system for smart energy management; (3) a multisensor circuit module and an intelligent fire detection and alarm algorithm to realize a home safety and fire detection system. In addition, an intelligent monitoring interface is developed to provide real-time information about the smart home system, such as environmental temperatures, CO concentrations, communicative environmental alarms, household appliance status, human motion signals, and the results of gesture recognition and indoor positioning. Furthermore, an experimental testbed for validating the effectiveness and feasibility of the smart home system was built and verified experimentally. The results showed that the 3D gesture recognition algorithm could achieve recognition rates for automated household appliance control of 92.0%, 94.8%, 95.3%, and 87.7% using the 2-fold, 5-fold, 10-fold, and leave-one-subject-out cross-validation strategies, respectively. For indoor positioning and smart energy management, the distance accuracy and positioning accuracy were around 0.22% and 3.36% of the total traveled distance in the indoor environment. For home safety and fire detection, the classification rate achieved 98.81% accuracy for determining the conditions of the indoor living environment. PMID:28714884

  12. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula.

    PubMed

    Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-10-01

    Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time. Copyright © 2012 Elsevier Ltd. All rights reserved.
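
    The record above gives only the general idea of the method; as a minimal sketch of how a wind-adjusted adaptive threshold could be applied block by block to a SAR backscatter image, the snippet below darkens the dark-spot threshold as local wind speed increases. The block size and the threshold offsets are illustrative assumptions, not the calibration used in the paper.

    ```python
    import numpy as np

    def segment_oil_spills(sigma0_db, wind_speed, block=64):
        """Toy wind-adaptive thresholding of a SAR backscatter image (dB).

        sigma0_db  : 2D array of backscatter values in dB
        wind_speed : 2D array (m/s) co-located with the image
        Returns a boolean mask of candidate oil-spill (dark) pixels.
        The offsets below are illustrative, not the paper's calibration.
        """
        mask = np.zeros(sigma0_db.shape, dtype=bool)
        for i in range(0, sigma0_db.shape[0], block):
            for j in range(0, sigma0_db.shape[1], block):
                tile = sigma0_db[i:i + block, j:j + block]
                wind = wind_speed[i:i + block, j:j + block].mean()
                # Stronger wind brightens the sea surface, so the dark-spot
                # threshold is pushed further below the local mean.
                offset_db = 2.0 + 0.3 * wind          # assumed relation
                threshold = tile.mean() - offset_db
                mask[i:i + block, j:j + block] = tile < threshold
        return mask
    ```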

  13. Low complexity pixel-based halftone detection

    NASA Astrophysics Data System (ADS)

    Ok, Jiheon; Han, Seong Wook; Jarno, Mielikainen; Lee, Chulhee

    2011-10-01

    With the rapid advances of the internet and other multimedia technologies, the digital document market has been growing steadily. Since most digital images use halftone technologies, quality degradation occurs when one tries to scan and reprint them. Therefore, it is necessary to extract the halftone areas to produce high quality printing. In this paper, we propose a low complexity pixel-based halftone detection algorithm. For each pixel, we considered a surrounding block. If the block contained any flat background regions, text, thin lines, or continuous or non-homogeneous regions, the pixel was classified as a non-halftone pixel. After excluding those non-halftone pixels, the remaining pixels were considered to be halftone pixels. Finally, documents were classified as pictures or photo documents by calculating the halftone pixel ratio. The proposed algorithm proved to be memory-efficient and required low computational cost. The proposed algorithm was easily implemented using a GPU.

  14. Reducing the number of templates for aligned-spin compact binary coalescence gravitational wave searches using metric-agnostic template nudging

    NASA Astrophysics Data System (ADS)

    Indik, Nathaniel; Fehrmann, Henning; Harke, Franz; Krishnan, Badri; Nielsen, Alex B.

    2018-06-01

    Efficient multidimensional template placement is crucial in computationally intensive matched-filtering searches for gravitational waves (GWs). Here, we implement the neighboring cell algorithm (NCA) to improve the detection volume of an existing compact binary coalescence (CBC) template bank. This algorithm has already been successfully applied for a binary millisecond pulsar search in data from the Fermi satellite. It repositions templates from overdense regions to underdense regions and reduces the number of templates that would have been required by a stochastic method to achieve the same detection volume. Our method is readily generalizable to other CBC parameter spaces. Here we apply this method to the aligned-single-spin neutron star-black hole binary coalescence inspiral-merger-ringdown gravitational wave parameter space. We show that the template nudging algorithm can attain the equivalent effectualness of the stochastic method with 12% fewer templates.

  15. Quantification of HIV-1 DNA using real-time recombinase polymerase amplification.

    PubMed

    Crannell, Zachary Austin; Rohrman, Brittany; Richards-Kortum, Rebecca

    2014-06-17

    Although recombinase polymerase amplification (RPA) has many advantages for the detection of pathogenic nucleic acids in point-of-care applications, RPA has not yet been implemented to quantify sample concentration using a standard curve. Here, we describe a real-time RPA assay with an internal positive control and an algorithm that analyzes real-time fluorescence data to quantify HIV-1 DNA. We show that DNA concentration and the onset of detectable amplification are correlated by an exponential standard curve. In a set of experiments in which the standard curve and algorithm were used to analyze and quantify additional DNA samples, the algorithm predicted an average concentration within 1 order of magnitude of the correct concentration for all HIV-1 DNA concentrations tested. These results suggest that quantitative RPA (qRPA) may serve as a powerful tool for quantifying nucleic acids and may be adapted for use in single-sample point-of-care diagnostic systems.
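
    Purely as an illustration of the quantification step described above, the sketch below fits an exponential standard curve that relates amplification onset time to log concentration and then inverts it for unknown samples. The calibration values, the linear-in-log functional form, and the variable names are assumptions for the example, not the authors' exact model.

    ```python
    import numpy as np

    # Hypothetical calibration data: onset times (minutes) at known copies/reaction.
    onset_min = np.array([4.1, 5.0, 6.2, 7.5, 8.9])
    copies = np.array([1e5, 1e4, 1e3, 1e2, 1e1])

    # Exponential standard curve: copies = 10**(a*t + b), i.e. log10(copies) linear in t.
    a, b = np.polyfit(onset_min, np.log10(copies), 1)

    def quantify(onset_time_min):
        """Predict sample concentration from its detected amplification onset."""
        return 10 ** (a * onset_time_min + b)

    print(quantify(5.6))  # order-of-magnitude estimate for an unknown sample
    ```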

  16. PCA-based artifact removal algorithm for stroke detection using UWB radar imaging.

    PubMed

    Ricci, Elisa; di Domenico, Simone; Cianca, Ernestina; Rossi, Tommaso; Diomedi, Marina

    2017-06-01

    Stroke patients should be dispatched to the highest level of care available in the shortest time. In this context, a transportable system in specialized ambulances, able to evaluate the presence of an acute brain lesion in a short time interval (i.e., a few minutes), could shorten the delay to treatment. UWB radar imaging is an emerging diagnostic branch that has great potential for the implementation of a transportable and low-cost device. Transportability, low cost and short response time pose challenges to the signal processing algorithms for the backscattered signals, as they should guarantee good performance with a reasonably low number of antennas and low computational complexity, tightly related to the response time of the device. The paper shows that a PCA-based preprocessing algorithm can: (1) achieve good performance already with a computationally simple beamforming algorithm; (2) outperform state-of-the-art preprocessing algorithms; (3) enable a further improvement in performance (and/or a decrease in the number of antennas) by using a multistatic approach with just a modest increase in computational complexity. This is an important result toward the implementation of such a diagnostic device, which could play an important role in emergency scenarios.
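
    A highly simplified sketch of the kind of preprocessing the abstract describes is given below: the strong components shared across antenna channels (e.g. the skin reflection) are removed by subtracting the leading principal components before a basic delay-and-sum beamformer is applied. The number of removed components and the integer delays are placeholders, not values from the paper.

    ```python
    import numpy as np

    def pca_artifact_removal(signals, n_remove=1):
        """Remove the strongest common components from an
        (n_channels, n_samples) matrix of backscattered UWB signals."""
        centered = signals - signals.mean(axis=0, keepdims=True)
        # SVD of the channel matrix: the leading components capture the artifact.
        u, s, vt = np.linalg.svd(centered, full_matrices=False)
        return centered - (u[:, :n_remove] * s[:n_remove]) @ vt[:n_remove]

    def delay_and_sum(cleaned, delays_samples):
        """Basic beamformer: align each channel by an integer delay and sum."""
        out = np.zeros(cleaned.shape[1])
        for ch, d in enumerate(delays_samples):
            out += np.roll(cleaned[ch], -int(d))
        return out / len(delays_samples)
    ```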

  17. Development of a Real-Time Pulse Processing Algorithm for TES-Based X-Ray Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Tan, Hui; Hennig, Wolfgang; Warburton, William K.; Doriese, W. Bertrand; Kilbourne, Caroline A.

    2011-01-01

    We report here a real-time pulse processing algorithm for superconducting transition-edge sensor (TES) based x-ray microcalorimeters. TES-based microcalorimeters offer ultra-high energy resolutions, but the small volume of each pixel requires that large arrays of identical microcalorimeter pixels be built to achieve sufficient detection efficiency. That in turn requires that as much pulse processing as possible be performed at the front end of the readout electronics to avoid transferring large amounts of data to a host computer for post-processing. Therefore, a real-time pulse processing algorithm that not only can be implemented in the readout electronics but also achieves satisfactory energy resolutions is desired. We have developed an algorithm that can be easily implemented in hardware. We then tested the algorithm offline using several data sets acquired with an 8 x 8 Goddard TES x-ray calorimeter array and a 2 x 16 NIST time-division SQUID multiplexer. We obtained an average energy resolution of close to 3.0 eV at 6 keV for the multiplexed pixels while preserving over 99% of the events in the data sets.

  18. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by a depth map. Computer vision (CV) algorithms are used for orientation; they are evaluated for the correctness of tie-point detection, computation time, and the difficulty of their implementation. The BRISK, FASRT, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.

  19. Algorithm for Identifying Erroneous Rain-Gauge Readings

    NASA Technical Reports Server (NTRS)

    Rickman, Doug

    2005-01-01

    An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.
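
    The abstract does not spell out the statistics involved; purely as an illustration of the iterative, nonparametric idea, the sketch below flags gauges whose readings deviate from the median of nearby gauges by a robust (MAD-based) margin and repeats until no new outliers appear. The neighbour radius, the cut-off, and the modified z-score are invented for the example, not the algorithm's actual criteria.

    ```python
    import numpy as np

    def flag_gauge_outliers(readings, coords, radius_km=30.0, cutoff=3.5):
        """Iteratively flag rain-gauge readings that disagree with their neighbours.

        readings : (n,) rainfall totals for one event
        coords   : (n, 2) gauge positions in km
        Returns a boolean array marking suspected erroneous readings.
        """
        readings = np.asarray(readings, dtype=float)
        coords = np.asarray(coords, dtype=float)
        flagged = np.zeros(readings.size, dtype=bool)
        changed = True
        while changed:
            changed = False
            for i in range(readings.size):
                if flagged[i]:
                    continue
                d = np.linalg.norm(coords - coords[i], axis=1)
                nbr = (d > 0) & (d < radius_km) & ~flagged
                if nbr.sum() < 3:
                    continue
                med = np.median(readings[nbr])
                mad = np.median(np.abs(readings[nbr] - med)) + 1e-9
                # Modified z-score against the neighbourhood (nonparametric).
                if 0.6745 * abs(readings[i] - med) / mad > cutoff:
                    flagged[i] = True
                    changed = True
        return flagged
    ```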

  20. Detection and classification of human body odor using an electronic nose.

    PubMed

    Wongchoosuk, Chatchawal; Lutz, Mario; Kerdcharoen, Teerakiat

    2009-01-01

    An electronic nose (E-nose) has been designed and equipped with software that can detect and classify human armpit body odor. An array of metal oxide sensors was used for detecting volatile organic compounds. The measurement circuit employs a voltage divider resistor to measure the sensitivity of each sensor. This E-nose was controlled by in-house developed software through a portable USB data acquisition card, with a principal component analysis (PCA) algorithm implemented for pattern recognition and classification. Because gas sensor sensitivity in the detection of armpit odor samples is affected by humidity, we propose a new method and algorithms combining hardware and software for the correction of humidity noise. After the humidity correction, the E-nose showed the capability of detecting human body odor and distinguishing the body odors of two persons in a relative manner. The E-nose is still able to recognize people, even after application of deodorant. In conclusion, this is the first report of the application of an E-nose for armpit odor recognition.
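
    As a rough sketch of the processing chain described above, the example below subtracts an assumed per-sensor linear humidity term from the raw responses and then projects the corrected data onto two principal components for classification. The humidity coefficients, the number of sensors, and the random data are placeholders, not the authors' calibration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def correct_humidity(responses, humidity, coeffs):
        """Subtract a per-sensor linear humidity term from raw sensor responses.

        responses : (n_samples, n_sensors) sensitivities
        humidity  : (n_samples,) relative humidity readings
        coeffs    : (n_sensors,) assumed humidity sensitivities (from calibration)
        """
        return responses - np.outer(humidity, coeffs)

    # Hypothetical measurements: 6 samples x 8 metal-oxide sensors.
    rng = np.random.default_rng(0)
    raw = rng.normal(1.0, 0.2, size=(6, 8))
    rh = rng.uniform(40, 70, size=6)
    corrected = correct_humidity(raw, rh, coeffs=np.full(8, 0.01))

    # Two-component PCA for visual separation of the odour classes.
    scores = PCA(n_components=2).fit_transform(corrected)
    print(scores)
    ```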

  1. A novel approach for detection of anomalies using measurement data of the Ironton-Russell bridge

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Norouzi, Mehdi; Hunt, Victor; Helmicki, Arthur

    2015-04-01

    Data models have been increasingly used in recent years for documenting the normal behavior of structures and hence detecting and classifying anomalies. A large number of machine learning algorithms have been proposed to model operational and functional changes in structures; however, only a limited number of studies have been applied to actual measurement data, owing to restricted access to long-term measurement data and to the damaged states of structures. By monitoring the structure during construction and reviewing the effect of construction events on the measurement data, this study introduces a new approach to detect and eventually classify anomalies during and after construction. First, the implementation of the sensor network, which develops while the bridge is being built, and its current status are detailed. Second, the proposed anomaly detection algorithm is applied to the collected data and, finally, detected anomalies are validated against the archived construction events.

  2. Detection and Classification of Human Body Odor Using an Electronic Nose

    PubMed Central

    Wongchoosuk, Chatchawal; Lutz, Mario; Kerdcharoen, Teerakiat

    2009-01-01

    An electronic nose (E-nose) has been designed and equipped with software that can detect and classify human armpit body odor. An array of metal oxide sensors was used for detecting volatile organic compounds. The measurement circuit employs a voltage divider resistor to measure the sensitivity of each sensor. This E-nose was controlled by in-house developed software through a portable USB data acquisition card, with a principal component analysis (PCA) algorithm implemented for pattern recognition and classification. Because gas sensor sensitivity in the detection of armpit odor samples is affected by humidity, we propose a new method and algorithms combining hardware and software for the correction of humidity noise. After the humidity correction, the E-nose showed the capability of detecting human body odor and distinguishing the body odors of two persons in a relative manner. The E-nose is still able to recognize people, even after application of deodorant. In conclusion, this is the first report of the application of an E-nose for armpit odor recognition. PMID:22399995

  3. Vanishing points detection using combination of fast Hough transform and deep learning

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Ingacheva, Anastasia; Nikolaev, Dmitry

    2018-04-01

    In this paper we propose a novel method for vanishing point detection based on a convolutional neural network (CNN) approach and the fast Hough transform algorithm. We show how to define a fast Hough transform neural network layer and how to use it in order to increase the usability of the neural network approach to the vanishing point detection task. Our algorithm consists of a CNN with a sequence of convolutional and fast Hough transform layers. We build an estimator for the distribution of possible vanishing points in the image. This distribution can be used to find vanishing point candidates. We provide experimental results from tests of the suggested method using images collected from videos of road trips. Our approach shows stable results on test images with different projective distortions and noise. The described approach can be efficiently implemented for mobile GPUs and CPUs.

  4. Perceptual Contrast Enhancement with Dynamic Range Adjustment

    PubMed Central

    Zhang, Hong; Li, Yuecheng; Chen, Hao; Yuan, Ding; Sun, Mingui

    2013-01-01

    In recent years, although great efforts have been made to improve their performance, few histogram equalization (HE) methods take human visual perception (HVP) into account explicitly. The human visual system (HVS) is more sensitive to edges than to brightness. This paper makes use of this property and develops a perceptual contrast enhancement approach with dynamic range adjustment through histogram modification. The use of perceptual contrast connects the image enhancement problem with the HVS. To pre-condition the input image before the HE procedure is applied, a perceptual contrast map (PCM) is constructed based on a modified Difference of Gaussian (DOG) algorithm. As a result, the contrast of the image is sharpened and high frequency noise is suppressed. A modified Clipped Histogram Equalization (CHE) is also developed which improves visual quality by automatically detecting the dynamic range of the image with improved perceptual contrast. Experimental results show that the new HE algorithm outperforms several state-of-the-art algorithms in improving perceptual contrast and enhancing details. In addition, the new algorithm is simple to implement, making it suitable for real-time applications. PMID:24339452
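
    For orientation only, the sketch below shows a generic Difference-of-Gaussian contrast map and a plain clipped histogram equalization; the paper's modified DOG and modified CHE differ in their details, and the sigmas and clip fraction here are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def perceptual_contrast_map(gray, sigma1=1.0, sigma2=2.0):
        """Difference-of-Gaussian magnitude as a crude edge-sensitivity map."""
        g = np.asarray(gray, dtype=float)
        dog = gaussian_filter(g, sigma1) - gaussian_filter(g, sigma2)
        return np.abs(dog)

    def clipped_histogram_equalization(gray, clip_fraction=0.01):
        """Histogram equalization with the histogram clipped at a ceiling,
        which limits over-enhancement of large flat regions."""
        hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 255))
        ceiling = clip_fraction * gray.size
        hist = np.minimum(hist, ceiling)
        cdf = np.cumsum(hist)
        lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)
        return lut[np.asarray(gray, dtype=np.uint8)]
    ```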

  5. Multimodal technique to eliminate humidity interference for specific detection of ethanol.

    PubMed

    Jalal, Ahmed Hasnain; Umasankar, Yogeswaran; Gonzalez, Pablo J; Alfonso, Alejandro; Bhansali, Shekhar

    2017-01-15

    A multimodal electrochemical technique incorporating both open circuit potential (OCP) and amperometric measurements has been conceptualized and implemented to improve the detection of a specific analyte in systems where more than one analyte is present. This approach has been demonstrated through the detection of ethanol while eliminating the contribution of water in a micro fuel cell sensor system. The sensor was interfaced with an LMP91000 potentiostat, controlled through an MSP430F5529LP microcontroller, to implement an auto-calibration algorithm tailored to improve the detection of alcohol. The sensor was designed and fabricated as a three-electrode system with Nafion as a proton exchange membrane (PEM). The electrochemical signal of the interfering phase (water) was eliminated by implementing the multimodal electrochemical detection technique. The results were validated by comparing sensor and potentiostat performances with a commercial sensor and potentiostat, respectively. The results suggest that such a sensing system can detect ethanol at concentrations as low as 5 ppm. The structure and properties, such as the low detection limit, selectivity and miniaturized size, enable potential application of this device in wearable transdermal alcohol measurements. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Clinical and public health implications of acute and early HIV detection and treatment: a scoping review.

    PubMed

    Rutstein, Sarah E; Ananworanich, Jintanat; Fidler, Sarah; Johnson, Cheryl; Sanders, Eduard J; Sued, Omar; Saez-Cirion, Asier; Pilcher, Christopher D; Fraser, Christophe; Cohen, Myron S; Vitoria, Marco; Doherty, Meg; Tucker, Joseph D

    2017-06-28

    The unchanged global HIV incidence may be related to ignoring acute HIV infection (AHI). This scoping review examines diagnostic, clinical, and public health implications of identifying and treating persons with AHI. We searched PubMed, in addition to hand-review of key journals identifying research pertaining to AHI detection and treatment. We focused on the relative contribution of AHI to transmission and the diagnostic, clinical, and public health implications. We prioritized research from low- and middle-income countries (LMICs) published in the last fifteen years. Extensive AHI research and limited routine AHI detection and treatment have begun in LMIC. Diagnostic challenges include ease-of-use, suitability for application and distribution in LMIC, and throughput for high-volume testing. Risk score algorithms have been used in LMIC to screen for AHI among individuals with behavioural and clinical characteristics more often associated with AHI. However, algorithms have not been implemented outside research settings. From a clinical perspective, there are substantial immunological and virological benefits to identifying and treating persons with AHI - evading the irreversible damage to host immune systems and seeding of viral reservoirs that occurs during untreated acute infection. The therapeutic benefits require rapid initiation of antiretrovirals, a logistical challenge in the absence of point-of-care testing. From a public health perspective, AHI diagnosis and treatment is critical to: decrease transmission via viral load reduction and behavioural interventions; improve pre-exposure prophylaxis outcomes by avoiding treatment initiation for HIV-seronegative persons with AHI; and, enhance partner services via notification for persons recently exposed or likely transmitting. There are undeniable clinical and public health benefits to AHI detection and treatment, but also substantial diagnostic and logistical barriers to implementation and scale-up. Effective early ART initiation may be critical for HIV eradication efforts, but widespread use in LMIC requires simple and accurate diagnostic tools. Implementation research is critical to facilitate sustainable integration of AHI detection and treatment into existing health systems and will be essential for prospective evaluation of testing algorithms, point-of-care diagnostics, and efficacious and effective first-line regimens.

  7. Clinical and public health implications of acute and early HIV detection and treatment: a scoping review

    PubMed Central

    Rutstein, Sarah E.; Ananworanich, Jintanat; Fidler, Sarah; Johnson, Cheryl; Sanders, Eduard J.; Sued, Omar; Saez-Cirion, Asier; Pilcher, Christopher D.; Fraser, Christophe; Cohen, Myron S.; Vitoria, Marco; Doherty, Meg; Tucker, Joseph D.

    2017-01-01

    Abstract Introduction: The unchanged global HIV incidence may be related to ignoring acute HIV infection (AHI). This scoping review examines diagnostic, clinical, and public health implications of identifying and treating persons with AHI. Methods: We searched PubMed, in addition to hand-review of key journals identifying research pertaining to AHI detection and treatment. We focused on the relative contribution of AHI to transmission and the diagnostic, clinical, and public health implications. We prioritized research from low- and middle-income countries (LMICs) published in the last fifteen years. Results and Discussion: Extensive AHI research and limited routine AHI detection and treatment have begun in LMIC. Diagnostic challenges include ease-of-use, suitability for application and distribution in LMIC, and throughput for high-volume testing. Risk score algorithms have been used in LMIC to screen for AHI among individuals with behavioural and clinical characteristics more often associated with AHI. However, algorithms have not been implemented outside research settings. From a clinical perspective, there are substantial immunological and virological benefits to identifying and treating persons with AHI – evading the irreversible damage to host immune systems and seeding of viral reservoirs that occurs during untreated acute infection. The therapeutic benefits require rapid initiation of antiretrovirals, a logistical challenge in the absence of point-of-care testing. From a public health perspective, AHI diagnosis and treatment is critical to: decrease transmission via viral load reduction and behavioural interventions; improve pre-exposure prophylaxis outcomes by avoiding treatment initiation for HIV-seronegative persons with AHI; and, enhance partner services via notification for persons recently exposed or likely transmitting. Conclusions: There are undeniable clinical and public health benefits to AHI detection and treatment, but also substantial diagnostic and logistical barriers to implementation and scale-up. Effective early ART initiation may be critical for HIV eradication efforts, but widespread use in LMIC requires simple and accurate diagnostic tools. Implementation research is critical to facilitate sustainable integration of AHI detection and treatment into existing health systems and will be essential for prospective evaluation of testing algorithms, point-of-care diagnostics, and efficacious and effective first-line regimens. PMID:28691435

  8. Facial emotion recognition system for autistic children: a feasible study based on FPGA implementation.

    PubMed

    Smitha, K G; Vinod, A P

    2015-11-01

    Children with autism spectrum disorder have difficulty in understanding the emotional and mental states from the facial expressions of the people they interact with. The inability to understand other people's emotions will hinder their interpersonal communication. Though many facial emotion recognition algorithms have been proposed in the literature, they are mainly intended for processing by a personal computer, which limits their usability in on-the-move applications where portability is desired. The portability of the system will ensure ease of use and real-time emotion recognition, and will aid immediate feedback while communicating with caretakers. Principal component analysis (PCA) has been identified as the least complex feature extraction algorithm to be implemented in hardware. In this paper, we present a detailed study of serial and parallel implementations of PCA in order to identify the most feasible method for realization of a portable emotion detector for autistic children. The proposed emotion recognizer architectures are implemented on a Virtex 7 XC7VX330T FFG1761-3 FPGA. We achieved 82.3% detection accuracy for a word length of 8 bits.

  9. Incorporating Lightning Flash Data into the WRF-CMAQ Modeling System: Algorithms and Evaluations

    EPA Science Inventory

    We describe the use of lightning flash data from the National Lightning Detection Network (NLDN) to constrain and improve the performance of coupled meteorology-chemistry models. We recently implemented a scheme in which lightning data is used to control the triggering of conve...

  10. Predicting the difficulty of pure, strict, epistatic models: metrics for simulated model selection.

    PubMed

    Urbanowicz, Ryan J; Kiralis, Jeff; Fisher, Jonathan M; Moore, Jason H

    2012-09-26

    Algorithms designed to detect complex genetic disease associations are initially evaluated using simulated datasets. Typical evaluations vary constraints that influence the correct detection of underlying models (i.e. number of loci, heritability, and minor allele frequency). Such studies neglect to account for model architecture (i.e. the unique specification and arrangement of penetrance values comprising the genetic model), which alone can influence the detectability of a model. In order to design a simulation study which efficiently takes architecture into account, a reliable metric is needed for model selection. We evaluate three metrics as predictors of relative model detection difficulty derived from previous works: (1) Penetrance table variance (PTV), (2) customized odds ratio (COR), and (3) our own Ease of Detection Measure (EDM), calculated from the penetrance values and respective genotype frequencies of each simulated genetic model. We evaluate the reliability of these metrics across three very different data search algorithms, each with the capacity to detect epistatic interactions. We find that a model's EDM and COR are each stronger predictors of model detection success than heritability. This study formally identifies and evaluates metrics which quantify model detection difficulty. We utilize these metrics to intelligently select models from a population of potential architectures. This allows for an improved simulation study design which accounts for differences in detection difficulty attributed to model architecture. We implement the calculation and utilization of EDM and COR into GAMETES, an algorithm which rapidly and precisely generates pure, strict, n-locus epistatic models.
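
    The exact formulas for EDM and COR are given in the paper, not in the abstract; as a loose illustration of the simplest of the three metrics, the sketch below computes a genotype-frequency-weighted variance of a penetrance table (one plausible reading of PTV). The example table, allele frequency, and weighting are assumptions.

    ```python
    import numpy as np

    def weighted_penetrance_variance(penetrance, genotype_freq):
        """Frequency-weighted variance of a penetrance table.

        penetrance    : array of penetrance values, one per multi-locus genotype
        genotype_freq : matching array of genotype frequencies (sums to 1)
        A rough proxy for how detectable a simulated epistatic model is:
        larger variance means genotypes differ more in disease risk.
        """
        p = np.asarray(penetrance, dtype=float).ravel()
        w = np.asarray(genotype_freq, dtype=float).ravel()
        w = w / w.sum()
        mean = np.sum(w * p)
        return np.sum(w * (p - mean) ** 2)

    # Example: a 3x3 two-locus table with Hardy-Weinberg genotype frequencies.
    pen = np.array([[0.1, 0.3, 0.1],
                    [0.3, 0.0, 0.3],
                    [0.1, 0.3, 0.1]])
    maf = 0.25
    f = np.array([(1 - maf) ** 2, 2 * maf * (1 - maf), maf ** 2])
    freq = np.outer(f, f)
    print(weighted_penetrance_variance(pen, freq))
    ```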

  11. Security Applications Of Computer Motion Detection

    NASA Astrophysics Data System (ADS)

    Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry

    1987-05-01

    An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, mean and median tests and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
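
    Of the three stages described, the simplest is the frame-differencing motion detector; a minimal sketch is shown below. The difference threshold and the minimum changed-pixel count are placeholder values, and the tracking and expert-system identification stages are not represented.

    ```python
    import numpy as np

    def detect_motion(prev_frame, frame, diff_threshold=25, min_pixels=500):
        """Simple frame-differencing detector on 8-bit grayscale frames.

        Returns (motion_present, change_mask). The thresholds are placeholders;
        a deployed system would tune them and add tracking and identification.
        """
        diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
        change_mask = diff > diff_threshold
        return change_mask.sum() > min_pixels, change_mask
    ```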

  12. Multi-modal low cost mobile indoor surveillance system on the Robust Artificial Intelligence-based Defense Electro Robot (RAIDER)

    NASA Astrophysics Data System (ADS)

    Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

    We present an autonomous system capable of performing security check routines. The surveillance machine, the Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation and 3D reconstruction of its environment. Combining the computer vision algorithms onto a robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of the RAIDER's side-mounted cameras to perform monocular 3D reconstruction that updates a 3D model to the most current state of the indoor environment. Using frames from the front-mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm is also implemented, in which each detected person is assigned to a set of action classes chosen to classify ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment as well as familiarize itself with regular faces and actions to distinguish potentially dangerous behavior. In this paper, we present the various algorithms and their modifications which, when implemented on the RAIDER, serve the purpose of indoor surveillance.

  13. Reducing False Positives in Runtime Analysis of Deadlocks

    NASA Technical Reports Server (NTRS)

    Bensalem, Saddek; Havelund, Klaus; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper presents an improvement of a standard algorithm for detecting deadlock potentials in multi-threaded programs, in that it reduces the number of false positives. The standard algorithm works as follows. The multi-threaded program under observation is executed, while lock and unlock events are observed. A graph of locks is built, with edges between locks symbolizing locking orders. Any cycle in the graph signifies a potential for a deadlock. The typical standard example is the group of dining philosophers sharing forks. The algorithm is interesting because it can catch deadlock potentials even though no deadlocks occur in the examined trace, and at the same time it scales very well in contrast to more formal approaches to deadlock detection. The algorithm, however, can yield false positives (as well as false negatives). The extension of the algorithm described in this paper reduces the number of false positives for three particular cases: when a gate lock protects a cycle, when a single thread introduces a cycle, and when the code segments in different threads that cause the cycle cannot actually execute in parallel. The paper formalizes a theory for dynamic deadlock detection and compares it to model checking and static analysis techniques. It furthermore describes an implementation for analyzing Java programs and its application to two case studies: a planetary rover and a spacecraft attitude control system.
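
    A minimal sketch of the standard lock-order-graph algorithm that the paper improves upon is given below; the false-positive-reducing extensions (gate locks, single-thread cycles, segments that cannot run in parallel) are deliberately not shown. The event-trace API is an assumption made for the example.

    ```python
    from collections import defaultdict

    class LockGraph:
        """Build a lock-order graph from observed lock/unlock events and report
        cycles, which indicate deadlock *potentials* in the examined trace."""

        def __init__(self):
            self.edges = defaultdict(set)      # lock -> locks acquired while held
            self.held = defaultdict(list)      # thread -> stack of held locks

        def on_lock(self, thread, lock):
            for outer in self.held[thread]:
                self.edges[outer].add(lock)    # outer is held when lock is taken
            self.held[thread].append(lock)

        def on_unlock(self, thread, lock):
            self.held[thread].remove(lock)

        def has_cycle(self):
            WHITE, GREY, BLACK = 0, 1, 2
            color = defaultdict(int)

            def visit(node):
                color[node] = GREY
                for nxt in self.edges[node]:
                    if color[nxt] == GREY:
                        return True            # back edge means a cycle exists
                    if color[nxt] == WHITE and visit(nxt):
                        return True
                color[node] = BLACK
                return False

            return any(color[n] == WHITE and visit(n) for n in list(self.edges))

    # Dining-philosophers style trace: two threads take the same locks in
    # opposite order, so the graph contains the cycle A -> B -> A.
    g = LockGraph()
    g.on_lock("T1", "A"); g.on_lock("T1", "B"); g.on_unlock("T1", "B"); g.on_unlock("T1", "A")
    g.on_lock("T2", "B"); g.on_lock("T2", "A"); g.on_unlock("T2", "A"); g.on_unlock("T2", "B")
    print(g.has_cycle())   # True
    ```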

  14. Outlier detection in contamination control

    NASA Astrophysics Data System (ADS)

    Weintraub, Jeffrey; Warrick, Scott

    2018-03-01

    A machine-learning model is presented that effectively partitions historical process data into outlier and inlier subpopulations. This is necessary in order to avoid using outlier data to build a model for detecting process instability. Exact control limits are given without recourse to approximations and the error characteristics of the control model are derived. A worked example for contamination control is presented along with the machine learning algorithm used and all the programming statements needed for implementation.
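
    As a generic stand-in for the partitioning idea (the paper's exact model and exact control limits are not given in the abstract), the sketch below iteratively separates inliers from outliers and derives control limits from the inlier subpopulation only; the k-sigma rule and iteration cap are assumptions.

    ```python
    import numpy as np

    def partition_and_limits(values, k=3.0, max_iter=50):
        """Iteratively split historical data into inliers/outliers and derive
        control limits from the inlier subpopulation only.

        Points beyond k standard deviations of the current inlier set are
        flagged, and the limits are re-estimated until the partition stabilizes.
        """
        values = np.asarray(values, dtype=float)
        inlier = np.ones(values.size, dtype=bool)
        for _ in range(max_iter):
            mu, sigma = values[inlier].mean(), values[inlier].std(ddof=1)
            new_inlier = np.abs(values - mu) <= k * sigma
            if np.array_equal(new_inlier, inlier):
                break
            inlier = new_inlier
        lcl, ucl = mu - k * sigma, mu + k * sigma
        return inlier, (lcl, ucl)
    ```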

  15. Robust spike classification based on frequency domain neural waveform features.

    PubMed

    Yang, Chenhui; Yuan, Yuan; Si, Jennie

    2013-12-01

    We introduce a new spike classification algorithm based on frequency domain features of the spike snippets. The goal for the algorithm is to provide high classification accuracy, low false misclassification, ease of implementation, robustness to signal degradation, and objectivity in classification outcomes. In this paper, we propose a spike classification algorithm based on frequency domain features (CFDF). It makes use of frequency domain contents of the recorded neural waveforms for spike classification. The self-organizing map (SOM) is used as a tool to determine the cluster number intuitively and directly by viewing the SOM output map. After that, spike classification can be easily performed using clustering algorithms such as the k-Means. In conjunction with our previously developed multiscale correlation of wavelet coefficient (MCWC) spike detection algorithm, we show that the MCWC and CFDF detection and classification system is robust when tested on several sets of artificial and real neural waveforms. The CFDF is comparable to or outperforms some popular automatic spike classification algorithms with artificial and real neural data. The detection and classification of neural action potentials or neural spikes is an important step in single-unit-based neuroscientific studies and applications. After the detection of neural snippets potentially containing neural spikes, a robust classification algorithm is applied for the analysis of the snippets to (1) extract similar waveforms into one class for them to be considered coming from one unit, and to (2) remove noise snippets if they do not contain any features of an action potential. Usually, a snippet is a small 2 or 3 ms segment of the recorded waveform, and differences in neural action potentials can be subtle from one unit to another. Therefore, a robust, high performance classification system like the CFDF is necessary. In addition, the proposed algorithm does not require any assumptions on statistical properties of the noise and proves to be robust under noise contamination.
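
    A much-reduced sketch of the classification idea is shown below: magnitude-spectrum features are extracted from each snippet and clustered with k-means. The self-organizing map step that the authors use to choose the cluster count visually is skipped, and the feature dimension and cluster count are assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def frequency_domain_features(snippets, n_bins=16):
        """Magnitude-spectrum features for (n_snippets, n_samples) spike snippets."""
        spectra = np.abs(np.fft.rfft(snippets, axis=1))
        return spectra[:, :n_bins]          # keep the low-frequency bins

    def classify_spikes(snippets, n_clusters=3):
        """Cluster snippets on their spectral features (k chosen by hand here;
        the paper picks it by inspecting a SOM output map)."""
        feats = frequency_domain_features(snippets)
        return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
    ```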

  16. Interactive collision detection for deformable models using streaming AABBs.

    PubMed

    Zhang, Xinyu; Kim, Young J

    2007-01-01

    We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively-parallel pairwise overlap tests on the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanisms required by bounding volume hierarchy-based streaming algorithms. At runtime, as the underlying models deform over time, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to get only the computed result (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming en/decoding strategy that can be performed in a hierarchical fashion. After determining overlapped AABBs, we perform a primitive-level (e.g., triangle) intersection check on a serial computational model such as CPUs. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as the nVIDIA GeForce 7800 GTX, for streaming computations, and Intel Dual Core 3.4G processors for serial computations. We benchmarked our algorithm with different models of varying complexities, ranging from 15K up to 50K triangles, under various deformation motions, and the timings obtained were approximately 30-100 FPS depending on the complexity of the models and their relative configurations. Finally, we made comparisons with a well-known GPU-based collision detection algorithm, CULLIDE [4], and observed about a three-times performance improvement over the earlier approach. We also made comparisons with a SW-based AABB culling algorithm [2] and observed about a two-times improvement.
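
    The broad-phase test at the heart of the pipeline is the pairwise AABB overlap check; a vectorized CPU sketch is shown below (the paper runs the equivalent test as a massively parallel streaming computation on the GPU). The array layout is an assumption for the example.

    ```python
    import numpy as np

    def aabb_overlaps(mins_a, maxs_a, mins_b, maxs_b):
        """All-pairs AABB overlap test between two sets of boxes.

        mins_*/maxs_* : (n, 3) arrays of box corners.
        Returns a boolean (n_a, n_b) matrix; True marks candidate pairs that
        would go on to exact triangle-level intersection tests.
        """
        # Boxes overlap iff they overlap on every axis.
        separated = ((maxs_a[:, None, :] < mins_b[None, :, :]) |
                     (maxs_b[None, :, :] < mins_a[:, None, :]))
        return ~separated.any(axis=2)
    ```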

  17. Digital algorithms for parallel pipelined single-detector homodyne fringe counting in laser interferometry

    NASA Astrophysics Data System (ADS)

    Rerucha, Simon; Sarbort, Martin; Hola, Miroslava; Cizek, Martin; Hucl, Vaclav; Cip, Ondrej; Lazar, Josef

    2016-12-01

    Homodyne detection with only a single detector represents a promising approach in interferometric applications, enabling a significant reduction of optical system complexity while preserving the fundamental resolution and dynamic range of single-frequency laser interferometers. We present the design, implementation and analysis of algorithmic methods for computational processing of the single-detector interference signal based on parallel pipelined processing suitable for real-time implementation on a programmable hardware platform (e.g. FPGA - Field Programmable Gate Array or SoC - System on Chip). The algorithmic methods incorporate (a) the single-detector signal (sine) scaling, filtering, demodulation and mixing necessary for reconstruction of the second (cosine) quadrature signal, followed by a conic section projection in the Cartesian plane, as well as (b) the phase unwrapping together with the goniometric and linear transformations needed for scale linearization and periodic error correction. The digital computing scheme was designed for bandwidths up to tens of megahertz, which would allow displacements to be measured at velocities around half a metre per second. The algorithmic methods were tested in real-time operation with a PC-based reference implementation that exploited the advantages of pipelined processing by balancing the computational load among multiple processor cores. The results indicate that the algorithmic methods are suitable for a wide range of applications [3] and that they bring fringe-counting interferometry closer to industrial applications due to their optical setup simplicity and robustness, computational stability, scalability and cost-effectiveness.
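
    Purely to illustrate the quadrature-reconstruction and unwrapping steps, the offline sketch below recovers the missing quadrature with a Hilbert transform rather than the paper's streaming scaling/demodulation/mixing pipeline; the wavelength and the lambda/2 displacement scaling are assumptions for a simple single-pass homodyne setup.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def fringe_displacement(detector_signal, wavelength_nm=633.0):
        """Recover displacement from a single-detector homodyne fringe signal.

        The missing quadrature is reconstructed with a Hilbert transform, the
        phase is unwrapped, and the scale is linearized to nanometres
        (one full fringe taken as a lambda/2 path change).
        """
        x = detector_signal - np.mean(detector_signal)
        analytic = hilbert(x)                       # x + j * reconstructed quadrature
        phase = np.unwrap(np.angle(analytic))
        return phase / (2 * np.pi) * (wavelength_nm / 2.0)
    ```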

  18. A novel accelerometry-based algorithm for the detection of step durations over short episodes of gait in healthy elderly.

    PubMed

    Micó-Amigo, M Encarna; Kingma, Idsart; Ainsworth, Erik; Walgaard, Stefan; Niessen, Martijn; van Lummel, Rob C; van Dieën, Jaap H

    2016-04-19

    The assessment of short episodes of gait is clinically relevant and easily implemented, especially given limited space and time requirements. BFS (body-fixed sensors) are small, lightweight and easy-to-wear sensors which allow the assessment of gait at relatively low cost and with low interference. Thus, the assessment with BFS of short episodes of gait, extracted from daily-life physical activity or measured in a standardised and supervised setting, may add value in the study of gait quality in the elderly. The aim of this study was to evaluate the accuracy of a novel algorithm based on acceleration signals recorded at different body locations (lower back and heels) for the detection of step durations over short episodes of gait in healthy elderly subjects. Twenty healthy elderly subjects (73.7 ± 7.9 years old) walked twice over a distance of 5 m, wearing a BFS on the lower back and on the outside of each heel. Moreover, an optoelectronic three-dimensional (3D) motion tracking system was used to detect step durations. A novel algorithm is presented for the detection of step durations from low-back and heel acceleration signals separately. The accuracy of the algorithm was assessed by comparing absolute differences in step duration between the three methods: step detection from the optoelectronic 3D motion tracking system, step detection from the application of the novel algorithm to low-back accelerations, and step detection from the application of the novel algorithm to heel accelerations. The proposed algorithm successfully detected all the steps, without false positives and without false negatives. Absolute average differences in step duration within trials and across subjects were calculated for each comparison: between low-back accelerations and the optoelectronic system they were on average 22.4 ± 7.6 ms (4.0 ± 1.3% of average step duration); between heel accelerations and the optoelectronic system, 20.7 ± 11.8 ms (3.7 ± 1.9%); and between low-back and heel accelerations, 27.8 ± 15.1 ms (4.9 ± 2.5% of average step duration). This study showed that the presented novel algorithm detects step durations over short episodes of gait in healthy elderly subjects with acceptable accuracy from low-back and heel accelerations, which provides opportunities to extract a range of gait parameters from short episodes of gait.
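
    The abstract does not detail the signal processing; as a generic peak-picking stand-in for the authors' algorithm, the sketch below low-passes the acceleration magnitude, finds one peak per step, and differences the peak times to obtain step durations. The cut-off frequency, minimum step time, and prominence criterion are assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def step_durations(acc, fs, min_step_s=0.4):
        """Step durations (s) from a 3-axis acceleration record of a short walk.

        acc : (n_samples, 3) accelerations; fs : sampling rate in Hz.
        """
        mag = np.linalg.norm(acc, axis=1)
        b, a = butter(2, 3.0 / (fs / 2), btype="low")   # 3 Hz low-pass (assumed)
        smooth = filtfilt(b, a, mag - mag.mean())
        peaks, _ = find_peaks(smooth, distance=int(min_step_s * fs),
                              prominence=0.1 * smooth.std())
        return np.diff(peaks) / fs
    ```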

  19. Discrimination of Biomass Burning Smoke and Clouds in MAIAC Algorithm

    NASA Technical Reports Server (NTRS)

    Lyapustin, A.; Korkin, S.; Wang, Y.; Quayle, B.; Laszlo, I.

    2012-01-01

    The multi-angle implementation of atmospheric correction (MAIAC) algorithm makes aerosol retrievals from MODIS data at 1 km resolution, providing information about fine-scale aerosol variability. This information is required in different applications such as urban air quality analysis, aerosol source identification, etc. The quality of high-resolution aerosol data is directly linked to the quality of the cloud mask, in particular the detection of small (sub-pixel) and low clouds. This work continues research in this direction, describing a technique to detect small clouds and introducing the smoke test to discriminate biomass burning smoke from clouds. The smoke test relies on a relative increase of aerosol absorption at the MODIS wavelength of 0.412 micrometers as compared to 0.47-0.67 micrometers, due to multiple scattering and enhanced absorption by organic carbon released during combustion. This general principle has been successfully used in the OMI detection of absorbing aerosols based on UV measurements. This paper provides the algorithm details and illustrates its performance on two examples: the 2007 wildfires in the US Pacific Northwest and in Georgia/Florida.

  20. Characterizing volcanic activity: Application of freely-available webcams

    NASA Astrophysics Data System (ADS)

    Dehn, J.; Harrild, M.; Webley, P. W.

    2017-12-01

    In recent years, freely-available web-based cameras, or webcams, have become more readily available, allowing an increased level of monitoring at active volcanoes across the globe. While these cameras have been extensively used as qualitative tools, they provide a unique dataset with which to perform quantitative analyses of the changing behavior of a particular volcano within the camera's field of view. We focus on the multitude of these freely-available webcams and present a new algorithm to detect changes in volcanic activity using nighttime webcam data. Our approach uses a quick, efficient, and fully automated algorithm to identify changes in webcam data in near real-time, including techniques such as edge detection, Gaussian mixture models, and temporal/spatial statistical tests, which are applied to each target image. Often the image metadata (exposure, gain settings, aperture, focal length, etc.) are unknown, so we developed our algorithm to identify the quantity of volcanically incandescent pixels, as well as the number of specific algorithm tests needed to detect thermal activity, instead of directly correlating brightness in the webcam to eruption temperatures. We compared our algorithm results to a manual analysis of webcam data for several volcanoes and determined a false detection rate of less than 3% for the automated approach. In our presentation, we describe the different tests integrated into our algorithm, lessons learned, and how we applied our method to several volcanoes across the North Pacific during its development and implementation. We will finish with a discussion of the global applicability of our approach and how to build a 24/7, 365-day-a-year tool that can be used as an additional data source for real-time analysis of volcanic activity.
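
    As a minimal sketch of the incandescent-pixel counting idea only, the snippet below counts bright pixels in a nighttime frame and flags frames well above a quiet baseline; the brightness threshold and baseline factor are placeholders, and the edge-detection, Gaussian-mixture, and statistical tests described above are not shown.

    ```python
    import numpy as np

    def incandescent_pixel_count(night_frame, brightness_threshold=200):
        """Count bright pixels in an (h, w) 8-bit grayscale nighttime frame."""
        return int((night_frame >= brightness_threshold).sum())

    def activity_detected(counts, baseline_count, factor=3.0):
        """Flag frames whose bright-pixel count well exceeds a quiet baseline."""
        return np.asarray(counts) > factor * max(baseline_count, 1)
    ```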

  1. Face verification system for Android mobile devices using histogram based features

    NASA Astrophysics Data System (ADS)

    Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu

    2016-07-01

    This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by a built-in camera on the Android device, and then face detection is performed using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram-based features, which are generated by a binary Vector Quantization (VQ) histogram using DCT coefficients in the low-frequency domain, as well as an Improved Local Binary Pattern (Improved LBP) histogram in the spatial domain. Verification results with the different types of histogram-based features are first obtained separately and then combined by weighted averaging. We evaluate our proposed algorithm using the publicly available ORL database and facial images captured by an Android tablet.
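
    The sketch below illustrates only the general detect-then-compare flow on a desktop, using OpenCV's bundled Haar cascade for the detection stage and a plain intensity histogram in place of the paper's VQ/Improved-LBP features; the crop size and acceptance threshold are assumptions.

    ```python
    import cv2
    import numpy as np

    # OpenCV's bundled frontal-face Haar cascade (AdaBoost-trained).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_face(bgr_image):
        """Return the first detected face region as a 64x64 grayscale crop, or None."""
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        return cv2.resize(gray[y:y + h, x:x + w], (64, 64))

    def histogram_feature(face):
        """Plain intensity histogram as a stand-in for the paper's VQ/LBP features."""
        hist, _ = np.histogram(face, bins=64, range=(0, 255))
        return hist / hist.sum()

    def verify(face_a, face_b, threshold=0.85):
        """Accept the pair if their histogram intersection is high enough."""
        ha, hb = histogram_feature(face_a), histogram_feature(face_b)
        return np.minimum(ha, hb).sum() >= threshold
    ```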

  2. Artificial immune system algorithm in VLSI circuit configuration

    NASA Astrophysics Data System (ADS)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    In artificial intelligence, the artificial immune system is a robust bio-inspired heuristic method, extensively used in solving many constraint optimization problems, anomaly detection, and pattern recognition. This paper discusses the implementation and performance of an artificial immune system (AIS) algorithm integrated with Hopfield neural networks for VLSI circuit configuration based on 3-Satisfiability problems. Specifically, we emphasize the clonal selection technique in our binary artificial immune system algorithm. We restrict our logic construction to 3-Satisfiability (3-SAT) clauses in order to fit the transistor configuration in the VLSI circuit. The core impetus of this research is to find an ideal hybrid model to assist in VLSI circuit configuration. In this paper, we compared the artificial immune system algorithm (HNN-3SATAIS) with a brute-force algorithm incorporated with a Hopfield neural network (HNN-3SATBF). Microsoft Visual C++ 2013 was used as a platform for training, simulating and validating the performance of the proposed network. The results show that HNN-3SATAIS outperformed HNN-3SATBF in terms of circuit accuracy and CPU time. Thus, HNN-3SATAIS can be used to detect an early error in the VLSI circuit design.

  3. Real-time skeleton tracking for embedded systems

    NASA Astrophysics Data System (ADS)

    Coleca, Foti; Klement, Sascha; Martinetz, Thomas; Barth, Erhardt

    2013-03-01

    Touch-free gesture technology is beginning to become more popular with consumers and may have a significant future impact on interfaces for digital photography. However, almost every commercial software framework for gesture and pose detection is aimed at either desktop PCs or high-powered GPUs, making mobile implementations of gesture recognition an attractive area for research and development. In this paper we present an algorithm for hand skeleton tracking and gesture recognition that runs on an ARM-based platform (Pandaboard ES, OMAP 4460 architecture). The algorithm uses self-organizing maps to fit a given topology (skeleton) into a 3D point cloud. This is a novel way of approaching the problem of pose recognition as it does not employ complex optimization techniques or data-based learning. After an initial background segmentation step, the algorithm is run in parallel with heuristics which detect and correct artifacts arising from insufficient or erroneous input data. We then optimize the algorithm for the ARM platform using fixed-point computation and the NEON SIMD architecture the OMAP4460 provides. We tested the algorithm with two different depth-sensing devices (Microsoft Kinect, PMD Camboard). For both input devices we were able to accurately track the skeleton at the native framerate of the cameras.
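
    To illustrate the topology-fitting idea only, the sketch below fits a simple 1-D chain of self-organizing-map nodes to a 3D point cloud; a real hand skeleton would use a richer topology, and the learning rate, neighbourhood radius, and decay schedule are assumptions. The artifact-correcting heuristics and fixed-point/NEON optimizations are not represented.

    ```python
    import numpy as np

    def fit_chain_som(points, n_nodes=16, epochs=10, lr=0.3, radius=2.0):
        """Fit a 1-D chain of SOM nodes to a 3-D point cloud.

        points : (n, 3) segmented hand/arm points.  Each sample pulls its
        best-matching node, and neighbours along the chain, toward it.
        """
        rng = np.random.default_rng(0)
        nodes = points[rng.choice(len(points), n_nodes, replace=False)].astype(float)
        idx = np.arange(n_nodes)
        for _ in range(epochs):
            for p in points[rng.permutation(len(points))]:
                bmu = np.argmin(np.linalg.norm(nodes - p, axis=1))
                # Neighbourhood weights fall off with distance along the chain.
                h = np.exp(-((idx - bmu) ** 2) / (2 * radius ** 2))
                nodes += lr * h[:, None] * (p - nodes)
            lr *= 0.8
            radius = max(radius * 0.8, 0.5)
        return nodes
    ```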

  4. An autonomous robot inspired by insect neurophysiology pursues moving features in natural environments

    NASA Astrophysics Data System (ADS)

    Bagheri, Zahra M.; Cazzolato, Benjamin S.; Grainger, Steven; O'Carroll, David C.; Wiederman, Steven D.

    2017-08-01

    Objective. Many computer vision and robotic applications require the implementation of robust and efficient target-tracking algorithms on a moving platform. However, deployment of a real-time system is challenging, even with the computational power of modern hardware. Lightweight and low-powered flying insects, such as dragonflies, track prey or conspecifics within cluttered natural environments, illustrating an efficient biological solution to the target-tracking problem. Approach. We used our recent recordings from ‘small target motion detector’ neurons in the dragonfly brain to inspire the development of a closed-loop target detection and tracking algorithm. This model exploits facilitation, a slow build-up of response to targets which move along long, continuous trajectories, as seen in our electrophysiological data. To test performance in real-world conditions, we implemented this model on a robotic platform that uses active pursuit strategies based on insect behaviour. Main results. Our robot performs robustly in closed-loop pursuit of targets, despite a range of challenging conditions used in our experiments; low contrast targets, heavily cluttered environments and the presence of distracters. We show that the facilitation stage boosts responses to targets moving along continuous trajectories, improving contrast sensitivity and detection of small moving targets against textured backgrounds. Moreover, the temporal properties of facilitation play a useful role in handling vibration of the robotic platform. We also show that the adoption of feed-forward models which predict the sensory consequences of self-movement can significantly improve target detection during saccadic movements. Significance. Our results provide insight into the neuronal mechanisms that underlie biological target detection and selection (from a moving platform), as well as highlight the effectiveness of our bio-inspired algorithm in an artificial visual system.

  5. Real time damage detection using recursive principal components and time varying auto-regressive modeling

    NASA Astrophysics Data System (ADS)

    Krishnan, M.; Bhowmik, B.; Hazra, B.; Pakrashi, V.

    2018-02-01

    In this paper, a novel baseline-free approach for continuous online damage detection of multi-degree-of-freedom vibrating structures using Recursive Principal Component Analysis (RPCA) in conjunction with Time Varying Auto-Regressive (TVAR) modeling is proposed. In this method, the acceleration data is used to obtain recursive proper orthogonal components online using the rank-one perturbation method, followed by TVAR modeling of the first transformed response, to detect the change in the dynamic behavior of the vibrating system from its pristine state to contiguous linear/non-linear states that indicate damage. Most of the works available in the literature deal with algorithms that require windowing of the gathered data owing to their data-driven nature, which renders them ineffective for online implementation. Algorithms focused on mathematically consistent recursive techniques within a rigorous theoretical framework of structural damage detection are missing, which motivates the development of the present framework, which is amenable to online implementation and is utilized alongside a suite of experimental and numerical investigations. The RPCA algorithm iterates the eigenvector and eigenvalue estimates for the sample covariance matrix and the new data point at each successive time instant, using the rank-one perturbation method. TVAR modeling on the principal component explaining the maximum variance is utilized, and damage is identified by tracking the TVAR coefficients. This eliminates the need for offline post-processing and facilitates online damage detection, especially when applied to streaming data, without requiring any baseline data. Numerical simulations performed on a 5-dof nonlinear system under white noise excitation and El Centro (also known as the 1940 Imperial Valley earthquake) excitation, for different damage scenarios, demonstrate the robustness of the proposed algorithm. The method is further validated on results obtained from case studies involving experiments performed on a cantilever beam subjected to earthquake excitation; a two-storey bench-scale model with a TMD; and data from recorded responses of the UCLA Factor building, demonstrating the efficacy of the proposed methodology as an ideal candidate for real-time, reference-free structural health monitoring.

  6. Robust Observation Detection for Single Object Tracking: Deterministic and Probabilistic Patch-Based Approaches

    PubMed Central

    Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill

    2012-01-01

    In video analytics, robust observation detection is very important as the content of the videos varies a lot, especially for tracking implementations. In contrast to the image processing field, the problems of blurring, moderate deformation, low-illumination surroundings, illumination change and homogeneous texture are normally encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness in complex scenes by fusing both feature- and template-based recognition methods. While we believe that feature-based detectors are more distinctive, finding the matching between frames is best achieved by a collection of points, as in template-based detectors. Two methods of PBOD (the deterministic and probabilistic approaches) have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a two-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the second approach, patch matching is done probabilistically by modelling the histograms of the patches by Poisson distributions for both RGB and HSV colour models. Then, maximum likelihood is applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach with an average distance error of 10.03% compared with 21.03%. This algorithm is best implemented as a complement to other, simpler detection methods due to its heavy processing requirements. PMID:23202226

  7. Microwave-based medical diagnosis using particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Modiri, Arezoo

    This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of the particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990s and has shown significant promise in early detection of some specific health threats. In comparison to X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence this study focuses on that modality. A novel radiator device and detection technique are proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard, which is reported in the ensuing chapters of this dissertation. It is noteworthy that an important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished by addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm on the chosen benchmark problems, the algorithm is applied to the MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than the particle swarm optimization (PSO) algorithm, the results of which can be found in the literature. However, due to the relatively high level of complexity and randomness inherent in the selection of electromagnetic benchmark problems, the literature has tended to resort to oversimplification in order to arrive at reasonable solutions when utilizing analytical techniques. Here, an attempt has been made to avoid oversimplification when using the proposed swarm-based optimization algorithms.
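    For orientation, the sketch below is the textbook particle swarm optimization loop applied to a simple benchmark with a known answer; it is not the author's modified PSO variant, and all parameter values are generic defaults.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Textbook particle swarm optimization: each particle is pulled toward
    its personal best and the swarm's global best position."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()
    pbest_cost = np.apply_along_axis(cost, 1, x)
    g = pbest[np.argmin(pbest_cost)].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        cost_x = np.apply_along_axis(cost, 1, x)
        improved = cost_x < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], cost_x[improved]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, pbest_cost.min()

# usage on a known benchmark (sphere function, optimum at the origin)
best_x, best_f = pso(lambda p: np.sum(p ** 2), dim=4)
```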

  8. Textural defect detection using a revised ant colony clustering algorithm

    NASA Astrophysics Data System (ADS)

    Zou, Chao; Xiao, Li; Wang, Bingwen

    2007-11-01

    We propose a novel method based on a revised ant colony clustering algorithm (ACCA) to explore the topic of textural defect detection. In this algorithm, our efforts are mainly devoted to the definition of a local irregularity measurement and the implementation of the revised ACCA. The local irregularity measurement evaluates the local textural inconsistency of each pixel against its mini-environment. In our revised ACCA, the behavior of each ant is divided into two steps: releasing pheromone and acting. The quantity of pheromone released is proportional to the irregularity measurement; the next actions of the ants are chosen independently of each other in a stochastic way according to some evaluated heuristic knowledge. The independence of the ants implies the inherently parallel computation architecture of this algorithm. We apply the proposed method to some typical textural images with defects. From the series of pheromone distribution maps (PDMs), it can be clearly seen that the pheromone distribution gradually approaches the textural defects. With some post-processing, the final distribution of pheromone demonstrates the shape and area of the defects well.
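    The sketch below illustrates only the central idea stated in the abstract, pheromone deposition proportional to a local irregularity measure; the 3x3 local-mean irregularity, the random-walk movement rule and all parameters are illustrative assumptions, and the heuristic action selection and post-processing of the actual revised ACCA are not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_irregularity(image, size=3):
    """Deviation of each pixel from its local neighbourhood mean, used here as
    a stand-in for the paper's local irregularity measurement."""
    image = image.astype(float)
    return np.abs(image - uniform_filter(image, size=size))

def deposit_pheromone(image, n_ants=5000, steps=20, seed=0):
    """Randomly walking ants deposit pheromone proportional to the local
    irregularity at their current pixel, so pheromone accumulates on defects."""
    rng = np.random.default_rng(seed)
    irr = local_irregularity(image)
    h, w = image.shape
    pheromone = np.zeros_like(irr)
    ants = np.column_stack([rng.integers(0, h, n_ants),
                            rng.integers(0, w, n_ants)])
    for _ in range(steps):
        np.add.at(pheromone, (ants[:, 0], ants[:, 1]),
                  irr[ants[:, 0], ants[:, 1]])
        ants = (ants + rng.integers(-1, 2, ants.shape)) % [h, w]  # stochastic move
    return pheromone  # successive maps correspond to the PDMs in the paper

# usage: a synthetic texture with one brighter defect patch
img = np.random.default_rng(1).normal(0.5, 0.05, (128, 128))
img[60:70, 60:70] += 0.5
pdm = deposit_pheromone(img)
```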

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ushizima, Daniela; Perciano, Talita; Krishnan, Harinarayan

    Fibers provide exceptional strength-to-weight ratio capabilities when woven into ceramic composites, transforming them into materials with exceptional resistance to high temperature and high strength combined with improved fracture toughness. Microcracks are inevitable when the material is under strain; these can be imaged using synchrotron X-ray computed micro-tomography (mu-CT) for assessment of variations in the material's mechanical toughness. An important part of this analysis is to recognize fibrillar features. This paper presents algorithms for detecting and quantifying composite cracks and fiber breaks from high-resolution image stacks. First, we propose recognition algorithms to identify the different structures of the composite, including matrix cracks and fiber breaks. Second, we introduce our package F3D for fast filtering of large 3D imagery, implemented in OpenCL to take advantage of graphics cards. Results show that our algorithms automatically identify micro-damage and that the GPU-based implementation introduced here takes minutes, being 17x faster than similar tools on a typical image file.

  10. Parallel and Scalable Clustering and Classification for Big Data in Geosciences

    NASA Astrophysics Data System (ADS)

    Riedel, M.

    2015-12-01

    Machine learning, data mining, and statistical computing are common techniques for performing analysis in the earth sciences. This contribution focuses on two concrete and widely used data analytics methods suitable for analysing 'big data' in the context of geoscience use cases: clustering and classification. From the broad class of available clustering methods we focus on the density-based spatial clustering of applications with noise (DBSCAN) algorithm, which enables the identification of outliers or interesting anomalies. A new open-source parallel and scalable DBSCAN implementation is discussed in the light of a scientific use case that detects water mixing events in the Koljoefjords. The second technique we cover is classification, with a focus on the support vector machine (SVM) algorithm, one of the best out-of-the-box classification algorithms. A parallel and scalable SVM implementation is discussed in the light of a scientific use case in the field of remote sensing with 52 different classes of land cover types.
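    A minimal serial DBSCAN example using scikit-learn is shown below for orientation; the parallel, scalable implementation discussed in the contribution applies the same algorithmic idea to much larger datasets, and the synthetic data and eps/min_samples values here are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# synthetic 2-D "sensor" data: two dense regimes plus scattered outliers
rng = np.random.default_rng(0)
regime_a = rng.normal([0.0, 0.0], 0.1, (500, 2))
regime_b = rng.normal([2.0, 2.0], 0.1, (500, 2))
outliers = rng.uniform(-1.0, 3.0, (20, 2))
data = np.vstack([regime_a, regime_b, outliers])

# eps: neighbourhood radius; min_samples: density threshold for a core point
labels = DBSCAN(eps=0.15, min_samples=10).fit_predict(data)

# label -1 marks noise points, i.e. potential anomalies / interesting events
anomalies = data[labels == -1]
print(f"clusters: {labels.max() + 1}, flagged anomalies: {len(anomalies)}")
```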

  11. Time Series Discord Detection in Medical Data using a Parallel Relational Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodbridge, Diane; Rintoul, Mark Daniel; Wilson, Andrew T.

    Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Since the data collected from high-frequency medical sensors amount to a huge volume, storing and processing continuous medical data is an emerging big data area. Detecting anomalies in real time is especially important for emergency detection and prevention in patients. A time series discord is a subsequence that has the maximum difference to the rest of the time series subsequences, meaning that it has abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high-performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute-force version of the discord detection algorithm takes each possible subsequence and calculates the distance to its nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data for better time efficiency. The study results showed efficient data loading, decoding and discord searches over a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
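    A compact sketch of the brute-force discord search described above is given below (a serial NumPy version); the window length, the z-normalization of subsequences and the synthetic signal are assumptions, and the trie/array heuristic and the DBMS-side parallelism are not shown.

```python
import numpy as np

def brute_force_discord(series, m):
    """Return the start index of the length-m subsequence whose distance to
    its nearest non-self match is largest (the top time-series discord)."""
    n = len(series) - m + 1
    # z-normalize every subsequence so the comparison is shape-based
    subs = np.lib.stride_tricks.sliding_window_view(series, m).astype(float)
    subs = (subs - subs.mean(axis=1, keepdims=True)) / (subs.std(axis=1, keepdims=True) + 1e-12)
    best_idx, best_dist = -1, -np.inf
    for i in range(n):
        d = np.linalg.norm(subs - subs[i], axis=1)
        # exclude trivial (overlapping) self-matches around i
        lo, hi = max(0, i - m + 1), min(n, i + m)
        d[lo:hi] = np.inf
        nearest = d.min()
        if nearest > best_dist:
            best_idx, best_dist = i, nearest
    return best_idx, best_dist

# usage: a sine wave with an injected anomalous bump
t = np.linspace(0, 10 * np.pi, 2000)
x = np.sin(t)
x[1200:1260] += 1.5
print(brute_force_discord(x, m=100))
```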

  12. Time Series Discord Detection in Medical Data using a Parallel Relational Database [PowerPoint]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodbridge, Diane; Wilson, Andrew T.; Rintoul, Mark Daniel

    Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Since the data collected from high-frequency medical sensors amount to a huge volume, storing and processing continuous medical data is an emerging big data area. Detecting anomalies in real time is especially important for emergency detection and prevention in patients. A time series discord is a subsequence that has the maximum difference to the rest of the time series subsequences, meaning that it has abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high-performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute-force version of the discord detection algorithm takes each possible subsequence and calculates the distance to its nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data for better time efficiency. The study results showed efficient data loading, decoding and discord searches over a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.

  13. MUSIC algorithm DoA estimation for cooperative node location in mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Warty, Chirag; Yu, Richard Wai; ElMahgoub, Khaled; Spinsante, Susanna

    In recent years, technological development has encouraged several applications based on distributed communications networks without any fixed infrastructure. The problem addressed here is that of providing a collaborative early warning system for multiple mobile nodes against a fast-moving object. The solution is provided subject to system-level constraints: motion of nodes, antenna sensitivity and Doppler effect at 2.4 GHz and 5.8 GHz. The approach consists of three stages. The first phase consists of detecting the incoming object using a highly directive two-element antenna in the 5.0 GHz band. The second phase consists of broadcasting the warning message with a broad, low-directivity beam from a 2×2 antenna array, which in the third phase is detected by the receiving nodes using a direction of arrival (DOA) estimation technique. The DOA estimation technique is used to estimate the range and bearing of the incoming nodes. The position of the fast-arriving object can be estimated using the MUSIC algorithm for warning-beam DOA estimation. This paper is mainly intended to demonstrate the feasibility of an early detection and warning system using collaborative node-to-node communication links. A simulation is performed to show the behavior of the detecting and broadcasting antennas as well as the performance of the detection algorithm. The idea can be further expanded to implement a commercial-grade detection and warning system.
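    For reference, the sketch below shows MUSIC DoA estimation for a uniform linear array; the array geometry, element spacing and snapshot simulation are illustrative assumptions and do not correspond to the paper's 2x2 broadcasting array.

```python
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(snapshots, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """MUSIC pseudospectrum for an M-element uniform linear array.
    snapshots: complex array of shape (M, N); d: element spacing in wavelengths."""
    M = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]      # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, : M - n_sources]                             # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))  # steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles, np.array(spectrum)

# usage: two sources at -20 and +35 degrees, 8-element ULA, 200 snapshots
rng = np.random.default_rng(0)
M, N, doas = 8, 200, np.deg2rad([-20.0, 35.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
ang, P = music_spectrum(X, n_sources=2)
peaks, _ = find_peaks(P)
print(np.sort(ang[peaks[np.argsort(P[peaks])[-2:]]]))  # expected near -20 and +35
```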

  14. Background noise cancellation for improved acoustic detection of manatee vocalizations

    NASA Astrophysics Data System (ADS)

    Yan, Zheng; Niezrecki, Christopher; Beusse, Diedrich O.

    2005-06-01

    The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of an increase in the number of collisions with boats. A device to alert boaters of the presence of manatees, so that a collision can be avoided, is desired. A practical implementation of the technology is dependent on the hydrophone spacing and range of detection. These parameters are primarily dependent on the manatee vocalization strength, the decay of the signal's strength with distance, and the background noise levels. An efficient method to extend the detection range by using background noise cancellation is proposed in this paper. An adaptive line enhancer (ALE) that can detect and track narrowband signals buried in broadband noise is implemented to cancel the background noise. The results indicate that the ALE algorithm can efficiently extract the manatee calls from the background noise. The improved signal-to-noise ratio can be used to extend the range of detection of manatee vocalizations and reduce the false alarm and missed detection rates in their natural habitat.
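    A minimal adaptive line enhancer sketch using the LMS update is shown below; the filter length, decorrelation delay, step size and synthetic signal are illustrative assumptions rather than the values used in the study.

```python
import numpy as np

def adaptive_line_enhancer(x, delay=25, taps=64, mu=0.002):
    """LMS adaptive line enhancer: predict the current sample from a delayed
    copy of the input, so narrowband (tonal) content is enhanced in the
    prediction and broadband noise ends up in the error signal."""
    w = np.zeros(taps)
    enhanced = np.zeros_like(x, dtype=float)
    for n in range(delay + taps, len(x)):
        ref = x[n - delay - taps:n - delay][::-1]   # delayed reference vector
        y = w @ ref                                 # narrowband estimate
        e = x[n] - y                                # broadband residual
        w += 2 * mu * e * ref                       # LMS coefficient update
        enhanced[n] = y
    return enhanced

# usage: a weak tonal "call" buried in broadband noise
fs = 8000
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
signal = 0.2 * np.sin(2 * np.pi * 600 * t) + rng.standard_normal(t.size)
clean = adaptive_line_enhancer(signal)
```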

  15. Background noise cancellation for improved acoustic detection of manatee vocalizations

    NASA Astrophysics Data System (ADS)

    Yan, Zheng; Niezrecki, Christopher; Beusse, Diedrich O.

    2005-04-01

    The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of an increase in the number of collisions with boats. A device to alert boaters of the presence of manatees, so that a collision can be avoided, is desired. Practical implementation of the technology is dependent on the hydrophone spacing and range of detection. These parameters are primarily dependent on the manatee vocalization strength, the decay of the signal strength with distance, and the background noise levels. An efficient method to extend the detection range by using background noise cancellation is proposed in this paper. An adaptive line enhancer (ALE) that can detect and track narrowband signals buried in broadband noise is implemented to cancel the background noise. The results indicate that the ALE algorithm can efficiently extract the manatee calls from the background noise. The improved signal-to-noise ratio of the signal can be used to extend the range of detection of manatee vocalizations and reduce the false alarm and missing detection rate in their natural habitat.

  16. Automatic Fringe Detection for Oil Film Interferometry Measurement of Skin Friction

    NASA Technical Reports Server (NTRS)

    Naughton, Jonathan W.; Decker, Robert K.; Jafari, Farhad

    2001-01-01

    This report summarizes two years of work on investigating algorithms for automatically detecting fringe patterns in images acquired using oil-drop interferometry for the determination of skin friction. Several different analysis methods were tested, and a combination of a windowed Fourier transform followed by a correlation was found to be most effective. The implementation of this method is discussed and details of the process are described. The results indicate that this method shows promise for automating the fringe detection process, but further testing is required.

  17. Band-pass filtering algorithms for adaptive control of compressor pre-stall modes in aircraft gas-turbine engine

    NASA Astrophysics Data System (ADS)

    Kuznetsova, T. A.

    2018-05-01

    Methods for increasing the adaptive properties of gas-turbine aircraft engines (GTE) to interference, based on the empowerment of automatic control systems (ACS), are analyzed. Flow pulsations in the suction and discharge lines of the compressor, which may cause stall, are considered as the interference. An algorithmic solution to the problem of controlling GTE pre-stall modes adapted to the stability boundary is proposed. The aim of the study is to develop band-pass filtering algorithms that provide detection of compressor pre-stall modes for the GTE ACS. The characteristic feature of the pre-stall effect is an increase of the pressure pulsation amplitude over the impeller at multiples of the rotor frequency. The method used is based on a band-pass filter combining low-pass and high-pass digital filters. The impulse response of the high-pass filter is determined from a known low-pass filter impulse response by spectral inversion. The resulting transfer function of the second-order band-pass filter (BPF) corresponds to a stable system. Two circuit implementations of the BPF are synthesized. The designed band-pass filtering algorithms were tested in the MATLAB environment. Comparative analysis of the amplitude-frequency responses of the proposed implementations allows choosing the BPF scheme that provides the best filtering quality. The BPF reaction to a periodic sinusoidal signal, simulating the experimentally obtained pressure pulsation function in the pre-stall mode, was considered. The results of the model experiment demonstrated the effectiveness of applying band-pass filtering algorithms as part of the ACS to identify the compressor pre-stall mode by detecting the pressure-fluctuation peaks that characterize the compressor's approach to the stability boundary.
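    The low-pass/high-pass combination described above can be sketched as follows: a windowed-sinc low-pass prototype, a high-pass obtained from a second low-pass by spectral inversion, and their cascade. This is an FIR illustration of the same idea; the paper's second-order BPF and its two circuit implementations are not reproduced, and the cutoff frequencies, filter length and window are illustrative choices.

```python
import numpy as np

def lowpass_fir(cutoff, fs, taps=101):
    """Windowed-sinc low-pass FIR prototype (Hamming window), unity DC gain."""
    n = np.arange(taps) - (taps - 1) / 2
    h = 2 * cutoff / fs * np.sinc(2 * cutoff / fs * n) * np.hamming(taps)
    return h / h.sum()

def bandpass_fir(f_low, f_high, fs, taps=101):
    """Band-pass as a cascade of a high-pass (spectral inversion of a low-pass
    with cutoff f_low) and a low-pass with cutoff f_high."""
    lp_high = lowpass_fir(f_high, fs, taps)
    lp_low = lowpass_fir(f_low, fs, taps)
    hp_low = -lp_low
    hp_low[(taps - 1) // 2] += 1.0            # spectral inversion: delta - h_lp
    return np.convolve(lp_high, hp_low)       # cascading filters = convolving IRs

# usage: isolate a 120 Hz pulsation component (a stand-in for a pre-stall tone)
fs = 2000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
bp = bandpass_fir(80.0, 160.0, fs)
y = np.convolve(x, bp, mode="same")           # the pulsation component survives
```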

  18. Wavelet-Based Peak Detection and a New Charge Inference Procedure for MS/MS Implemented in ProteoWizard’s msConvert

    PubMed Central

    2015-01-01

    We report the implementation of high-quality signal processing algorithms into ProteoWizard, an efficient, open-source software package designed for analyzing proteomics tandem mass spectrometry data. Specifically, a new wavelet-based peak-picker (CantWaiT) and a precursor charge determination algorithm (Turbocharger) have been implemented. These additions into ProteoWizard provide universal tools that are independent of vendor platform for tandem mass spectrometry analyses and have particular utility for intralaboratory studies requiring the advantages of different platforms convergent on a particular workflow or for interlaboratory investigations spanning multiple platforms. We compared results from these tools to those obtained using vendor and commercial software, finding that in all cases our algorithms resulted in a comparable number of identified peptides for simple and complex samples measured on Waters, Agilent, and AB SCIEX quadrupole time-of-flight and Thermo Q-Exactive mass spectrometers. The mass accuracy of matched precursor ions also compared favorably with vendor and commercial tools. Additionally, typical analysis runtimes (∼1–100 ms per MS/MS spectrum) were short enough to enable the practical use of these high-quality signal processing tools for large clinical and research data sets. PMID:25411686

  19. Wavelet-based peak detection and a new charge inference procedure for MS/MS implemented in ProteoWizard's msConvert.

    PubMed

    French, William R; Zimmerman, Lisa J; Schilling, Birgit; Gibson, Bradford W; Miller, Christine A; Townsend, R Reid; Sherrod, Stacy D; Goodwin, Cody R; McLean, John A; Tabb, David L

    2015-02-06

    We report the implementation of high-quality signal processing algorithms into ProteoWizard, an efficient, open-source software package designed for analyzing proteomics tandem mass spectrometry data. Specifically, a new wavelet-based peak-picker (CantWaiT) and a precursor charge determination algorithm (Turbocharger) have been implemented. These additions into ProteoWizard provide universal tools that are independent of vendor platform for tandem mass spectrometry analyses and have particular utility for intralaboratory studies requiring the advantages of different platforms convergent on a particular workflow or for interlaboratory investigations spanning multiple platforms. We compared results from these tools to those obtained using vendor and commercial software, finding that in all cases our algorithms resulted in a comparable number of identified peptides for simple and complex samples measured on Waters, Agilent, and AB SCIEX quadrupole time-of-flight and Thermo Q-Exactive mass spectrometers. The mass accuracy of matched precursor ions also compared favorably with vendor and commercial tools. Additionally, typical analysis runtimes (∼1-100 ms per MS/MS spectrum) were short enough to enable the practical use of these high-quality signal processing tools for large clinical and research data sets.

  20. Proposal for An Algorithm for Screening for Undernutrition in Hospitalized Children.

    PubMed

    Huysentruyt, Koen; De Schepper, Jean; Bontems, Patrick; Alliet, Philippe; Peeters, Ellen; Roelants, Mathieu; Van Biervliet, Stephanie; Hauser, Bruno; Vandenplas, Yvan

    2016-11-01

    The prevalence of disease-related undernutrition in hospitalized children has not decreased significantly in the last decades in Europe. A recent large multicentric European study reported a percentage of underweight children ranging across countries from 4.0% to 9.3%. Nutritional screening has been put forward as a strategy to detect and prevent undernutrition in hospitalized children. It allows timely implementation of adequate nutritional support and prevents further nutritional deterioration of hospitalized children. In this article, a hands-on practical guideline for the implementation of a nutritional care program in hospitalized children is provided. The difference between nutritional status (anthropometry with or without additional technical investigations) at admission and nutritional risk (the risk of the need for a nutritional intervention or the risk of nutritional deterioration during the hospital stay) is the focus of this article. Based on the quality control circle principle of Deming, a nutritional care algorithm with detailed instructions specific to the pediatric population was developed, and its implementation in daily practice is proposed. Further research is required to prove the applicability and the merit of this algorithm. It can, however, serve as a basis to provide European or even wider guidelines.

  1. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers

    PubMed Central

    Filipovic, Nenad D.

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. The breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of an already existing algorithm for region-of-interest-based image segmentation of mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As the dataflow engine (DFE) of such an HPRDC, Maxeler's acceleration card is used. The experiments examining the acceleration of that algorithm on Reconfigurable Dataflow Computers (RDCs) were performed with two types of mammogram images of different resolutions. There were also several DFE configurations, each of which gave a different acceleration of algorithm execution. Those acceleration values are presented, and the experimental results showed good acceleration. PMID:28611851

  2. Robust linearized image reconstruction for multifrequency EIT of the breast.

    PubMed

    Boverman, Gregory; Kao, Tzu-Jen; Kulkarni, Rujuta; Kim, Bong Seok; Isaacson, David; Saulnier, Gary J; Newell, Jonathan C

    2008-10-01

    Electrical impedance tomography (EIT) is a developing imaging modality that is beginning to show promise for detecting and characterizing tumors in the breast. At Rensselaer Polytechnic Institute, we have developed a combined EIT-tomosynthesis system that allows for the coregistered and simultaneous analysis of the breast using EIT and X-ray imaging. A significant challenge in EIT is the design of computationally efficient image reconstruction algorithms which are robust to various forms of model mismatch. Specifically, we have implemented a scaling procedure that is robust to the presence of a thin highly-resistive layer of skin at the boundary of the breast and we have developed an algorithm to detect and exclude from the image reconstruction electrodes that are in poor contact with the breast. In our initial clinical studies, it has been difficult to ensure that all electrodes make adequate contact with the breast, and thus procedures for the use of data sets containing poorly contacting electrodes are particularly important. We also present a novel, efficient method to compute the Jacobian matrix for our linearized image reconstruction algorithm by reducing the computation of the sensitivity for each voxel to a quadratic form. Initial clinical results are presented, showing the potential of our algorithms to detect and localize breast tumors.

  3. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers.

    PubMed

    Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. The breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of an already existing algorithm for region-of-interest-based image segmentation of mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As the dataflow engine (DFE) of such an HPRDC, Maxeler's acceleration card is used. The experiments examining the acceleration of that algorithm on Reconfigurable Dataflow Computers (RDCs) were performed with two types of mammogram images of different resolutions. There were also several DFE configurations, each of which gave a different acceleration of algorithm execution. Those acceleration values are presented, and the experimental results showed good acceleration.

  4. Assessing the impact of background spectral graph construction techniques on the topological anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Ziemann, Amanda K.; Messinger, David W.; Albano, James A.; Basener, William F.

    2012-06-01

    Anomaly detection algorithms have historically been applied to hyperspectral imagery in order to identify pixels whose material content is incongruous with the background material in the scene. Typically, the application involves extracting man-made objects from natural and agricultural surroundings. A large challenge in designing these algorithms is determining which pixels initially constitute the background material within an image. The topological anomaly detection (TAD) algorithm constructs a graph theory-based, fully non-parametric topological model of the background in the image scene, and uses codensity to measure deviation from this background. In TAD, the initial graph theory structure of the image data is created by connecting an edge between any two pixel vertices x and y if the Euclidean distance between them is less than some resolution r. While this type of proximity graph is among the most well-known approaches to building a geometric graph based on a given set of data, there is a wide variety of different geometrically-based techniques. In this paper, we present a comparative test of the performance of TAD across four different constructs of the initial graph: mutual k-nearest neighbor graph, sigma-local graph for two different values of σ > 1, and the proximity graph originally implemented in TAD.
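    The proximity-graph construction that TAD starts from, connecting two pixel vertices whenever their Euclidean distance is below the resolution r, can be sketched as follows; the toy data and the value of r are assumptions, and the mutual k-NN and sigma-local variants compared in the paper are not shown.

```python
import numpy as np
from scipy.spatial import cKDTree

def proximity_graph(pixels, r):
    """Edges between all pairs of pixel vectors closer than r (Euclidean).
    pixels: array of shape (n_pixels, n_bands). Returns a set of index pairs."""
    tree = cKDTree(pixels)
    return tree.query_pairs(r)          # {(i, j), ...} with ||x_i - x_j|| < r

# usage: toy "hyperspectral" pixels, 500 samples x 20 bands
rng = np.random.default_rng(0)
pixels = rng.standard_normal((500, 20))
edges = proximity_graph(pixels, r=4.5)

# per-vertex degree statistics: background pixels tend to be densely
# connected, anomalies sparsely connected (the codensity idea)
degree = np.zeros(len(pixels), dtype=int)
for i, j in edges:
    degree[i] += 1
    degree[j] += 1
print(len(edges), degree.min(), degree.max())
```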

  5. Obstacle Detection using Binocular Stereo Vision in Trajectory Planning for Quadcopter Navigation

    NASA Astrophysics Data System (ADS)

    Bugayong, Albert; Ramos, Manuel, Jr.

    2018-02-01

    Quadcopters are among the most versatile unmanned aerial vehicles due to their vertical take-off and landing as well as hovering capabilities. This research uses the Sum of Absolute Differences (SAD) block matching algorithm for stereo vision. A complementary filter was used in sensor fusion to combine quadcopter orientation data obtained from the accelerometer and the gyroscope. PID control was implemented for the motor control and the VFH+ algorithm was implemented for trajectory planning. Results show that the quadcopter was able to consistently actuate itself in the roll, yaw and z axes during obstacle avoidance but was found to be inconsistent in the pitch axis during forward and backward maneuvers, due to the significant noise present in the pitch-axis angle outputs compared to the roll and yaw axes.
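    A compact sketch of SAD block matching for stereo disparity is given below; the block size, disparity range and synthetic image pair are illustrative assumptions, and the complementary filter, PID control and VFH+ planner are not shown.

```python
import numpy as np

def sad_disparity(left, right, block=7, max_disp=16):
    """Per-pixel disparity by minimising the Sum of Absolute Differences
    between a block in the left image and shifted blocks in the right image."""
    h, w = left.shape
    half = block // 2
    disparity = np.zeros((h, w), dtype=np.int32)
    left = left.astype(np.int32)
    right = right.astype(np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disparity[y, x] = int(np.argmin(costs))   # best-matching shift
    return disparity

# usage: synthetic pair where the right image is the left shifted by 8 pixels,
# so the recovered disparity should be 8 over most of the image
rng = np.random.default_rng(0)
left_img = rng.integers(0, 255, (60, 120)).astype(np.uint8)
right_img = np.roll(left_img, -8, axis=1)
disp = sad_disparity(left_img, right_img)
```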

  6. Automation of Classical QEEG Trending Methods for Early Detection of Delayed Cerebral Ischemia: More Work to Do.

    PubMed

    Wickering, Ellis; Gaspard, Nicolas; Zafar, Sahar; Moura, Valdery J; Biswal, Siddharth; Bechek, Sophia; OʼConnor, Kathryn; Rosenthal, Eric S; Westover, M Brandon

    2016-06-01

    The purpose of this study is to evaluate automated implementations of continuous EEG monitoring-based detection of delayed cerebral ischemia based on methods used in classical retrospective studies. We studied 95 patients with either Fisher 3 or Hunt Hess 4 to 5 aneurysmal subarachnoid hemorrhage who were admitted to the Neurosciences ICU and underwent continuous EEG monitoring. We implemented several variations of two classical algorithms for automated detection of delayed cerebral ischemia based on decreases in alpha-delta ratio and relative alpha variability. Of 95 patients, 43 (45%) developed delayed cerebral ischemia. Our automated implementation of the classical alpha-delta ratio-based trending method resulted in a sensitivity and specificity (Se,Sp) of (80,27)%, compared with the values of (100,76)% reported in the classic study using similar methods in a nonautomated fashion. Our automated implementation of the classical relative alpha variability-based trending method yielded (Se,Sp) values of (65,43)%, compared with (100,46)% reported in the classic study using nonautomated analysis. Our findings suggest that improved methods to detect decreases in alpha-delta ratio and relative alpha variability are needed before an automated EEG-based early delayed cerebral ischemia detection system is ready for clinical use.
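    For orientation, the sketch below computes the alpha-delta ratio trend that underlies these classical methods for a single EEG channel; the band edges, epoch length and Welch parameters are standard but assumed here, and the relative alpha variability metric and the alarm logic are not shown.

```python
import numpy as np
from scipy.signal import welch

def alpha_delta_ratio(eeg, fs, epoch_sec=10):
    """Alpha (8-13 Hz) to delta (1-4 Hz) band-power ratio per non-overlapping
    epoch for a single EEG channel."""
    samples_per_epoch = int(epoch_sec * fs)
    n_epochs = len(eeg) // samples_per_epoch
    adr = []
    for k in range(n_epochs):
        seg = eeg[k * samples_per_epoch:(k + 1) * samples_per_epoch]
        f, pxx = welch(seg, fs=fs, nperseg=min(len(seg), 2 * int(fs)))
        alpha = pxx[(f >= 8) & (f <= 13)].sum()
        delta = pxx[(f >= 1) & (f <= 4)].sum()
        adr.append(alpha / (delta + 1e-12))
    return np.array(adr)   # a sustained drop in this trend is the classic DCI cue

# usage: 20 minutes of synthetic single-channel EEG at 200 Hz
rng = np.random.default_rng(0)
fs = 200
eeg = rng.standard_normal(20 * 60 * fs)
trend = alpha_delta_ratio(eeg, fs)
```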

  7. Noise-immune complex correlation for optical coherence angiography based on standard and Jones matrix optical coherence tomography

    PubMed Central

    Makita, Shuichi; Kurokawa, Kazuhiro; Hong, Young-Joo; Miura, Masahiro; Yasuno, Yoshiaki

    2016-01-01

    This paper describes a complex correlation mapping algorithm for optical coherence angiography (cmOCA). The proposed algorithm avoids signal-to-noise ratio dependence and exhibits low noise in vasculature imaging. The complex correlation coefficient of the signals, rather than that of the measured data, is estimated, and two-step averaging is introduced. Algorithms for motion artifact removal based on correlation-based detection of non-perfusing tissue are developed. The algorithms are implemented with Jones-matrix OCT. Simultaneous imaging of pigmented tissue and vasculature is also achieved using degree-of-polarization-uniformity imaging with cmOCA. An application of cmOCA to in vivo posterior human eyes is presented to demonstrate that high-contrast images of patients' eyes can be obtained. PMID:27446673

  8. Anti-aliasing algorithm development

    NASA Astrophysics Data System (ADS)

    Bodrucki, F.; Davis, J.; Becker, J.; Cordell, J.

    2017-10-01

    In this paper, we discuss testing image processing algorithms for mitigation of aliasing artifacts under pulsed illumination. Previously, two sensors were tested, one with a fixed frame rate and one with an adjustable frame rate; the results showed different degrees of operability when subjected to a quantum cascade laser (QCL) pulsed at the frame rate of the fixed-rate sensor. We implemented algorithms to allow the adjustable-frame-rate sensor to detect the presence of aliasing artifacts and, in response, to alter the frame rate of the sensor. The result was that the sensor output showed a varying laser intensity (beat note) as opposed to a fixed signal level. A MIRAGE Infrared Scene Projector (IRSP) was used to explore the efficiency of the new algorithms, introducing secondary elements into the sensor's field of view.

  9. On the performances of computer vision algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, Samsung Galaxy SII.

  10. Design and implementation of a vision-based hovering and feature tracking algorithm for a quadrotor

    NASA Astrophysics Data System (ADS)

    Lee, Y. H.; Chahl, J. S.

    2016-10-01

    This paper demonstrates an approach to the vision-based control of unmanned quadrotors for hovering and object tracking. The approach used the Speeded Up Robust Features (SURF) algorithm to detect objects. The pose of the object in the image was then calculated in order to pass the pose information to the flight controller. Finally, the flight controller steered the quadrotor to approach the object based on the calculated pose data. The above processes were run using the standard onboard resources of the 3DR Solo quadrotor in an embedded computing environment. The obtained results showed that the algorithm behaved well during its missions, tracking and hovering, although there were significant latencies due to the low CPU performance of the onboard image processing system.

  11. Integration of the TDWR and LLWAS wind shear detection system

    NASA Technical Reports Server (NTRS)

    Cornman, Larry

    1991-01-01

    Operational demonstrations of a prototype TDWR/LLWAS (Terminal Doppler Weather Radar/Low Level Wind shear Alarm System) integrated wind shear detection system were conducted. The integration of wind shear detection systems is needed to provide end-users with a single, consensus source of information. A properly implemented integrated system provides wind shear warnings of a higher quality than stand-alone LLWAS or TDWR systems. The algorithmic concept used to generate the TDWR/LLWAS integrated products and several case studies are discussed, indicating the viability and potential of integrated wind shear detection systems. Implications for integrating ground and airborne wind shear detection systems are briefly examined.

  12. Development of a space-systems network testbed

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan; Alger, Linda; Adams, Stuart; Burkhardt, Laura; Nagle, Gail; Murray, Nicholas

    1988-01-01

    This paper describes a communications network testbed which has been designed to allow the development of architectures and algorithms that meet the functional requirements of future NASA communication systems. The central hardware components of the Network Testbed are programmable circuit switching communication nodes which can be adapted by software or firmware changes to customize the testbed to particular architectures and algorithms. Fault detection, isolation, and reconfiguration has been implemented in the Network with a hybrid approach which utilizes features of both centralized and distributed techniques to provide efficient handling of faults within the Network.

  13. Digital tool for detecting diabetic retinopathy in retinography image using gabor transform

    NASA Astrophysics Data System (ADS)

    Morales, Y.; Nuñez, R.; Suarez, J.; Torres, C.

    2017-01-01

    Diabetic retinopathy is a chronic disease and is the leading cause of blindness in the population. The fundamental problem is that diabetic retinopathy is usually asymptomatic in its early stage and, in advanced stages, it becomes incurable, hence the importance of early detection. To detect diabetic retinopathy, the ophthalmologist examines the fundus by ophthalmoscopy and then sends the patient for a retinography. Sometimes these retinographies are not of good quality. This paper shows the implementation of a digital tool that helps the ophthalmologist provide a better diagnosis for patients suffering from diabetic retinopathy, indicating which type of retinopathy is present and its degree of severity. The tool implements an algorithm in Matlab based on the Gabor transform and the application of digital filters to provide better and higher-quality retinographies. The performance of the algorithm has been compared with conventional methods, with the resulting filtered images showing better contrast and higher quality.

  14. Conflict Detection Algorithm to Minimize Locking for MPI-IO Atomicity

    NASA Astrophysics Data System (ADS)

    Sehrish, Saba; Wang, Jun; Thakur, Rajeev

    Many scientific applications require high-performance concurrent I/O accesses to a file by multiple processes. Those applications rely indirectly on atomic I/O capabilities in order to perform updates to structured datasets, such as those stored in HDF5 format files. Current support for atomicity in MPI-IO is provided by locking around the operations, imposing lock overhead in all situations, even though in many cases these operations are non-overlapping in the file. We propose to isolate non-overlapping accesses from overlapping ones in independent I/O cases, allowing the non-overlapping ones to proceed without imposing lock overhead. To enable this, we have implemented an efficient conflict detection algorithm in MPI-IO using MPI file views and datatypes. We show that our conflict detection scheme incurs minimal overhead on I/O operations, making it an effective mechanism for avoiding locks when they are not needed.

  15. A Frequency-Domain Implementation of a Sliding-Window Traffic Sign Detector for Large Scale Panoramic Datasets

    NASA Astrophysics Data System (ADS)

    Creusen, I. M.; Hazelhoff, L.; De With, P. H. N.

    2013-10-01

    In large-scale automatic traffic sign surveying systems, the primary computational effort is concentrated at the traffic sign detection stage. This paper focuses on reducing the computational load of, in particular, the sliding-window object detection algorithm that is employed for traffic sign detection. Sliding-window object detectors often use a linear SVM to classify the features in a window. In this case, the classification can be seen as a convolution of the feature maps with the SVM kernel. It is well known that convolution can be efficiently implemented in the frequency domain, for kernels larger than a certain size. We show that by careful reordering of sliding-window operations, most of the frequency-domain transformations can be eliminated, leading to a substantial increase in efficiency. Additionally, we suggest using the overlap-add method to keep the memory use within reasonable bounds. This allows us to keep all the transformed kernels in memory, thereby eliminating even more domain transformations, and allows all scales in a multiscale pyramid to be processed using the same set of transformed kernels. For a typical sliding-window implementation, we have found that the detector execution performance improves by a factor of 5.3. As a bonus, many of the detector improvements from the literature, e.g. chi-squared kernel approximations, sub-class splitting algorithms etc., can be more easily applied at a lower performance penalty because of improved scalability.
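    The core idea, evaluating a dense linear-SVM sliding window as a frequency-domain convolution of the feature map with the (flipped) SVM weights, can be sketched as follows for a single scale and a single feature channel; the operation reordering and overlap-add bookkeeping from the paper are omitted, and the data and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def svm_score_map(feature_map, svm_weights, bias=0.0):
    """Dense sliding-window linear-SVM scores. Correlating the feature map
    with the weights equals convolving with the flipped weights, which
    fftconvolve performs in the frequency domain."""
    flipped = svm_weights[::-1, ::-1]
    scores = fftconvolve(feature_map, flipped, mode="valid") + bias
    return scores   # scores[y, x] = score of the window whose top-left is (y, x)

# usage: a 300x400 single-channel feature map and a 24x24 window of weights
rng = np.random.default_rng(0)
fmap = rng.standard_normal((300, 400))
w = rng.standard_normal((24, 24))
score = svm_score_map(fmap, w, bias=-0.3)
detections = np.argwhere(score > score.mean() + 3 * score.std())  # candidate windows
```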

  16. Kinetic and dynamic Delaunay tetrahedralizations in three dimensions

    NASA Astrophysics Data System (ADS)

    Schaller, Gernot; Meyer-Hermann, Michael

    2004-09-01

    We describe algorithms to implement fully dynamic and kinetic three-dimensional unconstrained Delaunay triangulations, where the time evolution of the triangulation is not only governed by moving vertices but also by a changing number of vertices. We use three-dimensional simplex flip algorithms, a stochastic visibility walk algorithm for point location and in addition, we propose a new simple method of deleting vertices from an existing three-dimensional Delaunay triangulation while maintaining the Delaunay property. As an example, we analyse the performance in various cases of practical relevance. The dual Dirichlet tessellation can be used to solve differential equations on an irregular grid, to define partitions in cell tissue simulations, for collision detection etc.

  17. A proposed technique for the Venus balloon telemetry and Doppler frequency recovery

    NASA Technical Reports Server (NTRS)

    Jurgens, R. F.; Divsalar, D.

    1985-01-01

    A technique is proposed to accurately estimate the Doppler frequency and demodulate the digitally encoded telemetry signal that contains the measurements from the balloon instruments. Since the data are prerecorded, one can take advantage of noncausal estimators that are both simpler and more computationally efficient than the usual closed-loop or real-time estimators for signal detection and carrier tracking. Algorithms for carrier frequency estimation, subcarrier demodulation, and bit and frame synchronization are described. A Viterbi decoder algorithm using a branch-indexing technique has been devised to decode the constraint-length-6, rate-1/2 convolutional code that is used by the balloon transmitter. These algorithms are memory efficient and can be implemented on microcomputer systems.

  18. Muon tomography imaging algorithms for nuclear threat detection inside large volume containers with the Muon Portal detector

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Antonuccio-Delogu, V.; Bandieramonte, M.; Becciani, U.; Costa, A.; La Rocca, P.; Massimino, P.; Petta, C.; Pistagna, C.; Riggi, F.; Sciacca, E.; Vitello, F.

    2013-11-01

    Muon tomographic visualization techniques try to reconstruct a 3D image as close as possible to the real localization of the objects being probed. Statistical algorithms under test for the reconstruction of muon tomographic images in the Muon Portal Project are discussed here. Autocorrelation analysis and clustering algorithms have been employed within the context of methods based on the Point Of Closest Approach (POCA) reconstruction tool. An iterative method based on the log-likelihood approach was also implemented. Relative merits of all such methods are discussed, with reference to full GEANT4 simulations of different scenarios, incorporating medium and high-Z objects inside a container.

  19. A power-efficient analog integrated circuit for amplification and detection of neural signals.

    PubMed

    Borghi, T; Bonfanti, A; Gusmeroli, R; Zambra, G; Spinelli, A S

    2008-01-01

    We present a neural amplifier that optimizes the trade-off between power consumption and noise performance down to the best so far reported. In the perspective of realizing a fully autonomous implantable system we also address the problem of spike detection by using a new simple algorithm and we discuss the implementation with analog integrated circuits. Implemented in 0.35-microm CMOS technology and with total current consumption of about 20 microA, the whole circuit occupies an area of 0.18 mm(2). Reduced power consumption and small area make it suited to be used in chronic multichannel recording systems for neural prosthetics and neuroscience experiments.

  20. Design and implementation of an electrocardiographical signal acquisition and digital processing system orientated to the detection of paroxysmal arrhythmias

    NASA Astrophysics Data System (ADS)

    Iriart Braceli, Agustín; Exequiel Morani, Jorge

    2011-12-01

    This article describes the design, technical aspects and implementation of a device capable of acquiring electrocardiographic signals, visualizing them in real time on a graphic liquid crystal display (GLCD), and storing the ECG registers on an SD memory card. It also details a noise suppression algorithm using the wavelet transform. This system was specially developed to address some shortcomings of current Holter monitors and ECG devices regarding the detection of paroxysmal arrhythmias. The contribution of this work lies in its portability and low production cost. The filtering method used provides an ECG signal without any significant noise, suitable for the diagnosis of cardiac pathologies.

  1. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    PubMed Central

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-01-01

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction. PMID:26287193
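    A small sketch of NEO-based spike detection, as used in the detection front end, is given below; the threshold rule (a multiple of the mean NEO value) and the refractory handling are common conventions assumed here, and the GHA feature-extraction stage and the hardware sharing scheme are not shown.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, k=8.0, refractory=30):
    """Indices where the NEO output crosses k times its mean, with a simple
    refractory period so one spike is not reported multiple times."""
    psi = neo(x)
    threshold = k * psi.mean()
    candidates = np.where(psi > threshold)[0]
    spikes, last = [], -refractory
    for idx in candidates:
        if idx - last >= refractory:
            spikes.append(idx)
            last = idx
    return np.array(spikes)

# usage: noise with two injected spike-like transients
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 5000)
for pos in (1200, 3400):
    x[pos:pos + 8] += 12 * np.exp(-np.arange(8) / 2.0)
print(detect_spikes(x))
```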

  2. Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, Stephen; Heaney, Michael; Jin, Xin

    Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.

  3. Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, Stephen; Heaney, Michael; Jin, Xin

    Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.

  4. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm.

    PubMed

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-08-13

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.

  5. Statistical approach for the detection of motion/noise artifacts in Photoplethysmogram.

    PubMed

    Selvaraj, Nandakumar; Mendelson, Yitzhak; Shelley, Kirk H; Silverman, David G; Chon, Ki H

    2011-01-01

    Motion and noise artifacts (MNA) have been a serious obstacle in realizing the potential of photoplethysmogram (PPG) signals for real-time monitoring of vital signs. We present a statistical approach based on the computation of kurtosis and Shannon entropy (SE) for the accurate detection of MNA in PPG data. The MNA detection algorithm was verified on multi-site PPG data collected in both laboratory and clinical settings. The accuracy of the fusion of the kurtosis and SE metrics for artifact detection was 99.0%, 94.8% and 93.3% for simultaneously recorded ear, finger and forehead PPGs obtained in a clinical setting, respectively. For laboratory PPG data recorded from a finger with contrived artifacts, the accuracy was 88.8%. The measurements from the forehead PPG sensor contained the most artifacts, followed by the finger and ear. The proposed MNA algorithm can be implemented in real time, as the computation time was 0.14 seconds using Matlab®.
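    The two per-segment statistics used in this approach, kurtosis and Shannon entropy, can be sketched as below together with an illustrative decision rule; the thresholds and the direction of the rule are placeholders, not the paper's fitted fusion classifier.

```python
import numpy as np
from scipy.stats import kurtosis

def shannon_entropy(segment, n_bins=16):
    """Shannon entropy (bits) of the amplitude histogram of a PPG segment."""
    hist, _ = np.histogram(segment, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def is_artifact(segment, kurt_limit=5.0, entropy_limit=3.5):
    """Flag a segment as motion/noise artifact when its kurtosis is high or its
    entropy is low; both limits are illustrative placeholders."""
    k = kurtosis(segment, fisher=False)     # Pearson kurtosis (normal -> 3)
    h = shannon_entropy(segment)
    return bool(k > kurt_limit or h < entropy_limit), k, h

# usage: a clean pulsatile segment vs. a spiky "motion" segment
fs = 100
t = np.arange(0, 8, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)
noisy = clean.copy()
noisy[200:210] += 6.0                       # simulated motion spike
print(is_artifact(clean), is_artifact(noisy))
```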

  6. Planetary-scale surface water detection from space

    NASA Astrophysics Data System (ADS)

    Donchyts, G.; Baart, F.; Winsemius, H.; Gorelick, N.

    2017-12-01

    Accurate, efficient and high-resolution methods of surface water detection are needed for better water management. Datasets on surface water extent and dynamics are crucial for a better understanding of natural and human-made processes, and as input data for hydrological and hydraulic models. In spite of considerable progress in the harmonization of freely available satellite data, producing accurate and efficient higher-level surface water data products remains very challenging. This presentation will provide an overview of existing methods for surface water extent and change detection from multitemporal and multi-sensor satellite imagery. An algorithm to detect surface water changes from multi-temporal satellite imagery will be demonstrated, as well as its open-source implementation (http://aqua-monitor.deltares.nl). This algorithm was used to estimate global surface water changes at high spatial resolution. These changes reflect climate change, land reclamation, reservoir construction/decommissioning, erosion/accretion, and many other processes. This presentation will demonstrate how open satellite data and open platforms such as Google Earth Engine have helped with this research.

  7. Accelerating object detection via a visual-feature-directed search cascade: algorithm and field programmable gate array implementation

    NASA Astrophysics Data System (ADS)

    Kyrkou, Christos; Theocharides, Theocharis

    2016-07-01

    Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, in both the accuracy of the detection algorithms as well as the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade composed of motion detection, depth computation, and edge detection, can have a significant impact in reducing the data that needs to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate data search reduction of up to 95%, which results in the system being able to process up to 50 1024×768 pixels images per second with a significantly reduced number of false positives.

  8. Towards the Automatic Detection of Pre-Existing Termite Mounds through UAS and Hyperspectral Imagery.

    PubMed

    Sandino, Juan; Wooler, Adam; Gonzalez, Felipe

    2017-09-24

    The increased technological development of Unmanned Aerial Vehicles (UAVs), combined with artificial intelligence and Machine Learning (ML) approaches, has opened the possibility of remote sensing of extensive areas of arid lands. In this paper, a novel approach to the detection of termite mounds using a UAV, hyperspectral imagery, ML and digital image processing is presented. A new pipeline process is proposed to detect termite mounds automatically and, consequently, to reduce detection times. For the classification stage, the outcomes of several ML classification algorithms were studied, with support vector machines selected as the best approach for image classification of pre-existing termite mounds. Various test conditions were applied to the proposed algorithm, obtaining an overall accuracy of 68%. Images with satisfactory mound detection proved that the method is "resolution-dependent". Mounds were detected regardless of their rotation and position in the aerial image. However, image distortion reduced the number of detected mounds due to the inclusion of a shape analysis method in the object detection phase, and image resolution is still determinant for obtaining accurate results. Hyperspectral imagery demonstrated better capabilities for classifying a large set of materials than implementing traditional segmentation methods on RGB images only.

  9. Towards the Mitigation of Correlation Effects in the Analysis of Hyperspectral Imagery with Extensions to Robust Parameter Design

    DTIC Science & Technology

    2012-08-01

    …anomaly detection algorithms are contrasted and implemented, and explains the use of the Normalized Difference Vegetation Index (NDVI) in post…

  10. Quantum Image Processing and Its Application to Edge Detection: Theory and Experiment

    NASA Astrophysics Data System (ADS)

    Yao, Xi-Wei; Wang, Hengyan; Liao, Zeyang; Chen, Ming-Cheng; Pan, Jian; Li, Jun; Zhang, Kechao; Lin, Xingcheng; Wang, Zhehui; Luo, Zhihuang; Zheng, Wenqiang; Li, Jianzhong; Zhao, Meisheng; Peng, Xinhua; Suter, Dieter

    2017-07-01

    Processing of digital images is continuously gaining in volume and relevance, with concomitant demands on data storage, transmission, and processing power. Encoding the image information in quantum-mechanical systems instead of classical ones and replacing classical with quantum information processing may alleviate some of these challenges. By encoding and processing the image information in quantum-mechanical systems, we here demonstrate the framework of quantum image processing, where a pure quantum state encodes the image information: we encode the pixel values in the probability amplitudes and the pixel positions in the computational basis states. Our quantum image representation reduces the required number of qubits compared to existing implementations, and we present image processing algorithms that provide exponential speed-up over their classical counterparts. For the commonly used task of detecting the edge of an image, we propose and implement a quantum algorithm that completes the task with only one single-qubit operation, independent of the size of the image. This demonstrates the potential of quantum image processing for highly efficient image and video processing in the big data era.
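
    As a hedged illustration of the encoding described above (notation ours, not taken verbatim from the paper), an image of 2^n pixels with values f(x) can be stored in n qubits as

    ```latex
    % Amplitude encoding of an image with pixel values f(x), x = 0, ..., 2^n - 1
    \left|I\right\rangle \;=\; \frac{1}{\sqrt{\sum_{x=0}^{2^n-1} f(x)^2}}
    \sum_{x=0}^{2^n-1} f(x)\,\left|x\right\rangle ,
    ```

    so the pixel value sits in the probability amplitude and the pixel position in the computational basis state.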

  11. Automated Sperm Head Detection Using Intersecting Cortical Model Optimised by Particle Swarm Optimization.

    PubMed

    Tan, Weng Chun; Mat Isa, Nor Ashidi

    2016-01-01

    In human sperm motility analysis, sperm segmentation plays an important role to determine the location of multiple sperms. To ensure an improved segmentation result, the Laplacian of Gaussian filter is implemented as a kernel in a pre-processing step before applying the image segmentation process to automatically segment and detect human spermatozoa. This study proposes an intersecting cortical model (ICM), which was derived from several visual cortex models, to segment the sperm head region. However, the proposed method suffered from parameter selection; thus, the ICM network is optimised using particle swarm optimization where feature mutual information is introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods. The proposed method resulted in rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, after testing with 1200 sperms. The proposed algorithm is expected to be implemented in analysing sperm motility because of the robustness and capability of this algorithm.

  12. Implementation of combined SVM-algorithm and computer-aided perception feedback for pulmonary nodule detection

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Rannou, Didier; Brennan, Patrick C.

    2012-02-01

    This pilot study examines the effect of a novel decision support system on medical image interpretation. The system combines image spatial frequency properties and eye-tracking data in order to recognize over- and under-calling errors. Before it can be deployed as a detection aid, training is required during which an SVM-based algorithm learns to recognize false positives among all reported outcomes and false negatives among all unreported regions with prolonged dwell. Eight radiologists inspected 50 PA chest radiographs with the specific task of identifying lung nodules. Twenty-five cases contained CT-proven subtle malignant lesions (5-20 mm), but prevalence was not known by the subjects, who took part in two sequential reading sessions, the first without and the second with support-system feedback. MCMR ROC DBM and JAFROC analyses were conducted and demonstrated significantly higher scores following feedback, with p values of 0.04 and 0.03 respectively, highlighting significant improvements in radiology performance once feedback was used. This positive effect on radiologists' performance might have important implications for future CAD-system development.

  13. WordCluster: detecting clusters of DNA words and genomic elements

    PubMed Central

    2011-01-01

    Background Many k-mers (or DNA words) and genomic elements are known to be spatially clustered in the genome. Well established examples are the genes, TFBSs, CpG dinucleotides, microRNA genes and ultra-conserved non-coding regions. Currently, no algorithm exists to find these clusters in a statistically comprehensible way. The detection of clustering often relies on densities and sliding-window approaches or arbitrarily chosen distance thresholds. Results We introduce here an algorithm to detect clusters of DNA words (k-mers), or any other genomic element, based on the distance between consecutive copies and an assigned statistical significance. We implemented the method into a web server connected to a MySQL backend, which also determines the co-localization with gene annotations. We demonstrate the usefulness of this approach by detecting the clusters of CAG/CTG (cytosine contexts that can be methylated in undifferentiated cells), showing that the degree of methylation varies drastically between the inside and outside of the clusters. As another example, we used WordCluster to search for statistically significant clusters of olfactory receptor (OR) genes in the human genome. Conclusions WordCluster seems to predict biologically meaningful clusters of DNA words (k-mers) and genomic entities. The implementation of the method into a web server is available at http://bioinfo2.ugr.es/wordCluster/wordCluster.php including additional features like the detection of co-localization with gene regions or the annotation enrichment tool for functional analysis of overlapped genes. PMID:21261981
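
    The core idea, grouping occurrences by the distance between consecutive copies, can be sketched as follows; the fixed gap threshold here is a placeholder, whereas WordCluster derives the cutoff from an assigned statistical significance.

    ```python
    def clusters_from_positions(positions, max_gap):
        """Group sorted genomic start positions into clusters whenever the
        distance between consecutive copies is at most max_gap."""
        clusters, current = [], [positions[0]]
        for prev, cur in zip(positions, positions[1:]):
            if cur - prev <= max_gap:
                current.append(cur)
            else:
                clusters.append(current)
                current = [cur]
        clusters.append(current)
        # keep only groups with more than one copy
        return [c for c in clusters if len(c) > 1]

    # Toy example: CAG occurrences on a sequence coordinate axis
    print(clusters_from_positions([10, 14, 19, 500, 505, 2000], max_gap=20))
    # -> [[10, 14, 19], [500, 505]]
    ```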

  14. Damage localization in aluminum plate with compact rectangular phased piezoelectric transducer array

    NASA Astrophysics Data System (ADS)

    Liu, Zenghua; Sun, Kunming; Song, Guorong; He, Cunfu; Wu, Bin

    2016-03-01

    In this work, a detection method for damage in plate-like structures using a compact rectangular phased piezoelectric transducer array of 16 piezoelectric elements is presented. This compact array can not only detect and locate a single defect (through hole) in a plate, but also identify multiple defects (through holes and a surface defect simulated by an iron pillar glued to the plate). The experiments proved that the compact rectangular phased transducer array can cover the full range of the plate structure and perform multiple-defect detection simultaneously. The processing algorithm proposed in this paper contains two parts: signal filtering and damage imaging. The former removes noise from the signals using the continuous wavelet transform, which provides a map of wavelet coefficients from which the narrow-frequency-band signal can easily be extracted. The latter part implements damage detection and localization. In order to accurately locate defects and improve imaging quality, two images were obtained from amplitude and phase information: one with the Total Focusing Method (TFM) and one phase image with the Sign Coherence Factor (SCF). Furthermore, an image compounding technique for the compact rectangular phased piezoelectric transducer array is proposed, in which the compounded image is obtained by combining the TFM image with the SCF image, greatly improving image resolution and contrast.
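
    A schematic delay-and-sum version of the Total Focusing Method used for the amplitude image is given below; the geometry is idealised and the sampling parameters are hypothetical, and the wavelet filtering and SCF phase image are not shown.

    ```python
    import numpy as np

    def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
        """Total Focusing Method on full-matrix-capture data.

        fmc     : array (n_tx, n_rx, n_samples) of A-scans
        elem_x  : x-coordinates of the array elements (placed at z = 0)
        grid_x, grid_z : 1-D image grid coordinates
        c       : wave speed, fs : sampling frequency
        """
        n_tx, n_rx, n_s = fmc.shape
        img = np.zeros((grid_z.size, grid_x.size))
        for iz, z in enumerate(grid_z):
            for ix, x in enumerate(grid_x):
                # travel distance from each element to the focal point (x, z)
                dist = np.hypot(elem_x - x, z)
                acc = 0.0
                for t in range(n_tx):
                    for r in range(n_rx):
                        s = int((dist[t] + dist[r]) / c * fs)  # round-trip delay in samples
                        if s < n_s:
                            acc += fmc[t, r, s]
                img[iz, ix] = abs(acc)
        return img
    ```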

  15. Stereo-Based Region-Growing using String Matching

    NASA Technical Reports Server (NTRS)

    Mandelbaum, Robert; Mintz, Max

    1995-01-01

    We present a novel stereo algorithm based on a coarse texture segmentation preprocessing phase. Matching is performed using a string comparison. Matching sub-strings correspond to matching sequences of textures. Inter-scanline clustering of matching sub-strings yields regions of matching texture. The shape of these regions yields information concerning an object's height, width and azimuthal position relative to the camera pair. Hence, rather than the standard dense depth map, the output of this algorithm is a segmentation of objects in the scene. Such a format is useful for the integration of stereo with other sensor modalities on a mobile robotic platform. It is also useful for localization; the height and width of a detected object may be used for landmark recognition, while depth and relative azimuthal location determine pose. The algorithm does not rely on the monotonicity of order of image primitives. Occlusions, exposures, and foreshortening effects are not problematic. The algorithm can deal with certain types of transparencies. It is computationally efficient, and very amenable to parallel implementation. Further, the epipolar constraints may be relaxed to some small but significant degree. A version of the algorithm has been implemented and tested on various types of images. It performs best on random dot stereograms, on images with easily filtered backgrounds (as in synthetic images), and on real scenes with uncontrived backgrounds.

  16. Applying FastSLAM to Articulated Rovers

    NASA Astrophysics Data System (ADS)

    Hewitt, Robert Alexander

    This thesis presents the navigation algorithms designed for use on Kapvik, a 30 kg planetary micro-rover built for the Canadian Space Agency; the simulations used to test the algorithm; and novel techniques for terrain classification using Kapvik's LIDAR (Light Detection And Ranging) sensor. Kapvik implements a six-wheeled, skid-steered, rocker-bogie mobility system. This warrants a more complicated kinematic model for navigation than a typical 4-wheel differential drive system. The design of a 3D navigation algorithm is presented that includes nonlinear Kalman filtering and Simultaneous Localization and Mapping (SLAM). A neural network for terrain classification is used to improve navigation performance. Simulation is used to train the neural network and validate the navigation algorithms. Real world tests of the terrain classification algorithm validate the use of simulation for training and the improvement to SLAM through the reduction of extraneous LIDAR measurements in each scan.

  17. Switching algorithm for maglev train double-modular redundant positioning sensors.

    PubMed

    He, Ning; Long, Zhiqiang; Xue, Song

    2012-01-01

    High-resolution positioning for maglev trains is implemented by detecting the tooth-slot structure of the long stator installed along the rail, but there are large joint gaps between long stator sections. When a positioning sensor is below a large joint gap, its positioning signal is invalidated, thus double-modular redundant positioning sensors are introduced into the system. This paper studies switching algorithms for these redundant positioning sensors. At first, adaptive prediction is applied to the sensor signals. The prediction errors are used to trigger sensor switching. In order to enhance the reliability of the switching algorithm, wavelet analysis is introduced to suppress measuring disturbances without weakening the signal characteristics reflecting the stator joint gap based on the correlation between the wavelet coefficients of adjacent scales. The time delay characteristics of the method are analyzed to guide the algorithm simplification. Finally, the effectiveness of the simplified switching algorithm is verified through experiments.
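
    The switching logic can be sketched as below; this is illustrative only, with a simple one-step predictor and a fixed error threshold standing in for the paper's adaptive prediction and wavelet-based disturbance suppression.

    ```python
    def select_sensor(sig_a, sig_b, threshold):
        """Choose, sample by sample, between two redundant positioning signals.

        A sensor is considered invalid when its one-step prediction error
        (here simply the difference from the previous sample) exceeds the
        threshold, as happens when the sensor passes a stator joint gap.
        """
        chosen, active = [], 0          # start with sensor A
        signals = (sig_a, sig_b)
        for k in range(1, len(sig_a)):
            err = abs(signals[active][k] - signals[active][k - 1])
            if err > threshold:         # prediction error too large: switch over
                active = 1 - active
            chosen.append(signals[active][k])
        return chosen
    ```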

  18. Switching Algorithm for Maglev Train Double-Modular Redundant Positioning Sensors

    PubMed Central

    He, Ning; Long, Zhiqiang; Xue, Song

    2012-01-01

    High-resolution positioning for maglev trains is implemented by detecting the tooth-slot structure of the long stator installed along the rail, but there are large joint gaps between long stator sections. When a positioning sensor is below a large joint gap, its positioning signal is invalidated, thus double-modular redundant positioning sensors are introduced into the system. This paper studies switching algorithms for these redundant positioning sensors. At first, adaptive prediction is applied to the sensor signals. The prediction errors are used to trigger sensor switching. In order to enhance the reliability of the switching algorithm, wavelet analysis is introduced to suppress measuring disturbances without weakening the signal characteristics reflecting the stator joint gap based on the correlation between the wavelet coefficients of adjacent scales. The time delay characteristics of the method are analyzed to guide the algorithm simplification. Finally, the effectiveness of the simplified switching algorithm is verified through experiments. PMID:23112657

  19. Particle swarm optimization based space debris surveillance network scheduling

    NASA Astrophysics Data System (ADS)

    Jiang, Hai; Liu, Jing; Cheng, Hao-Wen; Zhang, Yao

    2017-02-01

    The increasing amount of space debris has created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, surveillance tasks for the existing facilities should be scheduled optimally, allocating resources in a manner that most significantly improves the ability to predict and detect events involving affected spacecraft. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the scheduling of tasks of the space debris surveillance network. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and simulation results have demonstrated the effectiveness of the proposed algorithm.
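
    The particle swarm machinery behind such a scheduler follows the standard velocity/position update; the generic sketch below uses a hypothetical objective and unit-hypercube bounds rather than the paper's scheduling model.

    ```python
    import numpy as np

    def pso_minimise(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Plain particle swarm optimisation of `cost` over the unit hypercube."""
        rng = np.random.default_rng(seed)
        x = rng.random((n_particles, dim))          # positions
        v = np.zeros_like(x)                        # velocities
        pbest, pbest_val = x.copy(), np.apply_along_axis(cost, 1, x)
        gbest = pbest[pbest_val.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, 0.0, 1.0)
            vals = np.apply_along_axis(cost, 1, x)
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Toy usage: minimise a quadratic; a real scheduler would encode task
    # assignments in the particle position and score coverage of the debris objects.
    best, best_val = pso_minimise(lambda p: float(np.sum((p - 0.3) ** 2)), dim=5)
    ```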

  20. Multisensor-based human detection and tracking for mobile service robots.

    PubMed

    Bellotto, Nicola; Hu, Huosheng

    2009-02-01

    One of the fundamental issues for service robots is human-robot interaction. In order to perform such tasks and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the onboard laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be very discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even when the robot moves. Furthermore, faces are detected using the robot's camera, and this information is fused with the legs' position using a sequential implementation of the unscented Kalman filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.

  1. Sensor fusion for antipersonnel landmine detection: a case study

    NASA Astrophysics Data System (ADS)

    den Breejen, Eric; Schutte, Klamer; Cremer, Frank

    1999-08-01

    In this paper the multisensor fusion results obtained within the European research project GEODE are presented. The layout of the test lane and the individual sensors used are described. The implementation of the SCOOP algorithm improves the ROC curves, as both the false alarm surface and the number of false alarms are taken into account. The confidence grids produced by the sensor manufacturers are used as input for the different sensor fusion methods implemented: Bayes, Dempster-Shafer, fuzzy probabilities and rules. The mapping of the confidence grids to the input parameters of the fusion methods is an important step. Due to the limited amount of available data, the entire test lane was used for both training and evaluation. All four sensor fusion methods provide better detection results than the individual sensors.
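
    A minimal sketch of the Bayesian option among the fusion methods listed above, assuming the per-sensor confidence grids can be treated as independent evidence; the mapping and calibration step, which the paper highlights as important, is glossed over here.

    ```python
    import numpy as np

    def bayes_fuse(confidences, prior=0.05, eps=1e-6):
        """Fuse per-sensor mine confidences (values in (0, 1)) on a common grid.

        confidences : list of 2-D arrays, one per sensor, all the same shape
        prior       : prior probability that a grid cell contains a mine
        Returns the posterior mine probability per cell under a naive
        independence assumption.
        """
        log_odds = np.log(prior / (1.0 - prior))
        for c in confidences:
            c = np.clip(c, eps, 1.0 - eps)
            log_odds = log_odds + np.log(c / (1.0 - c))   # add each sensor's evidence
        return 1.0 / (1.0 + np.exp(-log_odds))
    ```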

  2. Integrated System Health Management (ISHM) Implementation in Rocket Engine Testing

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Morris, Jon; Turowski, Mark; Franzl, Richard; Walker, Mark; Kapadia, Ravi; Venkatesh, Meera

    2010-01-01

    A pilot operational ISHM capability has been implemented for the E-2 Rocket Engine Test Stand (RETS) and a Chemical Steam Generator (CSG) test article at NASA Stennis Space Center. The implementation currently includes an ISHM computer and a large display in the control room. The paper will address the overall approach, tools, and requirements. It will also address the infrastructure and architecture. Specific anomaly detection algorithms will be discussed regarding leak detection and diagnostics, valve validation, and sensor validation. It will also describe development and use of a Health Assessment Database System (HADS) as a repository for measurements, health, configuration, and knowledge related to a system with ISHM capability. It will conclude with a discussion of user interfaces, and a description of the operation of the ISHM system prior to, during, and after testing.

  3. A parallel implementation of a multisensor feature-based range-estimation method

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond E.; Sridhar, Banavar

    1993-01-01

    There are many proposed vision based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, will require very high processing rates to achieve real time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and shared-memory parallel computer.

  4. A comparative analysis of DBSCAN, K-means, and quadratic variation algorithms for automatic identification of swallows from swallowing accelerometry signals.

    PubMed

    Dudik, Joshua M; Kurosu, Atsuko; Coyle, James L; Sejdić, Ervin

    2015-04-01

    Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and compared the results to a gold standard measure of swallowing duration. Data was collected from 23 subjects that were actively suffering from swallowing difficulties. Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. Copyright © 2015 Elsevier Ltd. All rights reserved.
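
    For orientation, a generic scikit-learn DBSCAN call of the kind compared in the study is shown below; the feature construction, eps and min_samples are placeholders, and the paper's exact preprocessing of the accelerometry signals is not reproduced.

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    # acc: 1-D swallowing accelerometry signal, fs: sampling rate (both hypothetical)
    fs = 4000
    acc = np.random.default_rng(1).standard_normal(fs * 10)

    # Feature per sample: (time, absolute amplitude); dense high-amplitude runs
    # should group into clusters corresponding to candidate swallow segments.
    t = np.arange(acc.size) / fs
    X = np.column_stack([t, np.abs(acc)])

    labels = DBSCAN(eps=0.05, min_samples=20).fit_predict(X)
    swallow_segments = [t[labels == k] for k in set(labels) if k != -1]
    ```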

  5. A Comparative Analysis of DBSCAN, K-Means, and Quadratic Variation Algorithms for Automatic Identification of Swallows from Swallowing Accelerometry Signals

    PubMed Central

    Dudik, Joshua M.; Kurosu, Atsuko; Coyle, James L

    2015-01-01

    Background Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. Methods In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and compared the results to a gold standard measure of swallowing duration. Data was collected from 23 subjects that were actively suffering from swallowing difficulties. Results Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. Conclusions In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. PMID:25658505

  6. Space Debris Detection on the HPDP, a Coarse-Grained Reconfigurable Array Architecture for Space

    NASA Astrophysics Data System (ADS)

    Suarez, Diego Andres; Bretz, Daniel; Helfers, Tim; Weidendorfer, Josef; Utzmann, Jens

    2016-08-01

    Stream processing, widely used in communications and digital signal processing applications, requires high-throughput data processing that is achieved in most cases using Application-Specific Integrated Circuit (ASIC) designs. Lack of programmability is an issue especially in space applications, which use on-board components with long life-cycles requiring application updates. To this end, the High Performance Data Processor (HPDP) architecture integrates an array of coarse-grained reconfigurable elements to provide both flexible and efficient computational power suitable for stream-based data processing applications in space. In this work the capabilities of the HPDP architecture are demonstrated with the implementation of a real-time image processing algorithm for space debris detection in a space-based space surveillance system. The implementation challenges and alternatives are described, making trade-offs to improve performance at the expense of negligible degradation of detection accuracy. The proposed implementation uses over 99% of the available computational resources. Performance estimations based on simulations show that the HPDP can amply match the application requirements.

  7. Multiple-Threshold Event Detection and Other Enhancements to the Virtual Seismologist (VS) Earthquake Early Warning Algorithm

    NASA Astrophysics Data System (ADS)

    Fischer, M.; Caprio, M.; Cua, G. B.; Heaton, T. H.; Clinton, J. F.; Wiemer, S.

    2009-12-01

    The Virtual Seismologist (VS) algorithm is a Bayesian approach to earthquake early warning (EEW) being implemented by the Swiss Seismological Service at ETH Zurich. The application of Bayes’ theorem in earthquake early warning states that the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the on-going earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS algorithm was one of three EEW algorithms involved in the California Integrated Seismic Network (CISN) real-time EEW testing and performance evaluation effort. Its compelling real-time performance in California over the last three years has led to its inclusion in the new USGS-funded effort to develop key components of CISN ShakeAlert, a prototype EEW system that could potentially be implemented in California. A significant portion of VS code development was supported by the SAFER EEW project in Europe. We discuss recent enhancements to the VS EEW algorithm. We developed and continue to test a multiple-threshold event detection scheme, which uses different association / location approaches depending on the peak amplitudes associated with an incoming P pick. With this scheme, an event with sufficiently high initial amplitudes can be declared on the basis of a single station, maximizing warning times for damaging events for which EEW is most relevant. Smaller, non-damaging events, which will have lower initial amplitudes, will require more picks to be declared an event to reduce false alarms. This transforms the VS codes from a regional EEW approach reliant on traditional location estimation (and its requirement of at least 4 picks, as implemented by the Binder Earthworm phase associator) to a hybrid on-site/regional approach capable of providing a continuously evolving stream of EEW information starting from the first P-detection. Offline analysis on Swiss and California waveform datasets indicates that the multiple-threshold approach is faster and more reliable for larger events than the earlier version of the VS codes. This multiple-threshold approach is well-suited for implementation on a wide range of devices, from embedded processor systems installed at seismic stations, to small autonomous networks for local warnings, to large-scale regional networks such as the CISN. In addition, we quantify the influence of systematic use of prior information and Vs30-based corrections for site amplification on VS magnitude estimation performance, and describe how components of the VS algorithm will be integrated into non-EEW standard network processing procedures at CHNet, the national broadband / strong motion network in Switzerland. These enhancements to the VS codes will be transitioned from off-line to real-time testing at CHNet in Europe in the coming months, and will be incorporated into the development of key components of the CISN ShakeAlert prototype system in California.
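
    The Bayesian core of the VS approach can be written schematically (our notation, summarising the description above) as

    ```latex
    % Posterior on magnitude M and location r given the evolving waveform data d(t)
    P\!\left(M,\mathbf{r}\mid d(t)\right) \;\propto\;
    P\!\left(d(t)\mid M,\mathbf{r}\right)\, P\!\left(M,\mathbf{r}\right),
    ```

    where the likelihood evolves as picks and amplitudes arrive and the prior encodes network topology, Gutenberg-Richter statistics, or previously observed seismicity.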

  8. Solar Demon: near real-time solar eruptive event detection on SDO/AIA images

    NASA Astrophysics Data System (ADS)

    Kraaikamp, Emil; Verbeeck, Cis

    Solar flares, dimmings and EUV waves have been observed routinely in extreme ultra-violet (EUV) images of the Sun since 1996. These events are closely associated with coronal mass ejections (CMEs), and therefore provide useful information for early space weather alerts. The Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) generates such a massive dataset that it becomes impossible to find most of these eruptive events manually. Solar Demon is a set of automatic detection algorithms that attempts to solve this problem by providing both near real-time warnings of eruptive events and a catalog of characterized events. Solar Demon has been designed to detect and characterize dimmings, EUV waves, as well as solar flares in near real-time on SDO/AIA data. The detection modules are running continuously at the Royal Observatory of Belgium on both quick-look data and synoptic science data. The output of Solar Demon can be accessed in near real-time on the Solar Demon website, and includes images, movies, light curves, and the numerical evolution of several parameters. Solar Demon is the result of collaboration between the FP7 projects AFFECTS and COMESEP. Flare detections of Solar Demon are integrated into the COMESEP alert system. Here we present the Solar Demon detection algorithms and their output. We will focus on the algorithm and its operational implementation. Examples of interesting flare, dimming and EUV wave events, and general statistics of the detections made so far during solar cycle 24 will be presented as well.

  9. Task Performance with List-Mode Data

    NASA Astrophysics Data System (ADS)

    Caucci, Luca

    This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.
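
    The estimator characterisation mentioned above rests on the standard Fisher information matrix; in generic notation (not specific to the list-mode likelihood used in the dissertation),

    ```latex
    % Fisher information for parameters \theta given log-likelihood \ln L(\theta)
    F_{ij}(\boldsymbol{\theta}) \;=\;
    \mathbb{E}\!\left[\frac{\partial \ln L(\boldsymbol{\theta})}{\partial \theta_i}\,
                      \frac{\partial \ln L(\boldsymbol{\theta})}{\partial \theta_j}\right],
    \qquad
    \operatorname{Cov}(\hat{\boldsymbol{\theta}}) \;\succeq\; F^{-1}(\boldsymbol{\theta}),
    ```

    with the inverse giving the Cramér-Rao bound on the covariance of any unbiased estimator.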

  10. Spot measurement of heart rate based on morphology of PhotoPlethysmoGraphic (PPG) signals.

    PubMed

    Madhan Mohan, P; Nagarajan, V; Vignesh, J C

    2017-02-01

    Due to increasing health consciousness among people, it is imperative to have low-cost health care devices to measure vital parameters like heart rate and arterial oxygen saturation (SpO2). In this paper, an efficient heart-rate monitoring algorithm based on the morphology of photoplethysmography (PPG) signals for measuring spot heart rate (HR), together with its real-time implementation, is proposed. The algorithm performs pre-processing and detects the onsets and systolic peaks of the PPG signal to estimate the heart rate of the subject. Since the algorithm is based on the morphology of the signal, it works well when the subject is not moving, which is a typical test case. The algorithm is therefore developed mainly to measure heart rate in on-demand applications. Real-time experimental results indicate a heart-rate accuracy of 99.5%, mean absolute percentage error (MAPE) of 1.65%, mean absolute error (MAE) of 1.18 BPM and reference closeness factor (RCF) of 0.988. The results further show that the average response time of the algorithm to give the spot HR is 6.85 s, so users need not wait long to see their HR. The hardware implementation results show that the algorithm requires only 18 KBytes of total memory and runs at high speed with 0.85 MIPS. This algorithm can therefore be targeted to low-cost embedded platforms.
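
    A simplified version of such a morphology-based measurement is sketched below, with hypothetical filter settings and SciPy peak picking standing in for the paper's onset and systolic-peak detector.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def spot_heart_rate(ppg, fs):
        """Estimate heart rate (BPM) from a short, motion-free PPG segment."""
        # Band-pass around the cardiac band (0.5-5 Hz); coefficients are illustrative.
        b, a = butter(2, [0.5, 5.0], btype="bandpass", fs=fs)
        clean = filtfilt(b, a, ppg)
        # Systolic peaks: at least 0.33 s apart (i.e. below about 180 BPM)
        peaks, _ = find_peaks(clean, distance=int(0.33 * fs))
        if len(peaks) < 2:
            return None
        ibi = np.diff(peaks) / fs          # inter-beat intervals in seconds
        return 60.0 / ibi.mean()
    ```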

  11. Development and validation of an automated ventilator-associated event electronic surveillance system: A report of a successful implementation.

    PubMed

    Hebert, Courtney; Flaherty, Jennifer; Smyer, Justin; Ding, Jing; Mangino, Julie E

    2018-03-01

    Surveillance is an important tool for infection control; however, this task can often be time-consuming and take away from infection prevention activities. With the increasing availability of comprehensive electronic health records, there is an opportunity to automate these surveillance activities. The objective of this article is to describe the implementation of an electronic algorithm for ventilator-associated events (VAEs) at a large academic medical center. METHODS: This article reports on a 6-month manual validation of a dashboard for VAEs. We developed a computerized algorithm for automatically detecting VAEs and compared the output of this algorithm to the traditional, manual method of VAE surveillance. Manual surveillance by the infection preventionists identified 13 possible and 11 probable ventilator-associated pneumonias (VAPs), and the VAE dashboard identified 16 possible and 13 probable VAPs. The dashboard had 100% sensitivity and 100% accuracy when compared with manual surveillance for possible and probable VAP. We report on the successfully implemented VAE dashboard. Workflow of the infection preventionists was simplified after implementation of the dashboard, with subjective time-savings reported. Implementing a computerized dashboard for VAE surveillance at a medical center with a comprehensive electronic health record is feasible; however, this required significant initial and ongoing work on the part of data analysts and infection preventionists. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  12. An automatic system to detect and extract texts in medical images for de-identification

    NASA Astrophysics Data System (ADS)

    Zhu, Yingxuan; Singh, P. D.; Siddiqui, Khan; Gillam, Michael

    2010-03-01

    Recently, there has been an increasing need to share medical images for research purposes. In order to respect and preserve patient privacy, most medical images are de-identified with respect to protected health information (PHI) before research sharing. Since manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for doctors to remove text from medical images. Many papers have been written on algorithms for text detection and extraction; however, little has been applied to the de-identification of medical images. Since the de-identification system is designed for end users, it should be effective, accurate and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes, while keeping the anatomic structures intact. First, considering that text has a strong contrast with the background, a region-variance-based algorithm is used to detect the text regions. In post-processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. After that, a region-based level set method is used to extract text from the detected text regions. A GUI for the prototype application of the text detection and extraction system was implemented, which shows that our method can detect most of the text in the images. Experimental results validate that our method can detect and extract text in medical images with a 99% recall rate. Future research on this system includes algorithm improvement, performance evaluation, and computation optimization.
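
    The first stage, region-variance-based text detection, can be sketched as follows; the window size and threshold are illustrative, and the geometric constraints and level-set extraction stages are not shown.

    ```python
    import numpy as np

    def text_candidate_mask(img, win=15, var_thr=400.0):
        """Flag high-local-variance pixels as candidate text regions.

        img : 2-D grayscale medical image (burned-in text has strong contrast,
        hence high local variance against the smoother anatomy and background).
        """
        img = img.astype(np.float64)
        pad = win // 2
        padded = np.pad(img, pad, mode="reflect")
        h, w = img.shape
        var = np.empty_like(img)
        for y in range(h):
            for x in range(w):
                block = padded[y:y + win, x:x + win]
                var[y, x] = block.var()
        return var > var_thr
    ```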

  13. Diffusion tensor driven contour closing for cell microinjection targeting.

    PubMed

    Becattini, Gabriele; Mattos, Leonardo S; Caldwell, Darwin G

    2010-01-01

    This article introduces a novel approach to robust automatic detection of unstained living cells in bright-field (BF) microscope images, with the goal of producing a target list for an automated microinjection system. The overall image analysis process is described and includes: preprocessing, ridge enhancement, image segmentation, shape analysis and injection point definition. The developed algorithm implements a new version of anisotropic contour completion (ACC) based on the partial differential equation (PDE) for heat diffusion, which improves the cell segmentation process by elongating the edges only along their tangent direction. The developed ACC algorithm is equivalent to a dilation of the binary edge image with a continuous elliptic structural element that takes into account the local orientation of the contours, preventing extension in the normal direction. Experiments carried out on real images of 10-50 μm CHO-K1 adherent cells show remarkable reliability of the algorithm, with up to 85% success in cell detection and injection point definition.

  14. Automated segmentation of knee and ankle regions of rats from CT images to quantify bone mineral density for monitoring treatments of rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Cruz, Francisco; Sevilla, Raquel; Zhu, Joe; Vanko, Amy; Lee, Jung Hoon; Dogdas, Belma; Zhang, Weisheng

    2014-03-01

    Bone mineral density (BMD) obtained from a CT image is an imaging biomarker used pre-clinically for characterizing the Rheumatoid arthritis (RA) phenotype. We use this biomarker in animal studies for evaluating disease progression and for testing various compounds. In the current setting, BMD measurements are obtained manually by selecting the regions of interest from three-dimensional (3-D) CT images of rat legs, which results in a laborious and low-throughput process. Combining image processing techniques, such as intensity thresholding and skeletonization, with mathematical techniques in curve fitting and curvature calculations, we developed an algorithm for quick, consistent, and automatic detection of joints in large CT data sets. The implemented algorithm has reduced analysis time for a study with 200 CT images from 10 days to 3 days and has improved the robust detection of the obtained regions of interest compared with manual segmentation. This algorithm has been used successfully in over 40 studies.

  15. Vector network analyzer ferromagnetic resonance spectrometer with field differential detection

    NASA Astrophysics Data System (ADS)

    Tamaru, S.; Tsunegi, S.; Kubota, H.; Yuasa, S.

    2018-05-01

    This work presents a vector network analyzer ferromagnetic resonance (VNA-FMR) spectrometer with field differential detection. This technique differentiates the S-parameter by applying a small binary modulation field in addition to the DC bias field to the sample. By setting the modulation frequency sufficiently high, slow sensitivity fluctuations of the VNA, i.e., low-frequency components of the trace noise, which limit the signal-to-noise ratio of the conventional VNA-FMR spectrometer, can be effectively removed, resulting in a very clean FMR signal. This paper presents the details of the hardware implementation and measurement sequence as well as the data processing and analysis algorithms tailored for the FMR spectrum obtained with this technique. Because the VNA measures a complex S-parameter, it is possible to estimate the Gilbert damping parameter from the slope of the phase variation of the S-parameter with respect to the bias field. We show that this algorithm is more robust against noise than the conventional algorithm based on the linewidth.

  16. Sensory processing and world modeling for an active ranging device

    NASA Technical Reports Server (NTRS)

    Hong, Tsai-Hong; Wu, Angela Y.

    1991-01-01

    In this project, we studied world modeling and sensory processing for laser range data. World model data representation and operations were defined. Sensory processing algorithms for point processing and linear feature detection were designed and implemented. The interface between world modeling and sensory processing in the Servo and Primitive levels was investigated and implemented. At the Primitive level, linear feature detectors for edges were also implemented, analyzed and compared. Existing world model representations are surveyed. Also presented is the design and implementation of the Y-frame model, a hierarchical world model. The interfaces between the world model module and the sensory processing module are discussed, as well as the linear feature detectors that were designed and implemented.

  17. A survey of artificial immune system based intrusion detection.

    PubMed

    Yang, Hua; Li, Tao; Hu, Xinlei; Wang, Feng; Zou, Yang

    2014-01-01

    In the area of computer security, Intrusion Detection (ID) is a mechanism that attempts to discover abnormal access to computers by analyzing various interactions. There is a lot of literature about ID, but this study only surveys the approaches based on Artificial Immune System (AIS). The use of AIS in ID is an appealing concept in current techniques. This paper summarizes AIS based ID methods from a new view point; moreover, a framework is proposed for the design of AIS based ID Systems (IDSs). This framework is analyzed and discussed based on three core aspects: antibody/antigen encoding, generation algorithm, and evolution mode. Then we collate the commonly used algorithms, their implementation characteristics, and the development of IDSs into this framework. Finally, some of the future challenges in this area are also highlighted.

  18. On the synchronizability and detectability of random PPM sequences

    NASA Technical Reports Server (NTRS)

    Georghiades, Costas N.; Lin, Shu

    1987-01-01

    The problem of synchronization and detection of random pulse-position-modulation (PPM) sequences is investigated under the assumption of perfect slot synchronization. Maximum-likelihood PPM symbol synchronization and receiver algorithms are derived that make decisions based both on soft as well as hard data; these algorithms are seen to be easily implementable. Bounds derived on the symbol error probability as well as the probability of false synchronization indicate the existence of a rather severe performance floor, which can easily be the limiting factor in the overall system performance. The performance floor is inherent in the PPM format and random data and becomes more serious as the PPM alphabet size Q is increased. A way to eliminate the performance floor is suggested by inserting special PPM symbols in the random data stream.

  19. On the synchronizability and detectability of random PPM sequences

    NASA Technical Reports Server (NTRS)

    Georghiades, Costas N.

    1987-01-01

    The problem of synchronization and detection of random pulse-position-modulation (PPM) sequences is investigated under the assumption of perfect slot synchronization. Maximum likelihood PPM symbol synchronization and receiver algorithms are derived that make decisions based both on soft as well as hard data; these algorithms are seen to be easily implementable. Bounds were derived on the symbol error probability as well as the probability of false synchronization that indicate the existence of a rather severe performance floor, which can easily be the limiting factor in the overall system performance. The performance floor is inherent in the PPM format and random data and becomes more serious as the PPM alphabet size Q is increased. A way to eliminate the performance floor is suggested by inserting special PPM symbols in the random data stream.

  20. A HW-SW Co-Designed System for the Lunar Lander Hazard Detection and Avoidance Breadboarding

    NASA Astrophysics Data System (ADS)

    Palomo, Pedro; Latorre, Antonio; Valle, Carlos; Gomez de Aguero, Sergio; Hagenfeldt, Miguel; Parreira, Baltazar; Lindoso, Almudena; Portela, Marta; Garcia, Mario; San Millan, Enrique; Zharikov, Yuri; Entrena, Luis

    2014-08-01

    This paper presents the HW-SW co-design approach followed to tackle the design of the Hazard Detection and Avoidance (HDA) system breadboarding for the Lunar Lander ESA mission, undertaken given the fact that novel GNC technologies used to promote autonomous systems demand processing capabilities that current (and forthcoming) space processors are not able to satisfy. The paper shows how the current system design has been performed in a process in which the original HDA functionally validated design has been partitioned between SW (deemed for execution in a microprocessor) and HW algorithms (to be executed in an FPGA), considering the performance requirements and resorting to a deep analysis of the algorithms in view of their adequacy to HW or SW implementation.

  1. RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error.

    PubMed

    Huang, Chengqiang; Yang, Youchang; Wu, Bo; Yu, Weize

    2018-06-01

    The sub-pixel arrangement of the RGBG panel and the image with RGB format are different, so an algorithm that converts RGB to RGBG is needed to display an image with RGB arrangement on the RGBG panel. However, in published studies of this conversion, information loss remains large even though color fringing artifacts are weakened. In this paper, an RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error (EDMSE) is proposed. The main points of innovation include the following: (1) edge detection is first proposed to distinguish image details with serious color fringing artifacts from image details that are prone to be lost in the RGB-RGBG conversion; (2) for image details with serious color fringing artifacts, a weighting factor of 0.5 is applied to weaken the color fringing artifacts; and (3) for image details that are prone to be lost in the RGB-RGBG conversion, a special mechanism to minimize square error is proposed. The experiments show that color fringing artifacts are slightly reduced by EDMSE, and the MSE values of the processed image are 19.6% and 7% smaller than those obtained with the direct-assignment and weighting-factor algorithms, respectively. The proposed algorithm is implemented on a field programmable gate array to enable image display on the RGBG panel.

  2. A high performance parallel computing architecture for robust image features

    NASA Astrophysics Data System (ADS)

    Zhou, Renyan; Liu, Leibo; Wei, Shaojun

    2014-03-01

    A design of a parallel architecture for image feature detection and description is proposed in this article. The major component of this architecture is a 2D cellular network composed of simple reprogrammable processors, enabling the Hessian blob detector and Haar response calculation, which are the most computing-intensive stages of the Speeded Up Robust Features (SURF) algorithm. Combining this 2D cellular network and dedicated hardware for SURF descriptors, this architecture achieves real-time image feature detection with minimal software in the host processor. A prototype FPGA implementation of the proposed architecture achieves 1318.9 GOPS of general pixel processing at a 100 MHz clock and achieves up to 118 fps in VGA (640 × 480) image feature detection. The proposed architecture is stand-alone and scalable, so it is easy to migrate to a VLSI implementation.

  3. The Impact of a Line Probe Assay Based Diagnostic Algorithm on Time to Treatment Initiation and Treatment Outcomes for Multidrug Resistant TB Patients in Arkhangelsk Region, Russia.

    PubMed

    Eliseev, Platon; Balantcev, Grigory; Nikishova, Elena; Gaida, Anastasia; Bogdanova, Elena; Enarson, Donald; Ornstein, Tara; Detjen, Anne; Dacombe, Russell; Gospodarevskaya, Elena; Phillips, Patrick P J; Mann, Gillian; Squire, Stephen Bertel; Mariandyshev, Andrei

    2016-01-01

    In the Arkhangelsk region of Northern Russia, multidrug-resistant (MDR) tuberculosis (TB) rates in new cases are amongst the highest in the world. In 2014, MDR-TB rates reached 31.7% among new cases and 56.9% among retreatment cases. The development of new diagnostic tools allows for faster detection of both TB and MDR-TB and should lead to reduced transmission by earlier initiation of anti-TB therapy. The PROVE-IT (Policy Relevant Outcomes from Validating Evidence on Impact) Russia study aimed to assess the impact of the implementation of line probe assay (LPA) as part of an LPA-based diagnostic algorithm for patients with presumptive MDR-TB, focusing on time to treatment initiation (time from the first care-seeking visit to the initiation of MDR-TB treatment) rather than diagnostic accuracy as the primary outcome, and to assess treatment outcomes. We hypothesized that the implementation of LPA would result in faster time to treatment initiation and better treatment outcomes. A culture-based diagnostic algorithm used prior to LPA implementation was compared to an LPA-based algorithm that replaced BacTAlert and Löwenstein Jensen (LJ) for drug sensitivity testing. A total of 295 MDR-TB patients were included in the study, 163 diagnosed with the culture-based algorithm and 132 with the LPA-based algorithm. Among smear positive patients, the implementation of the LPA-based algorithm was associated with a median decrease in time to MDR-TB treatment initiation of 50 and 66 days compared to the culture-based algorithm (BacTAlert and LJ respectively, p<0.001). In smear negative patients, the LPA-based algorithm was associated with a median decrease in time to MDR-TB treatment initiation of 78 days when compared to the culture-based algorithm (LJ, p<0.001). However, several weeks were still needed for treatment initiation with the LPA-based algorithm: 24 days in smear positive and 62 days in smear negative patients. Overall treatment outcomes were better with the LPA-based algorithm than with the culture-based algorithm (p = 0.003). Treatment success rates at 20 months of treatment were higher in patients diagnosed with the LPA-based algorithm (65.2%) as compared to those diagnosed with the culture-based algorithm (44.8%). Mortality was also lower in the LPA-based algorithm group (7.6%) compared to the culture-based algorithm group (15.9%). There was no statistically significant difference in smear and culture conversion rates between the two algorithms. The results of the study suggest that the introduction of LPA leads to faster MDR diagnosis and earlier treatment initiation as well as better treatment outcomes for patients with MDR-TB. These findings also highlight the need for further improvements within the health system to reduce both patient and diagnostic delays to truly optimize the impact of new, rapid diagnostics.

  4. Image-based spectroscopy for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Bachmakov, Eduard; Molina, Carolyn; Wynne, Rosalind

    2014-03-01

    An image-processing algorithm for use with a nano-featured spectrometer chemical agent detection configuration is presented. The spectrometer chip, acquired from Nano-Optic Devices™, can reduce the spectrometer down to the size of a coin. The nanospectrometer chip was aligned with a 635 nm laser source, objective lenses, and a CCD camera. The images from the nanospectrometer chip were collected and compared to reference spectra. Random background noise contributions were isolated and removed from the diffraction pattern image analysis via a threshold filter. Results are provided for the image-based detection of the diffraction pattern produced by the nanospectrometer. The featured PCF spectrometer has the potential to measure optical absorption spectra in order to detect trace amounts of contaminants. MATLAB tools allow for implementation of intelligent, automatic detection of the relevant sub-patterns in the diffraction patterns and subsequent extraction of the parameters using region-detection algorithms such as the generalized Hough transform, which detects specific shapes within the image. This transform is a method for detecting curves by exploiting the duality between points on a curve and parameters of that curve. By employing this image-processing technique, future sensor systems will benefit from new applications such as unsupervised environmental monitoring of air or water quality.

  5. Optical Follow-up of Gravitational-wave Events with Las Cumbres Observatory

    NASA Astrophysics Data System (ADS)

    Arcavi, Iair; McCully, Curtis; Hosseinzadeh, Griffin; Howell, D. Andrew; Vasylyev, Sergiy; Poznanski, Dovi; Zaltzman, Michael; Maoz, Dan; Singer, Leo; Valenti, Stefano; Kasen, Daniel; Barnes, Jennifer; Piran, Tsvi; Fong, Wen-fai

    2017-10-01

    We present an implementation of the Gehrels et al. galaxy-targeted strategy for gravitational-wave (GW) follow-up using the Las Cumbres Observatory global network of telescopes. We use the Galaxy List for the Advanced Detector Era (GLADE) galaxy catalog, which we show is complete (with respect to a Schechter function) out to ˜300 Mpc for galaxies brighter than the median Schechter function galaxy luminosity. We use a prioritization algorithm to select the galaxies with the highest chance of containing the counterpart given their luminosity, their position, and their distance relative to a GW localization, and in which we are most likely to detect a counterpart given its expected brightness compared to the limiting magnitude of our telescopes. This algorithm can be easily adapted to any expected transient parameters and telescopes. We implemented this strategy during the second Advanced Detector Observing Run (O2) and followed the black hole merger GW170814 and the neutron star merger GW170817. For the latter, we identified an optical kilonova/macronova counterpart thanks to our algorithm selecting the correct host galaxy fifth in its ranked list among the 182 galaxies we identified in the Laser Interferometer Gravitational-wave Observatory (LIGO)/Virgo localization. This also allowed us to obtain some of the earliest observations of the first optical transient ever triggered by a GW detection (as presented in a companion paper).

  6. Optical Follow-up of Gravitational-wave Events with Las Cumbres Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arcavi, Iair; McCully, Curtis; Hosseinzadeh, Griffin

    We present an implementation of the Gehrels et al. galaxy-targeted strategy for gravitational-wave (GW) follow-up using the Las Cumbres Observatory global network of telescopes. We use the Galaxy List for the Advanced Detector Era (GLADE) galaxy catalog, which we show is complete (with respect to a Schechter function) out to ∼300 Mpc for galaxies brighter than the median Schechter function galaxy luminosity. We use a prioritization algorithm to select the galaxies with the highest chance of containing the counterpart given their luminosity, their position, and their distance relative to a GW localization, and in which we are most likely to detect a counterpart given its expected brightness compared to the limiting magnitude of our telescopes. This algorithm can be easily adapted to any expected transient parameters and telescopes. We implemented this strategy during the second Advanced Detector Observing Run (O2) and followed the black hole merger GW170814 and the neutron star merger GW170817. For the latter, we identified an optical kilonova/macronova counterpart thanks to our algorithm selecting the correct host galaxy fifth in its ranked list among the 182 galaxies we identified in the Laser Interferometer Gravitational-wave Observatory (LIGO)/Virgo localization. This also allowed us to obtain some of the earliest observations of the first optical transient ever triggered by a GW detection (as presented in a companion paper).

  7. A 16-Channel Nonparametric Spike Detection ASIC Based on EC-PC Decomposition.

    PubMed

    Wu, Tong; Xu, Jian; Lian, Yong; Khalili, Azam; Rastegarnia, Amir; Guan, Cuntai; Yang, Zhi

    2016-02-01

    In extracellular neural recording experiments, detecting neural spikes is an important step for reliable information decoding. A successful implementation in integrated circuits can achieve substantial data volume reduction, potentially enabling wireless operation and closed-loop systems. In this paper, we report a 16-channel neural spike detection chip based on a customized spike detection method named the exponential component-polynomial component (EC-PC) algorithm. This algorithm features reliable prediction of spikes by applying a probability threshold. The chip takes raw data as input and outputs three data streams simultaneously: field potentials, band-pass filtered neural data, and spiking probability maps. The algorithm parameters are configured on-chip automatically based on the input data, which avoids manual parameter tuning. The chip has been tested with both in vivo experiments for functional verification and bench-top experiments for quantitative performance assessment. The system has a total power consumption of 1.36 mW and occupies an area of 6.71 mm² for 16 channels. When tested on synthesized datasets with spikes and noise segments extracted from in vivo preparations and scaled according to required precisions, the chip outperforms other detectors. A credit-card-sized prototype board has been developed to provide power and data management through a USB port.
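
    Since the abstract does not give the EC-PC decomposition itself, the sketch below illustrates the general idea of automatic, threshold-based spike detection with a plainly substituted, simpler technique (a nonlinear-energy-operator detector with a data-driven threshold); the gain k and refractory period are illustrative assumptions.

      import numpy as np

      def detect_spikes_neo(x, fs, k=8.0, refractory_ms=1.0):
          """Generic NEO threshold spike detector (not the EC-PC algorithm)."""
          # The Teager/nonlinear energy operator emphasizes short transients such as spikes.
          neo = x[1:-1] ** 2 - x[:-2] * x[2:]
          thr = k * np.mean(np.abs(neo))        # data-driven threshold
          above = np.where(neo > thr)[0] + 1    # candidate sample indices

          # Enforce a refractory period so each spike is reported once.
          spikes, last, gap = [], -np.inf, int(refractory_ms * 1e-3 * fs)
          for idx in above:
              if idx - last > gap:
                  spikes.append(idx)
                  last = idx
          return np.array(spikes)

      # Toy usage: Gaussian noise plus two injected spikes at 25 kHz sampling.
      fs = 25000
      x = 0.1 * np.random.randn(fs)
      x[5000] += 2.0
      x[12000] += 2.0
      print(detect_spikes_neo(x, fs))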

  8. Implementation theory of distortion-invariant pattern recognition for optical and digital signal processing systems

    NASA Astrophysics Data System (ADS)

    Lhamon, Michael Earl

    A pattern recognition system that uses complex correlation filter banks requires proportionally more computational effort than single real-valued filters. This increases the computational burden but also introduces a higher level of parallelism that common computing platforms fail to identify. As a result, we consider algorithm mapping to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as vector inner product operators, that require less computational effort than traditional fast Fourier methods. These algorithms do not need correlation and map readily onto parallel digital architectures, which suggests new architectures for optical processors. These filters exploit circulant-symmetric matrix structures of the training set data representing a variety of distortions. By using the same mathematical basis as the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern. The orientation of the dot pattern is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields the feature reduction necessary for using other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems. Such an architecture is classified as a data flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulty implementing full-complex filter structures. Typically, optical systems (like 4f correlators) are limited to phase-only implementations with lower detection performance than full-complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full-complex filtering. Optical filter-bank implementation is possible and has the advantage of time-averaging the entire filter bank at real-time rates. Time-averaged optical filtering is computationally comparable to billions of digital operations per second. For this reason, we believe future trends in high-speed pattern recognition will involve hybrid architectures of both optical and DSP elements.

  9. Diagnosis of paediatric HIV infection in a primary health care setting with a clinical algorithm.

    PubMed Central

    Horwood, C.; Liebeschuetz, S.; Blaauw, D.; Cassol, S.; Qazi, S.

    2003-01-01

    OBJECTIVE: To determine the validity of an algorithm used by primary care health workers to identify children with symptomatic human immunodeficiency virus (HIV) infection. This HIV algorithm is being implemented in South Africa as part of the Integrated Management of Childhood Illness (IMCI), a strategy that aims to reduce childhood morbidity and mortality by improving care at the primary care level. As AIDS is a leading cause of death in children in southern Africa, diagnosis and management of symptomatic HIV infection was added to the existing IMCI algorithm. METHODS: In total, 690 children who attended the outpatient department of a district hospital in South Africa were assessed with the HIV algorithm and by a paediatrician. All children were then tested for HIV viral load. The validity of the algorithm in detecting symptomatic HIV was compared with clinical diagnosis by a paediatrician and the result of an HIV test. Detailed clinical data were used to improve the algorithm. FINDINGS: Overall, 198 (28.7%) enrolled children were infected with HIV. The paediatrician correctly identified 142 (71.7%) children infected with HIV, whereas the IMCI/HIV algorithm identified 111 (56.1%). Odds ratios were calculated to identify predictors of HIV infection and used to develop an improved HIV algorithm that is 67.2% sensitive and 81.5% specific in clinically detecting HIV infection. CONCLUSIONS: Children with symptomatic HIV infection can be identified effectively by primary-level health workers through the use of an algorithm. The improved HIV algorithm developed in this study could be used by countries with high prevalences of HIV to enable IMCI practitioners to identify and care for HIV-infected children. PMID:14997238

  10. Radionuclide identification algorithm for organic scintillator-based radiation portal monitor

    NASA Astrophysics Data System (ADS)

    Paff, Marc Gerrit; Di Fulvio, Angela; Clarke, Shaun D.; Pozzi, Sara A.

    2017-03-01

    We have developed an algorithm for on-the-fly radionuclide identification for radiation portal monitors using organic scintillation detectors. The algorithm was demonstrated on experimental data acquired with our pedestrian portal monitor on moving special nuclear material and industrial sources at a purpose-built radiation portal monitor testing facility. The experimental data also included common medical isotopes. The algorithm takes the power spectral density of the cumulative distribution function of the measured pulse height distributions and matches these to reference spectra using a spectral angle mapper. F-score analysis showed that the new algorithm exhibited significant performance improvements over previously implemented radionuclide identification algorithms for organic scintillators. Reliable on-the-fly radionuclide identification would help portal monitor operators more effectively screen out the hundreds of thousands of nuisance alarms they encounter annually due to recent nuclear-medicine patients and cargo containing naturally occurring radioactive material. Portal monitor operators could instead focus on the rare but potentially high impact incidents of nuclear and radiological material smuggling detection for which portal monitors are intended.
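
    The feature pipeline described above (power spectral density of the cumulative distribution of the pulse height spectrum, compared to reference spectra with a spectral angle mapper) can be sketched as follows; the binning, normalization, and synthetic "source" distributions are illustrative assumptions rather than the published implementation.

      import numpy as np

      def signature(pulse_heights, bins=256):
          """PSD of the cumulative distribution of a pulse height spectrum."""
          hist, _ = np.histogram(pulse_heights, bins=bins, density=True)
          cdf = np.cumsum(hist)
          cdf /= cdf[-1]
          psd = np.abs(np.fft.rfft(cdf)) ** 2
          return psd / np.linalg.norm(psd)

      def spectral_angle(a, b):
          """Spectral angle mapper: angle (radians) between two signature vectors."""
          cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
          return np.arccos(np.clip(cos, -1.0, 1.0))

      def identify(measured, references):
          """Return the reference label with the smallest spectral angle."""
          sig = signature(measured)
          return min(references, key=lambda k: spectral_angle(sig, references[k]))

      # Toy usage with synthetic pulse-height samples (not real isotope data).
      rng = np.random.default_rng(0)
      refs = {"source A": signature(rng.gamma(2.0, 1.0, 50000)),
              "source B": signature(rng.gamma(4.0, 1.5, 50000))}
      print(identify(rng.gamma(2.0, 1.0, 20000), refs))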

  11. Fuzzy logic-based approach to detecting a passive RFID tag in an outpatient clinic.

    PubMed

    Min, Daiki; Yih, Yuehwern

    2011-06-01

    This study is motivated by observations on the data collected by radio frequency identification (RFID) readers in a pilot study, which investigated the feasibility of implementing an RFID-based monitoring system in an outpatient eye clinic. The raw RFID data collected from the readers contain noise and missing reads, which prevent us from determining the tag location. In this paper, fuzzy logic-based algorithms are proposed to interpret the raw RFID data and extract accurate information. The proposed algorithms determine the location of an RFID tag by evaluating the possibility of its presence and absence. To evaluate the performance of the proposed algorithms, numerical experiments are conducted using the data observed in the outpatient eye clinic. Experimental results showed that the proposed algorithms outperform the existing static smoothing method in terms of minimizing both false positives and false negatives. Furthermore, the proposed algorithms are applied to a set of simulated data to demonstrate their robustness at various levels of RFID reader reliability.
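
    As a toy illustration of turning noisy reads into presence/absence decisions, the sketch below smooths the read stream into a possibility-of-presence value with separate rise and decay rates and applies two thresholds; the rates and thresholds are invented for the example, and this is not the fuzzy rule base of the paper.

      def estimate_presence(read_flags, rise=0.4, decay=0.15,
                            present_thr=0.7, absent_thr=0.3):
          """Toy possibility-of-presence estimator for noisy RFID reads."""
          presence, state, states = 0.0, "absent", []
          for seen in read_flags:
              # Raise the possibility when the tag is read, let it decay otherwise.
              presence = min(1.0, presence + rise) if seen else max(0.0, presence - decay)
              if presence >= present_thr:
                  state = "present"
              elif presence <= absent_thr:
                  state = "absent"
              states.append(state)
          return states

      # Toy usage: a read pattern with missed reads while the tag is still present.
      print(estimate_presence([1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0]))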

  12. Traffic Pattern Detection Using the Hough Transformation for Anomaly Detection to Improve Maritime Domain Awareness

    DTIC Science & Technology

    2013-12-01

    Programming code in the Python language used in AIS data preprocessing is contained in Appendix A. The MATLAB programming code used to apply the Hough...described in Chapter III is applied to archived AIS data in this chapter. The implementation of the method, including programming techniques used, is...is contained in the second. To provide a proof of concept for the algorithm described in Chapter III, the PYTHON programming language was used for

  13. Qualitative Event-Based Diagnosis: Case Study on the Second International Diagnostic Competition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil

    2010-01-01

    We describe a diagnosis algorithm entered into the Second International Diagnostic Competition. We focus on the first diagnostic problem of the industrial track of the competition, in which a diagnosis algorithm must detect, isolate, and identify faults in an electrical power distribution testbed and provide corresponding recovery recommendations. The diagnosis algorithm embodies a model-based approach centered around qualitative event-based fault isolation. Faults produce deviations in measured values from model-predicted values. The sequence of these deviations is matched to those predicted by the model in order to isolate faults. We augment this approach with model-based fault identification, which determines fault parameters and helps to further isolate faults. We describe the diagnosis approach, provide diagnosis results from running the algorithm on the provided example scenarios, and discuss the issues faced and lessons learned from implementing the approach.

  14. EAGLE Monitors by Collecting Facts and Generating Obligations

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2003-01-01

    We present a rule-based framework, called EAGLE, that has been shown to be capable of defining and implementing a range of finite-trace monitoring logics, including future- and past-time temporal logic, extended regular expressions, real-time and metric temporal logics, interval logics, forms of quantified temporal logics, and so on. A monitor for an EAGLE formula checks whether a finite trace of states satisfies the given formula. We present, in detail, an algorithm for the synthesis of monitors for EAGLE. The algorithm is implemented as a Java application and involves novel techniques for rule definition, manipulation and execution. Monitoring is achieved on a state-by-state basis, avoiding any need to store the input trace of states. Our initial experiments have been successful: EAGLE detected a previously unknown bug while testing a planetary rover controller.

  15. Ten Years of Cloud Optical and Microphysical Retrievals from MODIS

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; King, Michael D.; Wind, Galina; Hubanks, Paul; Arnold, G. Thomas; Amarasinghe, Nandana

    2010-01-01

    The MODIS cloud optical properties algorithm (MOD06/MYD06 for Terra and Aqua MODIS, respectively) has undergone extensive improvements and enhancements since the launch of Terra. These changes have included: improvements in the cloud thermodynamic phase algorithm; substantial changes in the ice cloud light scattering look-up tables (LUTs); a clear-sky restoral algorithm for flagging heavy aerosol and sunglint; greatly improved spectral surface albedo maps, including the spectral albedo of snow by ecosystem; and inclusion of pixel-level uncertainty estimates for cloud optical thickness, effective radius, and water path, derived for three error sources that include the sensitivity of the retrievals to solar and viewing geometries. To improve overall retrieval quality, we have also implemented cloud edge removal and partly cloudy detection (using MOD35 cloud mask 250 m tests), added a supplementary cloud optical thickness and effective radius algorithm over snow and sea ice surfaces and over the ocean, which enables comparison with the "standard" 2.1 μm effective radius retrieval, and added a multi-layer cloud detection algorithm. We will discuss the status of the MOD06 algorithm and show examples of pixel-level (Level-2) cloud retrievals for selected data granules, as well as gridded (Level-3) statistics, notably monthly means and histograms (1D and 2D, with the latter giving correlations between cloud optical thickness and effective radius, and other cloud product pairs).

  16. Cloud-Based Evaluation of Anatomical Structure Segmentation and Landmark Detection Algorithms: VISCERAL Anatomy Benchmarks.

    PubMed

    Jimenez-Del-Toro, Oscar; Muller, Henning; Krenn, Markus; Gruenberg, Katharina; Taha, Abdel Aziz; Winterstein, Marianne; Eggel, Ivan; Foncubierta-Rodriguez, Antonio; Goksel, Orcun; Jakab, Andras; Kontokotsios, Georgios; Langs, Georg; Menze, Bjoern H; Salas Fernandez, Tomas; Schaer, Roger; Walleyo, Anna; Weber, Marc-Andre; Dicente Cid, Yashin; Gass, Tobias; Heinrich, Mattias; Jia, Fucang; Kahl, Fredrik; Kechichian, Razmig; Mai, Dominic; Spanier, Assaf B; Vincent, Graham; Wang, Chunliang; Wyeth, Daniel; Hanbury, Allan

    2016-11-01

    Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease, and automatic tools can help automate parts of their otherwise manual assessment. A cloud-based evaluation framework is presented in this paper, including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud, where participants can access only the training data; the virtual machines are then run privately by the benchmark administrators to objectively compare performance on an unseen common test set. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the participant algorithms' outputs on a larger set of non-manually-annotated medical images, are available to the research community.

  17. Network Exploration and Vulnerability Assessment Using a Combined Blackbox and Whitebox Analysis Approach

    DTIC Science & Technology

    2010-03-01

    Only indexed fragments of this report are available: table-of-contents entries ("Employ NetFlow on Edge Router"; "Implement an Integrated Vulnerability Assessment"), figure captions ("Netflow Information on Unauthorized Connections"; "Algorithm for Detecting ..."; "Information on Traffic Generated by Suspicious Host"), and the sentence fragment "... indicating that an attack has been initiated from this port."

  18. System Framework for a Multi-Band, Multi-Mode Software Defined Radio

    DTIC Science & Technology

    2014-06-01

    Only an indexed fragment of this report is available: "... detection, while the VITA Radio Transport (VRT) protocol over Gigabit Ethernet (GigE) is implemented for the data interface. In addition to the SoC ..." The remainder of the indexed text consists of block-diagram labels (including a Viterbi algorithm and VRT interface assigned to the ARM cores).

  19. Faint Debris Detection by Particle Based Track-Before-Detect Method

    NASA Astrophysics Data System (ADS)

    Uetsuhara, M.; Ikoma, N.

    2014-09-01

    This study proposes a particle method to detect faint debris, which is hardly visible in a single frame, from an image sequence, based on the concept of track-before-detect (TBD). The most widely used detection approach is detect-before-track (DBT), which first detects target signals in each single frame by distinguishing foreground from background intensity and then associates the signals for each target between frames. DBT is capable of tracking bright targets but is limited: it must account for the presence of false signals and has difficulty recovering from false associations. TBD methods, on the other hand, try to track targets without explicitly detecting the signals, then evaluate the goodness of each track to obtain detection results. TBD has an advantage over DBT in detecting weak signals around the background level in a single frame. However, conventional TBD methods for debris detection apply a brute-force search over candidate tracks and then manually select the true one from the candidates. To avoid the significant drawbacks of a brute-force search and a not-fully-automated process, this study proposes a faint debris detection algorithm based on a particle TBD method consisting of sequential update of the target state and a heuristic search for the initial state. The state consists of position, velocity direction and magnitude, and size of the debris in the image at a single frame. The sequential update process is implemented by a particle filter (PF). The PF is an optimal filtering technique that requires an initial distribution of the target state as prior knowledge. An evolutionary algorithm (EA) is utilized to search for the initial distribution. The EA iteratively applies propagation and likelihood evaluation of particles to the same image sequences, and the resulting set of particles is used as the initial distribution of the PF. This paper describes the algorithm of the proposed faint debris detection method. Its performance is demonstrated on image sequences acquired during observation campaigns dedicated to GEO breakup fragments, which are expected to contain a sufficient number of faint debris images. The results indicate the proposed method is capable of tracking faint debris with moderate computational costs at the operational level.
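
    A minimal particle-filter track-before-detect sketch is given below; the state, motion model, and intensity-based likelihood follow the general description above, but the evolutionary initialization is replaced by a uniform prior, and all dimensions and noise levels are illustrative assumptions.

      import numpy as np

      def particle_tbd(frames, n_particles=2000, noise=1.0, seed=0):
          """Minimal particle-filter track-before-detect sketch on an image sequence."""
          rng = np.random.default_rng(seed)
          h, w = frames[0].shape
          # State per particle: (x, y, vx, vy), initialized from a uniform prior.
          p = np.column_stack([rng.uniform(0, w, n_particles),
                               rng.uniform(0, h, n_particles),
                               rng.normal(0, 1, n_particles),
                               rng.normal(0, 1, n_particles)])
          track = []
          for frame in frames:
              # Predict: constant-velocity motion plus process noise.
              p[:, :2] += p[:, 2:]
              p[:, :2] += rng.normal(0, noise, (n_particles, 2))
              xi = np.clip(p[:, 0].astype(int), 0, w - 1)
              yi = np.clip(p[:, 1].astype(int), 0, h - 1)
              # Update: weight proportional to (non-negative) pixel intensity.
              wgt = np.maximum(frame[yi, xi], 1e-12)
              wgt /= wgt.sum()
              track.append((p[:, 0] @ wgt, p[:, 1] @ wgt))   # weighted mean position
              # Resample particles in proportion to their weights.
              p = p[rng.choice(n_particles, n_particles, p=wgt)]
          return track

      # Toy usage: a faint target moving across noisy 64x64 frames.
      rng = np.random.default_rng(1)
      frames = []
      for t in range(10):
          f = 0.2 * rng.random((64, 64))
          f[20 + t, 10 + 2 * t] += 0.5        # faint moving target
          frames.append(f)
      print(particle_tbd(frames)[-1])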

  20. The detection of flaws in austenitic welds using the decomposition of the time-reversal operator

    NASA Astrophysics Data System (ADS)

    Cunningham, Laura J.; Mulholland, Anthony J.; Tant, Katherine M. M.; Gachagan, Anthony; Harvey, Gerry; Bird, Colin

    2016-04-01

    The non-destructive testing of austenitic welds using ultrasound plays an important role in the assessment of the structural integrity of safety critical structures. The internal microstructure of these welds is highly scattering and can lead to the obscuration of defects when investigated by traditional imaging algorithms. This paper proposes an alternative objective method for the detection of flaws embedded in austenitic welds based on the singular value decomposition of the time-frequency domain response matrices. The distribution of the singular values is examined in the cases where a flaw exists and where there is no flaw present. A lower threshold on the singular values, specific to austenitic welds, is derived which, when exceeded, indicates the presence of a flaw. The detection criterion is successfully implemented on both synthetic and experimental data. The datasets arising from welds containing a flaw are further interrogated using the decomposition of the time-reversal operator (DORT) method and the total focusing method (TFM), and it is shown that images constructed via the DORT algorithm typically exhibit a higher signal-to-noise ratio than those constructed by the TFM algorithm.
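
    The core of the detection criterion above (examining the singular value spectrum of an array response matrix) can be sketched as follows; the ratio test and the 16-element toy data are illustrative assumptions, not the weld-specific lower bound derived in the paper.

      import numpy as np

      def flaw_detected(response_matrix, threshold_ratio=0.35):
          """Flag a flaw when the leading singular value dominates the spectrum."""
          s = np.linalg.svd(response_matrix, compute_uv=False)
          s = s / s.sum()                     # normalize the singular value spectrum
          return s[0] > threshold_ratio, s

      # Toy usage: a rank-one "scatterer" buried in random array noise.
      rng = np.random.default_rng(0)
      n = 16                                   # 16-element array -> 16x16 response matrix
      noise = 0.1 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
      steer = np.exp(1j * np.linspace(0, 2 * np.pi, n))
      with_flaw = noise + 0.8 * np.outer(steer, steer)
      print(flaw_detected(noise)[0], flaw_detected(with_flaw)[0])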

  1. An extended Kalman filter for mouse tracking.

    PubMed

    Choi, Hongjun; Kim, Mingi; Lee, Onseok

    2018-05-19

    Animal tracking is an important tool for observing behavior, which is useful in various research areas. Animal specimens can be tracked using dynamic models and observation models that require several types of data. Tracking mice presents several challenges due to their physical characteristics, their unpredictable movement, and cluttered environments. Therefore, we propose a reliable method that uses a detection stage and a tracking stage to successfully track mice. The detection stage detects the surface area of the mouse skin, and the tracking stage implements an extended Kalman filter to estimate the state variables of a nonlinear model. The changes in the overall shape of the mouse are tracked using an oval-shaped tracking model to estimate the parameters of the ellipse. An experiment is conducted to demonstrate the performance of the proposed tracking algorithm using six video sequences showing various types of movement, and the ground truth values for synthetic images are compared to the values generated by the tracking algorithm. A conventional manual tracking method is also applied for comparison across eight experimenters. Furthermore, the effectiveness of the proposed tracking method is demonstrated by applying the tracking algorithm to actual mouse images.
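
    A minimal sketch of extended Kalman filtering applied to a detected centroid is given below, assuming a nonlinear constant-speed motion model and a linear position measurement; the ellipse-shape states and the skin-detection stage of the paper are omitted, and the noise parameters and toy trajectory are illustrative.

      import numpy as np

      def ekf_track(centroids, dt=1.0, q=0.05, r=2.0):
          """EKF sketch: state (x, y, heading, speed), measurement (x, y) centroid."""
          x = np.array([centroids[0][0], centroids[0][1], 0.0, 1.0])
          P = np.eye(4) * 10.0
          Q = np.eye(4) * q
          R = np.eye(2) * r
          H = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])
          estimates = []
          for z in centroids[1:]:
              px, py, th, v = x
              # Predict with the nonlinear motion model and its Jacobian F.
              x = np.array([px + v * np.cos(th) * dt, py + v * np.sin(th) * dt, th, v])
              F = np.array([[1, 0, -v * np.sin(th) * dt, np.cos(th) * dt],
                            [0, 1,  v * np.cos(th) * dt, np.sin(th) * dt],
                            [0, 0, 1, 0],
                            [0, 0, 0, 1]])
              P = F @ P @ F.T + Q
              # Update with the measured centroid.
              y = np.asarray(z, float) - H @ x
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)
              x = x + K @ y
              P = (np.eye(4) - K @ H) @ P
              estimates.append(x[:2].copy())
          return estimates

      # Toy usage: a noisy curved trajectory of detected centroids.
      t = np.linspace(0, np.pi, 40)
      truth = np.column_stack([50 + 30 * np.cos(t), 50 + 30 * np.sin(t)])
      noisy = truth + np.random.randn(*truth.shape) * 1.5
      print(ekf_track(noisy)[-1])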

  2. The detection of flaws in austenitic welds using the decomposition of the time-reversal operator

    PubMed Central

    Cunningham, Laura J.; Mulholland, Anthony J.; Gachagan, Anthony; Harvey, Gerry; Bird, Colin

    2016-01-01

    The non-destructive testing of austenitic welds using ultrasound plays an important role in the assessment of the structural integrity of safety critical structures. The internal microstructure of these welds is highly scattering and can lead to the obscuration of defects when investigated by traditional imaging algorithms. This paper proposes an alternative objective method for the detection of flaws embedded in austenitic welds based on the singular value decomposition of the time-frequency domain response matrices. The distribution of the singular values is examined in the cases where a flaw exists and where there is no flaw present. A lower threshold on the singular values, specific to austenitic welds, is derived which, when exceeded, indicates the presence of a flaw. The detection criterion is successfully implemented on both synthetic and experimental data. The datasets arising from welds containing a flaw are further interrogated using the decomposition of the time-reversal operator (DORT) method and the total focusing method (TFM), and it is shown that images constructed via the DORT algorithm typically exhibit a higher signal-to-noise ratio than those constructed by the TFM algorithm. PMID:27274683

  3. Implementing a C++ Version of the Joint Seismic-Geodetic Algorithm for Finite-Fault Detection and Slip Inversion for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.

    2015-12-01

    The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike from template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ Application Protocol Interface for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface that is being developed will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module, Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.

  4. SU-F-J-72: A Clinical Usable Integrated Contouring Quality Evaluation Software for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, S; Dolly, S; Cai, B

    Purpose: To introduce the Auto Contour Evaluation (ACE) software, a clinically usable, user-friendly, efficient, all-in-one toolbox for automatically identifying common contouring errors in radiotherapy treatment planning using supervised machine learning techniques. Methods: ACE is developed in C# using the Microsoft .NET Framework and Windows Presentation Foundation (WPF) for elegant GUI design and smooth GUI transition animations, through the integration of graphics engines and high dots-per-inch (DPI) settings on modern high-resolution monitors. The industry-standard software design pattern Model-View-ViewModel (MVVM) is chosen as the major architecture of ACE for a neat coding structure, deep modularization, easy maintainability, and seamless communication with other clinical software. ACE consists of 1) a patient data importing module integrated with the clinical patient database server, 2) a module for simultaneously displaying 2D DICOM images and RT structures, 3) a 3D RT structure visualization module using the Visualization Toolkit (VTK) library, and 4) a contour evaluation module using supervised pattern recognition algorithms to detect contouring errors and display detection results. ACE relies on supervised learning algorithms to handle all image processing and data processing jobs. Implementations of the related algorithms are powered by the Accord.NET scientific computing library for better efficiency and effectiveness. Results: ACE can take a patient's CT images and RT structures from commercial treatment planning software via direct user input or from the patient database. All functionalities, including 2D and 3D image visualization and RT contour error detection, have been demonstrated with real clinical patient cases. Conclusion: ACE implements supervised learning algorithms and combines image processing and graphical visualization modules for RT contour verification. ACE has great potential for automated radiotherapy contouring quality verification. Structured with the MVVM pattern, it is highly maintainable and extensible, and supports smooth connections with other clinical software tools.

  5. Maximum-likelihood block detection of noncoherent continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K.; Divsalar, Dariush

    1993-01-01

    This paper examines maximum-likelihood block detection of uncoded full-response CPM over an additive white Gaussian noise (AWGN) channel. Both the maximum-likelihood metrics and the bit-error-probability performance of the associated detection algorithms are considered. The special and popular case of minimum-shift keying (MSK), corresponding to h = 0.5 and a constant-amplitude frequency pulse, is treated separately. The many new receiver structures that result from this investigation can be compared to the traditional ones used in the past, both from the standpoint of simplicity of implementation and optimality of performance.

  6. Automated system for analyzing the activity of individual neurons

    NASA Technical Reports Server (NTRS)

    Bankman, Isaac N.; Johnson, Kenneth O.; Menkes, Alex M.; Diamond, Steve D.; Oshaughnessy, David M.

    1993-01-01

    This paper presents a signal processing system that (1) provides an efficient and reliable instrument for investigating the activity of neuronal assemblies in the brain and (2) demonstrates the feasibility of generating the command signals of prostheses from the activity of relevant neurons in disabled subjects. The system operates online in a fully automated manner and can recognize the transient waveforms of several neurons in extracellular neurophysiological recordings. Optimal algorithms for detection, classification, and resolution of overlapping waveforms are developed and evaluated. Full automation is made possible by an algorithm that can set appropriate decision thresholds and an algorithm that can generate templates on-line. The system is implemented with a fast IBM PC-compatible processor board that allows on-line operation.

  7. A robust color image watermarking algorithm against rotation attacks

    NASA Astrophysics Data System (ADS)

    Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min

    2018-01-01

    A robust digital watermarking algorithm based on the quaternion wavelet transform (QWT) and the discrete cosine transform (DCT) is proposed for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by the QWT, and then the coefficients of the four low-frequency subbands are transformed by the DCT. An original binary watermark, scrambled by an Arnold map and an iterated sine chaotic system, is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extraction. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.
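
    To illustrate the general mid-frequency embedding idea, the sketch below works on the DCT of a single 8x8 luminance block only; the quaternion wavelet stage, the Arnold/chaotic scrambling, and the rotation detection of the paper are omitted, and the chosen coefficient position and embedding strength are assumptions.

      import numpy as np
      from scipy.fft import dct, idct

      def embed_bit(block8x8, bit, strength=8.0):
          """Embed one watermark bit in a mid-frequency DCT coefficient (toy scheme)."""
          c = dct(dct(block8x8.astype(float), axis=0, norm="ortho"), axis=1, norm="ortho")
          c[3, 4] = strength if bit else -strength      # mid-frequency coefficient
          return idct(idct(c, axis=1, norm="ortho"), axis=0, norm="ortho")

      def extract_bit(block8x8):
          c = dct(dct(block8x8.astype(float), axis=0, norm="ortho"), axis=1, norm="ortho")
          return int(c[3, 4] > 0)

      # Toy usage: embed and recover a single bit from a random luminance block.
      rng = np.random.default_rng(0)
      block = rng.integers(0, 256, (8, 8)).astype(float)
      print(extract_bit(embed_bit(block, 1)), extract_bit(embed_bit(block, 0)))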

  8. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles

    PubMed Central

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe

    2017-01-01

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work. PMID:28718788
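
    A simplified version of the registration-and-stacking pipeline described above can be sketched with OpenCV as follows; ORB descriptor matching stands in for the IMU-guided template matching and a RANSAC similarity transform stands in for the paper's geometric model, so this is an illustration of the idea rather than the embedded implementation.

      import cv2
      import numpy as np

      def stack_images(frames):
          """Register short-exposure frames to the first one and average them."""
          ref = frames[0]
          orb = cv2.ORB_create(1000)          # FAST keypoints + binary descriptors
          kp_ref, des_ref = orb.detectAndCompute(ref, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

          acc = ref.astype(np.float32)
          for frame in frames[1:]:
              kp, des = orb.detectAndCompute(frame, None)
              matches = sorted(matcher.match(des, des_ref), key=lambda m: m.distance)[:200]
              src = np.float32([kp[m.queryIdx].pt for m in matches])
              dst = np.float32([kp_ref[m.trainIdx].pt for m in matches])
              # Robustly estimate the frame-to-reference transform, then resample.
              M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
              warped = cv2.warpAffine(frame, M, (ref.shape[1], ref.shape[0]))
              acc += warped.astype(np.float32)
          return (acc / len(frames)).astype(np.uint8)

      # Usage (paths are illustrative):
      # stacked = stack_images([cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in paths])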

  9. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles.

    PubMed

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian

    2017-07-18

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N -th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.

  10. Review of Current Data Exchange Practices: Providing Descriptive Data to Assist with Building Operations Decisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livingood, W.; Stein, J.; Considine, T.

    Retailers who participate in the U.S. Department of Energy Commercial Building Energy Alliances (CBEA) identified the need to enhance communication standards. The means are available to collect massive amounts of building operational data, but CBEA members have difficulty transforming those data into usable information and energy-saving actions. Implementing algorithms for automated fault detection and diagnostics and linking building operational data to computerized maintenance management systems are important steps in the right direction, but they have limited scalability for large building portfolios because the algorithms must be configured for each building.

  11. Determining navigability of terrain using point cloud data.

    PubMed

    Cockrell, Stephanie; Lee, Gregory; Newman, Wyatt

    2013-06-01

    This paper presents an algorithm to identify features of the navigation surface in front of a wheeled robot. Recent advances in mobile robotics have brought about the development of smart wheelchairs to assist disabled people, allowing them to be more independent. These robots have a human occupant and operate in real environments where they must be able to detect hazards like holes, stairs, or obstacles. Furthermore, to ensure safe navigation, wheelchairs often need to locate and navigate on ramps. The algorithm is implemented on data from a Kinect and can effectively identify these features, increasing occupant safety and allowing for a smoother ride.

  12. Posterior error probability in the Mu-2 Sequential Ranging System

    NASA Technical Reports Server (NTRS)

    Coyle, C. W.

    1981-01-01

    An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit, as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and gave false indications of errors on 0.2% of the acquisitions.

  13. Grayscale image segmentation for real-time traffic sign recognition: the hardware point of view

    NASA Astrophysics Data System (ADS)

    Cao, Tam P.; Deng, Guang; Elton, Darrell

    2009-02-01

    In this paper, we study several grayscale-based image segmentation methods for real-time road sign recognition applications on an FPGA hardware platform. The performance of different image segmentation algorithms under different lighting conditions is initially compared using PC simulation. Based on these results and analysis, suitable algorithms are implemented and tested on a real-time FPGA speed sign detection system. Experimental results show that the system using segmented images uses significantly fewer hardware resources on the FPGA while maintaining comparable system performance. The system is capable of processing 60 live video frames per second.

  14. Sign Language Recognition System using Neural Network for Digital Hardware Implementation

    NASA Astrophysics Data System (ADS)

    Vargas, Lorena P.; Barba, Leiner; Torres, C. O.; Mattos, L.

    2011-01-01

    This work presents an image pattern recognition system using a neural network for the identification of sign language for deaf people. The system has several stored images that show specific symbols in this kind of language, which are employed to train a multilayer neural network using a back-propagation algorithm. Initially, the images are processed to adapt them and to improve the discrimination performance of the network; this processing includes filtering, noise reduction and elimination algorithms, as well as edge detection. The system is evaluated using signs whose representation does not include movement.

  15. Audiovisual focus of attention and its application to Ultra High Definition video compression

    NASA Astrophysics Data System (ADS)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to regions of interest, is more efficient than coding in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting factors, namely the complexity and the efficiency of algorithms for FoA detection. One way around these is to use as much information as possible from the scene. Since most video sequences have associated audio and, moreover, in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between the audio and video signal components. The results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder, producing a bitstream that is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high- and ultra-high-definition audiovisual sequences is explored, and the amount of gain in compression efficiency is analyzed.
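
    As an illustration of correlating audio and video dynamics, the sketch below scores each spatial block by the normalized correlation between its motion energy and the audio energy envelope; the block size, energy measures, and frame/audio alignment are illustrative assumptions, not the paper's FoA algorithm.

      import numpy as np

      def av_attention_map(frames, audio, fs, fps, block=16):
          """Toy audiovisual saliency: correlate block motion energy with audio energy."""
          frames = np.asarray(frames, float)
          n_frames, h, w = frames.shape
          # Per-frame audio energy, one value per video frame.
          hop = int(fs / fps)
          a_env = np.array([np.sum(audio[i * hop:(i + 1) * hop] ** 2)
                            for i in range(n_frames)])
          a_env -= a_env.mean()
          # Per-block motion energy over time (frame differences).
          diff = np.abs(np.diff(frames, axis=0))
          diff = np.concatenate([diff[:1], diff], axis=0)   # pad to n_frames
          saliency = np.zeros((h // block, w // block))
          for by in range(h // block):
              for bx in range(w // block):
                  m = diff[:, by*block:(by+1)*block, bx*block:(bx+1)*block].sum(axis=(1, 2))
                  m -= m.mean()
                  denom = np.linalg.norm(a_env) * np.linalg.norm(m)
                  saliency[by, bx] = (a_env @ m) / denom if denom > 0 else 0.0
          return saliency

      # Toy usage: 30 frames of 64x64 video with matching audio at 16 kHz, 25 fps.
      rng = np.random.default_rng(0)
      frames = rng.random((30, 64, 64))
      audio = rng.standard_normal(30 * 640)
      print(av_attention_map(frames, audio, fs=16000, fps=25).shape)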

  16. Unified framework for triaxial accelerometer-based fall event detection and classification using cumulants and hierarchical decision tree classifier.

    PubMed

    Kambhampati, Satya Samyukta; Singh, Vishal; Manikandan, M Sabarimalai; Ramkumar, Barathram

    2015-08-01

    In this Letter, the authors present a unified framework for fall event detection and classification using cumulants extracted from the acceleration (ACC) signals acquired with a single waist-mounted triaxial accelerometer. The main objective of this Letter is to find suitable representative cumulants and classifiers for effectively detecting and classifying different types of fall and non-fall events. The first level of the proposed hierarchical decision tree algorithm implements fall detection using fifth-order cumulants and a support vector machine (SVM) classifier. In the second level, the fall event classification algorithm uses the fifth-order cumulants and the SVM. Finally, human activity classification is performed using the second-order cumulants and the SVM. The detection and classification results are compared with those of decision tree, naive Bayes, multilayer perceptron and SVM classifiers with different types of time-domain features, including the second-, third-, fourth- and fifth-order cumulants and the signal magnitude vector and signal magnitude area. The experimental results demonstrate that the second- and fifth-order cumulant features with the SVM classifier can achieve optimal detection and classification rates of above 95%, as well as the lowest false alarm rate of 1.03%.
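
    The cumulant-plus-SVM idea can be sketched as below, computing second- to fifth-order cumulants of an acceleration-magnitude window from central moments and training a scikit-learn SVM on toy windows; the window construction and labels are synthetic illustrations, not the paper's data or its hierarchical decision tree.

      import numpy as np
      from sklearn.svm import SVC

      def cumulant_features(acc_mag):
          """2nd- to 5th-order cumulants of an acceleration-magnitude window."""
          x = np.asarray(acc_mag, float)
          mu = x - x.mean()
          m2, m3, m4, m5 = (np.mean(mu ** k) for k in (2, 3, 4, 5))
          return np.array([m2,                  # kappa_2 (variance)
                           m3,                  # kappa_3
                           m4 - 3 * m2 ** 2,    # kappa_4
                           m5 - 10 * m3 * m2])  # kappa_5

      # Toy usage: "fall-like" windows contain a brief high-g transient.
      rng = np.random.default_rng(0)
      def window(fall):
          w = rng.normal(1.0, 0.05, 128)            # roughly 1 g of quiet activity
          if fall:
              w[60:66] += rng.uniform(2.0, 3.0)      # brief impact transient
          return w

      X = np.array([cumulant_features(window(i % 2 == 0)) for i in range(200)])
      y = np.array([1 if i % 2 == 0 else 0 for i in range(200)])
      clf = SVC(kernel="rbf", gamma="scale").fit(X[:160], y[:160])
      print("held-out accuracy:", clf.score(X[160:], y[160:]))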

  17. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks, where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA is a medium-sized device equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a hardware description language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
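
    A software sketch of the kind of multi-scale Laplacian-of-Gaussian edge detection mentioned above is given below (SciPy rather than FPGA logic); the scales and the contrast threshold are illustrative.

      import numpy as np
      from scipy import ndimage

      def multiscale_log_edges(image, sigmas=(1.0, 2.0, 4.0), thresh=0.02):
          """Multi-scale Laplacian-of-Gaussian edge map (illustrative parameters)."""
          img = image.astype(float) / 255.0
          edges = np.zeros_like(img, dtype=bool)
          for s in sigmas:
              log = ndimage.gaussian_laplace(img, sigma=s)
              # Mark pixels where the LoG changes sign between horizontal neighbours
              # and has sufficient magnitude.
              zero_cross = np.sign(log[:, :-1]) != np.sign(log[:, 1:])
              strong = np.abs(log[:, :-1]) > thresh
              edges[:, :-1] |= zero_cross & strong
          return edges

      # Toy usage: edges of a bright square on a dark background.
      img = np.zeros((64, 64), dtype=np.uint8)
      img[16:48, 16:48] = 255
      print(int(multiscale_log_edges(img).sum()), "edge pixels found")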

  18. Tools for Accurate and Efficient Analysis of Complex Evolutionary Mechanisms in Microbial Genomes. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakhleh, Luay

    I proposed to develop computationally efficient tools for accurate detection and reconstruction of microbes' complex evolutionary mechanisms, thus enabling rapid and accurate annotation, analysis and understanding of their genomes. To achieve this goal, I proposed to address three aspects. (1) Mathematical modeling. A major challenge facing the accurate detection of HGT is that of distinguishing between these two events on the one hand and other events that have similar "effects." I proposed to develop a novel mathematical approach for distinguishing among these events. Further, I proposed to develop a set of novel optimization criteria for the evolutionary analysis of microbial genomes in the presence of these complex evolutionary events. (2) Algorithm design. In this aspect of the project, I proposed to develop an array of efficient and accurate algorithms for analyzing microbial genomes based on the formulated optimization criteria. Further, I proposed to test the viability of the criteria and the accuracy of the algorithms in an experimental setting using both synthetic and biological data. (3) Software development. I proposed that the final outcome be a suite of software tools that implements the mathematical models as well as the algorithms developed.

  19. A novel angle computation and calibration algorithm of bio-inspired sky-light polarization navigation sensor.

    PubMed

    Xian, Zhiwen; Hu, Xiaoping; Lian, Junxiang; Zhang, Lilian; Cao, Juliang; Wang, Yujie; Ma, Tao

    2014-09-15

    Navigation plays a vital role in our daily life. As traditional and commonly used navigation technologies, the Inertial Navigation System (INS) and the Global Navigation Satellite System (GNSS) can provide accurate location information, but they suffer, respectively, from the accumulated error of inertial sensors and from being unusable in satellite-denied environments. The remarkable navigation ability of animals shows that the polarization pattern of the sky can be used for navigation. A bio-inspired POLarization Navigation Sensor (POLNS) is constructed to detect the polarization of skylight. In contrast to previous approaches, we utilize all the outputs of the POLNS to compute the input polarization angle based on least squares, which provides optimal angle estimation. In addition, a new sensor calibration algorithm is presented, in which the installation angle errors and sensor biases are taken into consideration. The derivation and implementation of our calibration algorithm are discussed in detail. To evaluate the performance of our algorithms, simulations and real-data tests are performed to compare our algorithms with several existing algorithms. The comparison results indicate that our algorithms are superior to the others and are more feasible and effective in practice.
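
    The least-squares idea can be sketched as follows, assuming the common Malus-law channel model I_i = c0 + c1*cos(2*phi_i) + c2*sin(2*phi_i), which is linear in the unknowns; the channel directions and noise level are illustrative, and the calibration terms of the paper are not modeled.

      import numpy as np

      def polarization_angle(outputs, channel_dirs_deg):
          """Least-squares estimate of the input polarization angle (radians)."""
          phi = np.deg2rad(np.asarray(channel_dirs_deg, float))
          # Linear model: I = c0 + c1*cos(2*phi) + c2*sin(2*phi).
          A = np.column_stack([np.ones_like(phi), np.cos(2 * phi), np.sin(2 * phi)])
          c0, c1, c2 = np.linalg.lstsq(A, np.asarray(outputs, float), rcond=None)[0]
          return 0.5 * np.arctan2(c2, c1)

      # Toy usage: six analyzer directions, true angle 30 degrees, 1% noise.
      dirs = [0, 30, 60, 90, 120, 150]
      true = np.deg2rad(30.0)
      I = 1.0 + 0.8 * np.cos(2 * (true - np.deg2rad(np.array(dirs, float))))
      I += 0.01 * np.random.randn(len(dirs))
      print(np.rad2deg(polarization_angle(I, dirs)))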

  20. Efficient audio signal processing for embedded systems

    NASA Astrophysics Data System (ADS)

    Chiu, Leung Kin

    As mobile platforms continue to pack on more computational power, electronics manufacturers start to differentiate their products by enhancing the audio features. However, consumers also demand smaller devices that can operate for longer, hence imposing design constraints. In this research, we investigate two design strategies that would allow us to efficiently process audio signals on embedded systems such as mobile phones and portable electronics. In the first strategy, we exploit properties of the human auditory system to process audio signals. We designed a sound enhancement algorithm to make piezoelectric loudspeakers sound "richer" and "fuller." Piezoelectric speakers have a small form factor but exhibit poor response in the low-frequency region. In the algorithm, we combine psychoacoustic bass extension and dynamic range compression to improve the perceived bass coming out of the tiny speakers. We also developed an audio energy reduction algorithm for loudspeaker power management. The perceptually transparent algorithm extends the battery life of mobile devices and prevents thermal damage in speakers. This method is similar to audio compression algorithms, which encode audio signals in such a way that the compression artifacts are not easily perceivable. Instead of reducing the storage space, however, we suppress the audio content that is below the hearing threshold, thereby reducing the signal energy. In the second strategy, we use low-power analog circuits to process the signal before digitizing it. We designed an analog front-end for sound detection and implemented it on a field-programmable analog array (FPAA). The system is an example of an analog-to-information converter. The sound classifier front-end can be used in a wide range of applications because programmable floating-gate transistors are employed to store the classifier weights. Moreover, we incorporated a feature selection algorithm to simplify the analog front-end. The machine learning algorithm AdaBoost is used to select the most relevant features for a particular sound detection application. In this classifier architecture, we combine simple "base" analog classifiers to form a strong one. We also designed the circuits to implement the AdaBoost-based analog classifier.
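
    A minimal feed-forward dynamic range compressor, of the kind combined with bass extension above, can be sketched as follows; the threshold, ratio, and time constants are illustrative, and the psychoacoustic bass-extension stage is not shown.

      import numpy as np

      def compress(x, threshold_db=-20.0, ratio=4.0, attack=0.01, release=0.1, fs=48000):
          """Minimal feed-forward dynamic range compressor (illustrative only)."""
          a_at = np.exp(-1.0 / (attack * fs))
          a_rel = np.exp(-1.0 / (release * fs))
          env, out = 0.0, np.empty_like(x)
          for n, s in enumerate(x):
              level = abs(s)
              # Smooth the level with different attack and release time constants.
              coeff = a_at if level > env else a_rel
              env = coeff * env + (1.0 - coeff) * level
              level_db = 20.0 * np.log10(max(env, 1e-9))
              # Reduce gain above the threshold, by the compression ratio.
              over = max(0.0, level_db - threshold_db)
              gain_db = -over * (1.0 - 1.0 / ratio)
              out[n] = s * 10.0 ** (gain_db / 20.0)
          return out

      # Toy usage: a 1 kHz tone that jumps from quiet to loud halfway through.
      fs = 48000
      t = np.arange(fs) / fs
      x = np.where(t < 0.5, 0.03, 0.5) * np.sin(2 * np.pi * 1000 * t)
      y = compress(x, fs=fs)
      print(float(np.max(np.abs(x))), float(np.max(np.abs(y))))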
