Sample records for zero-crossing detection algorithm

  1. Mobile/android application for QRS detection using zero cross method

    NASA Astrophysics Data System (ADS)

    Rizqyawan, M. I.; Simbolon, A. I.; Suhendra, M. A.; Amri, M. F.; Kusumandari, D. E.

    2018-03-01

    In automatic ECG signal processing, one of the main topics of research is QRS complex detection. Detecting the correct QRS complex, or R peak, is important since it is used to derive several other ECG metrics. One of the robust methods for QRS detection is the zero-cross method. This method adds a high-frequency signal to the ECG and counts zero crossings to detect the QRS complex, which has a low-frequency oscillation. This paper presents an application of QRS detection using the zero-cross algorithm on an Android-based system. The performance of the algorithm in the mobile environment is measured. The results show that this method is suitable for real-time QRS detection in a mobile application.
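
    The zero-cross step is compact enough to sketch. A minimal illustration, assuming a band-limited single-lead ECG `ecg` sampled at `fs` Hz; the amplitude-tracking window, the scale factor, and the final minimum search are illustrative choices, not the paper's tuned values:

    ```python
    import numpy as np

    def zero_cross_qrs_feature(ecg, fs, win_sec=0.18):
        """Zero-cross QRS feature: add a high-frequency (alternating)
        sequence to the ECG, then count zero crossings in a sliding
        window; the count drops inside the low-frequency QRS complex."""
        x = ecg - np.mean(ecg)                       # crude detrending
        amp = np.convolve(np.abs(x), np.ones(32) / 32, mode="same")
        b = 4.0 * amp * (-1.0) ** np.arange(len(x))  # alternating HF sequence
        z = x + b
        crossing = (np.sign(z[:-1]) * np.sign(z[1:]) < 0).astype(float)
        w = max(1, int(win_sec * fs))
        d = np.convolve(crossing, np.ones(w) / w, mode="same")
        return d  # local minima of d mark QRS candidates
    ```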

  2. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, which effectively makes determining the zero velocity interval play a key role during normal walking. However, as walking gaits are complicated, and vary from person to person, it is difficult to detect walking gaits with a fixed threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of analyzing the characteristics of the pedestrian walk, a single direction angular rate gyro output is used to classify gait features. The angular rate data are modeled into a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm and then the sliding window Viterbi algorithm is used to decode the gait. Walking data are collected through eight subjects walking along the same route at three different speeds; the leave-one-subject-out cross validation method is conducted to test the model. Experimental results show that the proposed algorithm can accurately detect different walking gaits of zero velocity interval. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% when compared to the angular rate threshold method.
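
    As a sketch of the decoding stage only: Viterbi decoding of a four-state left-right chain with univariate Gaussian emissions. The paper models the gyro output per state as a three-component Gaussian mixture and trains with Baum-Welch; single Gaussians and hand-set parameters keep this illustration short.

    ```python
    import numpy as np
    from scipy.stats import norm

    def viterbi_gait(y, A, pi, means, stds):
        """Most likely state sequence for gyro samples y under an HMM with
        Gaussian emissions; one state corresponds to the zero-velocity phase."""
        T, S = len(y), len(pi)
        logB = norm.logpdf(np.asarray(y)[:, None], loc=means, scale=stds)
        logA = np.log(A + 1e-300)
        delta = np.log(pi + 1e-300) + logB[0]
        back = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            cand = delta[:, None] + logA          # score of (prev -> cur)
            back[t] = np.argmax(cand, axis=0)
            delta = np.max(cand, axis=0) + logB[t]
        path = np.empty(T, dtype=int)
        path[-1] = int(np.argmax(delta))
        for t in range(T - 2, -1, -1):
            path[t] = back[t + 1, path[t + 1]]
        return path

    # Left-right structure with a wrap from the last gait phase to the first.
    A = np.array([[0.9, 0.1, 0.0, 0.0],
                  [0.0, 0.9, 0.1, 0.0],
                  [0.0, 0.0, 0.9, 0.1],
                  [0.1, 0.0, 0.0, 0.9]])
    ```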

  3. A Novel Zero Velocity Interval Detection Algorithm for Self-Contained Pedestrian Navigation System with Inertial Sensors

    PubMed Central

    Tian, Xiaochun; Chen, Jiabin; Han, Yongqiang; Shang, Jianyu; Li, Nan

    2016-01-01

    Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms with the premise that the zero velocity interval (ZVI) should be detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner–Ville distribution to remove multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The novel algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and to calculate the optimal ZVI detection threshold in real time by establishing the function relationships between the thresholds and the gait frequency; then, the adaptive adjustment of thresholds with gait frequency is realized and improves the ZVI detection precision. To put it into practice, a ZVI detection experiment is carried out; the result shows that compared with the traditional fixed threshold ZVI detection method, the adaptive ZVI detection algorithm can effectively reduce the false and missed detection rate of ZVI; this indicates that the novel algorithm has high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds are carried out to evaluate the influence of the novel algorithm on positioning precision. The results show that the ZVI detected by the adaptive ZVI detection algorithm for pedestrian trajectory calculation can achieve better performance. PMID:27669266

  4. Zero-block mode decision algorithm for H.264/AVC.

    PubMed

    Lee, Yu-Ming; Lin, Yinyi

    2009-03-01

    In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. The proposed algorithm achieves a significant reduction in computation, but the gain is limited for high bit-rate coding. To improve computation efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation, and which incorporates two adequate decision methods for the semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intramode prediction in P frames. The enhanced zero-block decision algorithm brings an average reduction of 27% in total encoding time compared to the original zero-block decision algorithm.
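
    For orientation, the quantity being counted can be sketched as below. H.264 actually uses a 4x4 integer transform with QP-dependent quantization, so the plain DCT and uniform step size here are simplifying assumptions; the paper's contribution is to predict zero-blocks early instead of running the transform at all.

    ```python
    import numpy as np

    def dct4():
        """Orthonormal 4-point DCT-II basis matrix."""
        C = np.zeros((4, 4))
        for k in range(4):
            a = np.sqrt(0.25) if k == 0 else np.sqrt(0.5)
            C[k] = a * np.cos((2 * np.arange(4) + 1) * k * np.pi / 8)
        return C

    def count_zero_blocks(residual, qstep):
        """Count 4x4 blocks of a residual whose transform coefficients all
        quantize to zero; dimensions are assumed divisible by 4."""
        C = dct4()
        h, w = residual.shape
        zeros = 0
        for i in range(0, h, 4):
            for j in range(0, w, 4):
                coef = C @ residual[i:i + 4, j:j + 4] @ C.T
                if np.all(np.round(coef / qstep) == 0):
                    zeros += 1
        return zeros
    ```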

  5. Collision detection for spacecraft proximity operations

    NASA Technical Reports Server (NTRS)

    Vaughan, Robin M.; Bergmann, Edward V.; Walker, Bruce K.

    1991-01-01

    A new collision detection algorithm has been developed for use when two spacecraft are operating in the same vicinity. The two spacecraft are modeled as unions of convex polyhedra, where the resulting polyhedron may be either convex or nonconvex. The relative motion of the two spacecraft is assumed to be such that one vehicle is moving with constant linear and angular velocity with respect to the other. Contacts between the vertices, faces, and edges of the polyhedra representing the two spacecraft are shown to occur when the value of one or more of a set of functions is zero. The collision detection algorithm is then formulated as a search for the zeros (roots) of these functions. Special properties of the functions for the assumed relative trajectory are exploited to expedite the zero search. The new algorithm is the first algorithm that can solve the collision detection problem exactly for relative motion with constant angular velocity. This is a significant improvement over models of rotational motion used in previous collision detection algorithms.
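
    A generic version of the zero-search step, assuming a scalar contact function f(t) whose sign change marks a contact. The fixed scan step below is a placeholder; the paper derives bounds from the constant linear and angular velocity assumption to guarantee no sign change is skipped.

    ```python
    import numpy as np

    def first_zero(f, t0, t1, n_scan=512, tol=1e-9):
        """Earliest root of a contact function f on [t0, t1]: coarse scan
        for a sign change, then bisection refinement."""
        ts = np.linspace(t0, t1, n_scan)
        vals = np.array([f(t) for t in ts])
        hits = np.where(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0)[0]
        if hits.size == 0:
            return None                    # no contact in the interval
        a, b = ts[hits[0]], ts[hits[0] + 1]
        fa = f(a)
        while b - a > tol:
            m = 0.5 * (a + b)
            fm = f(m)
            if fa * fm <= 0:
                b = m
            else:
                a, fa = m, fm
        return 0.5 * (a + b)
    ```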

  6. A new automated quantification algorithm for the detection and evaluation of focal liver lesions with contrast-enhanced ultrasound.

    PubMed

    Gatos, Ilias; Tsantis, Stavros; Spiliopoulos, Stavros; Skouroliakou, Aikaterini; Theotokas, Ioannis; Zoumpoulis, Pavlos; Hazle, John D; Kagadis, George C

    2015-07-01

    To detect and classify focal liver lesions (FLLs) from contrast-enhanced ultrasound (CEUS) imaging by means of an automated quantification algorithm. The proposed algorithm employs a sophisticated segmentation method to detect and contour focal lesions from 52 CEUS video sequences (30 benign and 22 malignant). Lesion detection uses wavelet transform zero crossings as an initialization step for the Markov random field model that extracts the lesion contour. After FLL detection across frames, the time intensity curve (TIC) is computed, which captures the contrast agent's behavior at all vascular phases with respect to adjacent parenchyma for each patient. From each TIC, eight features were automatically calculated and fed into the support vector machine (SVM) classification algorithm in the design of the image analysis model. With regard to FLL detection accuracy, all lesions detected had an average overlap value of 0.89 ± 0.16 with manual segmentations for all CEUS frame-subsets included in the study. The highest classification accuracy from the SVM model was 90.3%, misdiagnosing three benign and two malignant FLLs, with sensitivity and specificity values of 93.1% and 86.9%, respectively. The proposed quantification system that employs FLL detection and classification algorithms may be of value to physicians as a second-opinion tool for avoiding unnecessary invasive procedures.

  7. PRISM: A Practical Real-Time Imaging Stereo Matcher

    NASA Astrophysics Data System (ADS)

    Nishihara, H. K.

    1984-02-01

    A fast stereo-matching algorithm designed to operate in the presence of noise is described. The algorithm has its roots in the zero-crossing theory of Marr and Poggio but does not explicitly match zero-crossing contours. While these contours are for the most part stably tied to fixed surface locations, some fraction is always perturbed significantly by system noise. Zero-crossing contour based matching algorithms tend to be very sensitive to these local distortions and are prevented from operating well on signals with moderate noise levels even though a substantial amount of information may still be present. The dual representation (regions of constant sign in the ∇²G convolution) persists much further into the noise than does the local geometry of the zero-crossing contours that delimit them. The PRISM system was designed to test this approach. The initial design task of the implementation has been to rapidly detect obstacles in a robotics work space and determine their rough extents and heights. In this case speed and reliability are important but precision is less critical. The system uses a pair of inexpensive vidicon cameras mounted above the workspace of a PUMA robot manipulator. The digitized video signals are fed to a high-speed digital convolver that applies a 32x32 ∇²G operator to the images at a rate of one million pixels per second. Matching is accomplished in software on a Lisp machine, with individual near/far tests taking less than 1/30th of a second. A 36 by 26 matrix of absolute height measurements, in mm, over a 100 pixel disparity range is produced in 30 seconds from image acquisition to final output. Three scales of resolution are used in a coarse-to-fine search. Acknowledgment: This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505 and in part by National Science Foundation Grant 79-23110MCS.

  8. Research on fully distributed optical fiber sensing security system localization algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen

    2013-12-01

    A new fully distributed optical fiber sensing and location technology based on Mach-Zehnder interferometers is studied. In this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of multiple grouped data separately, it not only utilizes the advantages of frequency-analysis methods to determine the most effective data group more accurately, but also meets the requirements of a real-time monitoring system. Supplemented with a short-term energy calculation for each signal group, the most effective data group can be quickly picked out. Finally, the accurate location of the climbing point is achieved through a cross-correlation localization algorithm. The experimental results show that the proposed algorithm can realize accurate location of the climbing point while interference noise from non-climbing behavior is effectively filtered out.
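
    A minimal sketch of the two per-group statistics described above, assuming the sensor trace has already been split into frames of frame_len samples:

    ```python
    import numpy as np

    def frame_features(x, frame_len, hop):
        """Short-time average zero-crossing rate and short-term energy per
        frame, the two quantities used to pick the most effective data
        group before cross-correlation localization."""
        feats = []
        for start in range(0, len(x) - frame_len + 1, hop):
            f = x[start:start + frame_len]
            zcr = np.mean(np.abs(np.diff(np.signbit(f).astype(int))))
            energy = np.mean(f ** 2)
            feats.append((zcr, energy))
        return np.array(feats)  # rows: (zero-crossing rate, energy)
    ```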

  9. Simple tunnel diode circuit for accurate zero crossing timing

    NASA Technical Reports Server (NTRS)

    Metz, A. J.

    1969-01-01

    Tunnel diode circuit, capable of timing the zero crossing point of bipolar pulses, provides effective design for a fast crossing detector. It combines a nonlinear load line with the diode to detect the zero crossing of a wide range of input waveshapes.

  10. A comparison of digital zero-crossing and charge-comparison methods for neutron/γ-ray discrimination with liquid scintillation detectors

    NASA Astrophysics Data System (ADS)

    Nakhostin, M.

    2015-10-01

    In this paper, we have compared the performances of the digital zero-crossing and charge-comparison methods for n/γ discrimination with liquid scintillation detectors at low light outputs. The measurements were performed with a 2″×2″ cylindrical liquid scintillation detector of type BC501A whose outputs were sampled by means of a fast waveform digitizer with 10-bit resolution, 4 GS/s sampling rate and one volt input range. Different light output ranges were measured by operating the photomultiplier tube at different voltages and a new recursive algorithm was developed to implement the digital zero-crossing method. The results of our study demonstrate the superior performance of the digital zero-crossing method at low light outputs when a large dynamic range is measured. However, when the input range of the digitizer is used to measure a narrow range of light outputs, the charge-comparison method slightly outperforms the zero-crossing method. The results are discussed in regard to the effects of the quantization noise and the noise filtration performance of the zero-crossing filter.

  11. Detection of explosives, nerve agents, and illicit substances by zero-energy electron attachment

    NASA Technical Reports Server (NTRS)

    Chutjian, A.; Darrach, M. R.

    2000-01-01

    The Reversal Electron Attachment Detection (READ) method, developed at JPL/Caltech, has been used to detect a variety of substances which have electron-attachment resonances at low and intermediate electron energies. In the case of zero-energy resonances, the cross section (hence attachment probability and instrument sensitivity) is mediated by the so-called s-wave phenomenon, in which the cross sections vary as the inverse of the electron velocity. Hence this is, in the limit of zero electron energy or velocity, one of the rare cases in atomic and molecular physics where one carries out detection via infinite cross sections.

  12. [An improved algorithm for electrohysterogram envelope extraction].

    PubMed

    Lu, Yaosheng; Pan, Jie; Chen, Zhaoxia

    2017-02-01

    Extracting the uterine contraction signal from the abdominal uterine electromyogram (EMG) signal is considered the most promising method to replace the traditional tocodynamometer (TOCO) for detecting uterine contraction activity. The traditional root mean square (RMS) algorithm has only limited value in canceling impulsive noise. In our study, an improved algorithm for uterine EMG envelope extraction was proposed to overcome this problem. Firstly, in our experiment, a zero-crossing detection method was used to separate the bursts of uterine electrical activity from the raw uterine EMG signal. After processing the separated signals with two filtering windows of different widths, we used the traditional RMS algorithm to extract the uterine EMG envelope. To assess the performance of the algorithm, the improved algorithm was compared with two existing intensity of uterine electromyogram (IEMG) extraction algorithms. The results showed that the improved algorithm was better than the traditional ones in eliminating impulsive noise present in the uterine EMG signal. The measurement sensitivity and positive predictive value (PPV) of the improved algorithm were 0.952 and 0.922, respectively, which were not only significantly higher than the corresponding values (0.859 and 0.847) of the first comparison algorithm, but also higher than the values (0.928 and 0.877) of the second. Thus the new method is reliable and effective.
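
    The envelope stage can be sketched as a sliding-window RMS. The paper's improvement applies two windows of different widths after zero-crossing-based burst separation; a single assumed window length is shown here for brevity.

    ```python
    import numpy as np

    def rms_envelope(emg, fs, win_sec=0.12):
        """Sliding-window RMS envelope of a uterine EMG signal."""
        w = max(1, int(win_sec * fs))
        power = np.convolve(np.asarray(emg, float) ** 2,
                            np.ones(w) / w, mode="same")
        return np.sqrt(power)
    ```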

  13. Design and realization of a new algorithm for calculating the absolute position angle based on the incremental encoder

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Yang, Yong-qing; Li, Zhi-guo; Han, Jun-feng; Wei, Yu; Jing, Feng

    2018-02-01

    To address the shortcomings of incremental encoders, whose simple counting process limits repeatability and disturbance rejection, and in connection with their application in a large national project, an electromechanical switch was designed to generate a zero reference and its zero-crossing signal. A coordinate transformation model between the mechanical zero and the electrical zero is given, and an adaptive fast-zeroing algorithm is proposed to meet the requirements of path optimality and of single, fast, and accurate zeroing; the proposed algorithm effectively resolves the contradiction between zeroing accuracy and zeroing time. A test platform was built to verify the effectiveness and robustness of the proposed algorithm. The experimental data show that the accuracy of the algorithm is not influenced by the zeroing speed, with a zeroing error of only 0.0013. The algorithm meets the system requirements for fast and accurate zeroing, and repeated experiments show that it has high robustness.

  14. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    NASA Astrophysics Data System (ADS)

    Sekhar, S. Chandra; Sreenivas, TV

    2004-12-01

    We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).
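
    The underlying zero-crossing IF estimate, with a fixed window for clarity: a real sinusoid crosses zero twice per cycle, so the crossing count over a window approximates frequency. The paper's contribution, choosing win adaptively by the intersection-of-confidence-intervals rule, is not reproduced here.

    ```python
    import numpy as np

    def zc_instantaneous_frequency(x, fs, win):
        """IF estimate (Hz) from zero-crossing counts in a sliding window;
        output has length len(x) - 1."""
        crossing = (np.sign(x[:-1]) * np.sign(x[1:]) < 0).astype(float)
        counts = np.convolve(crossing, np.ones(win), mode="same")
        return counts * fs / (2.0 * win)
    ```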

  15. A Robust Method to Detect Zero Velocity for Improved 3D Personal Navigation Using Inertial Sensors

    PubMed Central

    Xu, Zhengyi; Wei, Jianming; Zhang, Bo; Yang, Weijun

    2015-01-01

    This paper proposes a robust zero velocity (ZV) detector algorithm to accurately calculate stationary periods in a gait cycle. The proposed algorithm adopts an effective gait cycle segmentation method and introduces a Bayesian network (BN) model based on the measurements of inertial sensors and kinesiology knowledge to infer the ZV period. During the detected ZV period, an Extended Kalman Filter (EKF) is used to estimate the error states and calibrate the position error. The experiments reveal that the removal rate of ZV false detections by the proposed method increases 80% compared with traditional method at high walking speed. Furthermore, based on the detected ZV, the Personal Inertial Navigation System (PINS) algorithm aided by EKF performs better, especially in the altitude aspect. PMID:25831086

  16. A Robust Zero-Watermarking Algorithm for Audio

    NASA Astrophysics Data System (ADS)

    Chen, Ning; Zhu, Jie

    2007-12-01

    In traditional watermarking algorithms, the insertion of watermark into the host signal inevitably introduces some perceptible quality degradation. Another problem is the inherent conflict between imperceptibility and robustness. Zero-watermarking technique can solve these problems successfully. Instead of embedding watermark, the zero-watermarking technique extracts some essential characteristics from the host signal and uses them for watermark detection. However, most of the available zero-watermarking schemes are designed for still image and their robustness is not satisfactory. In this paper, an efficient and robust zero-watermarking technique for audio signal is presented. The multiresolution characteristic of discrete wavelet transform (DWT), the energy compression characteristic of discrete cosine transform (DCT), and the Gaussian noise suppression property of higher-order cumulant are combined to extract essential features from the host audio signal and they are then used for watermark recovery. Simulation results demonstrate the effectiveness of our scheme in terms of inaudibility, detection reliability, and robustness.

  17. Infrared small target detection based on directional zero-crossing measure

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangyue; Ding, Qinghai; Luo, Haibo; Hui, Bin; Chang, Zheng; Zhang, Junchao

    2017-12-01

    Infrared small target detection under complex background and low signal-to-clutter ratio (SCR) condition is of great significance to the development on precision guidance and infrared surveillance. In order to detect targets precisely and extract targets from intricate clutters effectively, a detection method based on zero-crossing saliency (ZCS) map is proposed. The original map is first decomposed into different first-order directional derivative (FODD) maps by using FODD filters. Then the ZCS map is obtained by fusing all directional zero-crossing points. At last, an adaptive threshold is adopted to segment targets from the ZCS map. Experimental results on a series of images show that our method is effective and robust for detection under complex backgrounds. Moreover, compared with other five state-of-the-art methods, our method achieves better performance in terms of detection rate, SCR gain and background suppression factor.

  18. Improvement of a picking algorithm for real-time P-wave detection by kurtosis

    NASA Astrophysics Data System (ADS)

    Ishida, H.; Yamada, M.

    2016-12-01

    Earthquake early warning (EEW) requires fast and accurate P-wave detection. The current EEW system in Japan uses the STA/LTA algorithm (Allen, 1978) to detect P-wave arrivals. However, some stations did not trigger during the 2011 Great Tohoku Earthquake due to the emergent onset. In addition, the accuracy of P-wave detection is very important: on August 1, 2016, the EEW issued a false alarm with M9 in the Tokyo region due to thunder noise. To solve these problems, we use a P-wave detection method based on kurtosis statistics, which detects the change in the statistical distribution of the waveform amplitude. This method was developed relatively recently (Saragiotis et al., 2002) and has been used for off-line analysis such as making seismic catalogs. To apply this method to EEW, we need to remove an acausal calculation and enable real-time processing. Here, we propose a real-time P-wave detection method using kurtosis statistics with a noise filter. To avoid false triggering by noise, we incorporated a simple filter to classify seismic signal and noise. Following Kong et al. (2016), we used the interquartile range and zero-cross rate for the classification. The interquartile range is an amplitude measure equal to the middle 50% of amplitudes in a certain time window. The zero-cross rate is a simple frequency measure that counts the number of times the signal crosses the zero baseline. A discriminant function including these measures was constructed by linear discriminant analysis. To test this kurtosis method, we used strong-motion records for 62 earthquakes between April 2005 and July 2015 with seismic intensity greater than or equal to 6-lower on the JMA intensity scale. Records with hypocentral distance < 200 km were used for the analysis. Comparing the error of P-wave detection speed for the STA/LTA and kurtosis methods against manual picks, the median error is 0.13 s for STA/LTA and 0.035 s for the kurtosis method. The kurtosis method tends to be more sensitive to small changes in amplitude. Our approach will contribute to improving the accuracy of earthquake source location determination and of shaking intensity estimation for earthquake early warning.
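
    A sketch of the two ingredients, assuming a single-component strong-motion trace x; window lengths and any discriminant weights are assumptions, not the study's calibrated values:

    ```python
    import numpy as np

    def causal_kurtosis(x, win):
        """Causal sliding-window excess kurtosis: a sharp rise marks the
        change from Gaussian-like noise to an impulsive P-wave onset."""
        out = np.zeros(len(x))
        for i in range(win, len(x)):
            seg = x[i - win:i]
            m, s = seg.mean(), seg.std()
            out[i] = np.mean(((seg - m) / (s + 1e-12)) ** 4) - 3.0
        return out

    def noise_features(seg):
        """Interquartile range and zero-cross rate, the two inputs of the
        linear discriminant used to screen out non-seismic triggers."""
        iqr = np.percentile(seg, 75) - np.percentile(seg, 25)
        zcr = np.mean(np.abs(np.diff(np.signbit(seg).astype(int))))
        return iqr, zcr
    ```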

  19. Design of a Wireless Sensor System with the Algorithms of Heart Rate and Agility Index for Athlete Evaluation

    PubMed Central

    Li, Meina; Kim, Youn Tae

    2017-01-01

    Athlete evaluation systems can effectively monitor daily training and boost performance to reduce injuries. Conventional heart-rate measurement systems can be easily affected by movement artifacts, especially in the case of athletes, since significant noise can be generated by high-intensity activities. To improve the comfort for athletes and the accuracy of monitoring, we have proposed to combine robust heart rate and agility index monitoring algorithms into a single small, light node. A band-pass-filter-based R-wave detection algorithm was developed. The agility index was calculated by preprocessing with band-pass filtering and employing the zero-crossing detection method. The evaluation was conducted under both laboratory and field environments to verify the accuracy and reliability of the algorithm. The heart rate and agility index measurements can be wirelessly transmitted to a personal computer in real time by the ZigBee telecommunication system. The results show that the error rate of the heart rate measurement is within 2%, which is comparable with that of the traditional wired measurement method. The sensitivity of the agility index, which could distinguish the activity speed, changed only slightly. Thus, we confirmed that the developed algorithm could be used in an effective and safe exercise-evaluation system for athletes. PMID:29039763

  20. Adiabatic Quantum Search in Open Systems.

    PubMed

    Wild, Dominik S; Gopalakrishnan, Sarang; Knap, Michael; Yao, Norman Y; Lukin, Mikhail D

    2016-10-07

    Adiabatic quantum algorithms represent a promising approach to universal quantum computation. In isolated systems, a key limitation to such algorithms is the presence of avoided level crossings, where gaps become extremely small. In open quantum systems, the fundamental robustness of adiabatic algorithms remains unresolved. Here, we study the dynamics near an avoided level crossing associated with the adiabatic quantum search algorithm, when the system is coupled to a generic environment. At zero temperature, we find that the algorithm remains scalable provided the noise spectral density of the environment decays sufficiently fast at low frequencies. By contrast, higher order scattering processes render the algorithm inefficient at any finite temperature regardless of the spectral density, implying that no quantum speedup can be achieved. Extensions and implications for other adiabatic quantum algorithms will be discussed.

  1. Improving depth maps of plants by using a set of five cameras

    NASA Astrophysics Data System (ADS)

    Kaczmarek, Adam L.

    2015-03-01

    Obtaining high-quality depth maps and disparity maps with the use of a stereo camera is a challenging task for some kinds of objects. The quality of these maps can be improved by taking advantage of a larger number of cameras. The research on the usage of a set of five cameras to obtain disparity maps is presented. The set consists of a central camera and four side cameras. An algorithm for making disparity maps called multiple similar areas (MSA) is introduced. The algorithm was specially designed for the set of five cameras. Experiments were performed with the MSA algorithm and the stereo matching algorithm based on the sum of sum of squared differences (sum of SSD, SSSD) measure. Moreover, the following measures were included in the experiments: sum of absolute differences (SAD), zero-mean SAD (ZSAD), zero-mean SSD (ZSSD), locally scaled SAD (LSAD), locally scaled SSD (LSSD), normalized cross correlation (NCC), and zero-mean NCC (ZNCC). Algorithms presented were applied to images of plants. Making depth maps of plants is difficult because parts of leaves are similar to each other. The potential usability of the described algorithms is especially high in agricultural applications such as robotic fruit harvesting.
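
    Two of the matching costs named above have closed forms that are easy to state; a sketch for equally sized patches (the MSA algorithm itself is more involved):

    ```python
    import numpy as np

    def zncc(a, b):
        """Zero-mean normalized cross correlation between two patches."""
        a = np.asarray(a, float) - np.mean(a)
        b = np.asarray(b, float) - np.mean(b)
        denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
        return float(np.sum(a * b) / denom) if denom > 0 else 0.0

    def sssd(patches_a, patches_b):
        """Sum over camera pairs of SSD costs (SSSD): with five cameras,
        the SSD maps of the four central/side pairs are summed before
        the best disparity is chosen."""
        return sum(float(np.sum((np.asarray(a, float) - np.asarray(b, float)) ** 2))
                   for a, b in zip(patches_a, patches_b))
    ```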

  2. Multiplexed wavelet transform technique for detection of microcalcification in digitized mammograms.

    PubMed

    Mini, M G; Devassia, V P; Thomas, Tessamma

    2004-12-01

    Wavelet transform (WT) is a potential tool for the detection of microcalcifications, an early sign of breast cancer. This article describes the implementation and evaluates the performance of two novel WT-based schemes for the automatic detection of clustered microcalcifications in digitized mammograms. Employing a one-dimensional WT technique that utilizes the pseudo-periodicity property of image sequences, the proposed algorithms achieve high detection efficiency and low processing memory requirements. The detection is achieved from the parent-child relationship between the zero-crossings [Marr-Hildreth (M-H) detector] / local extrema (Canny detector) of the WT coefficients at different levels of decomposition. The detected pixels are weighted before the inverse transform is computed, and they are segmented by simple global gray level thresholding. Both detectors produce 95% detection sensitivity, even though there are more false positives for the M-H detector. The M-H detector preserves the shape information and provides better detection sensitivity for mammograms containing widely distributed calcifications.

  3. Classification of ECG signal with Support Vector Machine Method for Arrhythmia Detection

    NASA Astrophysics Data System (ADS)

    Turnip, Arjon; Ilham Rizqywan, M.; Kusumandari, Dwi E.; Turnip, Mardi; Sihombing, Poltak

    2018-03-01

    An electrocardiogram is a bioelectric potential record produced by cardiac activity. QRS detection with zero-crossing calculation is one method that can precisely determine the R peak of the QRS wave as part of arrhythmia detection. In this paper, two experimental schemes (2-minute recordings during different activities: relaxed and typing) were conducted. The two experiments yielded accuracy, sensitivity, and positive predictivity of about 100% each for the first experiment, and about 79%, 93%, and 83%, respectively, for the second. Furthermore, a feature set from the MIT-BIH arrhythmia database is evaluated with the support vector machine (SVM) method in the WEKA software. Combining the available attributes in the WEKA algorithm, the result is constant, since all SVM classes go to the normal class with an average accuracy of 88.49%.

  4. Asynchronous timing and Doppler recovery in DSP based DPSK modems for fixed and mobile satellite applications

    NASA Technical Reports Server (NTRS)

    Koblents, B.; Belanger, M.; Woods, D.; Mclane, P. J.

    1993-01-01

    While conventional analog modems employ some kind of clock wave regenerator circuit for synchronous timing recovery, in sampled modem receivers the timing is recovered asynchronously to the incoming data stream, with no adjustment being made to the input sampling rate. All timing corrections are accomplished by digital operations on the sampled data stream, and timing recovery is asynchronous with the uncontrolled input A/D system. A good timing error measurement algorithm is the zero crossing tracker proposed by Gardner. Digital, speech rate (2400 - 4800 bps) M-PSK modem receivers employing Gardner's zero crossing tracker were implemented, tested, and found to achieve BER performance very close to theoretical values on the AWGN channel. Nyquist pulse shaped modem systems with excess bandwidth factors ranging from 60 to 100 percent were considered. We can show that for any symmetric M-PSK signal set Gardner's NDA algorithm is free of pattern jitter for any carrier phase offset for rectangular pulses and for Nyquist pulses having 100 percent excess bandwidth. Also, the Nyquist pulse shaped system is studied on the mobile satellite channel, where Doppler shifts and multipath fading degrade the pi/4-DQPSK signal. Two simple modifications to Gardner's zero crossing tracker enable it to remain useful in the presence of multipath fading.
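
    Gardner's detector itself is a three-sample rule at two samples per symbol; a sketch for a complex baseband sequence x, with the loop filter and interpolator omitted:

    ```python
    import numpy as np

    def gardner_ted(x):
        """Gardner timing error sequence: the mid-symbol (zero-crossing)
        sample multiplies the difference of adjacent on-time samples,
        independently of carrier phase for symmetric constellations."""
        strobes = x[0::2]   # on-time symbol samples
        mids = x[1::2]      # mid-symbol (zero-crossing) samples
        n = min(len(strobes) - 1, len(mids))
        # Re{.} reduces to the familiar real-signal product when x is real.
        return np.real(np.conj(mids[:n]) * (strobes[1:n + 1] - strobes[:n]))
    ```

    In a full receiver each error sample drives a loop filter that adjusts a digital interpolator, which is how an asynchronous, fixed-rate A/D front end can still track symbol timing.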

  5. Pseudorange Measurement Method Based on AIS Signals.

    PubMed

    Zhang, Jingbo; Zhang, Shufang; Wang, Jinpeng

    2017-05-22

    In order to use the existing automatic identification system (AIS) to provide additional navigation and positioning services, a complete pseudorange measurements solution is presented in this paper. Through the mathematical analysis of the AIS signal, the bit-0-phases in the digital sequences were determined as the timestamps. Monte Carlo simulation was carried out to compare the accuracy of the zero-crossing and differential peak, which are two timestamp detection methods in the additive white Gaussian noise (AWGN) channel. Considering the low-speed and low-dynamic motion characteristics of ships, an optimal estimation method based on the minimum mean square error is proposed to improve detection accuracy. Furthermore, the α difference filter algorithm was used to achieve the fusion of the optimal estimation results of the two detection methods. The results show that the algorithm can greatly improve the accuracy of pseudorange estimation under low signal-to-noise ratio (SNR) conditions. In order to verify the effectiveness of the scheme, prototypes containing the measurement scheme were developed and field tests in Xinghai Bay of Dalian (China) were performed. The test results show that the pseudorange measurement accuracy was better than 28 m (σ) without any modification of the existing AIS system.

  6. Study on improved Ip-iq APF control algorithm and its application in micro grid

    NASA Astrophysics Data System (ADS)

    Xie, Xifeng; Shi, Hua; Deng, Haiyingv

    2018-01-01

    In order to enhance the tracking velocity and accuracy of harmonic detection by the ip-iq algorithm, a novel ip-iq control algorithm based on instantaneous reactive power theory is presented. The improved algorithm adds a lead-correction link to adjust the zero point of the detection system, and fuzzy self-tuning adaptive PI control is introduced to dynamically adjust the DC-link voltage, which meets the requirements of harmonic compensation in the micro grid. Simulation and experimental results verify that the proposed method is feasible and effective in the micro grid.

  7. A novel fast phase correlation algorithm for peak wavelength detection of Fiber Bragg Grating sensors.

    PubMed

    Lamberti, A; Vanlanduit, S; De Pauw, B; Berghmans, F

    2014-03-24

    Fiber Bragg Gratings (FBGs) can be used as sensors for strain, temperature and pressure measurements. For this purpose, the ability to determine the Bragg peak wavelength with adequate wavelength resolution and accuracy is essential. However, conventional peak detection techniques, such as the maximum detection algorithm, can yield inaccurate and imprecise results, especially when the Signal to Noise Ratio (SNR) and the wavelength resolution are poor. Other techniques, such as the cross-correlation demodulation algorithm are more precise and accurate but require a considerable higher computational effort. To overcome these problems, we developed a novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of a FBG sensor. This paper analyzes the performance of the FPC algorithm for different values of the SNR and wavelength resolution. Using simulations and experiments, we compared the FPC with the maximum detection and cross-correlation algorithms. The FPC method demonstrated a detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique. Additionally, FPC showed to be about 50 times faster than the cross-correlation. It is therefore a promising tool for future implementation in real-time systems or in embedded hardware intended for FBG sensor interrogation.
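
    For reference, the cross-correlation baseline with sub-sample (parabolic) peak refinement can be sketched as below, assuming reference and current reflection spectra sampled on a uniform wavelength grid of spacing d_lambda; the FPC algorithm itself operates on phase and is not reproduced here.

    ```python
    import numpy as np

    def spectrum_shift(ref, cur, d_lambda):
        """Bragg wavelength shift estimated by cross-correlating the
        current spectrum against a reference, refined by fitting a
        parabola through the correlation peak and its two neighbours."""
        c = np.correlate(cur - np.mean(cur), ref - np.mean(ref), mode="full")
        k = int(np.argmax(c))
        k = max(1, min(k, len(c) - 2))     # keep the 3-point fit interior
        y0, y1, y2 = c[k - 1], c[k], c[k + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
        return (k - (len(ref) - 1) + frac) * d_lambda
    ```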

  8. Automatic detection of zebra crossings from mobile LiDAR data

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P.

    2015-07-01

    An algorithm for the automatic detection of zebra crossings from mobile LiDAR data is developed and tested to be applied for road management purposes. The algorithm consists of several subsequent processes starting with road segmentation by performing a curvature analysis for each laser cycle. Then, intensity images are created from the point cloud using rasterization techniques, in order to detect zebra crossing using the Standard Hough Transform and logical constrains. To optimize the results, image processing algorithms are applied to the intensity images from the point cloud. These algorithms include binarization to separate the painting area from the rest of the pavement, median filtering to avoid noisy points, and mathematical morphology to fill the gaps between the pixels in the border of white marks. Once the road marking is detected, its position is calculated. This information is valuable for inventorying purposes of road managers that use Geographic Information Systems. The performance of the algorithm has been evaluated over several mobile LiDAR strips accounting for a total of 30 zebra crossings. That test showed a completeness of 83%. Non-detected marks mainly come from painting deterioration of the zebra crossing or by occlusions in the point cloud produced by other vehicles on the road.

  9. Two stage algorithm vs commonly used approaches for the suspect screening of complex environmental samples analyzed via liquid chromatography high resolution time of flight mass spectroscopy: A test study.

    PubMed

    Samanipour, Saer; Baz-Lomba, Jose A; Alygizakis, Nikiforos A; Reid, Malcolm J; Thomaidis, Nikolaos S; Thomas, Kevin V

    2017-06-09

    LC-HR-QTOF-MS recently has become a commonly used approach for the analysis of complex samples. However, identification of small organic molecules in complex samples with the highest level of confidence is a challenging task. Here we report on the implementation of a two stage algorithm for LC-HR-QTOF-MS datasets. We compared the performances of the two stage algorithm, implemented via NIVA_MZ_Analyzer™, with two commonly used approaches (i.e. feature detection and XIC peak picking, implemented via UNIFI by Waters and TASQ by Bruker, respectively) for the suspect analysis of four influent wastewater samples. We first evaluated the cross platform compatibility of LC-HR-QTOF-MS datasets generated via instruments from two different manufacturers (i.e. Waters and Bruker). Our data showed that with an appropriate spectral weighting function the spectra recorded by the two tested instruments are comparable for our analytes. As a consequence, we were able to perform full spectral comparison between the data generated via the two studied instruments. Four extracts of wastewater influent were analyzed for 89 analytes, thus 356 detection cases. The analytes were divided into 158 detection cases of artificial suspect analytes (i.e. verified by target analysis) and 198 true suspects. The two stage algorithm resulted in a zero rate of false positive detection, based on the artificial suspect analytes, while producing a rate of false negative detection of 0.12. For the conventional approaches, the rates of false positive detection varied between 0.06 for UNIFI and 0.15 for TASQ. The rates of false negative detection for these methods ranged between 0.07 for TASQ and 0.09 for UNIFI. The effect of background signal complexity on the two stage algorithm was evaluated through the generation of a synthetic signal. We further discuss the boundaries of applicability of the two stage algorithm. The importance of background knowledge and experience in evaluating the reliability of results during the suspect screening was evaluated.

  10. Identification of bearing faults using time domain zero-crossings

    NASA Astrophysics Data System (ADS)

    William, P. E.; Hoffman, M. W.

    2011-11-01

    In this paper, zero-crossing characteristic features are employed for early detection and identification of single point bearing defects in rotating machinery. As a result of bearing defects, characteristic defect frequencies appear in the machine vibration signal, normally requiring spectral analysis or envelope analysis to identify the defect type. Zero-crossing features are extracted directly from the time domain vibration signal using only the duration between successive zero-crossing intervals and do not require estimation of the rotational frequency. The features are a time domain representation of the composite vibration signature in the spectral domain. Features are normalized by the length of the observation window and classification is performed using a multilayer feedforward neural network. The model was evaluated on vibration data recorded using an accelerometer mounted on an induction motor housing subjected to a number of single point defects with different severity levels.
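
    A sketch of the feature extraction, assuming a zero-mean vibration frame vib; the bin count and interval range are illustrative, and the multilayer feedforward network classifier is omitted:

    ```python
    import numpy as np

    def zc_interval_features(vib, n_bins=32):
        """Histogram of durations between successive zero crossings,
        normalized by the observation length: a time domain stand-in
        for the spectral defect signature."""
        idx = np.where(np.sign(vib[:-1]) * np.sign(vib[1:]) < 0)[0]
        intervals = np.diff(idx)
        hist, _ = np.histogram(intervals, bins=n_bins,
                               range=(1, max(2, len(vib) // 8)))
        return hist / max(1, len(vib))
    ```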

  11. Design of two-dimensional zero reference codes with cross-entropy method.

    PubMed

    Chen, Jung-Chieh; Wen, Chao-Kai

    2010-06-20

    We present a cross-entropy (CE)-based method for the design of optimum two-dimensional (2D) zero reference codes (ZRCs) in order to generate a zero reference signal for a grating measurement system and achieve absolute position, a coordinate origin, or a machine home position. In the absence of diffraction effects, the 2D ZRC design problem is known as the autocorrelation approximation. Based on the properties of the autocorrelation function, the design of the 2D ZRC is first formulated as a particular combination optimization problem. The CE method is then applied to search for an optimal 2D ZRC and thus obtain the desirable zero reference signal. Computer simulation results indicate that there are 15.38% and 14.29% reductions in the second maxima value for the 16x16 grating system with n(1)=64 and the 100x100 grating system with n(1)=300, respectively, where n(1) is the number of transparent pixels, compared with those of the conventional genetic algorithm.

  12. Birefringence dispersion compensation demodulation algorithm for polarized low-coherence interferometry.

    PubMed

    Wang, Shuang; Liu, Tiegen; Jiang, Junfeng; Liu, Kun; Yin, Jinde; Wu, Fan

    2013-08-15

    A demodulation algorithm based on the birefringence dispersion characteristics of a polarized low-coherence interferometer is proposed. With the birefringence dispersion parameter taken into account, a mathematical model of the polarized low-coherence interference fringes is established and used to extract the phase shift between the measured coherence envelope center and the zero-order fringe, which eliminates the interferometric 2π ambiguity in locating the zero-order fringe. A pressure measurement experiment using an optical fiber Fabry-Perot pressure sensor was carried out to verify the effectiveness of the proposed algorithm. The experimental result showed a demodulation precision of 0.077 kPa over a range of 210 kPa, a 23-fold improvement over the traditional envelope detection method.

  13. A Fuel-Efficient Conflict Resolution Maneuver for Separation Assurance

    NASA Technical Reports Server (NTRS)

    Bowe, Aisha Ruth; Santiago, Confesor

    2012-01-01

    Automated separation assurance algorithms are envisioned to play an integral role in accommodating the forecasted increase in demand of the National Airspace System. Developing a robust, reliable, air traffic management system involves safely increasing efficiency and throughput while considering the potential impact on users. This experiment seeks to evaluate the benefit of augmenting a conflict detection and resolution algorithm to consider a fuel efficient, Zero-Delay Direct-To maneuver, when resolving a given conflict based on either minimum fuel burn or minimum delay. A total of twelve conditions were tested in a fast-time simulation conducted in three airspace regions with mixed aircraft types and light weather. Results show that inclusion of this maneuver has no appreciable effect on the ability of the algorithm to safely detect and resolve conflicts. The results further suggest that enabling the Zero-Delay Direct-To maneuver significantly increases the cumulative fuel burn savings when choosing resolution based on minimum fuel burn while marginally increasing the average delay per resolution.

  14. Iterative algorithm for joint zero diagonalization with application in blind source separation.

    PubMed

    Zhang, Wei-Tao; Lou, Shun-Tian

    2011-07-01

    A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.

  15. Low complexity feature extraction for classification of harmonic signals

    NASA Astrophysics Data System (ADS)

    William, Peter E.

    In this dissertation, feature extraction algorithms have been developed for extraction of characteristic features from harmonic signals. The common theme for all developed algorithms is the simplicity in generating a significant set of features directly from the time domain harmonic signal. The features are a time domain representation of the composite, yet sparse, harmonic signature in the spectral domain. The algorithms are adequate for low-power unattended sensors which perform sensing, feature extraction, and classification in a standalone scenario. The first algorithm generates the characteristic features using only the duration between successive zero-crossing intervals. The second algorithm estimates the harmonics' amplitudes of the harmonic structure employing a simplified least squares method without the need to estimate the true harmonic parameters of the source signal. The third algorithm, resulting from a collaborative effort with Daniel White at the DSP Lab, University of Nebraska-Lincoln, presents an analog front end approach that utilizes a multichannel analog projection and integration to extract the sparse spectral features from the analog time domain signal. Classification is performed using a multilayer feedforward neural network. Evaluation of the proposed feature extraction algorithms for classification through the processing of several acoustic and vibration data sets (including military vehicles and rotating electric machines) with comparison to spectral features shows that, for harmonic signals, time domain features are simpler to extract and provide equivalent or improved reliability over the spectral features in both the detection probabilities and false alarm rate.

  16. Threshold-adaptive canny operator based on cross-zero points

    NASA Astrophysics Data System (ADS)

    Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu

    2018-03-01

    Canny edge detection [1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before the edge is segregated from the background; usually, two static values chosen from developer experience are used as the thresholds [2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
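
    The paper's interpolation function is not given in the abstract; as a stand-in, a widely used median-based heuristic illustrates what automatic thresholding means in practice (the sigma factor is a tuning assumption, not the paper's function):

    ```python
    import cv2
    import numpy as np

    def auto_canny(gray, sigma=0.33):
        """Canny with thresholds derived from the median intensity of the
        grayscale image instead of fixed developer-chosen constants."""
        v = float(np.median(gray))
        lo = int(max(0, (1.0 - sigma) * v))
        hi = int(min(255, (1.0 + sigma) * v))
        return cv2.Canny(gray, lo, hi)
    ```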

  17. Validation of accelerometer wear and nonwear time classification algorithm.

    PubMed

    Choi, Leena; Liu, Zhouwen; Matthews, Charles E; Buchowski, Maciej S

    2011-02-01

    The use of movement monitors (accelerometers) for measuring physical activity (PA) in intervention and population-based studies is becoming a standard methodology for the objective measurement of sedentary and active behaviors and for the validation of subjective PA self-reports. A vital step in PA measurement is the classification of daily time into accelerometer wear and nonwear intervals using its recordings (counts) and an accelerometer-specific algorithm. The purpose of this study was to validate and improve a commonly used algorithm for classifying accelerometer wear and nonwear time intervals using objective movement data obtained in a whole-room indirect calorimeter. We conducted a validation study of an automatic wear/nonwear algorithm using data obtained from 49 adults and 76 youth wearing accelerometers during a strictly monitored 24-h stay in a room calorimeter. The accelerometer wear and nonwear time classified by the algorithm was compared with actual wearing time. Potential improvements to the algorithm were examined using the minimum classification error as an optimization target. The recommended elements in the new algorithm are as follows: 1) a zero-count threshold during a nonwear time interval, 2) a 90-min time window for consecutive zero or nonzero counts, and 3) allowance of a 2-min interval of nonzero counts flanked by an upstream or downstream 30-min consecutive zero-count window, for detection of artifactual movements. Compared with the true wearing status, the improvements to the algorithm decreased nonwear time misclassification during the waking and 24-h periods (all P values < 0.001). The accelerometer wear/nonwear time algorithm improvements may lead to more accurate estimation of time spent in sedentary and active behaviors.
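
    The three recommended elements translate almost directly into code. A sketch on a per-minute count vector; edge handling is simplified relative to the validated algorithm:

    ```python
    import numpy as np

    def nonwear_mask(counts, window=90, spike=2, flank=30):
        """Minutes classified as nonwear: runs of >= `window` zero-count
        minutes, tolerating runs of <= `spike` nonzero minutes that have a
        `flank`-minute all-zero window immediately up- or downstream."""
        counts = np.asarray(counts)
        n = len(counts)
        zero = counts == 0
        eff = zero.copy()
        i = 0
        while i < n:                       # absorb short artifactual spikes
            if not zero[i]:
                j = i
                while j < n and not zero[j]:
                    j += 1
                up = i >= flank and zero[i - flank:i].all()
                down = j + flank <= n and zero[j:j + flank].all()
                if j - i <= spike and (up or down):
                    eff[i:j] = True
                i = j
            else:
                i += 1
        mask = np.zeros(n, dtype=bool)
        i = 0
        while i < n:                       # keep only long zero runs
            if eff[i]:
                j = i
                while j < n and eff[j]:
                    j += 1
                if j - i >= window:
                    mask[i:j] = True
                i = j
            else:
                i += 1
        return mask
    ```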

  18. The analysis of the pilot's cognitive and decision processes

    NASA Technical Reports Server (NTRS)

    Curry, R. E.

    1975-01-01

    Articles are presented on pilot performance in zero-visibility precision approach, failure detection by pilots during automatic landing, experiments in pilot decision-making during simulated low visibility approaches, a multinomial maximum likelihood program, and a random search algorithm for laboratory computers. Other topics discussed include detection of system failures in multi-axis tasks and changes in pilot workload during an instrument landing.

  1. Analysis of modal behavior at frequency cross-over

    NASA Astrophysics Data System (ADS)

    Costa, Robert N., Jr.

    1994-11-01

    The existence of the mode crossing condition is detected and analyzed in the Active Control of Space Structures Model 4 (ACOSS4). The condition is studied for its contribution to the inability of previous algorithms to successfully optimize the structure and converge to a feasible solution. A new algorithm is developed to detect and correct for mode crossings. The existence of the mode crossing condition is verified in ACOSS4 and found not to have appreciably affected the solution. The structure is then successfully optimized using new analytic methods based on modal expansion. An unrelated error in the optimization algorithm previously used is verified and corrected, thereby equipping the optimization algorithm with a second analytic method for eigenvector differentiation based on Nelson's Method. The second structure is the Control of Flexible Structures (COFS). The COFS structure is successfully reproduced and an initial eigenanalysis completed.

  2. Note: Ultrasonic gas flowmeter based on optimized time-of-flight algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, X. F.; Tang, Z. A.

    2011-04-15

    A new digital signal processor based single path ultrasonic gas flowmeter is designed, constructed, and experimentally tested. To achieve high accuracy measurements, an optimized ultrasound drive method combining amplitude modulation and phase modulation of the transmit-receive technique is used to stimulate the transmitter. Based on the regularities among the received envelope zero-crossings, different received signal-to-noise ratio situations are discriminated and the appropriate time-of-flight algorithm is applied to calculate the flow rate. Experimental results from the dry calibration indicate that the designed flowmeter prototype meets the zero-flow verification test requirements of the American Gas Association Report No. 9. Furthermore, the results of the flow calibration prove that the proposed flowmeter prototype can measure flow rate accurately in practical experiments, with nominal accuracies after FWME adjustment below 0.8% throughout the calibration range.

  3. Quaternion-valued single-phase model for three-phase power system

    NASA Astrophysics Data System (ADS)

    Gou, Xiaoming; Liu, Zhiwen; Liu, Wei; Xu, Yougen; Wang, Jiabin

    2018-03-01

    In this work, a quaternion-valued model is proposed in lieu of Clarke's αβ transformation to convert three-phase quantities to a hypercomplex single-phase signal. The concatenated signal can be used for harmonic distortion detection in three-phase power systems. In particular, the proposed model maps all the harmonic frequencies into frequencies in the quaternion domain, while Clarke's transformation-based methods fail to detect the zero-sequence voltages. Based on the quaternion-valued model, the Fourier transform, the minimum variance distortionless response (MVDR) algorithm and the multiple signal classification (MUSIC) algorithm are presented as examples to detect harmonic distortion. Simulations are provided to demonstrate the potentials of this new modeling method.
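
    A small numerical illustration of the stated limitation, under our own simplifying assumption (not necessarily the paper's exact mapping) that the three phase voltages are placed on the quaternion imaginary units: a pure zero-sequence component vanishes under the Clarke αβ transform but survives in the quaternion signal:

    ```python
    import numpy as np

    t = np.arange(0, 0.2, 1e-4)
    v0 = 10 * np.sin(2 * np.pi * 150 * t)   # a zero-sequence component
    va, vb, vc = v0, v0, v0                 # identical in all three phases

    # Clarke alpha-beta transform: zero-sequence voltages cancel entirely
    alpha = (2 * va - vb - vc) / 3
    beta = (vb - vc) / np.sqrt(3)
    print(np.max(np.abs(alpha)), np.max(np.abs(beta)))  # 0, 0: invisible

    # Quaternion-style mapping q = i*va + j*vb + k*vc keeps all components
    q = np.stack([np.zeros_like(t), va, vb, vc])  # (real, i, j, k) parts
    print(np.max(np.linalg.norm(q, axis=0)))      # nonzero: preserved
    ```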

  4. Digital pulse shape discrimination methods for n-γ separation in an EJ-301 liquid scintillation detector

    NASA Astrophysics Data System (ADS)

    Wan, Bo; Zhang, Xue-Ying; Chen, Liang; Ge, Hong-Lin; Ma, Fei; Zhang, Hong-Bin; Ju, Yong-Qin; Zhang, Yan-Bin; Li, Yan-Yan; Xu, Xiao-Wei

    2015-11-01

    A digital pulse shape discrimination system based on a programmable module NI-5772 has been established and tested with an EJ-301 liquid scintillation detector. The module was operated by running programs developed in LabVIEW, with a sampling frequency up to 1.6 GS/s. Standard gamma sources 22Na, 137Cs and 60Co were used to calibrate the EJ-301 liquid scintillation detector, and the gamma response function was obtained. Digital algorithms for the charge comparison method and zero-crossing method have been developed. The experimental results show that both digital signal processing (DSP) algorithms can discriminate neutrons from γ-rays. Moreover, the zero-crossing method shows better n-γ discrimination at 80 keVee and lower, whereas the charge comparison method gives better results at higher thresholds. In addition, the figure-of-merit (FOM) values for detectors of two different dimensions were extracted at 9 energy thresholds, and it was found that the smaller detector presented better n-γ separation for fission neutrons. Supported by the National Natural Science Foundation of China (91226107, 11305229) and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03030300).
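
    A hedged sketch of the charge comparison idea only; the gate positions below are hypothetical values that would be tuned to the EJ-301 pulse shape, and the digitized zero-crossing variant is not shown:

    ```python
    import numpy as np

    def charge_ratio(pulse, fs, t0, total_gate, tail_delay):
        """Charge-comparison PSD: neutrons put a larger fraction of the
        scintillation light into the slow tail, so the tail/total charge
        ratio separates neutrons (larger ratio) from gamma rays."""
        i0 = int(t0 * fs)
        i_tail = i0 + int(tail_delay * fs)
        i_end = i0 + int(total_gate * fs)
        total = np.trapz(pulse[i0:i_end])
        tail = np.trapz(pulse[i_tail:i_end])
        return tail / total

    # e.g. with a 1.6 GS/s record and assumed gates:
    # ratio = charge_ratio(p, fs=1.6e9, t0=pulse_start,
    #                      total_gate=300e-9, tail_delay=30e-9)
    ```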

  5. Optical zero-differential pressure switch and its evaluation in a multiple pressure measuring system

    NASA Technical Reports Server (NTRS)

    Powell, J. A.

    1977-01-01

    The design of a clamped-diaphragm pressure switch is described in which diaphragm motion is detected by a simple fiber-optic displacement sensor. The switch was evaluated in a pressure measurement system where it detected the zero crossing of the differential pressure between a static test pressure and a tank pressure that was periodically ramped from near zero to full-scale gage pressure. With a ramping frequency of 1 hertz and a full-scale tank pressure of 69 N/sq cm gage (100 psig), the switch delay was as long as 2 milliseconds. Pressure measurement accuracies were 0.25 to 0.75 percent of full scale. Factors affecting switch performance are also discussed.

  6. Traffic light detection and intersection crossing using mobile computer vision

    NASA Astrophysics Data System (ADS)

    Grewei, Lynne; Lagali, Christopher

    2017-05-01

    The solution for intersection detection and crossing to support the development of blindBike, an assisted biking system for the visually impaired, is discussed. Traffic light detection and intersection crossing are key needs in the task of biking. These problems are tackled through the use of mobile computer vision, in the form of a mobile application on an Android phone. This research builds on previous traffic light detection algorithms with a focus on efficiency and compatibility on a resource-limited platform. Light detection is achieved through blob detection algorithms utilizing training data to detect patterns of red, green, and yellow in complex real-world scenarios where multiple lights may be present. Issues of obscurity and scale are also addressed. Safe intersection crossing in blindBike is also discussed; this module takes a conservative "assistive" technology approach. To achieve this, blindBike uses not only the Android device but also an external Bluetooth/ANT-enabled bike cadence sensor. Real-world testing results are given and future work is discussed.
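
    A minimal sketch of the color blob detection step, assuming OpenCV 4; the HSV hue bands are illustrative placeholders for thresholds learned from training data, and the scale/obscurity handling described above is omitted:

    ```python
    import cv2
    import numpy as np

    def detect_light_blobs(frame_bgr, min_area=30):
        """Find red/yellow/green blobs via HSV thresholding; returns
        (color, bounding box) candidates for downstream filtering."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        ranges = {  # hue bands are assumptions, to be tuned from data
            "red": [((0, 120, 120), (10, 255, 255)),
                    ((170, 120, 120), (180, 255, 255))],
            "yellow": [((20, 120, 120), (35, 255, 255))],
            "green": [((45, 100, 100), (90, 255, 255))],
        }
        hits = []
        for color, bands in ranges.items():
            mask = np.zeros(hsv.shape[:2], np.uint8)
            for lo, hi in bands:
                mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            for c in contours:
                if cv2.contourArea(c) >= min_area:
                    hits.append((color, cv2.boundingRect(c)))
        return hits
    ```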

  7. A biomimetic algorithm for the improved detection of microarray features

    NASA Astrophysics Data System (ADS)

    Nicolau, Dan V., Jr.; Nicolau, Dan V.; Maini, Philip K.

    2007-02-01

    One of the major difficulties of microarray technology relates to the processing of large and, importantly, error-loaded images of the dots on the chip surface. Whatever the source of these errors, those introduced in the first stage of data acquisition - segmentation - are passed down to the subsequent processes, with deleterious results. As it has been demonstrated recently that biological systems have evolved algorithms that are mathematically efficient, this contribution attempts to test an algorithm that mimics a bacterial "patented" search for available space and nutrients to find, zero in on, and eventually delimit the features present on the microarray surface.

  8. Assessment of Gamma-Ray-Spectra Analysis Method Utilizing the Fireworks Algorithm for Various Error Measures

    DOE PAGES

    Alamaniotis, Miltiadis; Tsoukalas, Lefteri H.

    2018-01-01

    The analysis of measured data plays a significant role in enhancing nuclear nonproliferation, mainly by inferring the presence of patterns associated with special nuclear materials. Among various types of measurements, gamma-ray spectra are the most widely utilized type of data in nonproliferation applications. In this paper, a method that employs the fireworks algorithm (FWA) for analyzing gamma-ray spectra aiming at detecting gamma signatures is presented. In particular, FWA is utilized to fit a set of known signatures to a measured spectrum by optimizing an objective function, where non-zero coefficients express the detected signatures. FWA is tested on a set of experimentally obtained measurements optimizing various objective functions - MSE, RMSE, Theil-2, MAE, MAPE, MAP - with results exhibiting its potential in providing highly accurate and precise signature detection. Furthermore, FWA is benchmarked against genetic algorithms and multiple linear regression, showing its superiority over those algorithms regarding precision with respect to the MAE, MAPE, and MAP measures.
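
    To make the fitting problem concrete, here is a hedged stand-in that solves the MSE objective with non-negative least squares instead of FWA; the signature matrix and measured spectrum below are synthetic, fabricated purely for illustration:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    signatures = rng.random((512, 6))    # 6 synthetic isotope templates
    truth = np.array([0.0, 2.0, 0.0, 0.5, 0.0, 0.0])
    measured = signatures @ truth + 0.01 * rng.random(512)

    # FWA searches coefficient space for the best objective (MSE, MAE, ...);
    # for the MSE objective the same fit is a classic non-negative
    # least-squares problem, solved here as a stand-in.
    coeffs, _ = nnls(signatures, measured)
    print(np.round(coeffs, 2))  # nonzero entries flag detected signatures
    ```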

  9. An extrinsic fiber Fabry-Perot interferometer for dynamic displacement measurement

    NASA Astrophysics Data System (ADS)

    Pullteap, S.; Seat, H. C.

    2015-03-01

    A versatile fiber interferometer was proposed for high-precision measurement. The sensor exploited a double cavity within the unique sensing arm of an extrinsic-type fiber Fabry-Perot interferometer to produce quadrature phase-shifted interference fringes. Interference signal processing was carried out using a modified zero-crossing (fringe) counting technique to demodulate the two sets of fringes. The fiber interferometer has been successfully employed for dynamic displacement measurement under different displacement profiles over a range of 0.7 μm to 140 μm. A dedicated computer incorporating the demodulation algorithm was then used to interpret the detected data and to plot the displacement information with a resolution of λ/64. A commercial displacement sensor was employed for comparison with the experimental data obtained from the fiber interferometer and to gauge its performance, resulting in a maximum error of 2.8% over the entire displacement range studied.
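
    A compact sketch of the quadrature demodulation idea: with two fringe signals in quadrature, the unwrapped phase gives direction-resolved displacement at λ/2 of cavity-length change per fringe. The paper's modified zero-crossing counter differs in implementation; this shows only the underlying relation:

    ```python
    import numpy as np

    def demodulate_displacement(i_sig, q_sig, wavelength):
        """Quadrature fringe demodulation for an extrinsic Fabry-Perot
        sensor: one full fringe (2*pi of phase) corresponds to a
        lambda/2 change in cavity length, and the quadrature pair
        resolves the direction of motion."""
        phase = np.unwrap(np.arctan2(q_sig, i_sig))
        return phase * wavelength / (4 * np.pi)  # cavity-length change
    ```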

  10. Motivation for DOC III: 64-bit digital optical computer

    NASA Astrophysics Data System (ADS)

    Guilfoyle, Peter S.

    1991-09-01

    OptiComp has focused on a digital optical logic family in order to capitalize on the inherent benefits of optical computing, which include (1) high FAN-IN and FAN-OUT, (2) low power consumption, (3) high noise margin, (4) high algorithmic efficiency using 'smart' interconnects, and (5) free-space leverage of gate interconnect bandwidth product. Other well-known secondary advantages of optical logic include zero capacitive loading of signals at a detector, zero cross-talk between signals, zero signal dispersion, and minimal clock skew (a few picoseconds or less in an imaging system). The primary focus of this paper is to demonstrate how each of the five advantages can be used to leverage other logic family performance such as GaAs; the secondary attributes are discussed only in the context of introducing the DOC III architecture.

  11. Real-time distributed fiber optic sensor for security systems: Performance, event classification and nuisance mitigation

    NASA Astrophysics Data System (ADS)

    Mahmoud, Seedahmed S.; Visagathilagar, Yuvaraja; Katsifolis, Jim

    2012-09-01

    The success of any perimeter intrusion detection system depends on three important performance parameters: the probability of detection (POD), the nuisance alarm rate (NAR), and the false alarm rate (FAR). The most fundamental parameter, POD, is normally related to a number of factors such as the event of interest, the sensitivity of the sensor, the installation quality of the system, and the reliability of the sensing equipment. The suppression of nuisance alarms without degrading sensitivity in fiber optic intrusion detection systems is key to maintaining acceptable performance. Signal processing algorithms that maintain the POD and eliminate nuisance alarms are crucial for achieving this. In this paper, a robust event classification system using supervised neural networks together with a level crossings (LCs) based feature extraction algorithm is presented for the detection and recognition of intrusion and non-intrusion events in a fence-based fiber-optic intrusion detection system. A level crossings algorithm is also used with a dynamic threshold to suppress torrential rain-induced nuisance alarms in a fence system. Results show that rain-induced nuisance alarms can be suppressed for rainfall rates in excess of 100 mm/hr with the simultaneous detection of intrusion events. The use of a level crossing based detection and novel classification algorithm is also presented for a buried pipeline fiber optic intrusion detection system for the suppression of nuisance events and discrimination of intrusion events. The sensor employed for both types of systems is a distributed bidirectional fiber-optic Mach-Zehnder (MZ) interferometer.
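
    A hedged sketch of a level-crossings feature vector of the kind such a classifier could consume; the choice of levels and the use of upward crossings only are assumptions:

    ```python
    import numpy as np

    def level_crossing_features(x, levels):
        """Count upward crossings of each threshold level; the resulting
        vector serves as a compact feature for event classification."""
        x = np.asarray(x)
        return np.array([np.sum((x[:-1] < L) & (x[1:] >= L)) for L in levels])

    # e.g. features for a supervised classifier, with levels spread
    # over the observed signal range of a sensor window:
    # feats = level_crossing_features(w, np.linspace(0.1, 1.0, 10) * w.max())
    ```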

  12. Self-recovery fragile watermarking algorithm based on SPIHT

    NASA Astrophysics Data System (ADS)

    Xin, Li Ping

    2015-12-01

    A fragile watermarking algorithm based on SPIHT coding is proposed that can recover the original image itself. The novelty of the algorithm is that it supports both tamper localization and self-restoration, with very good recovery results. First, utilizing the zero-tree structure, the algorithm compresses and encodes the image itself to obtain self-correlated watermark data, greatly reducing the quantity of watermark data to embed. The watermark data are then encoded with an error-correcting code, and the check bits and watermark bits are scrambled and embedded to enhance the recovery ability. At the same time, by embedding the watermark into the two least-significant bit-planes of the gray-level image, the watermarked image retains good visual quality. The experimental results show that the proposed algorithm can not only detect various processing operations such as noise addition, cropping, and filtering, but can also recover tampered images and realize blind detection. Peak signal-to-noise ratios of the watermarked images were higher than those of similar algorithms, and the algorithm's resistance to attack was enhanced.
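
    A minimal sketch of the embedding step into the two low bit-planes; the helper names are illustrative, and the SPIHT compression, error-correction, and scrambling stages are omitted:

    ```python
    import numpy as np

    def embed_bits(gray, bits):
        """Embed a watermark bitstream (0/1 array, even length,
        len(bits) <= 2 * gray.size) into the two least-significant
        bit-planes of an 8-bit grayscale image (2 bits per pixel)."""
        flat = gray.flatten().astype(np.uint8)
        pairs = np.asarray(bits, np.uint8).reshape(-1, 2)
        vals = pairs[:, 0] * 2 + pairs[:, 1]
        flat[:len(vals)] = (flat[:len(vals)] & 0b11111100) | vals
        return flat.reshape(gray.shape)

    def extract_bits(marked, n_bits):
        """Recover the embedded bitstream from the two low bit-planes."""
        vals = marked.flatten()[:n_bits // 2] & 0b11
        return np.stack([(vals >> 1) & 1, vals & 1], axis=1).reshape(-1)
    ```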

  13. Evaluation of an Automatic Registration-Based Algorithm for Direct Measurement of Volume Change in Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Saradwata; Johnson, Timothy D.; Ma, Bing

    2012-07-01

    Purpose: Assuming that early tumor volume change is a biomarker for response to therapy, accurate quantification of early volume changes could aid in adapting an individual patient's therapy and lead to shorter clinical trials. We investigated an image registration-based approach for tumor volume change quantification that may more reliably detect smaller changes that occur in shorter intervals than can be detected by existing algorithms. Methods and Materials: Variance and bias of the registration-based approach were evaluated using retrospective, in vivo, very-short-interval diffusion magnetic resonance imaging scans where true zero tumor volume change is unequivocally known and synthetic data, respectively. The interval scans were nonlinearly registered using two similarity measures: mutual information (MI) and normalized cross-correlation (NCC). Results: The 95% confidence interval of the percentage volume change error was (-8.93% to 10.49%) for MI-based and (-7.69%, 8.83%) for NCC-based registrations. Linear mixed-effects models demonstrated that error in measuring volume change increased with increase in tumor volume and decreased with the increase in the tumor's normalized mutual information, even when NCC was the similarity measure being optimized during registration. The 95% confidence interval of the relative volume change error for the synthetic examinations with known changes over ±80% of reference tumor volume was (-3.02% to 3.86%). Statistically significant bias was not demonstrated. Conclusion: A low-noise, low-bias tumor volume change measurement algorithm using nonlinear registration is described. Errors in change measurement were a function of tumor volume and the normalized mutual information content of the tumor.

  14. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion

    PubMed Central

    Dou, Qingxu; Wei, Lijun; Magee, Derek R.; Atkins, Phil R.; Chapman, David N.; Curioni, Giulio; Goddard, Kevin F.; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R.; Rustighi, Emiliano; Swingler, Steven G.; Rogers, Christopher D. F.; Cohn, Anthony G.

    2016-01-01

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed “multi-utility multi-sensor” system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation. PMID:27827836

  16. Robust Structural Analysis and Design of Distributed Control Systems to Prevent Zero Dynamics Attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weerakkody, Sean; Liu, Xiaofei; Sinopoli, Bruno

    We consider the design and analysis of robust distributed control systems (DCSs) to ensure the detection of integrity attacks. DCSs are often managed by independent agents and are implemented using a diverse set of sensors and controllers. However, the heterogeneous nature of DCSs along with their scale leaves such systems vulnerable to adversarial behavior. To mitigate this reality, we provide tools that allow operators to prevent zero dynamics attacks when as many as p agents and sensors are corrupted. Such a design ensures attack detectability in deterministic systems while removing the threat of a class of stealthy attacks in stochastic systems. To achieve this goal, we use graph theory to obtain necessary and sufficient conditions for the presence of zero dynamics attacks in terms of the structural interactions between agents and sensors. We then formulate and solve optimization problems which minimize communication networks while also ensuring a resource-limited adversary cannot perform a zero dynamics attack. Polynomial time algorithms for design and analysis are provided.

  17. [Development of residual voltage testing equipment].

    PubMed

    Zeng, Xiaohui; Wu, Mingjun; Cao, Li; He, Jinyi; Deng, Zhensheng

    2014-07-01

    For the existing measurement methods of residual voltage, which cannot switch the power off exactly at peak voltage while simultaneously displaying waveforms, a new residual voltage detection method is put forward in this paper. First, the zero crossing of the power supply is detected with a zero-cross detection circuit and is input to a single-chip microcomputer in the form of a pulse signal. Second, after a delay from the zero crossing to the voltage peak, the single-chip microcomputer sends a control signal that switches off the relay. Finally, the waveform of the residual voltage is displayed on a host computer or oscilloscope. The experimental results show that the device designed in this paper can switch the power off at peak voltage and accurately display the voltage waveform immediately after power-off, and that the standard deviation of the residual voltage is less than 0.2 V at one second after power-off and later.
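
    A small sketch of the timing logic, assuming 50 Hz mains: the detected zero crossing leads the voltage peak by a quarter period, which sets the relay delay. Function names are illustrative:

    ```python
    import numpy as np

    def peak_cutoff_delay(mains_hz=50.0):
        """The voltage peak follows a zero crossing by a quarter period,
        so the controller waits T/4 after the zero-cross pulse before
        opening the relay (5 ms for 50 Hz mains)."""
        return 1.0 / mains_hz / 4.0

    def first_zero_crossing(v, fs):
        """Interpolated time of the first upward zero crossing of the
        sampled mains voltage, emulating the zero-cross detector."""
        s = np.signbit(v)
        idx = np.where(s[:-1] & ~s[1:])[0]
        if len(idx) == 0:
            return None
        k = idx[0]
        return (k + v[k] / (v[k] - v[k + 1])) / fs
    ```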

  18. Adaptation of a cubic smoothing spline algorithm for multi-channel data stitching at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C; Adcock, A; Azevedo, S

    2010-12-28

    Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.

  19. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient than conventional searching methods for the coarse estimation step (locating the peak of the FFT amplitude spectrum). Thus, the proposed estimation algorithm requires less hardware and software resources and achieves even higher efficiency as the experimental data grow. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
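
    A hedged sketch of the coarse-plus-fine idea: FFT peak for the coarse estimate, interpolated zero-crossing spacings for refinement. It assumes the fundamental dominates the waveform; the paper's modified zero-crossing technique additionally copes with harmonic components:

    ```python
    import numpy as np

    def estimate_frequency(x, fs):
        """Coarse estimate from the FFT amplitude peak, then refinement
        from interpolated upward zero crossings of the de-meaned signal."""
        spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        f_coarse = (np.argmax(spec[1:]) + 1) * fs / len(x)

        y = x - x.mean()
        k = np.where((y[:-1] < 0) & (y[1:] >= 0))[0]
        t = (k + y[k] / (y[k] - y[k + 1])) / fs  # crossing times
        if len(t) > 1:
            return (len(t) - 1) / (t[-1] - t[0])  # cycles per second
        return f_coarse
    ```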

  20. Sound Propagation around Underwater Seamounts

    DTIC Science & Technology

    2009-02-01

    [Abstract not available; the record text is extraction residue from the report's table of contents and list of figures, including appendix sections on processing real-world data and a method for finding zero-crossings, and figures from the BASSEX experiment showing time fronts and the pressure level, in dB re 1 μPa, inside the forward-scattered field of the Kermit-Roosevelt Seamount, generated using the RAM model.]

  1. EM Adaptive LASSO—A Multilocus Modeling Strategy for Detecting SNPs Associated with Zero-inflated Count Phenotypes

    PubMed Central

    Mallick, Himel; Tiwari, Hemant K.

    2016-01-01

    Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeroes due to the presence of excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice. PMID:27066062
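
    For the penalty alone (not the ZIP-EM machinery), a hedged sketch of the adaptive LASSO via the standard rescaling trick, shown on a Gaussian response with scikit-learn; `gamma` and `alpha` are illustrative tuning parameters:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression, Lasso

    def adaptive_lasso(X, y, gamma=1.0, alpha=0.1):
        """Adaptive LASSO via feature rescaling: penalty weights
        w_j = 1/|beta_init_j|^gamma shrink weak predictors harder,
        keeping strong SNP effects in the model."""
        beta0 = LinearRegression().fit(X, y).coef_
        w = 1.0 / (np.abs(beta0) ** gamma + 1e-8)
        model = Lasso(alpha=alpha).fit(X / w, y)  # lasso on rescaled design
        return model.coef_ / w                    # map back to original scale
    ```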

  3. Golay Complementary Waveforms in Reed–Müller Sequences for Radar Detection of Nonzero Doppler Targets

    PubMed Central

    Wang, Xuezhi; Huang, Xiaotao; Suvorova, Sofia; Moran, Bill

    2018-01-01

    Golay complementary waveforms can, in theory, yield radar returns of high range resolution with essentially zero sidelobes. In practice, when deployed conventionally, while high signal-to-noise ratios can be achieved for static target detection, significant range sidelobes are generated by target returns of nonzero Doppler causing unreliable detection. We consider signal processing techniques using Golay complementary waveforms to improve radar detection performance in scenarios involving multiple nonzero Doppler targets. A signal processing procedure based on an existing, so called, Binomial Design algorithm that alters the transmission order of Golay complementary waveforms and weights the returns is proposed in an attempt to achieve an enhanced illumination performance. The procedure applies one of three proposed waveform transmission ordering algorithms, followed by a pointwise nonlinear processor combining the outputs of the Binomial Design algorithm and one of the ordering algorithms. The computational complexity of the Binomial Design algorithm and the three ordering algorithms are compared, and a statistical analysis of the performance of the pointwise nonlinear processing is given. Estimation of the areas in the Delay–Doppler map occupied by significant range sidelobes for given targets are also discussed. Numerical simulations for the comparison of the performances of the Binomial Design algorithm and the three ordering algorithms are presented for both fixed and randomized target locations. The simulation results demonstrate that the proposed signal processing procedure has a better detection performance in terms of lower sidelobes and higher Doppler resolution in the presence of multiple nonzero Doppler targets compared to existing methods. PMID:29324708
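
    A short sketch verifying the zero-Doppler complementarity property the paper starts from; the Doppler-induced sidelobes and the Binomial Design weighting are beyond this snippet:

    ```python
    import numpy as np

    def golay_pair(m):
        """Recursive construction of a Golay complementary pair of
        length 2^m: a' = [a b], b' = [a -b]."""
        a, b = np.array([1.0]), np.array([1.0])
        for _ in range(m):
            a, b = np.concatenate([a, b]), np.concatenate([a, -b])
        return a, b

    a, b = golay_pair(6)  # length-64 pair
    acf = lambda s: np.correlate(s, s, mode="full")
    combined = acf(a) + acf(b)  # zero everywhere except the mainlobe
    assert np.allclose(combined[:len(a) - 1], 0)
    assert combined[len(a) - 1] == 2 * len(a)
    ```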

  4. The Zero-Degree Detector System for Fragmentation Studies

    NASA Technical Reports Server (NTRS)

    Adams, J. H., Jr.; Christl, M. J.; Howell, L. W.; Kuznetsov, E.

    2006-01-01

    The measurement of nuclear fragmentation cross sections requires the detection and identification of individual projectile fragments. If light and heavy fragments are recorded in the same detector, it may be impossible to distinguish the signal from the light fragment. To overcome this problem, we have developed the Zero-Degree Detector System (ZDDS). The ZDDS enables the measurement of cross sections for light fragment production by using pixelated detectors to separately measure the signals of each fragment. The system has been used to measure the fragmentation of beams as heavy as Fe at the NASA Space Radiation Laboratory at Brookhaven National Laboratory and the Heavy Ion Medical Accelerator in Chiba, Japan.

  5. Applying the zero-inflated Poisson model with random effects to detect abnormal rises in school absenteeism indicating infectious diseases outbreak.

    PubMed

    Song, X X; Zhao, Q; Tao, T; Zhou, C M; Diwan, V K; Xu, B

    2018-05-30

    Records of absenteeism from primary schools are valuable data for infectious disease surveillance. However, the analysis of absenteeism is complicated by the data features of clustering at zero, non-independence, and overdispersion. This study aimed to generate an appropriate model to handle the absenteeism data collected in a European Commission-funded project for infectious disease surveillance in rural China, and to evaluate the validity and timeliness of the resulting model for early warnings of infectious disease outbreaks. Four steps were taken: (1) building a 'well-fitting' model by the zero-inflated Poisson model with random effects (ZIP-RE) using the absenteeism data from the first implementation year; (2) applying the resulting model to predict the 'expected' number of absenteeism events in the second implementation year; (3) computing the differences between the observations and the expected values (O-E values) to generate an alternative series of data; (4) evaluating the early warning validity and timeliness of the observational data and model-based O-E values via the EARS-3C algorithms with regard to the detection of real cluster events. The results indicate that ZIP-RE and its corresponding O-E values can improve the detection of aberrations, reduce false-positive signals, and are applicable to zero-inflated data.
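
    A hedged sketch of applying an EARS-C2-style detector to the O-E series produced in step (3); the baseline length, lag, and multiplier k below are typical values for this family of algorithms, not necessarily those used in the study:

    ```python
    import numpy as np

    def c2_like_alerts(series, baseline=7, lag=2, k=3.0):
        """Flag day t when the O-E value exceeds the baseline mean plus
        k standard deviations, with a guard band of `lag` days between
        the baseline window and the tested day."""
        series = np.asarray(series, float)
        alerts = []
        for t in range(baseline + lag, len(series)):
            ref = series[t - lag - baseline:t - lag]
            if series[t] > ref.mean() + k * max(ref.std(ddof=1), 1e-9):
                alerts.append(t)
        return alerts
    ```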

  9. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts, and some of them are "true zeros," indicating that the drug-adverse event pair cannot occur; these are distinguished from the other zero counts, which are modeled zero counts and simply indicate that the drug-adverse event pair has not occurred yet or has not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization (EM) algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
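
    A minimal sketch of the EM iteration for an unstratified ZIP model (no covariates), to make the E- and M-steps concrete; the likelihood ratio test machinery is omitted:

    ```python
    import numpy as np

    def zip_em(y, n_iter=200):
        """EM for the zero-inflated Poisson: latent z_i = 1 means a
        structural ('true') zero. The E-step computes P(z_i=1 | y_i=0);
        the M-step updates the mixing weight pi and Poisson rate lam."""
        y = np.asarray(y, float)
        pi, lam = 0.5, max(y.mean(), 1e-6)
        for _ in range(n_iter):
            # E-step: only observed zeros can be structural zeros
            z = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
            # M-step
            pi = z.mean()
            lam = ((1 - z) * y).sum() / ((1 - z).sum() + 1e-12)
        return pi, lam
    ```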

  10. Toward an Objective Enhanced-V Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Moses, John F.; Brunner, Jason C.; Feltz, Wayne F.; Ackerman, Steven A.; Rabin, Robert M.

    2007-01-01

    The area of coldest cloud tops above thunderstorms sometimes has a distinct V or U shape. This pattern, often referred to as an "enhanced-V" signature, has been observed to occur during and preceding severe weather. This study describes an algorithmic approach to objectively detect overshooting tops, temperature couplets, and enhanced-V features with observations from the Geostationary Operational Environmental Satellite and Low Earth Orbit data. The methodology consists of temperature, temperature difference, and distance thresholds for the overshooting top and temperature couplet detection parts of the algorithm, and of cross-correlation statistics of pixels for the enhanced-V detection part of the algorithm. The effectiveness of the overshooting top and temperature couplet detection components of the algorithm is examined using GOES and MODIS image data for case studies in the 2003-2006 seasons. The main goal is for the algorithm to be useful for operations with future sensors, such as GOES-R.

  11. Is localized infrared spectroscopy now possible in the electron microscope?

    PubMed

    Rez, Peter

    2014-06-01

    The recently developed in-column monochromators make it possible to record energy-loss spectra with resolutions better than 30 meV from nanometer-sized regions. It should therefore in principle be possible to detect localized vibrational excitations. The scattering geometry in the electron microscope means that bond stretching in the specimen plane or longitudinal optic phonons dominate the scattering. Most promising for initial studies are vibrations with energies between 300 and 400 meV from hydrogen bonded to other atoms. Estimates of the scattering cross-sections on the basis of a simple model show that they are about the same as inner-shell scattering cross-sections. Cross-sections also increase with charge transfer between the atoms, and theory incorporating realistic charge distributions shows that signal/noise is the only limitation to high-resolution imaging. Given the magnitude of the scattering cross-sections, minimizing the tail of the zero-loss peak is just as important as achieving a small full-width at half-maximum. Improvements in both resolution and control of the zero-loss tail will be necessary before it is practical to detect optic phonons in solids between 40 and 60 meV.

  12. A-Track: Detecting Moving Objects in FITS images

    NASA Astrophysics Data System (ADS)

    Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.

    2017-04-01

    A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.

  13. Acoustic change detection algorithm using an FM radio

    NASA Astrophysics Data System (ADS)

    Goldman, Geoffrey H.; Wolfe, Owen

    2012-06-01

    The U.S. Army is interested in developing low-cost, low-power, non-line-of-sight sensors for monitoring human activity. One modality that is often overlooked is active acoustics using sources of opportunity such as speech or music. Active acoustics can be used to detect human activity by generating acoustic images of an area at different times, then testing for changes among the imagery. A change detection algorithm was developed to detect physical changes in a building, such as a door changing positions or a large box being moved using acoustics sources of opportunity. The algorithm is based on cross correlating the acoustic signal measured from two microphones. The performance of the algorithm was shown using data generated with a hand-held FM radio as a sound source and two microphones. The algorithm could detect a door being opened in a hallway.
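
    A hedged sketch of the core comparison: cross-correlation signatures taken at two times are compared, and a large difference flags a physical change. The lag window and threshold are assumptions:

    ```python
    import numpy as np

    def xcorr_signature(mic1, mic2, max_lag):
        """Normalized cross-correlation of the two microphones over a
        lag window; this acts as a coarse acoustic image of the space."""
        m1 = (mic1 - mic1.mean()) / (mic1.std() + 1e-12)
        m2 = (mic2 - mic2.mean()) / (mic2.std() + 1e-12)
        full = np.correlate(m1, m2, mode="full") / len(m1)
        mid = len(full) // 2
        return full[mid - max_lag:mid + max_lag + 1]

    def changed(sig_ref, sig_now, threshold=0.3):
        """Declare a physical change when the signatures differ enough."""
        return np.linalg.norm(sig_now - sig_ref) / np.linalg.norm(sig_ref) > threshold
    ```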

  14. Detecting Pulsing Denial-of-Service Attacks with Nondeterministic Attack Intervals

    NASA Astrophysics Data System (ADS)

    Luo, Xiapu; Chan, Edmond W. W.; Chang, Rocky K. C.

    2009-12-01

    This paper addresses the important problem of detecting pulsing denial of service (PDoS) attacks which send a sequence of attack pulses to reduce TCP throughput. Unlike previous works which focused on a restricted form of attacks, we consider a very broad class of attacks. In particular, our attack model admits any attack interval between two adjacent pulses, whether deterministic or not. It also includes the traditional flooding-based attacks as a limiting case (i.e., zero attack interval). Our main contribution is Vanguard, a new anomaly-based detection scheme for this class of PDoS attacks. The Vanguard detection is based on three traffic anomalies induced by the attacks, and it detects them using a CUSUM algorithm. We have prototyped Vanguard and evaluated it on a testbed. The experiment results show that Vanguard is more effective than the previous methods that are based on other traffic anomalies (after a transformation using wavelet transform, Fourier transform, and autocorrelation) and detection algorithms (e.g., dynamic time warping).
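
    For concreteness, a standard one-sided CUSUM of the kind the detection stage could run on each traffic-anomaly statistic; the slack k and threshold h are generic design parameters, not values from the paper:

    ```python
    import numpy as np

    def cusum_detect(x, mu0, k, h):
        """One-sided CUSUM: accumulate positive drift above the
        in-control mean mu0 (minus slack k); alarm when the statistic
        crosses the threshold h, then restart."""
        s, alarms = 0.0, []
        for t, xt in enumerate(np.asarray(x, float)):
            s = max(0.0, s + xt - mu0 - k)
            if s > h:
                alarms.append(t)
                s = 0.0
        return alarms
    ```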

  15. The automatic extraction of pitch perturbation using microcomputers: some methodological considerations.

    PubMed

    Deem, J F; Manning, W H; Knack, J V; Matesich, J S

    1989-09-01

    A program for the automatic extraction of jitter (PAEJ) was developed for the clinical measurement of pitch perturbations using a microcomputer. The program currently includes 12 implementations of an algorithm for marking the boundary criteria for a fundamental period of vocal fold vibration. The relative sensitivity of these extraction procedures for identifying the pitch period was compared using sine waves. Data obtained to date provide information for each procedure concerning the effects of waveform peakedness and slope, sample duration in cycles, noise level of the analysis system with both direct and tape recorded input, and the influence of interpolation. Zero crossing extraction procedures provided lower jitter values regardless of sine wave frequency or sample duration. The procedures making use of positive- or negative-going zero crossings with interpolation provided the lowest measures of jitter with the sine wave stimuli. Pilot data obtained with normal-speaking adults indicated that jitter measures varied as a function of the speaker, vowel, and sample duration.
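
    A hedged sketch of one of the twelve boundary criteria: positive-going zero crossings with linear interpolation, followed by a common relative-jitter statistic (the exact perturbation measure in PAEJ may differ):

    ```python
    import numpy as np

    def jitter_percent(x, fs):
        """Period-to-period jitter from positive-going zero crossings,
        refined by linear interpolation; returns the mean absolute
        period perturbation as a percentage of the mean period."""
        x = np.asarray(x, float)
        k = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
        t = (k + x[k] / (x[k] - x[k + 1])) / fs  # crossing times
        periods = np.diff(t)
        if len(periods) < 2:
            return 0.0
        return 100 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)
    ```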

  17. Note: An improved 3D imaging system for electron-electron coincidence measurements

    NASA Astrophysics Data System (ADS)

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen

    2015-09-01

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  18. Clever eye algorithm for target detection of remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Geng, Xiurui; Ji, Luyan; Sun, Kang

    2016-04-01

    Target detection algorithms for hyperspectral remote sensing imagery, such as the two most commonly used remote sensing detection algorithms, the constrained energy minimization (CEM) and matched filter (MF), can usually be attributed to the inner product between a weight filter (or detector) and a pixel vector. CEM and MF have the same expression except that MF requires data centralization first. However, this difference leads to a difference in the target detection results. That is to say, the selection of the data origin could directly affect the performance of the detector. Therefore, does there exist another data origin other than the zero and mean-vector points for a better target detection performance? This is a very meaningful issue in the field of target detection, but it has not been paid enough attention yet. In this study, we propose a novel objective function by introducing the data origin as another variable, and the solution of the function is corresponding to the data origin with the minimal output energy. The process of finding the optimal solution can be vividly regarded as a clever eye automatically searching the best observing position and direction in the feature space, which corresponds to the largest separation between the target and background. Therefore, this new algorithm is referred to as the clever eye algorithm (CE). Based on the Sherman-Morrison formula and the gradient ascent method, CE could derive the optimal target detection result in terms of energy. Experiments with both synthetic and real hyperspectral data have verified the effectiveness of our method.
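
    The two baseline detectors the paper builds on can be stated compactly; a NumPy sketch of CEM and MF as described (the CE origin search itself is not reproduced):

    ```python
    import numpy as np

    def cem(data, d):
        """Constrained energy minimization: w = R^-1 d / (d' R^-1 d),
        where R is the sample correlation matrix of the pixels (no mean
        removal). `data` is (..., bands); `d` is the target signature."""
        X = data.reshape(-1, data.shape[-1])       # (pixels, bands)
        R = X.T @ X / X.shape[0]
        Rinv_d = np.linalg.solve(R, d)
        w = Rinv_d / (d @ Rinv_d)
        return (X @ w).reshape(data.shape[:-1])    # detection map

    def mf(data, d):
        """Matched filter: CEM applied after centralizing the data."""
        mu = data.reshape(-1, data.shape[-1]).mean(axis=0)
        return cem(data - mu, d - mu)
    ```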

  19. Quantitative morphometric analysis of hepatocellular carcinoma: development of a programmed algorithm and preliminary application.

    PubMed

    Yap, Felix Y; Bui, James T; Knuttinen, M Grace; Walzer, Natasha M; Cotler, Scott J; Owens, Charles A; Berkes, Jamie L; Gaba, Ron C

    2013-01-01

    The quantitative relationship between tumor morphology and malignant potential has not been explored in liver tumors. We designed a computer algorithm to analyze shape features of hepatocellular carcinoma (HCC) and tested feasibility of morphologic analysis. Cross-sectional images from 118 patients diagnosed with HCC between 2007 and 2010 were extracted at the widest index tumor diameter. The tumor margins were outlined, and point coordinates were input into a MATLAB (MathWorks Inc., Natick, Massachusetts, USA) algorithm. Twelve shape descriptors were calculated per tumor: the compactness, the mean radial distance (MRD), the RD standard deviation (RDSD), the RD area ratio (RDAR), the zero crossings, entropy, the mean Feret diameter (MFD), the Feret ratio, the convex hull area (CHA) and perimeter (CHP) ratios, the elliptic compactness (EC), and the elliptic irregularity (EI). The parameters were correlated with the levels of alpha-fetoprotein (AFP) as an indicator of tumor aggressiveness. The quantitative morphometric analysis was technically successful in all cases. The mean parameters were as follows: compactness 0.88±0.086, MRD 0.83±0.056, RDSD 0.087±0.037, RDAR 0.045±0.023, zero crossings 6±2.2, entropy 1.43±0.16, MFD 4.40±3.14 cm, Feret ratio 0.78±0.089, CHA 0.98±0.027, CHP 0.98±0.030, EC 0.95±0.043, and EI 0.95±0.023. MFD and RDAR provided the widest value range for the best shape discrimination. The larger tumors were less compact, more concave, and less ellipsoid than the smaller tumors (P < 0.0001). AFP-producing tumors displayed greater morphologic irregularity based on several parameters, including compactness, MRD, RDSD, RDAR, entropy, and EI (P < 0.05 for all). Computerized HCC image analysis using shape descriptors is technically feasible. Aggressively growing tumors have wider diameters and more irregular margins. Future studies will determine further clinical applications for this morphologic analysis.
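
    A sketch of three of the twelve descriptors computed from an ordered boundary contour; the normalization conventions (e.g., dividing radii by the maximum) are assumptions, since the paper's exact definitions are not given in the abstract:

    ```python
    import numpy as np

    def shape_descriptors(xy):
        """Compactness, mean radial distance (MRD), and RD standard
        deviation (RDSD) from an ordered contour of (N, 2) points."""
        closed = np.vstack([xy, xy[:1]])
        seg = np.diff(closed, axis=0)
        perimeter = np.sum(np.hypot(seg[:, 0], seg[:, 1]))
        x, y = xy[:, 0], xy[:, 1]
        area = 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))
        compactness = 4 * np.pi * area / perimeter**2  # 1.0 for a circle
        r = np.hypot(x - x.mean(), y - y.mean())
        r = r / r.max()  # assumed normalization
        return compactness, r.mean(), r.std()
    ```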

  20. Comparative study of classification algorithms for damage classification in smart composite laminates

    NASA Astrophysics Data System (ADS)

    Khan, Asif; Ryoo, Chang-Kyung; Kim, Heung Soo

    2017-04-01

    This paper presents a comparative study of different classification algorithms for the classification of various types of inter-ply delaminations in smart composite laminates. Improved layerwise theory is used to model delamination at different interfaces along the thickness and longitudinal directions of the smart composite laminate. The input-output data obtained through surface bonded piezoelectric sensor and actuator is analyzed by the system identification algorithm to get the system parameters. The identified parameters for the healthy and delaminated structure are supplied as input data to the classification algorithms. The classification algorithms considered in this study are ZeroR, Classification via regression, Naïve Bayes, Multilayer Perceptron, Sequential Minimal Optimization, Multiclass-Classifier, and Decision tree (J48). The open source software of Waikato Environment for Knowledge Analysis (WEKA) is used to evaluate the classification performance of the classifiers mentioned above via 75-25 holdout and leave-one-sample-out cross-validation regarding classification accuracy, precision, recall, kappa statistic and ROC Area.

  1. C-band Joint Active/Passive Dual Polarization Sea Ice Detection

    NASA Astrophysics Data System (ADS)

    Keller, M. R.; Gifford, C. M.; Winstead, N. S.; Walton, W. C.; Dietz, J. E.

    2017-12-01

    A technique for synergistically combining high-resolution SAR returns with like-frequency passive microwave emissions to detect thin (<30 cm) ice under the difficult conditions of late melt and freeze-up is presented. As the Arctic sea ice cover thins and shrinks, the algorithm offers an approach to adapting existing sensors monitoring thicker ice to provide continuing coverage. Lower-resolution (10-26 km) ice detections with spaceborne radiometers and scatterometers are challenged by rapidly changing thin ice. Synthetic Aperture Radar (SAR) is high resolution (5-100 m), but because of cross-section ambiguities, automated algorithms have had difficulty separating thin ice types from water. The radiometric emissivity of thin ice versus water at microwave frequencies is generally unambiguous in the early stages of ice growth. The method, developed using RADARSAT-2 and AMSR-E data, uses higher-order statistics. For the SAR, the COV (coefficient of variation, the ratio of standard deviation to mean) has fewer ambiguities between ice and water than cross sections, but breaking waves still produce ice-like signatures for both polarizations. For the radiometer, the PRIC (polarization ratio ice concentration) identifies areas that are unambiguously water. Applying cumulative statistics to co-located COV levels adaptively determines an ice/water threshold. Outcomes from extensive testing with Sentinel and AMSR-2 data are shown in the results. The detection algorithm was applied to the freeze-up in the Beaufort, Chukchi, Barents, and East Siberian Seas in 2015 and 2016, spanning mid-September to early November of both years. At the end of the melt, 6 GHz PRIC values are 5-10% greater than those reported by radiometric algorithms at 19 and 37 GHz. During freeze-up, COV separates grease ice (<5 cm thick) from water. As the ice thickens, the COV is less reliable, but adding a mask based on either the PRIC or the cross-pol/co-pol SAR ratio corrects for COV deficiencies. In general, the dual-sensor detection algorithm reports 10-15% higher total ice concentrations than operational scatterometer or radiometer algorithms, mostly from ice edge and coastal areas. In conclusion, the algorithm presented combines high-resolution SAR returns with passive microwave emissions for automated ice detection at SAR resolutions.
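
    A minimal sketch of the SAR-side statistic, computing a windowed coefficient of variation with SciPy; the window size and the downstream adaptive threshold are placeholders, assuming linear-power backscatter values:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sar_cov(img, size=9):
        """Local coefficient of variation (std/mean) of SAR backscatter;
        smooth new ice gives low COV, wind-roughened water high COV."""
        mean = uniform_filter(img, size)
        var = uniform_filter(img * img, size) - mean**2
        return np.sqrt(np.clip(var, 0, None)) / np.clip(mean, 1e-12, None)

    # e.g. a first-cut ice mask, with the threshold chosen adaptively
    # from cumulative COV statistics over radiometrically-confirmed water:
    # ice_mask = sar_cov(sigma0) < adaptive_threshold
    ```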

  2. THz computed tomography system with zero-order Bessel beam

    NASA Astrophysics Data System (ADS)

    Niu, Liting; Wu, Qiao; Wang, Kejia; Liu, Jinsong; Yang, Zhengang

    2018-01-01

    Terahertz (THz) waves can penetrate many optically opaque dielectric materials such as plastics, ceramics, and colorants, making it possible to reveal the internal structures of these materials. We have built a THz computed tomography (CT) system with a 0.3 THz zero-order Bessel beam, exploiting the non-diffracting property of the Bessel beam to improve the depth of focus of the imaging system. The THz CT system has been used to inspect a paper cup with a metal rod inside. Finally, the acquired projection data were processed by the filtered back-projection algorithm, and the reconstructed image of the sample was obtained.
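
    A self-contained sketch of the reconstruction step using scikit-image's filtered back-projection on a synthetic phantom standing in for the THz projections (a recent scikit-image version is assumed for the `filter_name` argument):

    ```python
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    # Forward-project a phantom to emulate the projection data, then
    # reconstruct with ramp-filtered back-projection.
    image = rescale(shepp_logan_phantom(), 0.25)
    angles = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(image, theta=angles)
    reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")
    print(np.abs(reconstruction - image).mean())  # small residual error
    ```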

  3. Accurate Singular Values and Differential QD Algorithms

    DTIC Science & Technology

    1992-07-01

    [Abstract not available; the record text is extraction residue from the report's table of contents (the Cholesky algorithm; the quotient-difference algorithm; incorporation of shifts; shifted qd algorithms; effects of finite precision; error analysis overview; high relative accuracy in the presence of finite precision), together with a fragment noting that it was preferable to replace the DK zero-shift QR transform by two steps of zero-shift LR implemented in a qd (quotient-difference) format.]

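    For context, the zero-shift differential qd (dqd) transform at the heart of such algorithms is short enough to sketch: it maps the squared diagonal (q) and off-diagonal (e) data of a bidiagonal matrix to a new pair, and under iteration the q values converge to the squared singular values. This is a bare illustration of the transform, without the shifts and deflation a practical routine needs:

```python
import numpy as np

def dqd_step(q, e):
    """One zero-shift differential qd transform on squared bidiagonal data."""
    n = len(q)
    qq, ee = np.empty(n), np.empty(n - 1)
    d = q[0]
    for k in range(n - 1):
        qq[k] = d + e[k]
        ee[k] = e[k] * (q[k + 1] / qq[k])
        d = d * (q[k + 1] / qq[k])
    qq[-1] = d
    return qq, ee

a = np.array([2.0, 1.0, 0.5])        # diagonal of bidiagonal B
b = np.array([1.0, 0.25])            # superdiagonal of B
q, e = a**2, b**2
for _ in range(200):                 # iterate until the e's are negligible
    q, e = dqd_step(q, e)
print(np.sqrt(np.sort(q)[::-1]))     # converged singular values of B
print(np.linalg.svd(np.diag(a) + np.diag(b, 1), compute_uv=False))
```
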
  4. Simultaneous segmentation of the bone and cartilage surfaces of a knee joint in 3D

    NASA Astrophysics Data System (ADS)

    Yin, Y.; Zhang, X.; Anderson, D. D.; Brown, T. D.; Hofwegen, C. Van; Sonka, M.

    2009-02-01

    We present a novel framework for the simultaneous segmentation of multiple interacting surfaces belonging to multiple mutually interacting objects. The method is a non-trivial extension of our previously reported optimal multi-surface segmentation. Considering an example application of knee-cartilage segmentation, the framework consists of the following main steps: 1) Shape model construction: building a mean shape for each bone of the joint (femur, tibia, patella) from interactively segmented volumetric datasets, and using the resulting mean-shape model to identify cartilage, non-cartilage, and transition areas on the mean-shape bone model surfaces. 2) Presegmentation: employment of an iterative optimal surface detection method to achieve approximate segmentation of individual bone surfaces. 3) Cross-object surface mapping: detection of inter-bone equidistant separating sheets to help identify corresponding vertex pairs for all interacting surfaces. 4) Multi-object, multi-surface graph construction and final segmentation: construction of a single multi-bone, multi-surface graph so that two surfaces (bone and cartilage) with zero or non-zero intervening distance can be detected for each bone of the joint, according to whether cartilage is locally absent or present on the bone. To define inter-object relationships, corresponding vertex pairs identified using the separating sheets were interlinked in the graph. The graph optimization algorithm acted on the entire multi-object, multi-surface graph to yield a globally optimal solution. The segmentation framework was tested on 16 MR-DESS knee-joint datasets from the Osteoarthritis Initiative database. The average signed surface-positioning error for the 6 detected surfaces ranged from 0.00 to 0.12 mm. When independently initialized, the signed reproducibility error of bone and cartilage segmentation ranged from 0.00 to 0.26 mm. The results showed that this framework provides robust, accurate, and reproducible segmentation of the knee-joint bone and cartilage surfaces of the femur, tibia, and patella. As a general segmentation tool, the developed framework can be applied to a broad range of multi-object segmentation problems.

  5. An O(log² N) parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix

    NASA Technical Reports Server (NTRS)

    Swarztrauber, Paul N.

    1989-01-01

    An O(log² N) parallel algorithm is presented for computing the eigenvalues of a symmetric tridiagonal matrix using a parallel algorithm for computing the zeros of the characteristic polynomial. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level, which ensures that different processors compute different zeros. The exact behavior of the polynomials at the interval endpoints is used to eliminate the usual problems induced by finite precision arithmetic.

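    The interval-isolation idea, each interval containing exactly one zero of the characteristic polynomial, has a simple serial counterpart based on a Sturm-sequence count: the number of eigenvalues of a symmetric tridiagonal matrix below x equals the number of negative pivots in the LDL^T factorization of T - xI. The bisection sketch below illustrates that counting principle, not the paper's tree-structured parallel recurrence:

```python
import numpy as np

def count_below(d, e, x):
    """Number of eigenvalues of the symmetric tridiagonal (d, e) below x,
    via the signs of the pivots of T - x*I (a Sturm-sequence count)."""
    count, p = 0, 1.0
    for i in range(len(d)):
        p = d[i] - x - (e[i - 1] ** 2 / p if i else 0.0)
        if p == 0.0:
            p = -1e-300          # nudge an exact zero pivot
        count += p < 0
    return count

def tridiag_eigs(d, e, tol=1e-12):
    ee = np.concatenate(([0.0], np.abs(e), [0.0]))
    lo = float(np.min(d - ee[:-1] - ee[1:]))   # Gershgorin bounds
    hi = float(np.max(d + ee[:-1] + ee[1:]))
    eigs = []
    for k in range(1, len(d) + 1):             # isolate the k-th eigenvalue
        a, b = lo, hi
        while b - a > tol:
            m = 0.5 * (a + b)
            if count_below(d, e, m) < k:
                a = m
            else:
                b = m
        eigs.append(0.5 * (a + b))
    return np.array(eigs)

d = np.array([2.0, 3.0, 1.0, 4.0])
e = np.array([1.0, 0.5, 0.2])
print(tridiag_eigs(d, e))
print(np.linalg.eigvalsh(np.diag(d) + np.diag(e, 1) + np.diag(e, -1)))
```
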
  6. Real-time method and apparatus for measuring the temperature of a fluorescing phosphor

    DOEpatents

    Britton, Jr., Charles L.; Beshears, David L.; Simpson, Marc L.; Cates, Michael R.; Allison, Steve W.

    1999-01-01

    A method for determining the temperature of a fluorescing phosphor is provided, together with an apparatus for performing the method. The apparatus includes a photodetector for detecting light emitted by a phosphor irradiated with an excitation pulse and for converting the detected light into an electrical signal. The apparatus further includes a differentiator for differentiating the electrical signal and a zero-crossing discrimination circuit that outputs a pulse signal having a pulse width corresponding to the time period between the start of the excitation pulse and the time when the differentiated electrical signal reaches zero. The width of the output pulse signal is proportional to the decay-time constant of the phosphor.

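    A digital analogue of the differentiate-and-discriminate circuit can be sketched as follows: differentiate a sampled detector waveform and report the time from the excitation start to the first zero crossing of the derivative. The synthetic rise/decay pulse and its time constants are invented for illustration and do not reproduce the patent's analog circuit:

```python
import numpy as np

def zero_cross_width(t, sig, t_start=0.0):
    """Time from excitation start to the first zero crossing of d(sig)/dt."""
    ds = np.gradient(sig, t)
    flips = np.where(np.diff(np.sign(ds)) != 0)[0]
    i = flips[0]
    # Linear interpolation for the crossing instant between samples i and i+1.
    t_zero = t[i] - ds[i] * (t[i + 1] - t[i]) / (ds[i + 1] - ds[i])
    return t_zero - t_start

t = np.linspace(0.0, 5e-3, 5000)                 # seconds
tau_rise, tau_decay = 5e-5, 1e-3                 # hypothetical time constants
pulse = (1.0 - np.exp(-t / tau_rise)) * np.exp(-t / tau_decay)
print(f"pulse width: {zero_cross_width(t, pulse) * 1e6:.1f} us")
```
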
  7. A Pole-Zero Filter Cascade Provides Good Fits to Human Masking Data and to Basilar Membrane and Neural Data

    NASA Astrophysics Data System (ADS)

    Lyon, Richard F.

    2011-11-01

    A cascade of two-pole-two-zero filters with level-dependent pole and zero dampings, with few parameters, can provide a good match to human psychophysical and physiological data. The model has been fitted to data on detection threshold for tones in notched-noise masking, including bandwidth and filter shape changes over a wide range of levels, and has been shown to provide better fits with fewer parameters compared to other auditory filter models such as gammachirps. Originally motivated as an efficient machine implementation of auditory filtering related to the WKB analysis method of cochlear wave propagation, such filter cascades also provide good fits to mechanical basilar membrane data, and to auditory nerve data, including linear low-frequency tail response, level-dependent peak gain, sharp tuning curves, nonlinear compression curves, level-independent zero-crossing times in the impulse response, realistic instantaneous frequency glides, and appropriate level-dependent group delay even with minimum-phase response. As part of exploring different level-dependent parameterizations of such filter cascades, we have identified a simple sufficient condition for stable zero-crossing times, based on the shifting property of the Laplace transform: simply move all the s-domain poles and zeros by equal amounts in the real-s direction. Such pole-zero filter cascades are efficient front ends for machine hearing applications, such as music information retrieval, content identification, speech recognition, and sound indexing.

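    The stated sufficient condition follows from the shifting property of the Laplace transform: moving every pole and zero of H(s) by the same real amount a turns the impulse response h(t) into e^(-at) h(t), which leaves the zero-crossing times untouched. A quick numerical check with scipy (the pole and zero values are arbitrary illustrations, not fitted auditory-filter parameters):

```python
import numpy as np
from scipy import signal

def crossing_times(t, y):
    """Interpolated zero-crossing times of a sampled waveform."""
    i = np.where(np.diff(np.sign(y)) != 0)[0]
    return t[i] - y[i] * (t[i + 1] - t[i]) / (y[i + 1] - y[i])

zeros = np.array([-1.5 + 6j, -1.5 - 6j])
poles = np.array([-1 + 5j, -1 - 5j, -2 + 8j, -2 - 8j])
t = np.linspace(0, 4, 40001)

def impulse_response(z, p):
    # Build real-coefficient numerator/denominator from conjugate pairs.
    num, den = np.real(np.poly(z)), np.real(np.poly(p))
    return signal.impulse(signal.TransferFunction(num, den), T=t)[1]

h0 = impulse_response(zeros, poles)
h1 = impulse_response(zeros - 0.5, poles - 0.5)  # shift all singularities left
c0, c1 = crossing_times(t, h0), crossing_times(t, h1)
n = min(len(c0), len(c1))
print(np.max(np.abs(c0[:n] - c1[:n])))           # ~0: crossing times preserved
```
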
  8. Dielectrophoretic spectroscopy using a microscopic electrode array

    NASA Astrophysics Data System (ADS)

    Kirmani, Syed Abdul Mannan; Gudagunti, Fleming Dackson; Velmanickam, Logeeshan; Nawarathna, Dharmakeerthi; Lima, Ivan T.

    2017-02-01

    Dielectrophoresis (DEP) is a commonly used technique in biomedical engineering for manipulating biomolecules. DEP is the force acting on dielectric particles when they are exposed to non-uniform electric fields. The DEP effect can be divided into three categories: positive (dielectric particles are attracted to the electrodes), negative, and zero-force DEP. The cross-over frequency is the frequency at which the DEP force is equal to zero; it depends on the conductivity and the permittivity of the particles and of the suspending medium. The DEP cross-over frequency has been utilized in detecting and quantifying biomolecules. A manual procedure is commonly used to estimate the cross-over frequency of biomolecules, so the accuracy of this detection method is significantly limited. To address this issue, we designed and tested an automated procedure to carry out DEP spectroscopy on dielectric particles dissolved in a biological buffer solution. Our method efficiently measures the effect of the DEP force through a live video feed from the microscope camera and performs real-time image processing. It records the change in the fluorescence emission as the system automatically scans the electric frequency of the function generator over a specified time interval. We demonstrated the effectiveness of the method by extracting the crossover frequencies and the DEP spectra of polystyrene beads with blue color dye (1000 nm diameter) and green fluorescent polystyrene beads with 500 nm diameter. This approach can lead to the development of a biosensor with significantly higher sensitivity than existing detection methods.

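    The cross-over frequency the automated scan looks for is where the real part of the Clausius-Mossotti factor changes sign. A self-contained estimate for a homogeneous sphere is sketched below; the particle and medium properties are invented placeholders (real sub-micron beads additionally need surface-conductance corrections):

```python
import numpy as np
from scipy.optimize import brentq

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def re_cm(f, eps_p, sig_p, eps_m, sig_m):
    """Real part of the Clausius-Mossotti factor at frequency f (Hz)."""
    w = 2 * np.pi * f
    ep = eps_p * EPS0 - 1j * sig_p / w   # complex particle permittivity
    em = eps_m * EPS0 - 1j * sig_m / w   # complex medium permittivity
    return ((ep - em) / (ep + 2 * em)).real

# Placeholder properties: polystyrene-like particle in a dilute buffer.
args = dict(eps_p=2.55, sig_p=2e-3, eps_m=78.0, sig_m=1e-4)
f_cross = brentq(lambda f: re_cm(f, **args), 1e3, 1e9)
print(f"crossover frequency ~ {f_cross / 1e6:.2f} MHz")
```
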
  9. Achieving Crossed Strong Barrier Coverage in Wireless Sensor Network.

    PubMed

    Han, Ruisong; Yang, Wei; Zhang, Li

    2018-02-10

    Barrier coverage has been widely used to detect intrusions in wireless sensor networks (WSNs). It can fulfill the monitoring task while extending the lifetime of the network. Though barrier coverage in WSNs has been intensively studied in recent years, previous research failed to consider the problem of intrusion in transversal directions. If an intruder knows the deployment configuration of sensor nodes, then there is a high probability that it may traverse the whole target region from particular directions, without being detected. In this paper, we introduce the concept of crossed barrier coverage that can overcome this defect. We prove that the problem of finding the maximum number of crossed barriers is NP-hard and integer linear programming (ILP) is used to formulate the optimization problem. The branch-and-bound algorithm is adopted to determine the maximum number of crossed barriers. In addition, we also propose a multi-round shortest path algorithm (MSPA) to solve the optimization problem, which works heuristically to guarantee efficiency while maintaining near-optimal solutions. Several conventional algorithms for finding the maximum number of disjoint strong barriers are also modified to solve the crossed barrier problem and for the purpose of comparison. Extensive simulation studies demonstrate the effectiveness of MSPA.

  10. Research on the Forward and Reverse Calculation Based on the Adaptive Zero-Velocity Interval Adjustment for the Foot-Mounted Inertial Pedestrian-Positioning System

    PubMed Central

    Wang, Qiuying; Guo, Zheng; Sun, Zhiguo; Cui, Xufei; Liu, Kaiyue

    2018-01-01

    Pedestrian-positioning technology based on the foot-mounted micro inertial measurement unit (MIMU) plays an important role in the field of indoor navigation and has received extensive attention in recent years. However, the positioning accuracy of the inertial-based pedestrian-positioning method degrades rapidly because of the relatively low measurement accuracy of the sensors. The zero-velocity update (ZUPT) is an error-correction method proposed to correct the cumulative error by exploiting the fact that the foot is regularly stationary during ordinary gait, which limits the position error growth of the system. However, the traditional ZUPT performs poorly when pedestrians move faster, because the foot touchdown time becomes short, which decreases the positioning accuracy. Considering these problems, a forward and reverse calculation method based on adaptive zero-velocity interval adjustment for the foot-mounted MIMU location method is proposed in this paper. To address the inaccuracy of the zero-velocity interval detector during fast pedestrian movement, where the contact time of the foot on the ground is short, an adaptive zero-velocity interval detection algorithm based on fuzzy logic reasoning is presented. In addition, to improve the effectiveness of the ZUPT algorithm, forward and reverse multiple solutions are presented. Finally, after presenting the basic principles and derivation of the method, the MTi-G710 produced by the Xsens company is used to complete the tests. The experimental results verify the correctness and applicability of the proposed method. PMID:29883399

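    A stripped-down stand-in for the adaptive detector (the paper's fuzzy-logic rule base is not reproduced here) estimates the gait frequency from the gyro spectrum and lets the zero-velocity threshold grow with cadence, since faster gaits leave shorter, noisier stance phases. The linear threshold map and all constants below are assumptions for illustration:

```python
import numpy as np

def gait_frequency(gyro_mag, fs):
    """Dominant stride frequency (Hz) from the gyro magnitude spectrum."""
    spec = np.abs(np.fft.rfft(gyro_mag - gyro_mag.mean()))
    freqs = np.fft.rfftfreq(len(gyro_mag), 1.0 / fs)
    band = (freqs > 0.5) & (freqs < 5.0)          # plausible walking band
    return freqs[band][np.argmax(spec[band])]

def zvi_mask(gyro_mag, fs, base_thresh=0.6, slope=0.4):
    """Zero-velocity samples: gyro magnitude under a cadence-adapted threshold."""
    f_gait = gait_frequency(gyro_mag, fs)
    thresh = base_thresh + slope * f_gait         # rad/s, assumed linear map
    return gyro_mag < thresh

fs = 100.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
gyro = np.abs(3.0 * np.sin(2 * np.pi * 1.8 * t)) + 0.1 * rng.standard_normal(t.size)
mask = zvi_mask(gyro, fs)
print(f"gait ~ {gait_frequency(gyro, fs):.2f} Hz, stance fraction {mask.mean():.2f}")
```
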
  11. CLMSVault: A Software Suite for Protein Cross-Linking Mass-Spectrometry Data Analysis and Visualization.

    PubMed

    Courcelles, Mathieu; Coulombe-Huntington, Jasmin; Cossette, Émilie; Gingras, Anne-Claude; Thibault, Pierre; Tyers, Mike

    2017-07-07

    Protein cross-linking mass spectrometry (CL-MS) enables the sensitive detection of protein interactions and the inference of protein complex topology. The detection of chemical cross-links between protein residues can identify intra- and interprotein contact sites or provide physical constraints for molecular modeling of protein structure. Recent innovations in cross-linker design, sample preparation, mass spectrometry, and software tools have significantly improved CL-MS approaches. Although a number of algorithms now exist for the identification of cross-linked peptides from mass spectral data, a dearth of user-friendly analysis tools represents a practical bottleneck to the broad adoption of the approach. To facilitate the analysis of CL-MS data, we developed CLMSVault, a software suite designed to leverage existing CL-MS algorithms and provide intuitive and flexible tools for cross-platform data interpretation. CLMSVault stores and combines complementary information obtained from different cross-linkers and search algorithms. CLMSVault provides filtering, comparison, and visualization tools to support CL-MS analyses and includes a workflow for label-free quantification of cross-linked peptides. An embedded 3D viewer enables the visualization of quantitative data and the mapping of cross-linked sites onto PDB structural models. We demonstrate the application of CLMSVault for the analysis of a noncovalent Cdc34-ubiquitin protein complex cross-linked under different conditions. CLMSVault is open-source software (available at https://gitlab.com/courcelm/clmsvault.git), and a live demo is available at http://democlmsvault.tyerslab.com/.

  12. Semiautomated tremor detection using a combined cross-correlation and neural network approach

    NASA Astrophysics Data System (ADS)

    Horstmann, T.; Harrington, R. M.; Cochran, E. S.

    2013-09-01

    Despite observations of tectonic tremor in many locations around the globe, the emergent phase arrivals, low-amplitude waveforms, and variable event durations make automatic detection a nontrivial task. In this study, we employ a new method to identify tremor in large data sets using a semiautomated technique. The method first reduces the data volume with an envelope cross-correlation technique, followed by a Self-Organizing Map (SOM) algorithm to identify and classify event types. The method detects tremor in an automated fashion after calibrating for a specific data set, hence we refer to it as "semiautomated". We apply the semiautomated detection algorithm to a newly acquired data set of waveforms from a temporary deployment of 13 seismometers near Cholame, California, from May 2010 to July 2011. We manually identify tremor events in a 3-week-long test data set, compare them to the SOM output, and find a detection accuracy of 79.5%. Detection accuracy improves with increasing signal-to-noise ratios and the number of available stations. We find detection completeness of 96% for tremor events with signal-to-noise ratios above 3 and optimal results when data from at least 10 stations are available. We compare the SOM algorithm to the envelope correlation method of Wech and Creager and find that the SOM performs significantly better, at least for the data set examined here. Using the SOM algorithm, we detect 2606 tremor events with a cumulative signal duration of nearly 55 h during the 13-month deployment. Overall, the SOM algorithm is shown to be a flexible new method that utilizes characteristics of the waveforms to identify tremor from noise or other seismic signals.

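    The data-reduction stage can be sketched directly: band-pass each trace, form its Hilbert envelope, and keep time windows in which the smoothed envelopes correlate strongly across stations, the signature of coherent tremor. The filter band, window length, and correlation cutoff below are illustrative choices, not the study's calibrated values:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope(trace, fs, band=(2.0, 8.0), smooth_s=2.0):
    """Smoothed Hilbert envelope of a band-passed seismic trace."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, trace)))
    win = int(smooth_s * fs)
    return np.convolve(env, np.ones(win) / win, mode="same")

def coherent_windows(traces, fs, win_s=60.0, cc_min=0.5):
    """Start indices of windows whose envelopes correlate across all pairs."""
    envs = [envelope(tr, fs) for tr in traces]
    n = int(win_s * fs)
    hits = []
    for start in range(0, len(envs[0]) - n, n):
        segs = [e[start:start + n] for e in envs]
        ccs = [np.corrcoef(a, b)[0, 1] for i, a in enumerate(segs)
               for b in segs[i + 1:]]
        if np.min(ccs) > cc_min:
            hits.append(start)
    return hits

# Synthetic demo: a common 4 Hz burst buried in independent station noise.
fs = 50.0
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(0)
burst = np.exp(-((t - 150) / 20) ** 2) * np.sin(2 * np.pi * 4 * t)
traces = [burst + 0.5 * rng.standard_normal(t.size) for _ in range(3)]
print(coherent_windows(traces, fs))
```
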
  13. Experiments with conjugate gradient algorithms for homotopy curve tracking

    NASA Technical Reports Server (NTRS)

    Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T.; Kamat, Manohar P.; Walker, Homer F.

    1991-01-01

    There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Here, variants of the conjugate gradient algorithm are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. The test problems used include actual large scale, sparse structural mechanics problems.

  14. Moving target parameter estimation of SAR after two looks cancellation

    NASA Astrophysics Data System (ADS)

    Gan, Rongbing; Wang, Jianguo; Gao, Xiang

    2005-11-01

    Moving target detection for synthetic aperture radar (SAR) by two-look cancellation is studied. First, two looks are obtained from the first and second halves of the synthetic aperture. After two-look cancellation, moving targets are retained while stationary targets are removed. A Constant False Alarm Rate (CFAR) detector then detects the moving targets. The ground-range velocity and cross-range velocity of a moving target can be obtained from the position shift between the two looks. We developed a method to estimate the cross-range shift due to slant-range motion: the cross-range shift is estimated from the Doppler frequency center (DFC), which is itself estimated with the Wigner-Ville Distribution (WVD). Because the range position and the cross-range position before correction are known, estimation of the DFC is much easier and more efficient. Finally, experimental results show that the algorithms perform well and estimate the moving-target parameters accurately.

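    The CFAR stage is standard enough to sketch. A one-dimensional cell-averaging CFAR estimates clutter power from training cells around a guard region and flags cells exceeding a scaled estimate; the window sizes and false-alarm rate below are illustrative:

```python
import numpy as np

def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-4):
    """Cell-averaging CFAR detector over a 1-D power profile."""
    n = len(power)
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)  # scaling for given Pfa
    detections = np.zeros(n, dtype=bool)
    half = n_train // 2
    for i in range(half + n_guard, n - half - n_guard):
        lead = power[i - n_guard - half:i - n_guard]   # training cells (left)
        lag = power[i + n_guard + 1:i + n_guard + half + 1]  # (right)
        noise = (lead.sum() + lag.sum()) / n_train
        detections[i] = power[i] > alpha * noise
    return detections

rng = np.random.default_rng(1)
profile = rng.exponential(1.0, 1024)   # Rayleigh clutter (exponential power)
profile[500] += 30.0                   # residual moving-target energy
print(np.flatnonzero(ca_cfar(profile)))
```
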
  15. The Even-Rho and Even-Epsilon Algorithms for Accelerating Convergence of a Numerical Sequence

    DTIC Science & Technology

    1981-12-01

    Two algorithms for accelerating the convergence of a numerical sequence are considered. One, the even-epsilon algorithm, involves the calculation of the array of Shanks transforms or, equivalently, of the related Padé table; the other, the even-rho algorithm, is closely related. Entries in either table can be equal or nearly equal, leading to zero or very small divisors. Computer programs implementing these algorithms are given along with sample output.

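    For context, the classical scalar epsilon algorithm these variants build on fits in a few lines; even-indexed columns of the table are the accelerated estimates, and the reciprocal of a difference shows exactly where the zero or very small divisors arise. The demo series (partial sums of the alternating series for ln 2) is illustrative:

```python
import numpy as np

def epsilon_table_limit(s):
    """Wynn's epsilon algorithm; returns the deepest even-column estimate."""
    eps_prev = np.zeros(len(s) + 1)           # epsilon_{-1} column (zeros)
    eps_curr = np.asarray(s, dtype=float)     # epsilon_0 column: partial sums
    best = eps_curr[-1]
    for col in range(1, len(s)):
        diff = np.diff(eps_curr)              # zero/small divisors live here
        with np.errstate(divide="ignore"):
            eps_next = eps_prev[1:len(eps_curr)] + 1.0 / diff
        eps_prev, eps_curr = eps_curr, eps_next
        if col % 2 == 0:                      # even columns estimate the limit
            best = eps_curr[-1]
    return best

partial_sums = np.cumsum([(-1.0) ** k / (k + 1) for k in range(12)])
print(epsilon_table_limit(partial_sums), np.log(2.0))
```
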
  16. Real-time method and apparatus for measuring the decay-time constant of a fluorescing phosphor

    DOEpatents

    Britton, Jr., Charles L.; Beshears, David L.; Simpson, Marc L.; Cates, Michael R.; Allison, Steve W.

    1999-01-01

    A method for determining the decay-time constant of a fluorescing phosphor is provided, together with an apparatus for performing the method. The apparatus includes a photodetector for detecting light emitted by a phosphor irradiated with an excitation pulse and for converting the detected light into an electrical signal. The apparatus further includes a differentiator for differentiating the electrical signal and a zero-crossing discrimination circuit that outputs a pulse signal having a pulse width corresponding to the time period between the start of the excitation pulse and the time when the differentiated electrical signal reaches zero. The width of the output pulse signal is proportional to the decay-time constant of the phosphor.

  17. Entropy-functional-based online adaptive decision fusion framework with application to wildfire detection in video.

    PubMed

    Gunay, Osman; Toreyin, Behçet Ugur; Kose, Kivanc; Cetin, A Enis

    2012-05-01

    In this paper, an entropy-functional-based online adaptive decision fusion (EADF) framework is developed for image analysis and computer vision applications. In this framework, it is assumed that the compound algorithm consists of several subalgorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular subalgorithm. Decision values are linearly combined with weights that are updated online according to an active fusion method based on performing entropic projections onto convex sets describing subalgorithms. It is assumed that there is an oracle, who is usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system was developed to evaluate the performance of the decision fusion algorithm. In this case, image data arrive sequentially, and the oracle is the security guard of the forest lookout tower, verifying the decision of the combined algorithm. The simulation results are presented.

  18. Method for determining and displaying the spatial distribution of a spectral pattern of received light

    DOEpatents

    Bennett, C.L.

    1996-07-23

    An imaging Fourier transform spectrometer is described having a Fourier transform infrared spectrometer providing a series of images to a focal plane array camera. The focal plane array camera is clocked to a multiple of the zero-crossing occurrences caused by a moving mirror of the Fourier transform infrared spectrometer and detected by a laser detector, such that the frame capture rate of the focal plane array camera corresponds to a multiple of the zero-crossing rate of the Fourier transform infrared spectrometer. The images are transmitted to a computer for processing such that representations of the images as viewed in the light of an arbitrary spectral "fingerprint" pattern can be displayed on a monitor or otherwise stored and manipulated by the computer. 2 figs.

  19. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm "WAVDETECT," part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican Hat," wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low-counts regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to a simulated Chandra ACIS-I image of the Lockman Hole region.

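    The core correlation step is easy to sketch: convolve a counts image with a Marr ("Mexican Hat") kernel, whose vanishing moments suppress flat background, and flag pixels whose coefficients exceed a significance threshold. Here the threshold is estimated by Monte Carlo over pure-background images, a crude stand-in for WAVDETECT's analytic sampling distributions; the scale, background rate, and quantile are illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

def mexican_hat(sigma, size=None):
    """2-D Marr ("Mexican Hat") kernel with zero total sum."""
    size = size or int(8 * sigma) | 1
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    r2 = (x**2 + y**2) / sigma**2
    k = (2.0 - r2) * np.exp(-r2 / 2.0)
    return k - k.mean()                  # enforce the vanishing moment

def detect(counts, sigma=2.0, bkg_rate=1.0, q=99.99, ntrials=50):
    kern = mexican_hat(sigma)
    rng = np.random.default_rng(0)
    # Null distribution of coefficients from Poisson background only.
    null = [convolve(rng.poisson(bkg_rate, counts.shape).astype(float), kern)
            for _ in range(ntrials)]
    thresh = np.percentile(np.concatenate([c.ravel() for c in null]), q)
    return convolve(counts.astype(float), kern) > thresh

rng = np.random.default_rng(1)
img = rng.poisson(1.0, (128, 128)).astype(float)
yy, xx = np.mgrid[:128, :128]
img += rng.poisson(5.0 * np.exp(-((xx - 64)**2 + (yy - 64)**2) / 8.0))  # a source
print(np.argwhere(detect(img)).mean(axis=0))   # detections cluster near (64, 64)
```
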
  20. A Personal Inertial Navigation System Based on Multiple Distributed, Nine-Degrees-Of-Freedom, Inertial Measurement Units

    DTIC Science & Technology

    2016-12-01

    A personal inertial navigation system based on multiple distributed nine-degrees-of-freedom (9-DOF) inertial measurement units (IMUs), using a gait-phase-detection and zero-velocity-update algorithm and a quaternion-based complementary filter developed at the Naval Postgraduate School, is developed, and the performance of a consumer-grade 9-DOF IMU is evaluated. Keywords: inertial measurement unit, complementary filter, gait phase detection, zero velocity update, MEMS, IMU, AHRS, GPS denied, distributed sensor, virtual sensor.

  1. Biclustering sparse binary genomic data.

    PubMed

    van Uitert, Miranda; Meuleman, Wouter; Wessels, Lodewyk

    2008-12-01

    Genomic datasets often consist of large, binary, sparse data matrices. In such a dataset, one is often interested in finding contiguous blocks that (mostly) contain ones. This is a biclustering problem, and while many algorithms have been proposed to deal with gene expression data, only two algorithms have been proposed that specifically deal with binary matrices. None of the gene expression biclustering algorithms can handle the large number of zeros in sparse binary matrices. The two proposed binary algorithms failed to produce meaningful results. In this article, we present a new algorithm that is able to extract biclusters from sparse, binary datasets. A powerful feature is that biclusters with different numbers of rows and columns can be detected, varying from many rows to few columns and few rows to many columns. It allows the user to guide the search towards biclusters of specific dimensions. When applying our algorithm to an input matrix derived from TRANSFAC, we find transcription factors with distinctly dissimilar binding motifs, but a clear set of common targets that are significantly enriched for GO categories.

  2. On-line Flagging of Anomalies and Adaptive Sequential Hypothesis Testing for Fine-feature Characterization of Geosynchronous Satellites

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Payne, T.; Kinateder, K.; Dao, P.; Beecher, E.; Boone, D.; Elliott, B.

    The objective of on-line flagging in this paper is to perform interactive assessment of geosynchronous satellite anomalies such as cross-tagging of satellites in a cluster, solar panel offset changes, etc. This assessment utilizes a Bayesian belief propagation procedure and includes automated updating of baseline signature data for the satellite while accounting for seasonal changes. Its purpose is to enable an ongoing, automated assessment of satellite behavior through its life cycle using the photometry data collected during the synoptic search performed by a ground- or space-based sensor as a part of its metrics mission. Changes in satellite features are reported along with the probabilities of Type I and Type II errors. The objective of adaptive sequential hypothesis testing in this paper is to define future sensor tasking for the purpose of characterizing fine features of the satellite. The tasking is designed to maximize new information with the fewest photometry data points to be collected during the synoptic search by a ground- or space-based sensor; its calculation is based on information-entropy techniques. The tasking is defined by considering a sequence of hypotheses regarding the fine features of the satellite. The optimal observation conditions are then ordered so as to maximize new information about a chosen fine feature. The combined objective of on-line flagging and adaptive sequential hypothesis testing is to progressively discover new information about the features of a geosynchronous satellite by leveraging the regular but sparse cadence of data collection during the synoptic search performed by a ground- or space-based sensor.

    Automated Algorithm to Detect Changes in Geostationary Satellite's Configuration and Cross-Tagging (Phan Dao, Air Force Research Laboratory/RVB): By characterizing geostationary satellites based on photometry and color photometry, analysts can evaluate satellite operational status and affirm a satellite's true identity. The process of ingesting photometry data and deriving satellite physical characteristics can be directed by analysts in a batch mode, meaning using a batch of recent data, or by automated algorithms in an on-line mode in which the assessment is updated with each new data point. Tools used for detecting changes to a satellite's status or identity, whether operated with a human in the loop or as automated algorithms, are generally not built to detect with minimum latency and traceable confidence intervals. To alleviate those deficiencies, we investigate the use of Hidden Markov Models (HMM), in a Bayesian network framework, to infer the hidden state (changed or unchanged) of a three-axis-stabilized geostationary satellite using broadband and color photometry. Unlike frequentist statistics, which exploit only the stationary statistics of the observables in the database, HMM also exploits their temporal pattern. The algorithm also operates in a “learning” mode to gradually evolve the HMM and accommodate natural changes such as the seasonal dependence of a GEO satellite's light curve. Our technique is designed to operate with missing color data: the version that ingests both panchromatic and color data can accommodate gaps in color photometry, which matters because, while color indices (e.g., Johnson R and B) enhance the belief (probability) of a hidden state, in real-world situations flux data are collected sporadically in untasked collects and color data are limited and sometimes absent. Fluxes are measured with experimental error whose effect on the algorithm will be studied. Photometry data in the AFRL Geo Color Photometry Catalog and Geo Observations with Latitudinal Diversity Simultaneously (GOLDS) data sets are used to simulate a wide variety of operational changes and identity cross-tags. The algorithm is tested against simulated sequences of observed magnitudes, mimicking the cadence of untasked SSN and other ground sensors, occasional operational changes, and possible cross-tags of in-cluster satellites. We would like to show that the on-line algorithm can detect change, sometimes right after the first post-change data point is analyzed, for zero latency. We also want to show the unsupervised “learning” capability that allows the HMM to evolve with time without user assistance; for example, users are not required to “label” the true state of the data points.

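    In its simplest form, the on-line flagging reduces to a two-state hidden Markov filter run over photometry residuals: propagate the belief through a sticky transition matrix and update it with each new magnitude as it arrives. Everything numeric below (transition probabilities, Gaussian emission models, the residual stream) is an invented placeholder, not the calibrated AFRL model:

```python
import numpy as np
from scipy.stats import norm

# States: 0 = unchanged, 1 = changed (e.g., cross-tag or panel offset).
TRANS = np.array([[0.999, 0.001],
                  [0.010, 0.990]])           # sticky dynamics, assumed
EMIT = [norm(loc=0.0, scale=0.15),           # residual vs. baseline signature
        norm(loc=0.6, scale=0.25)]           # shifted brightness when changed

def forward_step(belief, residual):
    """One Bayesian belief-propagation update for a new photometry point."""
    pred = TRANS.T @ belief                           # predict
    like = np.array([d.pdf(residual) for d in EMIT])  # evaluate evidence
    post = like * pred
    return post / post.sum()

belief = np.array([0.99, 0.01])
residuals = [0.05, -0.1, 0.02, 0.55, 0.7, 0.62]       # change after 3rd point
for r in residuals:
    belief = forward_step(belief, r)
    print(f"P(changed | data) = {belief[1]:.3f}")
```
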
  3. Detection of bulk explosives using the GPR only portion of the HSTAMIDS system

    NASA Astrophysics Data System (ADS)

    Tabony, Joshua; Carlson, Douglas O.; Duvoisin, Herbert A., III; Torres-Rosario, Juan

    2010-04-01

    The legacy AN/PSS-14 (Army-Navy Portable Special Search-14) Handheld Mine Detecting Set (also called HSTAMIDS for Handheld Standoff Mine Detection System) has proven itself over the last 7 years as the state-of-the-art in land mine detection, both for the US Army and for Humanitarian Demining groups. Its dual GPR (Ground Penetrating Radar) and MD (Metal Detection) sensor has provided receiver operating characteristic curves (probability of detection or Pd versus false alarm rate or FAR) that routinely set the mark for such devices. Since its inception and type-classification in 2003 as the US (United States) Army standard, the desire for use of the AN/PSS-14 against alternate threats - such as bulk explosives - has recently become paramount. To this end, L-3 CyTerra has developed and tested bulk explosive detection and discrimination algorithms using only the Stepped Frequency Continuous Wave (SFCW) Ground Penetrating Radar (GPR) portion of the system, versus the fused version that is used to optimally detect land mines. Performance of the new bulk explosive algorithm against representative zero-metal bulk explosive target and clutter emplacements is depicted, with the utility to the operator also described.

  4. BMPix and PEAK tools: New methods for automated laminae recognition and counting—Application to glacial varves from Antarctic marine sediment

    NASA Astrophysics Data System (ADS)

    Weber, M. E.; Reichelt, L.; Kuhn, G.; Pfeiffer, M.; Korff, B.; Thurow, J.; Ricken, W.

    2010-03-01

    We present tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray scale curves from images at pixel resolution. The PEAK tool uses the gray scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a smoothed curve. The zero-crossing algorithm counts every positive and negative halfway passage of the curve through a wide moving average, separating the record into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the curve into its frequency components before counting positive and negative passages. The algorithms are available at doi:10.1594/PANGAEA.729700. We applied the new methods successfully to tree rings, to well-dated and already manually counted marine varves from Saanich Inlet, and to marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that laminations in Weddell Sea sites represent varves, deposited continuously over several millennia during the last glacial maximum. The new tools offer several advantages over previous methods. The counting procedures are based on a moving average generated from gray scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Also, the PEAK tool measures the thickness of each year or season. Since all information required is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.

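    The zero-crossing counter is the simplest of the three methods to reproduce: subtract a wide moving average from the gray-scale curve and count sign changes, with each adjacent pair of crossings delimiting one bright or dark season whose thickness is the spacing between crossings. The window widths and the synthetic curve below are illustrative:

```python
import numpy as np

def count_seasons(gray, window=41):
    """Count halfway passages of a gray-scale curve through a wide moving average."""
    pad = window // 2
    avg = np.convolve(np.pad(gray, pad, mode="edge"),
                      np.ones(window) / window, mode="valid")
    resid = gray - avg
    crossings = np.flatnonzero(np.diff(np.sign(resid)) != 0)
    return crossings, np.diff(crossings)   # indices and per-season thickness

x = np.arange(2000)
rng = np.random.default_rng(0)
gray = np.sin(2 * np.pi * x / 20) + 0.3 * rng.standard_normal(x.size)
gray = np.convolve(gray, np.ones(5) / 5, mode="same")  # light pre-smoothing
crossings, thickness = count_seasons(gray)
print(f"{len(crossings)} crossings ~ {len(crossings) // 2} bright/dark couplets")
```
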
  5. Retrieval of Droplet size Density Distribution from Multiple field of view Cross polarized Lidar Signals: Theory and Experimental Validation

    DTIC Science & Technology

    2016-06-02

    Recent theoretical and experimental studies of multiple scattering and multiple-field-of-view (MFOV) lidar detection have made possible the retrieval of droplet-size density distributions in clouds from cross-polarized lidar signals. Returns from a small-droplet cloud are typical of Rayleigh scattering, with a signature close to a dipole (a quasi-flat phase function and a zero depolarization ratio).

  6. Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo

    2018-04-01

    In this paper, a novel adaptive dynamic programming (ADP) algorithm, called "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee the upper and lower iterative value functions to converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, it is emphasized that both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, where the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, where the upper and lower performance index functions are proved to be not equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.

  7. Radiation Detection at Borders for Homeland Security

    NASA Astrophysics Data System (ADS)

    Kouzes, Richard

    2004-05-01

    Countries around the world are deploying radiation detection instrumentation to interdict the illegal shipment of radioactive material crossing international borders at land, rail, air, and sea ports of entry. These efforts include deployments in the US and a number of European and Asian countries by governments and international agencies. Items of concern include radiation dispersal devices (RDD), nuclear warheads, and special nuclear material (SNM). Radiation portal monitors (RPMs) are used as the main screening tool for vehicles and cargo at borders, supplemented by handheld detectors, personal radiation detectors, and x-ray imaging systems. Some cargo contains naturally occurring radioactive material (NORM) that triggers "nuisance" alarms in RPMs at these border crossings. Individuals treated with medical radiopharmaceuticals also produce nuisance alarms and can produce cross-talk between adjacent lanes of a multi-lane deployment. The operational impact of nuisance alarms can be significant at border crossings. Methods have been developed for reducing this impact without negatively affecting the requirements for interdiction of radioactive materials of interest. Plastic scintillator material is commonly used in RPMs for the detection of gamma rays from radioactive material, primarily due to the efficiency per unit cost compared to other detection materials. The resolution and lack of full-energy peaks in the plastic scintillator material prohibits detailed spectroscopy. However, the limited spectroscopic information from plastic scintillator can be exploited to provide some discrimination. Energy-based algorithms used in RPMs can effectively exploit the crude energy information available from a plastic scintillator to distinguish some NORM. Whenever NORM cargo limits the level of the alarm threshold, energy-based algorithms produce significantly better detection probabilities for small SNM sources than gross-count algorithms. This presentation discusses experience with RPMs for interdiction of radioactive materials at borders.

  8. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features.

    PubMed

    Amudha, P; Karthik, S; Sivakumari, S

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and test its effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with the other machine learning algorithms and found to be significantly different.

  9. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features

    PubMed Central

    Amudha, P.; Karthik, S.; Sivakumari, S.

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and test its effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with the other machine learning algorithms and found to be significantly different. PMID:26221625

  10. Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search

    PubMed Central

    2017-01-01

    Binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive compared to other binary heuristic algorithms. Since the update processes of velocity in the algorithm are consistent with BA, in some cases this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numerical results obtained in the benchmark-function experiments prove that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Comparison with several other heuristic algorithms on zero-one knapsack problems also verifies that the proposed algorithm is better able to avoid local minima. PMID:28634487

  11. Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search.

    PubMed

    Huang, Xingwang; Zeng, Xuewen; Han, Rui

    2017-01-01

    Binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive compared to other binary heuristic algorithms. Since the update processes of velocity in the algorithm are consistent with BA, in some cases this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numerical results obtained in the benchmark-function experiments prove that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Comparison with several other heuristic algorithms on zero-one knapsack problems also verifies that the proposed algorithm is better able to avoid local minima.

  12. Edge detection - Image-plane versus digital processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.; Park, Stephen K.; Triplett, Judith A.

    1987-01-01

    To optimize edge detection with the familiar Laplacian-of-Gaussian operator, it has become common to implement this operator with a large digital convolution mask followed by some interpolation of the processed data to determine the zero crossings that locate edges. It is generally recognized that this large mask causes substantial blurring of fine detail. It is shown that the spatial detail can be improved by a factor of about four with either the Wiener-Laplacian-of-Gaussian filter or an image-plane processor. The Wiener-Laplacian-of-Gaussian filter minimizes the image-gathering degradations if the scene statistics are at least approximately known and also serves as an interpolator to determine the desired zero crossings directly. The image-plane processor forms the Laplacian-of-Gaussian response by properly combining the optical design of the image-gathering system with a minimal three-by-three lateral-inhibitory processing mask. This approach, which is suggested by Marr's model of early processing in human vision, also reduces data processing by about two orders of magnitude and data transmission by up to an order of magnitude.

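    For reference, the digital pipeline the paper critiques, a Laplacian-of-Gaussian convolution followed by zero-crossing localization, fits in a few lines of scipy (the kernel scale and test image are arbitrary):

```python
import numpy as np
from scipy import ndimage

def log_zero_crossings(image, sigma=2.0):
    """Edge map: zero crossings of the Laplacian-of-Gaussian response."""
    response = ndimage.gaussian_laplace(image.astype(float), sigma)
    # A zero crossing separates adjacent pixels whose responses differ in sign.
    s = np.sign(response)
    edges = np.zeros_like(s, dtype=bool)
    edges[:-1, :] |= s[:-1, :] != s[1:, :]
    edges[:, :-1] |= s[:, :-1] != s[:, 1:]
    return edges

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                    # bright square on dark background
print(log_zero_crossings(img).sum(), "edge pixels")
```
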
  13. Cloverleaf microgyroscope with electrostatic alignment and tuning

    NASA Technical Reports Server (NTRS)

    Challoner, A. Dorian (Inventor); Gutierrez, Roman C. (Inventor); Tang, Tony K. (Inventor)

    2007-01-01

    A micro-gyroscope (10) having closed-loop output operation by a control voltage (V_ty) that is demodulated by a drive-axis (x-axis) signal V_thx of the sense electrodes (S1, S2), providing Coriolis torque rebalance to prevent displacement of the micro-gyroscope (10) on the output axis (y-axis), V_thy ≈ 0. Closed-loop drive-axis torque V_tx maintains a constant drive-axis amplitude signal V_thx. The present invention provides independent alignment and tuning of the micro-gyroscope by using separate electrodes and electrostatic bias voltages to adjust alignment and tuning. A quadrature amplitude signal, or cross-axis transfer-function peak amplitude, is used to detect misalignment, which is corrected to zero by an electrostatic bias voltage adjustment. The cross-axis transfer function is either V_thy/V_ty or V_thx/V_tx. A quadrature signal noise level, or a difference in natural frequencies estimated from measurements of the transfer functions, is used to detect residual mistuning, which is corrected to zero by a second electrostatic bias voltage adjustment.

  14. Fast island phase identification for tearing mode feedback control on J-TEXT tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, B., E-mail: borao@hust.edu.cn; Li, D.; Hu, F. R.

    A new method to control the tearing mode (TM) in tokamaks has been proposed [Q. Hu and Q. Yu, Nucl. Fusion 56, 034001 (2016)], according to which the external resonant magnetic perturbation needs to be applied in certain magnetic-island phase regions. It is therefore very important to identify the helical phase of magnetic islands in real time. The TM in tokamak plasmas is normally rotating and carries magnetic oscillations, known as Mirnov oscillations, which can be detected by Mirnov probes. When the O-point or X-point of the magnetic island passes the probe, the signal experiences a zero-crossing. A poloidal Mirnov probe array and a corresponding island-phase identification method are presented. A field-programmable gate array is used to provide the magnetic-island helical phase in real time by using multichannel zero-crossing detection. This system has been developed on the J-TEXT tokamak and works well. This paper introduces the establishment of this fast magnetic-island phase identification system.

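    Conceptually, the real-time identification reduces to zero-crossing timing on a poloidal probe array. The sketch below interpolates one Mirnov channel's zero-crossing instants and fits the island phase across the array for an assumed mode number m; the probe geometry and signal model are simplified stand-ins for the FPGA implementation:

```python
import numpy as np

def island_phase(signals, angles, m):
    """Least-squares phase of a rotating m-mode across a poloidal probe array.

    signals: (n_probes, n_samples) Mirnov traces; angles: probe poloidal angles.
    """
    basis = np.column_stack([np.cos(m * angles), np.sin(m * angles)])
    ab, *_ = np.linalg.lstsq(basis, signals, rcond=None)  # fit a, b per sample
    return np.arctan2(ab[1], ab[0])          # phase(t) of cos(m*theta - phi)

def zero_crossings(t, x):
    """Linearly interpolated zero-crossing times of one Mirnov channel."""
    i = np.flatnonzero(np.diff(np.sign(x)) != 0)
    return t[i] - x[i] * (t[i + 1] - t[i]) / (x[i + 1] - x[i])

# Synthetic rotating m = 2 island seen by 8 poloidal probes.
m, f_rot = 2, 5e3
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
t = np.arange(0, 2e-3, 1e-6)
signals = np.cos(m * angles[:, None] - 2 * np.pi * f_rot * t[None, :])
phi = island_phase(signals, angles, m)
print(zero_crossings(t, signals[0])[:3], phi[:3])
```
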
  15. Effect of rotation zero-crossing on single-fluid plasma response to three-dimensional magnetic perturbations

    NASA Astrophysics Data System (ADS)

    Lyons, B. C.; Ferraro, N. M.; Paz-Soldan, C.; Nazikian, R.; Wingen, A.

    2017-04-01

    In order to understand the effect of rotation on the response of a plasma to three-dimensional magnetic perturbations, we perform a systematic scan of the zero-crossing of the rotation profile in a DIII-D ITER-similar shape equilibrium using linear, time-independent modeling with the M3D-C1 extended magnetohydrodynamics code. We confirm that the local resonant magnetic field generally increases as the rotation decreases at a rational surface. Multiple peaks in the resonant field are observed near rational surfaces, however, and the maximum resonant field does not always correspond to zero rotation at the surface. Furthermore, we show that non-resonant current can be driven at zero-crossings not aligned with rational surfaces if there is sufficient shear in the rotation profile there, leading to amplification of near-resonant Fourier harmonics of the perturbed magnetic field and a decrease in the far-off-resonant harmonics. The quasilinear electromagnetic torque induced by this non-resonant plasma response provides drive to flatten the rotation, possibly allowing for increased transport in the pedestal by the destabilization of turbulent modes. In addition, this torque acts to drive the rotation zero-crossing to dynamically stable points near rational surfaces, which would allow for increased resonant penetration. By one or both of these mechanisms, this torque may play an important role in bifurcations into suppression of edge-localized modes. Finally, we discuss how these changes to the plasma response could be detected by tokamak diagnostics. In particular, we show that the changes to the resonant field discussed here have a significant impact on the external perturbed magnetic field, which should be observable by magnetic sensors on the high-field side of tokamaks but not on the low-field side. In addition, TRIP3D-MAFOT simulations show that none of the changes to the plasma response described here substantially affects the divertor footprint structure.

  16. Effect of rotation zero-crossing on single-fluid plasma response to three-dimensional magnetic perturbations

    DOE PAGES

    Lyons, Brendan C.; Ferraro, Nathaniel M.; Paz-Soldan, Carlos A.; ...

    2017-02-14

    In order to understand the effect of rotation on the plasma's response to three-dimensional magnetic perturbations, we perform a systematic scan of the zero-crossing of the rotation profile in a DIII-D ITER-similar shape equilibrium using linear, time-independent modeling with the M3D-C1 extended magnetohydrodynamics code. We confirm that the local resonant magnetic field generally increases as the rotation decreases at a rational surface. Multiple peaks in the resonant field are observed near rational surfaces, however, and the maximum resonant field does not always correspond to zero rotation at the surface. Furthermore, we show that non-resonant current can be driven at zero-crossings not aligned with rational surfaces if there is sufficient shear in the rotation profile there, leading to an amplification of near-resonant Fourier harmonics of the perturbed magnetic field and a decrease in the far-off-resonant harmonics. The quasilinear electromagnetic torque induced by this non-resonant plasma response provides drive to flatten the rotation, possibly allowing for increased transport in the pedestal by the destabilization of turbulent modes. In addition, this torque acts to drive the rotation zero-crossing to dynamically stable points near rational surfaces, which would allow for increased resonant penetration. By one or both of these mechanisms, this torque may play an important role in bifurcations into ELM suppression. Finally, we discuss how these changes to the plasma response could be detected by tokamak diagnostics. In particular, we show that the changes to the resonant field discussed here have a significant impact on the external perturbed magnetic field, which should be observable by magnetic sensors on the high-field side of tokamaks, but not on the low-field side. In addition, TRIP3D-MAFOT simulations show that none of the changes to the plasma response described here substantially affects the divertor footprint structure.

  17. Numerical analysis of nonminimum phase zero for nonuniform link design

    NASA Technical Reports Server (NTRS)

    Girvin, Douglas L.; Book, Wayne J.

    1991-01-01

    As the demand for light-weight robots that can operate in a large workspace increases, the structural flexibility of the links becomes more of an issue in control. When the objective is to accurately position the tip while the robot is actuated at the base, the system is nonminimum phase. One important characteristic of nonminimum phase systems is system zeros in the right half of the Laplace plane. The ability to pick the location of these nonminimum phase zeros would give the designer a new freedom similar to pole placement. This research targets a single-link manipulator operating in the horizontal plane and modeled as a Euler-Bernoulli beam with pinned-free end conditions. Using transfer matrix theory, one can consider link designs that have variable cross-sections along the length of the beam. A FORTRAN program was developed to determine the location of poles and zeros given the system model. The program was used to confirm previous research on nonminimum phase systems, and develop a relationship for designing linearly tapered links. The method allows the designer to choose the location of the first pole and zero and then defines the appropriate taper to match the desired locations. With the pole and zero location fixed, the designer can independently change the link's moment of inertia about its axis of rotation by adjusting the height of the beam. These results can be applied to the inverse dynamic algorithms that are currently under development.

  18. Numerical analysis of nonminimum phase zero for nonuniform link design

    NASA Astrophysics Data System (ADS)

    Girvin, Douglas L.; Book, Wayne J.

    1991-11-01

    As the demand for light-weight robots that can operate in a large workspace increases, the structural flexibility of the links becomes more of an issue in control. When the objective is to accurately position the tip while the robot is actuated at the base, the system is nonminimum phase. One important characteristic of nonminimum phase systems is system zeros in the right half of the Laplace plane. The ability to pick the location of these nonminimum phase zeros would give the designer a new freedom similar to pole placement. This research targets a single-link manipulator operating in the horizontal plane and modeled as a Euler-Bernoulli beam with pinned-free end conditions. Using transfer matrix theory, one can consider link designs that have variable cross-sections along the length of the beam. A FORTRAN program was developed to determine the location of poles and zeros given the system model. The program was used to confirm previous research on nonminimum phase systems, and develop a relationship for designing linearly tapered links. The method allows the designer to choose the location of the first pole and zero and then defines the appropriate taper to match the desired locations. With the pole and zero location fixed, the designer can independently change the link's moment of inertia about its axis of rotation by adjusting the height of the beam. These results can be applied to the inverse dynamic algorithms that are currently under development.

  19. Design and Implementation of a Smart Home System Using Multisensor Data Fusion Technology.

    PubMed

    Hsu, Yu-Liang; Chou, Po-Huan; Chang, Hsing-Cheng; Lin, Shyan-Lung; Yang, Shih-Chin; Su, Heng-Yi; Chang, Chih-Chien; Cheng, Yuan-Sheng; Kuo, Yu-Chen

    2017-07-15

    This paper aims to develop a multisensor data fusion technology-based smart home system by integrating wearable intelligent technology, artificial intelligence, and sensor fusion technology. We have developed the following three systems to create an intelligent smart home environment: (1) a wearable motion sensing device to be placed on residents' wrists and its corresponding 3D gesture recognition algorithm to implement a convenient automated household appliance control system; (2) a wearable motion sensing device mounted on a resident's feet and its indoor positioning algorithm to realize an effective indoor pedestrian navigation system for smart energy management; (3) a multisensor circuit module and an intelligent fire detection and alarm algorithm to realize a home safety and fire detection system. In addition, an intelligent monitoring interface is developed to provide real-time information about the smart home system, such as environmental temperatures, CO concentrations, communicative environmental alarms, household appliance status, human motion signals, and the results of gesture recognition and indoor positioning. Furthermore, an experimental testbed for validating the effectiveness and feasibility of the smart home system was built and verified experimentally. The results showed that the 3D gesture recognition algorithm could achieve recognition rates for automated household appliance control of 92.0%, 94.8%, 95.3%, and 87.7% by the 2-fold cross-validation, 5-fold cross-validation, 10-fold cross-validation, and leave-one-subject-out cross-validation strategies. For indoor positioning and smart energy management, the distance accuracy and positioning accuracy were around 0.22% and 3.36% of the total traveled distance in the indoor environment. For home safety and fire detection, the classification rate achieved 98.81% accuracy for determining the conditions of the indoor living environment.

  20. Design and Implementation of a Smart Home System Using Multisensor Data Fusion Technology

    PubMed Central

    Chou, Po-Huan; Chang, Hsing-Cheng; Lin, Shyan-Lung; Yang, Shih-Chin; Su, Heng-Yi; Chang, Chih-Chien; Cheng, Yuan-Sheng; Kuo, Yu-Chen

    2017-01-01

    This paper aims to develop a multisensor data fusion technology-based smart home system by integrating wearable intelligent technology, artificial intelligence, and sensor fusion technology. We have developed the following three systems to create an intelligent smart home environment: (1) a wearable motion sensing device to be placed on residents’ wrists and its corresponding 3D gesture recognition algorithm to implement a convenient automated household appliance control system; (2) a wearable motion sensing device mounted on a resident’s feet and its indoor positioning algorithm to realize an effective indoor pedestrian navigation system for smart energy management; (3) a multisensor circuit module and an intelligent fire detection and alarm algorithm to realize a home safety and fire detection system. In addition, an intelligent monitoring interface is developed to provide real-time information about the smart home system, such as environmental temperatures, CO concentrations, communicative environmental alarms, household appliance status, human motion signals, and the results of gesture recognition and indoor positioning. Furthermore, an experimental testbed for validating the effectiveness and feasibility of the smart home system was built and verified experimentally. The results showed that the 3D gesture recognition algorithm could achieve recognition rates for automated household appliance control of 92.0%, 94.8%, 95.3%, and 87.7% by the 2-fold cross-validation, 5-fold cross-validation, 10-fold cross-validation, and leave-one-subject-out cross-validation strategies. For indoor positioning and smart energy management, the distance accuracy and positioning accuracy were around 0.22% and 3.36% of the total traveled distance in the indoor environment. For home safety and fire detection, the classification rate achieved 98.81% accuracy for determining the conditions of the indoor living environment. PMID:28714884

  1. ZeroCal: Automatic MAC Protocol Calibration

    NASA Astrophysics Data System (ADS)

    Meier, Andreas; Woehrle, Matthias; Zimmerling, Marco; Thiele, Lothar

    Sensor network MAC protocols are typically configured for an intended deployment scenario once and for all at compile time. This approach, however, leads to suboptimal performance if the network conditions deviate from the expectations. We present ZeroCal, a distributed algorithm that allows nodes to dynamically adapt to variations in traffic volume. Using ZeroCal, each node autonomously configures its MAC protocol at runtime, thereby trying to reduce the maximum energy consumption among all nodes. While the algorithm is readily usable for any asynchronous low-power listening or low-power probing protocol, we validate and demonstrate the effectiveness of ZeroCal on X-MAC. Extensive testbed experiments and simulations indicate that ZeroCal quickly adapts to traffic variations. We further show that ZeroCal extends network lifetime by 50% compared to an optimal configuration with identical and static MAC parameters at all nodes.

  2. A Multiscale pipeline for the search of string-induced CMB anisotropies

    NASA Astrophysics Data System (ADS)

    Vafaei Sadr, A.; Movahed, S. M. S.; Farhang, M.; Ringeval, C.; Bouchet, F. R.

    2018-03-01

    We propose a multiscale edge-detection algorithm to search for the Gott-Kaiser-Stebbins imprints of a cosmic string (CS) network on the cosmic microwave background (CMB) anisotropies. Curvelet decomposition and an extended Canny algorithm are used to enhance the string detectability. Various statistical tools are then applied to quantify the deviation of CMB maps having a CS contribution with respect to pure Gaussian anisotropies of inflationary origin. These statistical measures include the one-point probability density function, the weighted two-point correlation function (TPCF) of the anisotropies, the unweighted TPCF of the peaks and of the up-crossing map, as well as their cross-correlation. We use this algorithm on a hundred simulated Nambu-Goto CMB flat sky maps, covering approximately 10 per cent of the sky, and for different string tensions Gμ. On noiseless sky maps with an angular resolution of 0.9 arcmin, we show that our pipeline detects CSs with string tensions as low as Gμ ≳ 4.3 × 10⁻¹⁰. At the same resolution, but with a noise level typical of a CMB-S4 phase II experiment, the detection threshold would rise to Gμ ≳ 1.2 × 10⁻⁷.
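    The curvelet stage and the full statistical battery are not reproduced here, but the edge-detection step can be illustrated with scikit-image's standard Canny detector standing in for the extended Canny; the map, its size and the edge-density statistic are stand-ins chosen for this sketch:

    ```python
    import numpy as np
    from skimage.feature import canny

    rng = np.random.default_rng(0)
    cmb_map = rng.normal(size=(512, 512))   # stand-in for a Gaussian CMB patch

    # Edge map at one smoothing scale; a multiscale search would sweep sigma.
    edges = canny(cmb_map, sigma=3.0)

    # A simple discriminating statistic: string-induced step edges raise the
    # edge density relative to purely Gaussian inflationary anisotropies.
    print("edge fraction:", edges.mean())
    ```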

  3. Cross-layer design for intrusion detection and data security in wireless ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2007-09-01

    A wireless ad hoc sensor network is a configuration for area surveillance that affords rapid, flexible deployment in arbitrary threat environments. There is no infrastructure support and sensor nodes communicate with each other only when they are in transmission range. The nodes are severely resource-constrained, with limited processing, memory and power capacities, and must operate cooperatively to fulfill a common mission in typically unattended modes. In a wireless sensor network (WSN), each sensor at a node can observe locally some underlying physical phenomenon and send a quantized version of the observation to sink (destination) nodes via wireless links. Since the wireless medium can be easily eavesdropped, links can be compromised by intrusion attacks from nodes that may mount denial-of-service attacks or insert spurious information into routing packets, leading to routing loops, long timeouts, impersonation, and node exhaustion. A cross-layer design based on protocol-layer interactions is proposed for detection and identification of various intrusion attacks on WSN operation. A feature set is formed from selected cross-layer parameters of the WSN protocol to detect and identify security threats due to intrusion attacks. A separate protocol is not constructed from the cross-layer design; instead, security attributes and quantified trust levels at and among nodes established during data exchanges complement customary WSN metrics of energy usage, reliability, route availability, and end-to-end quality-of-service (QoS) provisioning. Statistical pattern recognition algorithms are applied that use feature-set patterns observed during network operations, viewed as security audit logs. These algorithms provide the "best" network global performance in the presence of various intrusion attacks. A set of mobile (software) agents distributed at the nodes implements the algorithms by moving among the layers involved in the network response at each active node and trust neighborhood, collecting parametric information and executing assigned decision tasks. The communications overhead due to security mechanisms and the latency in network response are thus minimized by reducing the need to move large amounts of audit data through resource-limited nodes and by locating detection/identification programs closer to audit data. If network partitioning occurs due to uncoordinated node exhaustion, data compromise or other effects of the attacks, the mobile agents can continue to operate, thereby increasing fault tolerance in the network response to intrusions. Since the mobile agents behave like an ant colony in securing the WSN, published ant colony optimization (ACO) routines and other evolutionary algorithms are adapted to protect network security, using data at and through nodes to create audit records to detect and respond to denial-of-service attacks. Performance evaluations of the algorithms are performed by simulation of a few intrusion attacks, such as black hole, flooding, Sybil and others, to validate the ability of the cross-layer algorithms to enable WSNs to survive the attacks. Results are compared for the different algorithms.

  4. Security Applications Of Computer Motion Detection

    NASA Astrophysics Data System (ADS)

    Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry

    1987-05-01

    An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, mean and median tests and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
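    Of the motion-detection options listed, frame differencing is the simplest to illustrate; the sketch below flags pixels whose intensity changes between consecutive frames and reports a blob centroid for the tracking stage. The threshold and synthetic frames are invented for the example:

    ```python
    import numpy as np

    def detect_motion(prev_frame, frame, threshold=25):
        """Simple frame differencing: flag pixels whose intensity changed
        by more than `threshold` between consecutive frames."""
        diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
        return diff > threshold

    prev = np.zeros((120, 160), dtype=np.uint8)
    cur = prev.copy()
    cur[40:60, 70:90] = 200                       # synthetic moving object
    mask = detect_motion(prev, cur)
    ys, xs = np.nonzero(mask)
    print("blob centroid:", ys.mean(), xs.mean())  # handed to the tracker
    ```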

  5. Joint channel estimation and multi-user detection for multipath fading channels in DS-CDMA systems

    NASA Astrophysics Data System (ADS)

    Wu, Sau-Hsuan; Kuo, C.-C. Jay

    2002-11-01

    The technique of joint blind channel estimation and multiple access interference (MAI) suppression for an asynchronous code-division multiple-access (CDMA) system is investigated in this research. To identify and track dispersive time-varying fading channels and to avoid the phase ambiguity that comes with second-order statistics approaches, a sliding-window scheme using the expectation maximization (EM) algorithm is proposed. The complexity of joint channel equalization and symbol detection for all users increases exponentially with system loading and the channel memory. The situation is exacerbated if strong inter-symbol interference (ISI) exists. To reduce the complexity and the number of samples required for channel estimation, a blind multiuser detector is developed. Together with multi-stage interference cancellation using soft outputs provided by this detector, our algorithm can track fading channels with no phase ambiguity even when channel gains attenuate close to zero.

  6. A Distributed Wireless Camera System for the Management of Parking Spaces.

    PubMed

    Vítek, Stanislav; Melničuk, Petr

    2017-12-28

    The importance of detection of parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine occupancy of the parking space based on the information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient occupancy detection algorithm based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at the rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. Reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces.
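    The HOG-plus-SVM pipeline described maps directly onto scikit-image and scikit-learn primitives. The sketch below uses random arrays and labels as stand-ins for annotated occupied/vacant crops, so only the plumbing is meaningful:

    ```python
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(1)
    crops = rng.random((40, 64, 64))        # stand-in parking-space crops
    labels = rng.integers(0, 2, size=40)    # stand-in occupancy labels

    features = np.array([
        hog(c, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for c in crops
    ])
    clf = LinearSVC().fit(features, labels)
    print(clf.predict(features[:5]))        # occupancy decisions per space
    ```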

  7. Testing and Validating Machine Learning Classifiers by Metamorphic Testing☆

    PubMed Central

    Xie, Xiaoyuan; Ho, Joshua W. K.; Murphy, Christian; Kaiser, Gail; Xu, Baowen; Chen, Tsong Yueh

    2011-01-01

    Machine learning algorithms provide core functionality to many application domains, such as bioinformatics and computational linguistics. However, it is difficult to detect faults in such applications because often there is no “test oracle” to verify the correctness of the computed outputs. To help address software quality, in this paper we present a technique for testing the implementations of machine learning classification algorithms which support such applications. Our approach is based on the technique of “metamorphic testing”, which has been shown to be effective in alleviating the oracle problem. Also presented are a case study on a real-world machine learning application framework and a discussion of how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also conduct mutation analysis and cross-validation, which reveal that our method is highly effective in killing mutants, and that observing the expected cross-validation result alone is not sufficiently effective to detect faults in a supervised classification program. The effectiveness of metamorphic testing is further confirmed by the detection of real faults in a popular open-source classification program. PMID:21532969

  8. Metabolomics variable selection and classification in the presence of observations below the detection limit using an extension of ERp.

    PubMed

    van Reenen, Mari; Westerhuis, Johan A; Reinecke, Carolus J; Venter, J Hendrik

    2017-02-02

    ERp is a variable selection and classification method for metabolomics data. ERp uses minimized classification error rates, based on data from a control and experimental group, to test the null hypothesis of no difference between the distributions of variables over the two groups. If the associated p-values are significant, they indicate discriminatory variables (i.e. informative metabolites). The p-values are calculated assuming a common continuous strictly increasing cumulative distribution under the null hypothesis. This assumption is violated when zero-valued observations can occur with positive probability, a characteristic of GC-MS metabolomics data, disqualifying ERp in this context. This paper extends ERp to address two sources of zero-valued observations: (i) zeros reflecting the complete absence of a metabolite from a sample (true zeros); and (ii) zeros reflecting a measurement below the detection limit. This is achieved by allowing the null cumulative distribution function to take the form of a mixture between a jump at zero and a continuous strictly increasing function. The extended ERp approach is referred to as XERp. XERp is no longer non-parametric, but its null distributions depend only on one parameter, the true proportion of zeros. Under the null hypothesis this parameter can be estimated by the proportion of zeros in the available data. XERp is shown to perform well with regard to bias and power. To demonstrate the utility of XERp, it is applied to GC-MS data from a metabolomics study on tuberculosis meningitis in infants and children. We find that XERp is able to provide an informative shortlist of discriminatory variables, while attaining satisfactory classification accuracy for new subjects in a leave-one-out cross-validation context. XERp takes into account the distributional structure of data with a probability mass at zero without requiring any knowledge of the detection limit of the metabolomics platform. XERp is able to identify variables that discriminate between two groups by simultaneously extracting information from the difference in the proportion of zeros and shifts in the distributions of the non-zero observations. XERp uses simple rules to classify new subjects and a weight pair to adjust for unequal sample sizes or sensitivity and specificity requirements.
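    The mixture form of the null distribution is straightforward to write down: a point mass p at zero combined with a continuous CDF G, with p estimated by the observed fraction of zeros. A minimal empirical version (the data values are invented):

    ```python
    import numpy as np

    def mixture_cdf(x, sample):
        """Null CDF with a jump at zero: a point mass for zero observations
        mixed with the empirical CDF of the non-zero observations."""
        p_zero = np.mean(sample == 0)
        nonzero = np.sort(sample[sample > 0])
        g = np.searchsorted(nonzero, x, side="right") / max(len(nonzero), 1)
        return p_zero * (x >= 0) + (1 - p_zero) * g

    data = np.array([0, 0, 0.4, 1.2, 0, 2.5, 0.9])
    for x in (0.0, 1.0, 3.0):
        print(x, mixture_cdf(x, data))
    ```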

  9. Zero-crossing statistics for non-Markovian time series.

    PubMed

    Nyberg, Markus; Lizana, Ludvig; Ambjörnsson, Tobias

    2018-03-01

    In applications spanning from image analysis and speech recognition to energy dissipation in turbulence and time-to-failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging, and therefore few exact results exist. One exception is the celebrated Rice formula, which gives the mean number of zero crossings in a fixed time interval of a zero-mean Gaussian stationary process. In this study we use the so-called independent interval approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agree well with simulations for the non-Markovian autoregressive model.
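    As a concrete illustration, the discrete-time analogue of Rice's mean-crossing result for a stationary Gaussian sequence with lag-1 autocorrelation ρ is a sign-change probability of arccos(ρ)/π per step. A short Python check against an AR(1) simulation (the parameter values are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    phi, n = 0.7, 100_000            # AR(1) coefficient and series length

    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):            # unit-variance stationary AR(1)
        x[t] = phi * x[t - 1] + np.sqrt(1 - phi**2) * rng.normal()

    crossings = np.count_nonzero(np.sign(x[:-1]) != np.sign(x[1:]))
    print("simulated crossing rate :", crossings / (n - 1))
    print("predicted arccos(phi)/pi:", np.arccos(phi) / np.pi)
    ```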

  10. Zero-crossing statistics for non-Markovian time series

    NASA Astrophysics Data System (ADS)

    Nyberg, Markus; Lizana, Ludvig; Ambjörnsson, Tobias

    2018-03-01

    In applications spanning from image analysis and speech recognition to energy dissipation in turbulence and time-to-failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging, and therefore few exact results exist. One exception is the celebrated Rice formula, which gives the mean number of zero crossings in a fixed time interval of a zero-mean Gaussian stationary process. In this study we use the so-called independent interval approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agree well with simulations for the non-Markovian autoregressive model.

  11. Automated Identification of Initial Storm Electrification and End-of-Storm Electrification Using Electric Field Mill Sensors

    NASA Technical Reports Server (NTRS)

    Maier, Launa M.; Huddleston, Lisa L.

    2017-01-01

    Kennedy Space Center (KSC) operations are located in a region which experiences one of the highest lightning densities across the United States. As a result, on average, KSC loses almost 30 minutes of operational availability each day for lightning-sensitive activities. KSC is investigating using existing instrumentation and automated algorithms to improve the timeliness and accuracy of lightning warnings. Additionally, the automation routines will warn on a grid to minimize under-warnings associated with not being located in the center of the warning area and over-warnings associated with encompassing too large an area. This study discusses utilization of electric field mill data to provide improved warning times. Specifically, this paper demonstrates the improved performance of an enveloping algorithm applied to the electric field mill data, as compared with the electric field zero crossing, for identifying initial storm electrification. End-of-Storm-Oscillation (EOSO) identification algorithms will also be analyzed to identify performance improvement, if any, when compared with 30 minutes after the last lightning flash.

  12. SU-E-J-117: Verification Method for the Detection Accuracy of Automatic Winston Lutz Test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, A; Chan, K; Fee, F

    2014-06-01

    Purpose: The Winston Lutz test (WLT) is a standard QA procedure performed prior to SRS treatment to verify the mechanical iso-center setup accuracy under different gantry/couch movements. Several detection algorithms exist for automatically analyzing the ball-radiation field alignment; however, their accuracy has not been fully addressed. Here, we reveal the possible errors arising from each step in WLT, and verify the software detection accuracy with the Rectilinear Phantom Pointer (RLPP), a tool commonly used for aligning the treatment plan coordinate with the mechanical iso-center. Methods: WLT was performed with the radio-opaque ball mounted on a MIS and irradiated onto EDR2 films. The films were scanned and processed with an in-house Matlab program for automatic iso-center detection. Tests were also performed to identify the errors arising from setup, film development and the scanning process. The radio-opaque ball was then mounted onto the RLPP, and offset laterally and longitudinally in 7 known positions (0, ±0.2, ±0.5, ±0.8 mm) manually for irradiations. The gantry and couch were set to zero degrees for all irradiations. The same scanned images were processed repeatedly to check the repeatability of the software. Results: Minimal discrepancies (mean = 0.05 mm) were detected with 2 films overlapped and irradiated together but developed separately. This reveals the error arising from the film processor and scanner alone. Maximum setup errors were found to be around 0.2 mm, by analyzing data collected from 10 irradiations over 2 months. For the known shifts introduced using the RLPP, the results agree with the manual offsets, and fit linearly (R² > 0.99) when plotted relative to the first ball with zero shift. Conclusion: We systematically reveal the possible errors arising from each step in WLT, and introduce a simple method to verify the detection accuracy of our in-house software using a clinically available tool.

  13. Method for determining and displaying the spatial distribution of a spectral pattern of received light

    DOEpatents

    Bennett, Charles L.

    1996-01-01

    An imaging Fourier transform spectrometer (10, 210) having a Fourier transform infrared spectrometer (12) providing a series of images (40) to a focal plane array camera (38). The focal plane array camera (38) is clocked to a multiple of zero crossing occurrences as caused by a moving mirror (18) of the Fourier transform infrared spectrometer (12) and as detected by a laser detector (50) such that the frame capture rate of the focal plane array camera (38) corresponds to a multiple of the zero crossing rate of the Fourier transform infrared spectrometer (12). The images (40) are transmitted to a computer (45) for processing such that representations of the images (40) as viewed in the light of an arbitrary spectral "fingerprint" pattern can be displayed on a monitor (60) or otherwise stored and manipulated by the computer (45).
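    The clocking scheme hinges on locating the reference laser's zero crossings; a conventional way to do this is to find sign changes and interpolate linearly between the bracketing samples, then derive a frame clock from every k-th crossing. The signal parameters and helper names below are illustrative, not taken from the patent:

    ```python
    import numpy as np

    def zero_crossing_times(t, v):
        """Return interpolated times at which the sampled signal v crosses zero."""
        idx = np.where(np.sign(v[:-1]) != np.sign(v[1:]))[0]
        frac = v[idx] / (v[idx] - v[idx + 1])    # linear interpolation factor
        return t[idx] + frac * (t[idx + 1] - t[idx])

    t = np.linspace(0.0, 1e-3, 5000)             # 1 ms of reference signal
    fringe = np.sin(2 * np.pi * 10e3 * t)        # 10 kHz laser fringe stand-in
    triggers = zero_crossing_times(t, fringe)

    k = 4                                        # clock frames at a multiple
    frame_clock = triggers[::k]                  # of the zero-crossing rate
    print(len(triggers), "crossings ->", len(frame_clock), "frame triggers")
    ```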

  14. A combined approach for weak fault signature extraction of rolling element bearing using Hilbert envelope and zero frequency resonator

    NASA Astrophysics Data System (ADS)

    Kumar, Keshav; Shukla, Sumitra; Singh, Sachin Kumar

    2018-04-01

    Periodic impulses arise due to localised defects in rolling element bearings. At the early stage of a defect, the weak impulses are immersed in strong machinery vibration. This paper proposes a combined approach based upon the Hilbert envelope and a zero frequency resonator for the detection of the weak periodic impulses. In the first step, the strength of the impulses is increased by taking the normalised Hilbert envelope of the signal. This also helps in better localization of these impulses on the time axis. In the second step, the Hilbert envelope of the signal is passed through the zero frequency resonator for the exact localization of the periodic impulses. The spectrum of the resonator output gives a peak at the fault frequency. A simulated noisy signal with periodic impulses is used to explain the working of the algorithm. The proposed technique is also verified with experimental data. A comparison of the proposed method with a Hilbert-Huang transform (HHT) based method is presented to establish the effectiveness of the proposed method.
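    The zero frequency resonator stage is not reproduced here, but the envelope step can be sketched with SciPy's analytic-signal routine: the envelope of a simulated impulse train is computed and its spectrum inspected for a peak near the fault frequency. The sample rate, fault frequency and synthetic signal are all illustrative:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs, fault_freq = 12_000, 97.0        # sample rate (Hz), fault frequency (Hz)
    t = np.arange(0, 1.0, 1 / fs)
    rng = np.random.default_rng(3)

    # Impulse train at the fault frequency, each impulse exciting a decaying
    # resonance ring, then buried in broadband machinery noise.
    impulses = (np.sin(2 * np.pi * fault_freq * t) > 0.999).astype(float)
    ring = np.sin(2 * np.pi * 3000 * t[:100]) * np.exp(-800 * t[:100])
    signal = np.convolve(impulses, ring, mode="same") + rng.normal(0, 0.3, t.size)

    envelope = np.abs(hilbert(signal))
    envelope /= envelope.max()           # normalised Hilbert envelope
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
    print("envelope spectrum peak near", freqs[spectrum.argmax()], "Hz")
    ```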

  15. Measurement of the muon antineutrino double-differential cross section for quasielastic-like scattering on hydrocarbon at Eν˜3.5 GeV

    NASA Astrophysics Data System (ADS)

    Patrick, C. E.; Aliaga, L.; Bashyal, A.; Bellantoni, L.; Bercellie, A.; Betancourt, M.; Bodek, A.; Bravar, A.; Budd, H.; Caceres v., G. F. R.; Carneiro, M. F.; Chavarria, E.; da Motta, H.; Dytman, S. A.; Díaz, G. A.; Felix, J.; Fields, L.; Fine, R.; Gago, A. M.; Galindo, R.; Gallagher, H.; Ghosh, A.; Gran, R.; Han, J. Y.; Harris, D. A.; Henry, S.; Hurtado, K.; Jena, D.; Kleykamp, J.; Kordosky, M.; Le, T.; Lu, X.-G.; Maher, E.; Manly, S.; Mann, W. A.; Marshall, C. M.; McFarland, K. S.; McGowan, A. M.; Messerly, B.; Miller, J.; Mislivec, A.; Morfín, J. G.; Mousseau, J.; Naples, D.; Nelson, J. K.; Norrick, A.; Nowak, G. M.; Nuruzzaman, Paolone, V.; Perdue, G. N.; Peters, E.; Ramírez, M. A.; Ransome, R. D.; Ray, H.; Ren, L.; Rodrigues, P. A.; Ruterbories, D.; Schellman, H.; Solano Salinas, C. J.; Sultana, M.; Sánchez Falero, S.; Teklu, A. M.; Valencia, E.; Wolcott, J.; Wospakrik, M.; Yaeggy, B.; Zhang, D.; MINERvA Collaboration

    2018-03-01

    We present double-differential measurements of antineutrino charged-current quasielastic scattering in the MINERvA detector. This study improves on a previous single-differential measurement by using updated reconstruction algorithms and interaction models and provides a complete description of observed muon kinematics in the form of a double-differential cross section with respect to muon transverse and longitudinal momentum. We include in our signal definition zero-meson final states arising from multinucleon interactions and from resonant pion production followed by pion absorption in the primary nucleus. We find that model agreement is considerably improved by a model tuned to MINERvA inclusive neutrino scattering data that incorporates nuclear effects such as weak nuclear screening and two-particle, two-hole enhancements.

  16. Toward an Objective Enhanced-V Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Brunner, Jason; Feltz, Wayne; Moses, John; Rabin, Robert; Ackerman, Steven

    2007-01-01

    The area of coldest cloud tops above thunderstorms sometimes has a distinct V or U shape. This pattern, often referred to as an "enhanced-V" signature, has been observed to occur during and preceding severe weather in previous studies. This study describes an algorithmic approach to objectively detect enhanced-V features with observations from the Geostationary Operational Environmental Satellite and Low Earth Orbit data. The methodology consists of cross correlation statistics of pixels and thresholds of enhanced-V quantitative parameters. The effectiveness of the enhanced-V detection method will be examined using Geostationary Operational Environmental Satellite, MODerate-resolution Imaging Spectroradiometer, and Advanced Very High Resolution Radiometer image data from case studies in the 2003-2006 seasons. The main goal of this study is to develop an objective enhanced-V detection algorithm for future implementation into operations with future sensors, such as GOES-R.

  17. A Dual-Channel Acquisition Method Based on Extended Replica Folding Algorithm for Long Pseudo-Noise Code in Inter-Satellite Links.

    PubMed

    Zhao, Hongbo; Chen, Yuying; Feng, Wenquan; Zhuang, Chen

    2018-05-25

    Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference and the short time slot of each satellite, which brings difficulties to the acquisition stage. The inter-satellite links in both the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS) adopt the long code spread spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as the extended replica folding acquisition search technique (XFAST) and direct average are largely restricted because of code Doppler and additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and the dual-channel method have been proposed to achieve long code acquisition in low-SNR and high-dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named the dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude and specified relative position. The detection process is eased through finding the two largest values. The verification takes all the full and partial peaks into account. Numerical results reveal that the DC-XFAST method can improve acquisition performance while acquisition speed is guaranteed. The method has a significantly higher acquisition probability than the folding methods XFAST and DF-XFAST. Moreover, with the advantages of higher detection probability and lower false alarm probability, it has a lower mean acquisition time than traditional XFAST, DF-XFAST and zero-padding.
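    The folding idea at the heart of XFAST-style methods can be sketched in a few lines: the long local code is folded (summed segment-wise) to the snapshot length and correlated circularly via the FFT, recovering the code phase modulo the block length. The dual-channel verification logic is omitted and all sizes below are toy values:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    long_code = rng.choice([-1.0, 1.0], size=8192)   # stand-in long PN code
    block, folds = 1024, 8                           # 8192 = 8 folds of 1024

    # Incoming snapshot: a noisy slice of the code at an unknown phase.
    offset = 3000
    incoming = long_code[offset:offset + block] + rng.normal(0, 1.0, block)

    # Fold the local replica: sum the code segments of snapshot length.
    folded = long_code.reshape(folds, block).sum(axis=0)

    # Circular correlation via FFT; the peak recovers the phase modulo `block`.
    corr = np.fft.ifft(np.fft.fft(folded) * np.conj(np.fft.fft(incoming))).real
    print("detected phase:", corr.argmax(), "expected:", offset % block)
    ```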

  18. Dielectric waveguide with transverse index variation that supports a zero group velocity mode at a non-zero longitudinal wavevector

    DOEpatents

    Ibanescu, Mihai; Joannopoulos, John D.; Fink, Yoel; Johnson, Steven G.; Fan, Shanhui

    2005-06-21

    Optical components including a laser based on a dielectric waveguide extending along a waveguide axis and having a refractive index cross-section perpendicular to the waveguide axis, the refractive index cross-section supporting an electromagnetic mode having a zero group velocity for a non-zero wavevector along the waveguide axis.

  19. QRS detection based ECG quality assessment.

    PubMed

    Hayn, Dieter; Jammerbund, Bernhard; Schreier, Günter

    2012-09-01

    Although immediate feedback concerning ECG signal quality during recording is useful, up to now little literature describing quality measures has been available. We have implemented and evaluated four ECG quality measures. The empty lead criterion (A), spike detection criterion (B) and lead crossing point criterion (C) were calculated from basic signal properties. Measure D quantified the robustness of QRS detection when applied to the signal. An advanced Matlab-based algorithm combining all four measures and a simplified algorithm for Android platforms, excluding measure D, were developed. Both algorithms were evaluated by taking part in the Computing in Cardiology Challenge 2011. Each measure's accuracy and computing time was evaluated separately. During the challenge, the advanced algorithm correctly classified 93.3% of the ECGs in the training set and 91.6% in the test set. Scores for the simplified algorithm were 0.834 in event 2 and 0.873 in event 3. Computing time for measure D was almost five times higher than for the other measures. Required accuracy levels depend on the application and are related to computing time. While our simplified algorithm may be accurate enough for real-time feedback during ECG self-recordings, QRS detection based measures can further increase the performance if sufficient computing power is available.

  20. A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.

    2013-07-01

    There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm’s risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material, and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values which best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper, we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
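    The expected-loss calculation is simple to demonstrate: both error types enter one risk expression, weighted by the threat prior and the consequence of each error. The ROC shape, prior and costs below are invented for illustration:

    ```python
    import numpy as np

    pfa = np.linspace(0.001, 0.5, 200)       # candidate false-alarm rates
    pmiss = 0.02 / (pfa + 0.02)              # toy detection trade-off curve

    p_threat = 1e-4                          # prior probability of a threat
    cost_fn, cost_fp = 1e6, 50.0             # consequences of each error type

    # Risk (expected loss): both error types considered simultaneously.
    risk = p_threat * pmiss * cost_fn + (1 - p_threat) * pfa * cost_fp
    best = risk.argmin()
    print("minimum-risk operating point: P_FA =", round(pfa[best], 3),
          "risk =", round(risk[best], 2))
    ```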

  1. The artificial-free technique along the objective direction for the simplex algorithm

    NASA Astrophysics Data System (ADS)

    Boonperm, Aua-aree; Sinapiromsaran, Krung

    2014-03-01

    The simplex algorithm is a popular algorithm for solving linear programming problems. If the origin satisfies all constraints then the simplex algorithm can be started; otherwise, artificial variables must be introduced. If we can start the simplex algorithm without using artificial variables then the simplex iterations will require less time. In this paper, we present an artificial-free technique for the simplex algorithm that maps the problem into the objective plane and splits the constraints into three groups. In the objective plane, one of the variables with a nonzero objective coefficient is fixed in terms of another variable. The constraints can then be split into three groups: the positive coefficient group, the negative coefficient group and the zero coefficient group. Along the objective direction, some constraints from the positive coefficient group will form the optimal solution. If the positive coefficient group is nonempty, the algorithm starts by relaxing the constraints from the negative coefficient group and the zero coefficient group. We guarantee that the feasible region obtained from the positive coefficient group is nonempty. The transformed problem is solved using the simplex algorithm. Additional constraints from the negative coefficient group and the zero coefficient group are then added to the solved problem, and the dual simplex method is used to determine the new optimal solution. An example shows the effectiveness of our algorithm.

  2. Numerical analysis of right-half plane zeros for a single-link manipulator. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Girvin, Douglas Lynn

    1992-01-01

    The purpose of this research is to further develop an understanding of how nonminimum phase zero location is affected by structural link design. As the demand for light-weight robots that can operate in a large workspace increases, the structural flexibility of the links becomes more of an issue in control problems. When the objective is to accurately position the tip while the robot is actuated at the base, the system is nonminimum phase. One important characteristic of nonminimum phase systems is system zeros in the right half of the Laplace plane. The ability to pick the location of these nonminimum phase zeros would give the designer a new freedom similar to pole placement. The research targets a single-link manipulator operating in the horizontal plane and modeled as an Euler-Bernoulli beam with pinned-free end conditions. Using transfer matrix theory, one can consider link designs that have variable cross-sections along the length of the beam. A FORTRAN program was developed to determine the location of poles and zeros given the system model. The program was used to confirm previous research on nonminimum phase systems and to develop a relationship for designing linearly tapered links. The method allows the designer to choose the location of the first pole and zero and then defines the appropriate taper to match the desired locations. With the pole and zero locations fixed, the designer can independently change the link's moment of inertia about its axis of rotation by adjusting the height of the beam. These results can be applied to inverse dynamic algorithms currently under development at Georgia Tech.

  3. Late summer sea ice segmentation with multi-polarisation SAR features in C- and X-band

    NASA Astrophysics Data System (ADS)

    Fors, A. S.; Brekke, C.; Doulgeris, A. P.; Eltoft, T.; Renner, A. H. H.; Gerland, S.

    2015-09-01

    In this study we investigate the potential of sea ice segmentation by C- and X-band multi-polarisation synthetic aperture radar (SAR) features during late summer. Five high-resolution satellite SAR scenes were recorded in the Fram Strait covering iceberg-fast first-year and old sea ice during a week with air temperatures varying around zero degrees Celsius. In situ data consisting of sea ice thickness, surface roughness and aerial photographs were collected during a helicopter flight at the site. Six polarimetric SAR features were extracted for each of the scenes. The ability of the individual SAR features to discriminate between sea ice types, and their temporal consistency, were examined. All SAR features were found to add value to sea ice type discrimination. Relative kurtosis, geometric brightness, cross-polarisation ratio and co-polarisation correlation angle were found to be temporally consistent in the investigated period, while the co-polarisation ratio and co-polarisation correlation magnitude were found to be temporally inconsistent. An automatic feature-based segmentation algorithm was tested both for the full SAR feature set and for a reduced SAR feature set limited to the temporally consistent features. In general, the algorithm produces a good late summer sea ice segmentation. Excluding the temporally inconsistent SAR features improved the segmentation at air temperatures above zero degrees Celsius.

  4. Magnetic and magnetocaloric properties of the exactly solvable mixed-spin Ising model on a decorated triangular lattice in a magnetic field

    NASA Astrophysics Data System (ADS)

    Gálisová, Lucia; Strečka, Jozef

    2018-05-01

    The ground state, zero-temperature magnetization process, critical behaviour and isothermal entropy change of the mixed-spin Ising model on a decorated triangular lattice in a magnetic field are exactly studied after performing the generalized decoration-iteration mapping transformation. It is shown that both the inverse and the conventional magnetocaloric effect can be found near absolute zero temperature. The former phenomenon can be found in the vicinity of the discontinuous phase transitions and their crossing points, while the latter occurs in some paramagnetic phases due to spin frustration present at zero magnetic field. The inverse magnetocaloric effect can also be detected slightly above continuous phase transitions, following the power-law dependence |−ΔS_iso^min| ∝ h^n, where the exponent n depends basically on the ground-state spin ordering.

  5. Adaptive sequential controller

    DOEpatents

    El-Sharkawi, Mohamed A.; Xing, Jian; Butler, Nicholas G.; Rodriguez, Alonso

    1994-01-01

    An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.
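    The control idea, firing the switching coil early enough that the contacts meet at a zero crossing and trimming the assumed response time from each observed transient, can be sketched as below. The class name, 60 Hz line frequency, initial delay and correction gain are all illustrative, not values from the patent:

    ```python
    import numpy as np

    F_LINE = 60.0                       # line frequency (Hz)
    HALF_CYCLE = 1.0 / (2 * F_LINE)     # spacing between zero crossings (s)

    class AdaptiveCloser:
        """Schedule a breaker close command to land on a voltage zero."""

        def __init__(self, response_time=0.035):
            self.response_time = response_time     # estimated breaker delay (s)

        def command_time(self, last_zero_cross):
            # Fire the coil so that command + response lands on a zero crossing.
            n = np.ceil(self.response_time / HALF_CYCLE)
            return last_zero_cross + n * HALF_CYCLE - self.response_time

        def update(self, target_zero, observed_contact):
            # An observed transient timing error corrects the delay estimate,
            # compensating for aging and ambient effects.
            self.response_time += 0.5 * (observed_contact - target_zero)

    closer = AdaptiveCloser()
    cmd = closer.command_time(last_zero_cross=0.0)
    target = cmd + closer.response_time
    closer.update(target_zero=target, observed_contact=target + 0.0012)
    print("command at", cmd, "s; updated delay estimate", closer.response_time)
    ```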

  6. A lightweight QRS detector for single lead ECG signals using a max-min difference algorithm.

    PubMed

    Pandit, Diptangshu; Zhang, Li; Liu, Chengyu; Chattopadhyay, Samiran; Aslam, Nauman; Lim, Chee Peng

    2017-06-01

    Detection of the R-peak pertaining to the QRS complex of an ECG signal plays an important role in the diagnosis of a patient's heart condition. To accurately identify the QRS locations from the acquired raw ECG signals, we need to handle a number of challenges, which include noise, baseline wander, varying peak amplitudes, and signal abnormality. This research aims to address these challenges by developing an efficient lightweight algorithm for QRS (i.e., R-peak) detection from raw ECG signals. A lightweight real-time sliding window-based Max-Min Difference (MMD) algorithm for QRS detection from Lead II ECG signals is proposed. Targeting the best trade-off between computational efficiency and detection accuracy, the proposed algorithm consists of five key steps for QRS detection, namely, baseline correction, MMD curve generation, dynamic threshold computation, R-peak detection, and error correction. Five annotated databases from Physionet are used for evaluating the proposed algorithm in R-peak detection. Integrated with a feature extraction technique and a neural network classifier, the proposed QRS detection algorithm has also been extended to undertake normal and abnormal heartbeat detection from ECG signals. The proposed algorithm exhibits a high degree of robustness in QRS detection and achieves an average sensitivity of 99.62% and an average positive predictivity of 99.67%. Its performance compares favorably with those of the existing state-of-the-art models reported in the literature. With regard to normal and abnormal heartbeat detection, the proposed QRS detection algorithm in combination with the feature extraction technique and neural network classifier achieves an overall accuracy rate of 93.44% based on an empirical evaluation using the MIT-BIH Arrhythmia data set with 10-fold cross-validation. In comparison with other related studies, the proposed algorithm offers a lightweight adaptive alternative for R-peak detection with good computational efficiency. The empirical results indicate that it not only yields a high accuracy rate in QRS detection, but also exhibits efficient computational complexity of order O(n), where n is the length of an ECG signal. Copyright © 2017 Elsevier B.V. All rights reserved.
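    The heart of the method is easy to sketch: a sliding-window max-min difference curve is large wherever the signal swings steeply, as it does across a QRS complex. The toy below replaces the paper's dynamic threshold and error-correction stages with a fixed threshold and a refractory period; the window length, threshold fraction and synthetic ECG are illustrative only:

    ```python
    import numpy as np

    def mmd_curve(ecg, win=25):
        """Sliding-window max-min difference: large across steep QRS slopes."""
        out = np.zeros(len(ecg))
        for i in range(len(ecg) - win):
            seg = ecg[i:i + win]
            out[i + win // 2] = seg.max() - seg.min()
        return out

    def detect_r_peaks(ecg, fs, win=25):
        mmd = mmd_curve(ecg, win)
        thr = 0.5 * mmd.max()          # fixed stand-in for the dynamic threshold
        refractory = int(0.25 * fs)    # suppress double detections of one beat
        peaks, last = [], -refractory
        for i in np.where(mmd > thr)[0]:
            if i - last > refractory:
                peaks.append(i)
                last = i
        return peaks

    fs = 360
    t = np.arange(0, 5, 1 / fs)
    ecg = 0.1 * np.sin(2 * np.pi * t)                # baseline wander
    ecg[(np.arange(t.size) % fs) == 0] += 1.0        # crude R spikes at 1 Hz
    print(detect_r_peaks(ecg, fs))                   # one index per beat
    ```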

  7. Adaptive control based on retrospective cost optimization

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S. (Inventor); Santillo, Mario A. (Inventor)

    2012-01-01

    A discrete-time adaptive control law for stabilization, command following, and disturbance rejection that is effective for systems that are unstable, MIMO, and/or nonminimum phase. The adaptive control algorithm includes guidelines concerning the modeling information needed for implementation. This information includes the relative degree, the first nonzero Markov parameter, and the nonminimum-phase zeros. Except when the plant has nonminimum-phase zeros whose absolute value is less than the plant's spectral radius, the required zero information can be approximated by a sufficient number of Markov parameters. No additional information about the poles or zeros need be known. Numerical examples are presented to illustrate the algorithm's effectiveness in handling systems with errors in the required modeling data, unknown latency, sensor noise, and saturation.

  8. Matched-filter algorithm for subpixel spectral detection in hyperspectral image data

    NASA Astrophysics Data System (ADS)

    Borough, Howard C.

    1991-11-01

    Hyperspectral imagery, spatial imagery with associated wavelength data for every pixel, offers a significant potential for improved detection and identification of certain classes of targets. The ability to make spectral identifications of objects which only partially fill a single pixel (due to range or small size) is of considerable interest. Multiband imagery such as Landsat's 5 and 7 band imagery has demonstrated significant utility in the past. Hyperspectral imaging systems with hundreds of spectral bands offer improved performance. To explore the application of different subpixel spectral detection algorithms, a synthesized set of hyperspectral image data (hypercubes) was generated utilizing NASA earth resources and other spectral data. The data were modified using LOWTRAN 7 to model the illumination, atmospheric contributions, attenuations and viewing geometry to represent a nadir view from 10,000 ft altitude. The base hypercube (HC) represented 16 by 21 spatial pixels with 101 wavelength samples from 0.5 to 2.5 micrometers for each pixel. Insertions were made into the base data to provide random location, random pixel percentage, and random material. Fifteen different hypercubes were generated for blind testing of candidate algorithms. An algorithm utilizing a matched filter in the spectral dimension proved surprisingly good, yielding 100% detections for pixels filled greater than 40% with a standard camouflage paint, and a 50% probability of detection for pixels filled 20% with the paint, with no false alarms. The false alarm rate as a function of the number of spectral bands in the range from 101 to 12 bands was measured and found to increase from zero to 50%, illustrating the value of a large number of spectral bands. This test was on imagery without system noise; the next step is to incorporate typical system noise sources.
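    A spectral matched filter of this kind projects each background-subtracted pixel spectrum onto C⁻¹t, where C is the clutter covariance and t the target signature, normalised so that a fully filled target pixel scores near 1 and a 20%-filled pixel near 0.2. A self-contained toy with 101 bands (the signature, fill fraction and scene are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    bands, n_pix = 101, 16 * 21
    scene = rng.normal(size=(n_pix, bands))          # background clutter spectra
    target = 5.0 * np.sin(np.linspace(0, 3, bands))  # invented paint signature

    scene[137] += 0.2 * target                       # pixel 137 is 20% filled

    # Matched filter: project background-subtracted spectra onto C^-1 t.
    mu = scene.mean(axis=0)
    C = np.cov(scene, rowvar=False) + 1e-3 * np.eye(bands)
    ci_t = np.linalg.solve(C, target)
    scores = (scene - mu) @ ci_t / (target @ ci_t)

    print("best pixel:", scores.argmax(), "score:", round(scores.max(), 2))
    # Expect pixel 137, with a score near its 0.2 fill fraction.
    ```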

  9. How do we choose the best model? The impact of cross-validation design on model evaluation for buried threat detection in ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Malof, Jordan M.; Reichman, Daniël.; Collins, Leslie M.

    2018-04-01

    A great deal of research has been focused on the development of computer algorithms for buried threat detection (BTD) in ground penetrating radar (GPR) data. Most recently proposed BTD algorithms are supervised, and therefore they employ machine learning models that infer their parameters using training data. Cross-validation (CV) is a popular method for evaluating the performance of such algorithms, in which the available data is systematically split into N disjoint subsets, and an algorithm is repeatedly trained on N-1 subsets and tested on the excluded subset. There are several common types of CV in BTD, which vary principally in the spatial criterion used to partition the data: site-based, lane-based, region-based, etc. The performance metrics obtained via CV are often used to suggest the superiority of one model over others; however, most studies utilize just one type of CV, and the impact of this choice is unclear. Here we employ several types of CV to evaluate algorithms from a recent large-scale BTD study. The results indicate that the rank order of the performance of the algorithms varies substantially depending upon which type of CV is used. For example, the rank-1 algorithm for region-based CV is the lowest-ranked algorithm for site-based CV. This suggests that any algorithm results should be interpreted carefully with respect to the type of CV employed. We discuss some potential interpretations of performance, given a particular type of CV.
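    Spatially grouped CV of this kind maps directly onto scikit-learn's GroupKFold, with the group label encoding the partitioning criterion (site, lane, region). A minimal sketch with invented features and labels, comparing site-based and lane-based splits:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GroupKFold, cross_val_score

    rng = np.random.default_rng(11)
    X = rng.normal(size=(600, 10))         # stand-in GPR feature vectors
    y = rng.integers(0, 2, size=600)       # stand-in threat/non-threat labels
    sites = rng.integers(0, 6, size=600)   # collection site of each scan
    lanes = rng.integers(0, 30, size=600)  # lane of each scan

    model = LogisticRegression(max_iter=1000)
    for name, groups in [("site-based", sites), ("lane-based", lanes)]:
        scores = cross_val_score(model, X, y, groups=groups,
                                 cv=GroupKFold(n_splits=5))
        print(name, "CV accuracy:", scores.mean().round(3))
    ```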

  10. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization

    PubMed Central

    Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024

  11. Wavelet-based de-noising algorithm for images acquired with parallel magnetic resonance imaging (MRI).

    PubMed

    Delakis, Ioannis; Hammad, Omer; Kitney, Richard I

    2007-07-07

    Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise displays a uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images acquired with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected, and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied on phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with that of other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied on final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting.
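    The edge protection and directional analysis are not reproduced below, but the central idea, soft-thresholding fine-scale wavelet coefficients against a spatially varying noise map, can be sketched with PyWavelets. The phantom, the left-to-right noise ramp and the 2-sigma threshold are illustrative choices:

    ```python
    import numpy as np
    import pywt

    def soft(x, t):
        """Soft thresholding with a (possibly spatially varying) threshold."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    rng = np.random.default_rng(9)
    image = np.zeros((128, 128))
    image[32:96, 32:96] = 1.0                       # simple square phantom
    noise_map = np.tile(np.linspace(0.05, 0.4, 128), (128, 1))
    noisy = image + rng.normal(size=image.shape) * noise_map

    coeffs = pywt.wavedec2(noisy, "db4", level=2)
    out = [coeffs[0]]                               # keep the approximation band
    for detail_bands in coeffs[1:]:
        shrunk = []
        for band in detail_bands:
            # Resample the noise map onto this band's grid; threshold at ~2 sigma.
            rr = np.linspace(0, 127, band.shape[0]).astype(int)
            cc = np.linspace(0, 127, band.shape[1]).astype(int)
            shrunk.append(soft(band, 2.0 * noise_map[np.ix_(rr, cc)]))
        out.append(tuple(shrunk))
    denoised = pywt.waverec2(out, "db4")[:128, :128]

    rmse = lambda a: np.sqrt(((a - image) ** 2).mean())
    print("RMSE noisy / denoised:", rmse(noisy), rmse(denoised))
    ```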

  12. Model selection for anomaly detection

    NASA Astrophysics Data System (ADS)

    Burnaev, E.; Erofeev, P.; Smolyakov, D.

    2015-12-01

    Anomaly detection based on one-class classification algorithms is broadly used in many applied domains like image processing (e.g. detection of whether a patient is "cancerous" or "healthy" from a mammography image), network intrusion detection, etc. The performance of an anomaly detection algorithm crucially depends on the kernel used to measure similarity in a feature space. The standard approaches to kernel selection used in two-class classification problems (e.g. cross-validation) cannot be used directly due to the specific nature of the data (the absence of data for a second, abnormal class). In this paper we generalize several kernel selection methods from the binary-class case to the case of one-class classification and perform an extensive comparison of these approaches using both synthetic and real-world data.
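    The selection problem is easy to see in code: with scikit-learn's OneClassSVM, the RBF kernel width gamma changes the detector's behaviour dramatically, yet only normal data are available for tuning it, which is exactly why standard cross-validation does not apply. A toy illustration (data and parameter grid invented):

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(13)
    train = rng.normal(0, 1, size=(500, 2))          # normal class only
    test_normal = rng.normal(0, 1, size=(100, 2))
    test_anomaly = rng.uniform(4, 6, size=(20, 2))   # far from training mass

    for gamma in (0.01, 0.1, 1.0):
        clf = OneClassSVM(kernel="rbf", gamma=gamma, nu=0.05).fit(train)
        fp = (clf.predict(test_normal) == -1).mean()
        tp = (clf.predict(test_anomaly) == -1).mean()
        print(f"gamma={gamma}: false-alarm rate {fp:.2f}, anomaly recall {tp:.2f}")
    ```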

  13. Screening for Human Immunodeficiency Virus, Hepatitis B Virus, Hepatitis C Virus, and Treponema pallidum by Blood Testing Using a Bio-Flash Technology-Based Algorithm before Gastrointestinal Endoscopy

    PubMed Central

    Zhen, Chen; QuiuLi, Zhang; YuanQi, An; Casado, Verónica Vocero; Fan, Yuan

    2016-01-01

    Currently, conventional enzyme immunoassays which use manual gold immunoassays and colloidal tests (GICTs) are used as screening tools to detect Treponema pallidum (syphilis), hepatitis B virus (HBV), hepatitis C virus (HCV), human immunodeficiency virus type 1 (HIV-1), and HIV-2 in patients undergoing surgery. The present observational, cross-sectional study compared the sensitivity, specificity, and work flow characteristics of the conventional algorithm with manual GICTs with those of a newly proposed algorithm that uses the automated Bio-Flash technology as a screening tool in patients undergoing gastrointestinal (GI) endoscopy. A total of 956 patients were examined for the presence of serological markers of infection with HIV-1/2, HCV, HBV, and T. pallidum. The proposed algorithm with the Bio-Flash technology was superior for the detection of all markers (100.0% sensitivity and specificity for detection of anti-HIV and anti-HCV antibodies, HBV surface antigen [HBsAg], and T. pallidum) compared with the conventional algorithm based on the manual method (80.0% sensitivity and 98.6% specificity for the detection of anti-HIV, 75.0% sensitivity for the detection of anti-HCV, 94.7% sensitivity for the detection of HBsAg, and 100% specificity for the detection of anti-HCV and HBsAg) in these patients. The automated Bio-Flash technology-based screening algorithm also reduced the operation time by 85.0% (205 min) per day, saving up to 24 h/week. In conclusion, the use of the newly proposed screening algorithm based on the automated Bio-Flash technology can provide an advantage over the use of conventional algorithms based on manual methods for screening for HIV, HBV, HCV, and syphilis before GI endoscopy. PMID:27707942

  14. Screening for Human Immunodeficiency Virus, Hepatitis B Virus, Hepatitis C Virus, and Treponema pallidum by Blood Testing Using a Bio-Flash Technology-Based Algorithm before Gastrointestinal Endoscopy.

    PubMed

    Jun, Zhou; Zhen, Chen; QuiuLi, Zhang; YuanQi, An; Casado, Verónica Vocero; Fan, Yuan

    2016-12-01

    Currently, conventional enzyme immunoassays which use manual gold immunoassays and colloidal tests (GICTs) are used as screening tools to detect Treponema pallidum (syphilis), hepatitis B virus (HBV), hepatitis C virus (HCV), human immunodeficiency virus type 1 (HIV-1), and HIV-2 in patients undergoing surgery. The present observational, cross-sectional study compared the sensitivity, specificity, and work flow characteristics of the conventional algorithm with manual GICTs with those of a newly proposed algorithm that uses the automated Bio-Flash technology as a screening tool in patients undergoing gastrointestinal (GI) endoscopy. A total of 956 patients were examined for the presence of serological markers of infection with HIV-1/2, HCV, HBV, and T. pallidum. The proposed algorithm with the Bio-Flash technology was superior for the detection of all markers (100.0% sensitivity and specificity for detection of anti-HIV and anti-HCV antibodies, HBV surface antigen [HBsAg], and T. pallidum) compared with the conventional algorithm based on the manual method (80.0% sensitivity and 98.6% specificity for the detection of anti-HIV, 75.0% sensitivity for the detection of anti-HCV, 94.7% sensitivity for the detection of HBsAg, and 100% specificity for the detection of anti-HCV and HBsAg) in these patients. The automated Bio-Flash technology-based screening algorithm also reduced the operation time by 85.0% (205 min) per day, saving up to 24 h/week. In conclusion, the use of the newly proposed screening algorithm based on the automated Bio-Flash technology can provide an advantage over the use of conventional algorithms based on manual methods for screening for HIV, HBV, HCV, and syphilis before GI endoscopy. Copyright © 2016 Jun et al.

  15. Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs

    NASA Technical Reports Server (NTRS)

    Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen

    2015-01-01

    An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.

  16. Noise reduction algorithm with the soft thresholding based on the Shannon entropy and bone-conduction speech cross-correlation bands.

    PubMed

    Na, Sung Dae; Wei, Qun; Seong, Ki Woong; Cho, Jin Ho; Kim, Myoung Nam

    2018-01-01

    The conventional methods of speech enhancement, noise reduction, and voice activity detection are based on suppressing noise or non-speech components of the target air-conduction signals. However, air-conducted speech is hard to differentiate from babble or white noise. To overcome this problem, the proposed algorithm uses bone-conduction speech signals and soft thresholding based on the Shannon entropy principle and the cross-correlation of air- and bone-conduction signals. A new algorithm for speech detection and noise reduction is proposed, which makes use of the Shannon entropy principle and cross-correlation with the bone-conduction speech signals to threshold the wavelet packet coefficients of the noisy speech. The proposed method achieves efficient results as assessed by objective quality measures: PESQ, RMSE, correlation, and SNR. Each threshold is generated by the entropy and cross-correlation approaches in the bands obtained by wavelet packet decomposition. As a result, noise is reduced by the proposed method, as demonstrated in MATLAB simulations. To verify the method's feasibility, we compared the air- and bone-conduction speech signals and their spectra processed by the proposed method. The results confirm the high performance of the proposed method, making it well suited to future applications in communication devices, noisy environments, construction, and military operations.
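
    The band-wise thresholding step lends itself to a compact sketch. The following minimal illustration uses PyWavelets; the entropy-scaled universal threshold is a stand-in for the paper's combined entropy/cross-correlation thresholds, and the scaling rule and parameters are assumptions, not the authors' formulas:

    ```python
    # Sketch of band-wise wavelet soft thresholding for noise reduction.
    # The entropy-scaled threshold rule below is an illustrative assumption.
    import numpy as np
    import pywt

    def soft_threshold_denoise(noisy, wavelet="db4", level=4):
        coeffs = pywt.wavedec(noisy, wavelet, level=level)
        out = [coeffs[0]]                       # keep the approximation band
        for band in coeffs[1:]:
            # Shannon entropy of the normalized band energy (assumed rule)
            p = band**2 / (np.sum(band**2) + 1e-12)
            entropy = -np.sum(p * np.log(p + 1e-12))
            # scale a universal threshold by the band entropy
            sigma = np.median(np.abs(band)) / 0.6745
            t = sigma * np.sqrt(2 * np.log(band.size)) * entropy / np.log(band.size + 1)
            out.append(pywt.threshold(band, t, mode="soft"))
        return pywt.waverec(out, wavelet)

    x = np.sin(np.linspace(0, 8 * np.pi, 1024)) \
        + 0.3 * np.random.default_rng(0).standard_normal(1024)
    clean = soft_threshold_denoise(x)
    ```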

  17. Feature Detection and Curve Fitting Using Fast Walsh Transforms for Shock Tracking: Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2017-01-01

    Walsh functions form an orthonormal basis set consisting of square waves. Square waves make the system well suited for detecting and representing functions with discontinuities. Given a uniform distribution of 2^p cells on a one-dimensional element, it has been proven that the inner product of the Walsh Root function for group p with every polynomial of degree ≤ (p - 1) across the element is identically zero. It has also been proven that the magnitude and location of a discontinuous jump, as represented by a Heaviside function, are explicitly identified by its Fast Walsh Transform (FWT) coefficients. These two proofs enable an algorithm that quickly provides a Weighted Least Squares fit to distributions across the element that include a discontinuity. The detection of a discontinuity enables analytic relations to locally describe its evolution and provide increased accuracy. Time-accurate examples are provided for advection, Burgers equation, and Riemann problems (diaphragm burst) in closed tubes and de Laval nozzles. New algorithms to detect up to two C0 and/or C1 discontinuities within a single element are developed for application to the Riemann problem, in which a contact discontinuity and shock wave form after the diaphragm bursts.
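
    As a rough illustration of the FWT machinery relied on above, the sketch below computes a fast Walsh-Hadamard transform of a step function on 2^3 cells; how the non-DC coefficients encode the jump is only indicated, since the paper's exact magnitude/location relations are not reproduced here:

    ```python
    # Illustrative sketch: a fast Walsh-Hadamard transform applied to a
    # Heaviside-type jump, in the spirit of FWT-based discontinuity
    # detection (the interpretation step is only hinted at).
    import numpy as np

    def fwht(a):
        """Fast Walsh-Hadamard transform (natural ordering), normalized."""
        a = np.asarray(a, dtype=float).copy()
        h = 1
        while h < len(a):
            for i in range(0, len(a), 2 * h):
                x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
                a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
            h *= 2
        return a / len(a)

    # A step of height 2 located at cell 5 within 2**3 = 8 cells
    u = np.where(np.arange(8) < 5, 1.0, 3.0)
    c = fwht(u)
    # c[0] is the element mean; the pattern of the remaining coefficients
    # encodes the magnitude and location of the jump.
    print(c)
    ```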

  18. Automated Detection of Firearms and Knives in a CCTV Image

    PubMed Central

    Grega, Michał; Matiolański, Andrzej; Guzik, Piotr; Leszczuk, Mikołaj

    2016-01-01

    Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims. PMID:26729128

  19. Automated Detection of Firearms and Knives in a CCTV Image.

    PubMed

    Grega, Michał; Matiolański, Andrzej; Guzik, Piotr; Leszczuk, Mikołaj

    2016-01-01

    Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.

  20. Development of a new time domain-based algorithm for train detection and axle counting

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Meli, E.; Pugi, L.

    2015-12-01

    This paper presents an innovative train detection algorithm, able to perform train localisation and, at the same time, to estimate its speed, the crossing times on a fixed point of the track and the axle number. The proposed solution uses the same approach to evaluate all these quantities, starting from the knowledge of generic track inputs directly measured on the track (for example, the vertical forces on the sleepers, the rail deformation and the rail stress). More particularly, all the inputs are processed through cross-correlation operations to extract the required information in terms of speed, crossing time instants and axle count. This approach has the advantage of being simpler and less invasive than standard ones (it requires less equipment) and represents a more reliable and robust solution against numerical noise because it exploits the whole shape of the input signal and not only the peak values. A suitable and accurate multibody model of a railway vehicle and flexible track has also been developed by the authors to test the algorithm when experimental data are not available and, in general, under any operating condition (fundamental to verify the algorithm's accuracy and robustness). The railway vehicle chosen as benchmark is the Manchester Wagon, modelled in the Adams VI-Rail environment. The physical model of the flexible track has been implemented in the Matlab and Comsol Multiphysics environments. A simulation campaign has been performed to verify the performance and the robustness of the proposed algorithm, and the results are quite promising. The research has been carried out in cooperation with Ansaldo STS and ECM Spa.
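
    The core cross-correlation idea is compact: two measurements of the same track input taken a known distance apart are delayed copies of each other, and the lag of the cross-correlation peak gives the speed. In the sketch below, the sensor spacing, sampling rate and synthetic pulse are illustrative assumptions:

    ```python
    # Minimal sketch: train speed from the lag between two track sensors
    # a known distance apart, found by cross-correlation.
    import numpy as np

    def estimate_speed(sig_a, sig_b, spacing_m, fs_hz):
        """Speed from the lag maximizing the cross-correlation of two sensors."""
        a = sig_a - sig_a.mean()
        b = sig_b - sig_b.mean()
        xcorr = np.correlate(b, a, mode="full")
        lag = np.argmax(xcorr) - (len(a) - 1)   # delay of b w.r.t. a, in samples
        if lag <= 0:
            raise ValueError("expected the downstream sensor to respond later")
        return spacing_m * fs_hz / lag          # m/s

    # Example: a pulse seen 0.25 s later by a sensor 5 m down the track
    fs = 10_000.0
    t = np.arange(20_000)
    pulse = np.exp(-0.5 * ((t - 6_000) / 300.0) ** 2)
    delayed = np.roll(pulse, 2_500)
    print(estimate_speed(pulse, delayed, spacing_m=5.0, fs_hz=fs))  # -> 20.0 m/s
    ```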

  1. Comparison of machine-learning algorithms to build a predictive model for detecting undiagnosed diabetes - ELSA-Brasil: accuracy study.

    PubMed

    Olivera, André Rodrigues; Roesler, Valter; Iochpe, Cirano; Schmidt, Maria Inês; Vigo, Álvaro; Barreto, Sandhi Maria; Duncan, Bruce Bartholow

    2017-01-01

    Type 2 diabetes is a chronic disease associated with a wide range of serious health complications that have a major impact on overall health. The aims here were to develop and validate predictive models for detecting undiagnosed diabetes using data from the Longitudinal Study of Adult Health (ELSA-Brasil) and to compare the performance of different machine-learning algorithms in this task. Comparison of machine-learning algorithms to develop predictive models using data from ELSA-Brasil. After selecting a subset of 27 candidate variables from the literature, models were built and validated in four sequential steps: (i) parameter tuning with tenfold cross-validation, repeated three times; (ii) automatic variable selection using forward selection, a wrapper strategy with four different machine-learning algorithms and tenfold cross-validation (repeated three times), to evaluate each subset of variables; (iii) error estimation of model parameters with tenfold cross-validation, repeated ten times; and (iv) generalization testing on an independent dataset. The models were created with the following machine-learning algorithms: logistic regression, artificial neural network, naïve Bayes, K-nearest neighbor and random forest. The best models were created using artificial neural networks and logistic regression. These achieved mean areas under the curve of, respectively, 75.24% and 74.98% in the error estimation step and 74.17% and 74.41% in the generalization testing step. Most of the predictive models produced similar results, and demonstrated the feasibility of identifying individuals with the highest probability of having undiagnosed diabetes through easily obtained clinical data.
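
    A minimal sketch of the validation pattern described above (repeated ten-fold cross-validation of a logistic-regression screener, scored by area under the ROC curve) follows; the synthetic data stand in for the 27 ELSA-Brasil candidate variables:

    ```python
    # Sketch: repeated stratified 10-fold CV of a logistic-regression model,
    # scored by ROC AUC. Data and class balance are placeholder assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=2000, n_features=27,
                               weights=[0.9, 0.1], random_state=0)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
    auc = cross_val_score(model, X, y, scoring="roc_auc", cv=cv)
    print(f"mean AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
    ```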

  2. Electron-electron interaction in ion-atom collisions studied by projectile state-resolved Auger-electron spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Dohyung

    This dissertation addresses the problem of dynamic electron-electron interactions in fast ion-atom collisions using projectile Auger electron spectroscopy. The study was carried out by measuring high-resolution projectile KLL Auger electron spectra as a function of projectile energy for the various collision systems of 0.25-2 MeV/u Oq+ and Fq+ incident on H2 and He targets. The electrons were detected in the beam direction, where the kinematic broadening is minimized. A zero-degree tandem electron spectrometer system was developed and showed the versatility of zero-degree measurements of collisionally produced atomic states. The zero-degree binary encounter electrons (BEe), quasifree target electrons ionized by the projectiles in head-on collisions, were observed as a strong background in the KLL Auger electron spectrum. They were studied by treating the target ionization as 180-degree Rutherford elastic scattering in the projectile frame, which resulted in a validity test of the impulse approximation (IA) and a way to determine the spectrometer efficiency. An anomalous q-dependence, in which the zero-degree BEe yields increase with decreasing projectile charge state (q), was observed. State-resolved KLL Auger cross sections were determined by using the BEe normalization, and thus the cross sections of electron-electron interactions such as resonant transfer-excitation (RTE), electron-electron excitation (eeE), and electron-electron ionization (eeI) were determined. Projectile 2l capture with 1s → 2p excitation by the captured target electron was observed as an RTE process with Li-like and He-like projectiles, and the measured RTEA (RTE followed by Auger decay) cross sections showed good agreement with an RTE-IA treatment and RTE alignment theory.

  3. Measurement of the muon antineutrino double-differential cross section for quasielastic-like scattering on hydrocarbon at Eν ~ 3.5 GeV

    DOE PAGES

    Patrick, C. E.; Aliaga, L.; Bashyal, A.; ...

    2018-03-08

    We present double-differential measurements of antineutrino charged-current quasielastic scattering in the MINERvA detector. This study improves on a previous single-differential measurement by using updated reconstruction algorithms and interaction models and provides a complete description of observed muon kinematics in the form of a double-differential cross section with respect to muon transverse and longitudinal momentum. We also include in our signal definition zero-meson final states arising from multinucleon interactions and from resonant pion production followed by pion absorption in the primary nucleus. We find that model agreement is considerably improved by a model tuned to MINERvA inclusive neutrino scattering data that incorporates nuclear effects such as weak nuclear screening and two-particle, two-hole enhancements.

  4. A Distributed Wireless Camera System for the Management of Parking Spaces

    PubMed Central

    Melničuk, Petr

    2017-01-01

    The importance of detecting parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine the occupancy of parking spaces based on information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient algorithm for occupancy detection based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at a rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. The reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces. PMID:29283371
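
    A minimal sketch of the HOG-plus-SVM classification step is given below; the scikit-image and scikit-learn calls are standard, while the 64 x 64 patches, HOG parameters and random labels are placeholders for the real parking-space crops:

    ```python
    # Sketch: HOG features of image patches fed to a linear SVM, in the
    # spirit of the occupancy classifier above. Data are placeholders.
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    patches = rng.random((200, 64, 64))      # stand-in parking-space images
    labels = rng.integers(0, 2, size=200)    # 1 = occupied, 0 = free (assumed)

    features = np.array([
        hog(p, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for p in patches
    ])
    clf = LinearSVC(C=1.0).fit(features, labels)
    print(clf.predict(features[:5]))
    ```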

  5. Runway Safety Monitor Algorithm for Single and Crossing Runway Incursion Detection and Alerting

    NASA Technical Reports Server (NTRS)

    Green, David F., Jr.

    2006-01-01

    The Runway Safety Monitor (RSM) is an aircraft-based algorithm for runway incursion detection and alerting that was developed in support of NASA's Runway Incursion Prevention System (RIPS) research conducted under the NASA Aviation Safety and Security Program's Synthetic Vision System project. The RSM algorithm provides warnings of runway incursions in sufficient time for pilots to take evasive action and avoid accidents during landings, takeoffs, or taxiing on the runway. The report documents the RSM software and describes in detail how RSM performs runway incursion detection and alerting functions for NASA RIPS. The report also describes the RIPS flight tests conducted at the Reno/Tahoe International Airport (RNO) and the Wallops Flight Facility (WAL) during July and August of 2004, and the RSM performance results and lessons learned from those flight tests.

  6. Structural damage continuous monitoring by using a data driven approach based on principal component analysis and cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Camacho-Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis; Moreno-Beltrán, Gustavo; Quiroga, Jabid

    2017-05-01

    Continuous monitoring for damage detection in structural assessment calls for low-cost equipment and efficient algorithms. This work describes the stages involved in the design of a methodology with high feasibility for use in continuous damage assessment. Specifically, an algorithm based on a data-driven approach using principal component analysis, with acquired signals pre-processed by means of cross-correlation functions, is discussed. A carbon steel pipe section and a laboratory tower were used as test structures in order to demonstrate the feasibility of the methodology to detect abrupt changes in the structural response when damage occurs. Two damage cases are studied: a crack and a leak, one for each structure, respectively. Experimental results show that the methodology is promising for the continuous monitoring of real structures.

  7. Probing structures of large protein complexes using zero-length cross-linking.

    PubMed

    Rivera-Santiago, Roland F; Sriswasdi, Sira; Harper, Sandra L; Speicher, David W

    2015-11-01

    Structural mass spectrometry (MS) is a field with growing applicability for addressing complex biophysical questions regarding proteins and protein complexes. One of the major structural MS approaches involves the use of chemical cross-linking coupled with MS analysis (CX-MS) to identify proximal sites within macromolecules. Identified cross-linked sites can be used to probe novel protein-protein interactions or the derived distance constraints can be used to verify and refine molecular models. This review focuses on recent advances of "zero-length" cross-linking. Zero-length cross-linking reagents do not add any atoms to the cross-linked species due to the lack of a spacer arm. This provides a major advantage in the form of providing more precise distance constraints as the cross-linkable groups must be within salt bridge distances in order to react. However, identification of cross-linked peptides using these reagents presents unique challenges. We discuss recent efforts by our group to minimize these challenges by using multiple cycles of LC-MS/MS analysis and software specifically developed and optimized for identification of zero-length cross-linked peptides. Representative data utilizing our current protocol are presented and discussed. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Quantification of susceptibility change at high-concentrated SPIO-labeled target by characteristic phase gradient recognition.

    PubMed

    Zhu, Haitao; Nie, Binbin; Liu, Hua; Guo, Hua; Demachi, Kazuyuki; Sekino, Masaki; Shan, Baoci

    2016-05-01

    Phase map cross-correlation detection and quantification can produce highlighted signal at superparamagnetic iron oxide nanoparticles and distinguish them from other hypointensities. The method quantifies susceptibility change by performing least squares analysis between a theoretically generated magnetic field template and an experimentally scanned phase image. Because characteristic phase recognition requires the removal of phase wrap and phase background, the additional steps of phase unwrapping and filtering may increase the chance of computing error and enlarge the inconsistency among algorithms. To solve this problem, a phase gradient cross-correlation and quantification method is developed that recognizes the characteristic phase gradient pattern instead of the phase image, because the phase gradient operation inherently includes unwrapping and filtering functions. However, few studies have mentioned the detectable limit of currently used phase gradient calculation algorithms. The limit may lead to an underestimation of large magnetic susceptibility changes caused by high-concentrated iron accumulation. In this study, mathematical derivation points out the maximum detectable phase gradient calculated by the differential chain algorithm in both the spatial and Fourier domains. To break through this limit, a modified quantification method is proposed that uses unwrapped forward differentiation for phase gradient generation. The method enlarges the detectable range of phase gradient measurement and avoids the underestimation of magnetic susceptibility. Simulation and phantom experiments were used to quantitatively compare the different methods. The in vivo application performed MRI scanning on nude mice implanted with iron-labeled human cancer cells. The results validate the limit of detectable phase gradient and the consequent susceptibility underestimation. The results also demonstrate the advantage of unwrapped forward differentiation compared with differential chain algorithms for susceptibility quantification at high-concentrated iron accumulation. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. An automated cross-correlation based event detection technique and its application to surface passive data set

    USGS Publications Warehouse

    Forghani-Arani, Farnoush; Behura, Jyoti; Haines, Seth S.; Batzle, Mike

    2013-01-01

    In studies on heavy oil, shale reservoirs, tight gas and enhanced geothermal systems, the use of surface passive seismic data to monitor induced microseismicity due to fluid flow in the subsurface is becoming more common. However, in most studies passive seismic records contain days and months of data, and manually analysing the data can be expensive and inaccurate. Moreover, in the presence of noise, detecting the arrival of weak microseismic events becomes challenging. Hence, the use of an automated, accurate and computationally fast technique for event detection in passive seismic data is essential. The conventional automatic event identification algorithm computes a running-window energy ratio of the short-term average to the long-term average of the passive seismic data for each trace. We show that for the common case of a low signal-to-noise ratio in surface passive records, the conventional method is not sufficiently effective at event identification. Here, we extend the conventional algorithm by introducing a technique that is based on the cross-correlation of the energy ratios computed by the conventional method. With our technique we can measure the similarities amongst the computed energy ratios at different traces. Our approach is successful at improving the detectability of events with a low signal-to-noise ratio that are not detectable with the conventional algorithm. Our algorithm also has the advantage of identifying whether an event is common to all stations (a regional event) or to a limited number of stations (a local event). We provide examples of applying our technique to synthetic data and a field surface passive data set recorded at a geothermal site.
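
    The conventional detector that this work builds on reduces to a few lines. Below is a sketch of a trailing-window STA/LTA energy ratio; the window lengths are illustrative assumptions:

    ```python
    # Sketch of the conventional short-term-average / long-term-average
    # (STA/LTA) energy-ratio detector that the paper extends.
    import numpy as np

    def sta_lta(trace, n_sta=50, n_lta=500):
        """Short-term / long-term average energy ratio for samples i >= n_lta."""
        energy = np.asarray(trace, dtype=float) ** 2
        csum = np.concatenate(([0.0], np.cumsum(energy)))
        i = np.arange(n_lta, len(energy) + 1)
        sta = (csum[i] - csum[i - n_sta]) / n_sta   # trailing short window
        lta = (csum[i] - csum[i - n_lta]) / n_lta   # trailing long window
        return sta / (lta + 1e-12)

    # The paper's extension would then cross-correlate these ratio curves
    # across traces (e.g., with np.corrcoef) to keep only events that are
    # coherent over many stations.
    ```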

  10. Continuous fractional-order Zero Phase Error Tracking Control.

    PubMed

    Liu, Lu; Tian, Siyuan; Xue, Dingyu; Zhang, Tao; Chen, YangQuan

    2018-04-01

    A continuous-time fractional-order feedforward control algorithm for tracking desired time-varying input signals is proposed in this paper. The presented controller cancels the phase shift caused by the zeros and poles of the controlled closed-loop fractional-order system, so it is called a Fractional-Order Zero Phase Error Tracking Controller (FZPETC). The controlled systems are divided into two categories, i.e. with and without non-cancellable (non-minimum-phase) zeros, which lie in the unstable region or on the stability boundary. Each kind of system has a targeted FZPETC design strategy. The improved tracking performance has been evaluated successfully by applying the proposed controller to three different kinds of fractional-order controlled systems. Besides, a modified quasi-perfect tracking scheme is presented for systems that may not have future tracking-trajectory information available or that have problems with high-frequency disturbance rejection if the perfect tracking algorithm is applied. A simulation comparison and a hardware-in-the-loop thermal Peltier platform are shown to validate the practicality of the proposed quasi-perfect control algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Solving the patient zero inverse problem by using generalized simulated annealing

    NASA Astrophysics Data System (ADS)

    Menin, Olavo H.; Bauch, Chris T.

    2018-01-01

    Identifying patient zero - the initially infected source of a given outbreak - is an important step in epidemiological investigations of both existing and emerging infectious diseases. Here, the use of the Generalized Simulated Annealing algorithm (GSA) to solve the inverse problem of finding the source of an outbreak is studied. The classical disease natural histories susceptible-infected (SI), susceptible-infected-susceptible (SIS), susceptible-infected-recovered (SIR) and susceptible-infected-recovered-susceptible (SIRS) on a regular lattice are addressed. Both the position of patient zero and its time of infection are considered unknown. The algorithm performance with respect to the generalization parameter qv and the fraction ρ of infected nodes for whom infection was ascertained is assessed. Numerical experiments show the algorithm is able to retrieve the epidemic source with good accuracy, even when ρ is small, but present no evidence to support that GSA performs better than its classical version. Our results suggest that simulated annealing could be a helpful tool for identifying patient zero in an outbreak where not all cases can be ascertained.

  12. Eliminating the zero spectrum in Fourier transform profilometry using empirical mode decomposition.

    PubMed

    Li, Sikun; Su, Xianyu; Chen, Wenjing; Xiang, Liqun

    2009-05-01

    Empirical mode decomposition is introduced into Fourier transform profilometry to extract the zero spectrum included in the deformed fringe pattern without the need for capturing two fringe patterns with pi phase difference. The fringe pattern is subsequently demodulated using a standard Fourier transform profilometry algorithm. With this method, the deformed fringe pattern is adaptively decomposed into a finite number of intrinsic mode functions that vary from high frequency to low frequency by means of an algorithm referred to as a sifting process. Then the zero spectrum is separated from the high-frequency components effectively. Experiments validate the feasibility of this method.
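
    The idea can be sketched as follows, assuming the PyEMD package for the decomposition; treating the last two IMFs plus residue as the zero spectrum is an illustrative guess rather than the paper's criterion:

    ```python
    # Sketch: remove the low-frequency "zero spectrum" of a fringe pattern
    # with EMD before the Fourier-transform profilometry step.
    # Assumes the PyEMD package (pip install EMD-signal).
    import numpy as np
    from PyEMD import EMD

    x = np.linspace(0, 1, 2048)
    # synthetic deformed fringe: background + carrier with phase modulation
    fringe = 2 + 0.5 * x + np.cos(2 * np.pi * 60 * x + 0.8 * np.sin(2 * np.pi * 3 * x))

    imfs = EMD().emd(fringe)           # components from high to low frequency
    carrier = imfs[:-2].sum(axis=0)    # drop the slowest components (assumed
                                       # to carry the zero spectrum)
    spectrum = np.fft.rfft(carrier)    # now largely free of the zero-order peak
    ```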

  13. Successive Projections Algorithm-Multivariable Linear Regression Classifier for the Detection of Contaminants on Chicken Carcasses in Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Wu, W.; Chen, G. Y.; Kang, R.; Xia, J. C.; Huang, Y. P.; Chen, K. J.

    2017-07-01

    During slaughtering and further processing, chicken carcasses are inevitably contaminated by microbial pathogens. Due to food safety concerns, many countries implement a zero-tolerance policy that forbids the placement of visibly contaminated carcasses in ice-water chiller tanks during processing. Manual detection of contaminants is labor intensive and imprecise. Here, a successive projections algorithm (SPA)-multivariable linear regression (MLR) classifier based on an optimal performance threshold was developed for automatic detection of contaminants on chicken carcasses. Hyperspectral images were obtained using a hyperspectral imaging system. A regression model of the classifier was established by MLR based on twelve characteristic wavelengths (505, 537, 561, 562, 564, 575, 604, 627, 656, 665, 670, and 689 nm) selected by SPA, and the optimal threshold T = 1 was obtained from the receiver operating characteristic (ROC) analysis. The SPA-MLR classifier provided the best detection results when compared with the SPA-partial least squares (PLS) regression classifier and the SPA-least squares support vector machine (LS-SVM) classifier. The true positive rate (TPR) of 100% and the false positive rate (FPR) of 0.392% indicate that the SPA-MLR classifier can utilize spatial and spectral information to effectively detect contaminants on chicken carcasses.

  14. Characterization of binary string statistics for syntactic landmine detection

    NASA Astrophysics Data System (ADS)

    Nasif, Ahmed O.; Mark, Brian L.; Hintz, Kenneth J.

    2011-06-01

    Syntactic landmine detection has been proposed to detect and classify non-metallic landmines using ground penetrating radar (GPR). In this approach, the GPR return is processed to extract characteristic binary strings for landmine and clutter discrimination. In our previous work, we discussed the preprocessing methodology by which the amplitude information of the GPR A-scan signal can be effectively converted into binary strings, which identify the impedance discontinuities in the signal. In this work, we study the statistical properties of the binary string space. In particular, we develop a Markov chain model to characterize the observed bit sequence of the binary strings. The state is defined as the number of consecutive zeros between two ones in the binarized A-scans. Since the strings are highly sparse (the number of zeros is much greater than the number of ones), defining the state this way leads to a smaller number of states compared to the case where each bit is defined as a state. The number of total states is further reduced by quantizing the number of consecutive zeros. In order to identify the correct order of the Markov model, the mean square difference (MSD) between the transition matrices of mine strings and non-mine strings is calculated up to order four using training data. The results show that order one or two maximizes this MSD. The specification of the transition probabilities of the chain can be used to compute the likelihood of any given string. Such a model can be used to identify characteristic landmine strings during the training phase. These developments on modeling and characterizing the string statistics can potentially be part of a real-time landmine detection algorithm that identifies landmine and clutter in an adaptive fashion.
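
    A sketch of the run-length Markov chain described above follows: zero-run lengths between ones are quantized into a few states, and a first-order transition matrix is estimated with Laplace smoothing. The quantization bins and the synthetic string statistics are assumptions for illustration:

    ```python
    # Sketch: first-order Markov chain over quantized zero-run lengths of a
    # sparse binary string, as used for mine/clutter discrimination.
    import numpy as np

    def run_lengths(bits):
        """Lengths of the zero runs between successive ones."""
        ones = np.flatnonzero(np.asarray(bits) == 1)
        return np.diff(ones) - 1

    def quantize(runs, bins=(0, 2, 5, 10, np.inf)):
        """Map each run length to one of four states (bins are assumed)."""
        return np.digitize(runs, bins[1:])

    def transition_matrix(states, n_states=4):
        T = np.ones((n_states, n_states))       # Laplace smoothing
        for a, b in zip(states[:-1], states[1:]):
            T[a, b] += 1
        return T / T.sum(axis=1, keepdims=True)

    bits = np.random.default_rng(1).random(2000) < 0.05   # sparse string
    T = transition_matrix(quantize(run_lengths(bits)))
    # The log-likelihood of a new string under T can then score it as
    # mine-like or clutter-like.
    ```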

  15. Adding a Zero-Crossing Count to Spectral Information in Template-Based Speech Recognition

    DTIC Science & Technology

    1982-01-01

    incorporation of zero-crossing information into the spectral representation used in a template-matching system (CICADA). An analysis of zero-crossing data for an... procedure to be used. The work described in this paper was done using the CICADA system developed at Carnegie-Mellon University [Alleva 81, Waibel 80]... CICADA uses a representation based on a compression of the short-term spectrum according to a 16-coefficient mel scale. Let us consider the CICADA...
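
    A minimal sketch of the underlying feature construction, appending a per-frame zero-crossing count to a spectral vector, is shown below; the frame length and FFT size are assumptions, not the report's CICADA parameters:

    ```python
    # Sketch: augment a spectral feature vector with a zero-crossing count,
    # in the spirit of the report above.
    import numpy as np

    def zero_crossing_count(frame):
        """Number of sign changes within one analysis frame."""
        signs = np.sign(frame)
        signs[signs == 0] = 1                  # treat exact zeros as positive
        return int(np.count_nonzero(np.diff(signs)))

    def frame_features(frame, n_fft=256):
        spectrum = np.abs(np.fft.rfft(frame, n_fft))    # spectral part
        return np.concatenate([spectrum, [zero_crossing_count(frame)]])
    ```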

  16. Semiautomated analysis of optical coherence tomography crystalline lens images under simulated accommodation

    PubMed Central

    Kim, Eon; Ehrmann, Klaus; Uhlhorn, Stephen; Borja, David; Arrieta-Quintero, Esdras; Parel, Jean-Marie

    2011-01-01

    Presbyopia is an age-related, gradual loss of accommodation, mainly due to changes in the crystalline lens. As part of research efforts to understand and cure this condition, ex vivo, cross-sectional optical coherence tomography images of crystalline lenses were obtained using the Ex-Vivo Accommodation Simulator (EVAS II) instrument and analyzed to extract their physical and optical properties. Various filters and edge detection methods were applied to isolate the edge contour. An ellipse is fitted to the lens outline to obtain a central reference point for transforming the pixel data into the analysis coordinate system. This allows the fitting of a high-order equation to obtain a mathematical description of the edge contour, which obeys constraints of continuity as well as zero-to-infinite surface slopes from apex to equator. Geometrical parameters of the lens were determined for lens images captured at different accommodative states. Various curve fitting functions were developed to mathematically describe the anterior and posterior surfaces of the lens. Their differences were evaluated and their suitability for extracting the optical performance of the lens was assessed. The robustness of these algorithms was tested by analyzing the same images multiple times. PMID:21639571

  17. Semiautomated analysis of optical coherence tomography crystalline lens images under simulated accommodation.

    PubMed

    Kim, Eon; Ehrmann, Klaus; Uhlhorn, Stephen; Borja, David; Arrieta-Quintero, Esdras; Parel, Jean-Marie

    2011-05-01

    Presbyopia is an age-related, gradual loss of accommodation, mainly due to changes in the crystalline lens. As part of research efforts to understand and cure this condition, ex vivo, cross-sectional optical coherence tomography images of crystalline lenses were obtained using the Ex-Vivo Accommodation Simulator (EVAS II) instrument and analyzed to extract their physical and optical properties. Various filters and edge detection methods were applied to isolate the edge contour. An ellipse is fitted to the lens outline to obtain a central reference point for transforming the pixel data into the analysis coordinate system. This allows the fitting of a high-order equation to obtain a mathematical description of the edge contour, which obeys constraints of continuity as well as zero-to-infinite surface slopes from apex to equator. Geometrical parameters of the lens were determined for lens images captured at different accommodative states. Various curve fitting functions were developed to mathematically describe the anterior and posterior surfaces of the lens. Their differences were evaluated and their suitability for extracting the optical performance of the lens was assessed. The robustness of these algorithms was tested by analyzing the same images multiple times.

  18. Compressive Sensing of Foot Gait Signals and Its Application for the Estimation of Clinically Relevant Time Series.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2016-07-01

    A new signal reconstruction algorithm for compressive sensing based on the minimization of a pseudonorm which promotes block-sparse structure on the first-order difference of the signal is proposed. Involved optimization is carried out by using a sequential version of Fletcher-Reeves' conjugate-gradient algorithm, and the line search is based on Banach's fixed-point theorem. The algorithm is suitable for the reconstruction of foot gait signals which admit block-sparse structure on the first-order difference. An additional algorithm for the estimation of stride-interval, swing-interval, and stance-interval time series from the reconstructed foot gait signals is also proposed. This algorithm is based on finding zero crossing indices of the foot gait signal and using the resulting indices for the computation of time series. Extensive simulation results demonstrate that the proposed signal reconstruction algorithm yields improved signal-to-noise ratio and requires significantly reduced computational effort relative to several competing algorithms over a wide range of compression ratio. For a compression ratio in the range from 88% to 94%, the proposed algorithm is found to offer improved accuracy for the estimation of clinically relevant time-series parameters, namely, the mean value, variance, and spectral index of stride-interval, stance-interval, and swing-interval time series, relative to its nearest competitor algorithm. The improvement in performance for compression ratio as high as 94% indicates that the proposed algorithms would be useful for designing compressive sensing-based systems for long-term telemonitoring of human gait signals.
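
    The time-series extraction step has a simple core: locate rising zero crossings of the reconstructed gait signal and difference their indices. In the sketch below, the synthetic waveform and sampling rate are assumptions:

    ```python
    # Sketch: stride-interval series from the zero-crossing indices of a
    # (reconstructed) foot gait signal.
    import numpy as np

    def rising_zero_crossings(signal):
        s = np.sign(signal)
        s[s == 0] = 1
        return np.flatnonzero((s[:-1] < 0) & (s[1:] > 0)) + 1

    fs = 100.0                                  # Hz, assumed
    t = np.arange(0, 60, 1 / fs)
    gait = np.sin(2 * np.pi * 0.9 * t)          # ~0.9 Hz stride rhythm (assumed)

    idx = rising_zero_crossings(gait)
    stride_intervals = np.diff(idx) / fs        # seconds between strides
    print(stride_intervals.mean(), stride_intervals.var())
    ```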

  19. Negative-ion formation in the explosives RDX, PETN, and TNT using the Reversal Electron Attachment Detection (READ) technique

    NASA Technical Reports Server (NTRS)

    Chutijian, Ara; Boumsellek, S.; Alajajian, S. H.

    1992-01-01

    In the search for high sensitivity and direct atmospheric sampling of trace species, techniques have been developed such as atmospheric-sampling, glow-discharge ionization (ASGDI), corona discharge, atmospheric pressure ionization (API), electron-capture detection (ECD), and negative-ion chemical ionization (NICI) that are capable of detecting parts-per-billion to parts-per-trillion concentrations of trace species. These techniques are based on positive- or negative-ion formation via charge-transfer to the target, or electron capture under multiple-collision conditions in a Maxwellian distribution of electron energies at the source temperature. One drawback of the high-pressure, corona- or glow-discharge devices is that they are susceptible to interferences either through indistinguishable product masses, or through undesired ion-molecule reactions. The ASGDI technique is relatively immune from such interferences, since at target concentrations of less than 1 ppm the majority of negative ions arises via electron capture rather than through ion-molecule chemistry. A drawback of the conventional ECD, and possibly of the ASGDI, is that they exhibit vanishingly small densities of electrons with energies in the range 0-10 millielectron volts (meV), as can be seen from a typical Maxwellian electron energy distribution function at T = 300 K. Slowing the electrons to these subthermal (less than 10 meV) energies is crucial, since the cross section for attachment of several large classes of molecules is known to increase to values larger than 10^-12 sq cm at near-zero electron energies. In the limit of zero energy these cross sections are predicted to diverge as ε^(-1/2), where ε is the electron energy. In order to provide a better 'match' between the electron energy distribution function and attachment cross section, a new concept of attachment in an electrostatic mirror was developed. In this scheme, electrons are brought to a momentary halt by reversing their direction with electrostatic fields. At this turning point the electrons have zero or near-zero energy. A beam of target molecules is introduced, and the resultant negative ions extracted. This basic idea has been recently improved to allow for better reversal geometry, higher electron currents, lower backgrounds, and increased negative-ion extraction efficiency. We present herein application of the so-called reversal electron attachment detector (READ) to the study of negative-ion formation in the explosives molecules RDX, PETN, and TNT under single-collision conditions.

  20. A new algorithm and system for the characterization of handwriting strokes with delta-lognormal parameters.

    PubMed

    Djioua, Moussa; Plamondon, Réjean

    2009-11-01

    In this paper, we present a new analytical method for estimating the parameters of Delta-Lognormal functions and characterizing handwriting strokes. According to the Kinematic Theory of rapid human movements, these parameters contain information on both the motor commands and the timing properties of a neuromuscular system. The new algorithm, called XZERO, exploits relationships between the zero crossings of the first and second time derivatives of a lognormal function and its four basic parameters. The methodology is described and then evaluated under various testing conditions. The new tool allows a greater variety of stroke patterns to be processed automatically. Furthermore, for the first time, the extraction accuracy is quantified empirically, taking advantage of the exponential relationships that link the dispersion of the extraction errors with its signal-to-noise ratio. A new extraction system which combines this algorithm with two other previously published methods is also described and evaluated. This system provides researchers involved in various domains of pattern analysis and artificial intelligence with new tools for the basic study of single strokes as primitives for understanding rapid human movements.
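
    The geometric idea behind XZERO can be illustrated numerically: the zeros of the first and second time derivatives of a lognormal velocity profile (its maximum and inflection points) pin down the parameters. The numeric differentiation below is a stand-in for the paper's analytic relations, and the parameter values are arbitrary:

    ```python
    # Sketch: locate the extremum and inflection points of a lognormal
    # velocity profile via zero crossings of its numeric derivatives.
    import numpy as np

    t = np.linspace(0.01, 2.0, 20000)
    t0, mu, sigma, D = 0.0, -0.7, 0.35, 1.0      # assumed parameters
    v = D / ((t - t0) * sigma * np.sqrt(2 * np.pi)) * np.exp(
        -(np.log(t - t0) - mu) ** 2 / (2 * sigma**2))

    dv = np.gradient(v, t)
    d2v = np.gradient(dv, t)

    def zeros(y):
        """Indices where the sign of y changes."""
        s = np.sign(y)
        s[s == 0] = 1
        return np.flatnonzero(s[:-1] * s[1:] < 0)

    t_peak = t[zeros(dv)]    # single maximum of the lognormal
    t_infl = t[zeros(d2v)]   # two inflection points
    # closed-form check: the maximum sits at t0 + exp(mu - sigma**2)
    print(t_peak, np.exp(mu - sigma**2))
    ```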

  1. Isobaric Reconstruction of the Baryonic Acoustic Oscillation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Yu, Hao-Ran; Zhu, Hong-Ming; Yu, Yu; Pan, Qiaoyin; Pen, Ue-Li

    2017-06-01

    In this Letter, we report a significant recovery of the linear baryonic acoustic oscillation (BAO) signature by applying the isobaric reconstruction algorithm to the nonlinear matter density field. Assuming that only the longitudinal component of the displacement is cosmologically relevant, this algorithm iteratively solves the coordinate transform between the Lagrangian and Eulerian frames without requiring any specific knowledge of the dynamics. For the dark matter field, it produces the nonlinear displacement potential with very high fidelity. The reconstruction error at the pixel level is within a few percent and is caused only by the emergence of the transverse component after shell-crossing. As it circumvents the strongest nonlinearity of the density evolution, the reconstructed field is well described by linear theory and immune from the bulk-flow smearing of the BAO signature. Therefore, this algorithm could significantly improve the measurement accuracy of the sound horizon scale s. For a perfect large-scale structure survey at redshift zero without Poisson or instrumental noise, the fractional error Δs/s is reduced by a factor of ~2.7, very close to the ideal limit with the linear power spectrum and Gaussian covariance matrix.

  2. A parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix

    NASA Technical Reports Server (NTRS)

    Swarztrauber, Paul N.

    1993-01-01

    A parallel algorithm, called polysection, is presented for computing the eigenvalues of a symmetric tridiagonal matrix. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level, which ensures that different processors compute different zeros. The signs of the polynomials at the interval endpoints are determined a priori and used to guarantee that all zeros are found. The use of finite-precision arithmetic may result in multiple zeros; however, in this case, the intervals coalesce and their number determines exactly the multiplicity of the zero. For an N × N matrix the eigenvalues can be determined in O(log² N) time with N² processors and O(N) time with N processors. The method is compared with a parallel variant of bisection that requires O(N²) time on a single processor, O(N) time with N processors, and O(log N) time with N² processors.
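
    The classical ingredient underlying bisection-style methods of this kind is the Sturm-sequence sign count, which tells each processor how many eigenvalues lie below a trial point so that disjoint intervals can be searched independently. A serial sketch of that ingredient (not the polysection recurrence itself) follows:

    ```python
    # Sketch: Sturm-sequence sign count and bisection for eigenvalues of a
    # symmetric tridiagonal matrix. Serial illustration only.
    import numpy as np

    def count_eigs_below(d, e, x):
        """Number of eigenvalues of tridiag(d, e) strictly less than x."""
        count, q = 0, 1.0
        for i in range(len(d)):
            q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
            if q == 0.0:
                q = -1e-30          # avoid division by zero at exact ties
            if q < 0:
                count += 1
        return count

    def bisect_eig(d, e, k, lo, hi, tol=1e-12):
        """k-th smallest eigenvalue (0-based) in [lo, hi] by bisection."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if count_eigs_below(d, e, mid) > k:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    d = np.array([2.0, 2.0, 2.0, 2.0])   # diagonal
    e = np.array([1.0, 1.0, 1.0])        # off-diagonal
    print([round(bisect_eig(d, e, k, -10, 10), 6) for k in range(4)])
    ```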

  3. MUSIC-type imaging of small perfectly conducting cracks with an unknown frequency

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang

    2015-09-01

    MUltiple SIgnal Classification (MUSIC) is a famous non-iterative detection algorithm in inverse scattering problems. However, when the applied frequency is unknown, inaccurate locations are identified via MUSIC. This fact has been confirmed through numerical simulations. However, the reason behind this phenomenon has not been investigated theoretically. Motivated by this fact, we identify the structure of MUSIC-type imaging functionals with unknown frequency by establishing a relationship with Bessel functions of order zero of the first kind. Through this, we can explain why inaccurate results appear.

  4. Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images

    PubMed Central

    Srinivasan, Pratul P.; Kim, Leo A.; Mettu, Priyatham S.; Cousins, Scott W.; Comer, Grant M.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    We present a novel fully automated algorithm for the detection of retinal diseases via optical coherence tomography (OCT) imaging. Our algorithm utilizes multiscale histograms of oriented gradient descriptors as feature vectors of a support vector machine based classifier. The spectral domain OCT data sets used for cross-validation consisted of volumetric scans acquired from 45 subjects: 15 normal subjects, 15 patients with dry age-related macular degeneration (AMD), and 15 patients with diabetic macular edema (DME). Our classifier correctly identified 100% of cases with AMD, 100% cases with DME, and 86.67% cases of normal subjects. This algorithm is a potentially impactful tool for the remote diagnosis of ophthalmic diseases. PMID:25360373

  5. Real-time micro-vibration multi-spot synchronous measurement within a region based on heterodyne interference

    NASA Astrophysics Data System (ADS)

    Lan, Ma; Xiao, Wen; Chen, Zonghui; Hao, Hongliang; Pan, Feng

    2018-01-01

    Real-time micro-vibration measurement is widely used in engineering applications. It is difficult for traditional optical detection methods to meet real-time requirements for relatively high-frequency, multi-spot synchronous measurement over a region, especially at the nanoscale. Based on the method of heterodyne interference, an experimental system for real-time measurement of micro-vibration is constructed to satisfy this demand in engineering applications. The vibration response signal is measured by combining optical heterodyne interferometry with a high-speed CMOS-DVR image acquisition system. Then, by extracting and processing multiple pixels at the same time, four digital demodulation techniques are implemented to simultaneously acquire the vibrating velocity of the target from the recorded sequences of images. The different demodulation algorithms are analyzed, and the results show that these four algorithms are suitable for different interference signals. Both the autocorrelation and cross-correlation algorithms meet the needs of real-time measurement. The autocorrelation algorithm demodulates the frequency more accurately, while the cross-correlation algorithm is more accurate in recovering the amplitude.

  6. Development of an inverse distance weighted active infrared stealth scheme using the repulsive particle swarm optimization algorithm.

    PubMed

    Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk

    2018-04-20

    The threat posed by detection via infrared (IR) signals is higher than that of other signals such as radar or sonar because an object detected by an IR sensor cannot easily recognize its detection status. Recently, research on actively reducing IR signals has been conducted, controlling the IR signal by adjusting the surface temperature of the object. In this paper, we propose an active IR stealth algorithm to synchronize the IR signals from an object and the background around it. The proposed method includes the repulsive particle swarm optimization statistical optimization algorithm to estimate the IR stealth surface temperature, which results in synchronization between the IR signals from the object and the surrounding background by setting the inverse-distance-weighted contrast radiant intensity (CRI) to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene to verify the proposed method. Our results show that the inverse-distance-weighted active IR stealth technique proposed in this study is an effective method for reducing the contrast radiant intensity between the object and the background by up to 32% compared to the previous method, which determined the CRI as the simple signal difference between the object and the background.

  7. Influence of Fiber Bragg Grating Spectrum Degradation on the Performance of Sensor Interrogation Algorithms

    PubMed Central

    Lamberti, Alfredo; Vanlanduit, Steve; De Pauw, Ben; Berghmans, Francis

    2014-01-01

    The working principle of fiber Bragg grating (FBG) sensors is mostly based on the tracking of the Bragg wavelength shift. To accomplish this task, different algorithms have been proposed, from conventional maximum and centroid detection algorithms to more recently-developed correlation-based techniques. Several studies regarding the performance of these algorithms have been conducted, but they did not take into account spectral distortions, which appear in many practical applications. This paper addresses this issue and analyzes the performance of four different wavelength tracking algorithms (maximum detection, centroid detection, cross-correlation and fast phase-correlation) when applied to distorted FBG spectra used for measuring dynamic loads. Both simulations and experiments are used for the analyses. The dynamic behavior of distorted FBG spectra is simulated using the transfer-matrix approach, and the amount of distortion of the spectra is quantified using dedicated distortion indices. The algorithms are compared in terms of achievable precision and accuracy. To corroborate the simulation results, experiments were conducted using three FBG sensors glued on a steel plate and subjected to a combination of transverse force and vibration loads. The analysis of the results showed that the fast phase-correlation algorithm guarantees the best combination of versatility, precision and accuracy. PMID:25521386
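
    Two of the compared trackers are easy to sketch side by side: a thresholded power-weighted centroid and a cross-correlation against a reference spectrum. In the sketch below, the Gaussian reflection profile, wavelength grid and noise level are assumptions:

    ```python
    # Sketch: centroid vs. cross-correlation Bragg-wavelength tracking on a
    # synthetic (assumed Gaussian) FBG reflection spectrum.
    import numpy as np

    wl = np.linspace(1549.0, 1551.0, 2000)           # nm grid (assumed)

    def fbg_spectrum(center, fwhm=0.2):
        return np.exp(-4 * np.log(2) * (wl - center) ** 2 / fwhm**2)

    ref = fbg_spectrum(1550.0)
    meas = fbg_spectrum(1550.137) \
        + 0.01 * np.random.default_rng(2).standard_normal(wl.size)

    # centroid estimate: power-weighted mean above a 10% threshold
    mask = meas > 0.1 * meas.max()
    centroid = np.sum(wl[mask] * meas[mask]) / np.sum(meas[mask])

    # cross-correlation estimate: shift maximizing correlation with reference
    xc = np.correlate(meas - meas.mean(), ref - ref.mean(), mode="full")
    shift = (np.argmax(xc) - (wl.size - 1)) * (wl[1] - wl[0])
    print(centroid, 1550.0 + shift)
    ```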

  8. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany

    PubMed Central

    Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun

    2017-01-01

    In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD) and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including standard error formulae are provided. A simulation study shows that the new algorithm not only gives more accurate, or at least comparable, estimation but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. PMID:26059498

  9. Ultra-sensitive probe of spectral line structure and detection of isotopic oxygen

    NASA Astrophysics Data System (ADS)

    Garner, Richard M.; Dharamsi, A. N.; Khan, M. Amir

    2018-01-01

    We discuss a new method of investigating and obtaining quantitative behavior of higher harmonic (> 2f) wavelength modulation spectroscopy (WMS) based on the signal structure. It is shown that the spectral structure of higher harmonic WMS signals, quantified by the number of zero crossings and turning points, can have increased sensitivity to ambient conditions or line-broadening effects from changes in temperature, pressure, or optical depth. The structure of WMS signals, characterized by combinations of signal magnitude and the spectral locations of turning points and zero crossings, provides a unique scale that quantifies lineshape parameters and is thus useful in optimizing measurements obtained from multi-harmonic WMS signals. We demonstrate this by detecting weaker rotational-vibrational transitions of isotopic atmospheric oxygen (16O18O) in the near-infrared region, where higher harmonic WMS signals are more sensitive despite signal-to-noise ratio considerations. The proposed approach based on spectral structure provides the ability to investigate and quantify signals not only at the linecenter but also in the wing region of the absorption profile. This formulation is particularly useful in tunable diode laser spectroscopy and ultra-precision laser-based sensors, where the absorption signal profile carries information about quantities of interest, e.g., concentration, velocity, or gas collision dynamics.

  10. Cross-Dependency Inference in Multi-Layered Networks: A Collaborative Filtering Perspective.

    PubMed

    Chen, Chen; Tong, Hanghang; Xie, Lei; Ying, Lei; He, Qing

    2017-08-01

    The increasingly connected world has catalyzed the fusion of networks from different domains, which facilitates the emergence of a new network model - multi-layered networks. Examples of such network systems include critical infrastructure networks, biological systems, organization-level collaborations, cross-platform e-commerce, and so forth. One crucial structure that distinguishes multi-layered networks from other network models is the cross-layer dependency, which describes the associations between the nodes from different layers. Needless to say, the cross-layer dependency in the network plays an essential role in many data mining applications like system robustness analysis and complex network control. However, it remains a daunting task to know the exact dependency relationships due to noise, limited accessibility, and so forth. In this article, we tackle the cross-layer dependency inference problem by modeling it as a collective collaborative filtering problem. Based on this idea, we propose an effective algorithm Fascinate that can reveal unobserved dependencies with linear complexity. Moreover, we derive Fascinate-ZERO, an online variant of Fascinate that can respond to a newly added node in a timely manner by checking its neighborhood dependencies. We perform extensive evaluations on real datasets to substantiate the superiority of our proposed approaches.

  11. Coherent Detection of High-Rate Optical PPM Signals

    NASA Technical Reports Server (NTRS)

    Vilnrotter, Victor; Fernandez, Michela Munoz

    2006-01-01

    A method of coherent detection of high-rate pulse-position modulation (PPM) on a received laser beam has been conceived as a means of reducing the deleterious effects of noise and atmospheric turbulence in free-space optical communication using focal-plane detector array technologies. In comparison with a receiver based on direct detection of the intensity modulation of a PPM signal, a receiver based on the present method of coherent detection performs well at much higher background levels. In principle, the coherent-detection receiver can exhibit quantum-limited performance despite atmospheric turbulence. The key components of such a receiver include standard receiver optics, a laser that serves as a local oscillator, a focal-plane array of photodetectors, and a signal-processing and data-acquisition assembly needed to sample the focal-plane fields and reconstruct the pulsed signal prior to detection. The received PPM-modulated laser beam and the local-oscillator beam are focused onto the photodetector array, where they are mixed in the detection process. The two lasers are of the same or nearly the same frequency. If the two lasers are of different frequencies, then the coherent detection process is characterized as heterodyne and, using traditional heterodyne-detection terminology, the difference between the two laser frequencies is denoted the intermediate frequency (IF). If the two laser beams are of the same frequency and remain aligned in phase, then the coherent detection process is characterized as homodyne (essentially, heterodyne detection at zero IF). As a result of the inherent squaring operation of each photodetector, the output current includes an IF component that contains the signal modulation. The amplitude of the IF component is proportional to the product of the local-oscillator signal amplitude and the PPM signal amplitude. Hence, by using a sufficiently strong local-oscillator signal, one can make the PPM-modulated IF signal strong enough to overcome thermal noise in the receiver circuits: this is what makes it possible to achieve near-quantum-limited detection in the presence of strong background. Following quantum-limited coherent detection, the outputs of the individual photodetectors are automatically aligned in phase by use of one or more adaptive array compensation algorithms [e.g., the least-mean-square (LMS) algorithm]. Then the outputs are combined and the resulting signal is processed to extract the high-rate information, as though the PPM signal were received by a single photodetector. In a continuing series of experiments to test this method (see Fig. 1), the local oscillator has a wavelength of 1,064 nm, and another laser is used as a signal transmitter at a slightly different wavelength to establish an IF of about 6 MHz. There are 16 photodetectors in a 4 4 focal-plane array; the detector outputs are digitized at a sampling rate of 25 MHz, and the signals in digital form are combined by use of the LMS algorithm. Convergence of the adaptive combining algorithm in the presence of simulated atmospheric turbulence for optical PPM signals has already been demonstrated in the laboratory; the combined output is shown in Fig. 2(a), and Fig. 2(b) shows the behavior of the phase of the combining weights as a function of time (or samples). We observe that the phase of the weights has a sawtooth shape due to the continuously changing phase in the down-converted output, which is not exactly at zero frequency. 
Detailed performance analysis of this coherent free-space optical communication system in the presence of simulated atmospheric turbulence is currently under way.
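
    As an illustration of the adaptive combining step, here is a minimal Python/NumPy sketch of complex LMS weight adaptation that phase-aligns and sums simulated detector outputs against a reference carrier. The array size, sampling rate, and IF follow the experiment described above; the noise level, step size, and the use of a known reference are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal sketch (illustrative, not the authors' implementation) of complex
    # LMS combining: align the phases of simulated detector outputs against a
    # known reference carrier and sum them coherently.
    import numpy as np

    rng = np.random.default_rng(0)
    n_det, n_samp = 16, 5000                 # 4 x 4 focal-plane array
    t = np.arange(n_samp) / 25e6             # 25 MHz sampling rate
    ref = np.exp(2j * np.pi * 6e6 * t)       # ~6 MHz IF carrier (PPM envelope omitted)

    # Each detector sees the carrier with an unknown gain/phase plus noise.
    gains = rng.uniform(0.5, 1.0, n_det) * np.exp(2j * np.pi * rng.random(n_det))
    noise = 0.3 * (rng.standard_normal((n_det, n_samp))
                   + 1j * rng.standard_normal((n_det, n_samp)))
    x = gains[:, None] * ref + noise

    w = np.ones(n_det, dtype=complex)        # combining weights
    mu = 1e-3                                # LMS step size (assumed)
    combined = np.empty(n_samp, dtype=complex)
    for k in range(n_samp):
        y = np.vdot(w, x[:, k])              # w^H x: combined output sample
        e = ref[k] - y                       # error against the reference
        w += mu * x[:, k] * np.conj(e)       # standard complex LMS update
        combined[k] = y
    ```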

  12. Automatic segmentation of invasive breast carcinomas from dynamic contrast-enhanced MRI using time series analysis.

    PubMed

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva

    2014-08-01

    To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software package, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. © 2013 Wiley Periodicals, Inc.

  13. Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis

    PubMed Central

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva

    2013-01-01

    Purpose Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise and fitting algorithms. The aim was to accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Methods We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist’s segmentation and the output of a commercial software package, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). Results The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist’s segmentation and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist’s segmentation and CADstream output, computed in terms of the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC=0.95. Conclusion The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175

  14. Automated peroperative assessment of stents apposition from OCT pullbacks.

    PubMed

    Dubuisson, Florian; Péry, Emilie; Ouchchane, Lemlih; Combaret, Nicolas; Kauffmann, Claude; Souteyrand, Géraud; Motreff, Pascal; Sarry, Laurent

    2015-04-01

    This study's aim was to control stent apposition by automatically analyzing endovascular optical coherence tomography (OCT) sequences. The lumen is detected using threshold, morphological and gradient operators to run a Dijkstra algorithm. Wrong detections, tagged by the user and caused by bifurcations, strut presence, thrombotic lesions or dissections, can be corrected using a morphing algorithm. Struts are also segmented by computing symmetrical and morphological operators. The Euclidean distance between detected struts and the artery wall initializes a complete distance map of the stent, and missing data are interpolated with thin-plate spline functions. Rejection of detected outliers, regularization of parameters by generalized cross-validation, and use of the one-sided cyclic property of the map also optimize accuracy. Several indices computed from the map provide quantitative values of malapposition. The algorithm was run on four in-vivo OCT sequences including different cases of incomplete stent apposition. Comparison with manual expert measurements validates the segmentation's accuracy and shows an almost perfect concordance of automated results. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Vision Algorithm for the Solar Aspect System of the High Energy Replicated Optics to Explore the Sun Mission

    NASA Technical Reports Server (NTRS)

    Cramer, Alexander Krishnan

    2014-01-01

    This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun on a high-altitude balloon. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small cross-shaped fiducial markers. Images of this plate taken with an off-the-shelf camera were processed to determine relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an "Average Intersection" method, fiducial detection by a matched filter approach, and identification with an ad-hoc method based on the spacing between fiducials. Performance is verified on real test data where possible, but otherwise uses artificially generated data. Pointing knowledge is ultimately verified to meet the 20 arcsecond requirement.

  16. An underwater turbulence degraded image restoration algorithm

    NASA Astrophysics Data System (ADS)

    Furhad, Md. Hasan; Tahtali, Murat; Lambert, Andrew

    2017-09-01

    Underwater turbulence occurs due to random fluctuations of temperature and salinity in the water. These fluctuations are responsible for variations in water density, refractive index and attenuation. These impose random geometric distortions, spatio-temporally varying blur, limited range visibility and limited contrast on the acquired images. Some restoration techniques have been developed to address this problem, such as image registration based, lucky region based and centroid-based image restoration algorithms. Although these methods demonstrate better results in terms of removing turbulence, they require computationally intensive image registration, higher CPU load and memory allocations. Thus, in this paper, a simple patch-based dictionary learning algorithm is proposed to restore the image while alleviating the costly image registration step. Dictionary learning is a machine learning technique which builds a dictionary of non-zero atoms derived from the sparse representation of an image or signal. The image is divided into several patches and the sharp patches are detected from them. Next, dictionary learning is performed on these patches to estimate the restored image. Finally, an image deconvolution algorithm is employed on the estimated restored image to remove any remaining noise.
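
    A rough sketch of the patch-based dictionary learning idea, using scikit-learn's MiniBatchDictionaryLearning as a stand-in for the authors' unspecified solver; the patch size, number of atoms, and sparsity penalty below are illustrative assumptions.

    ```python
    # Illustrative patch-based dictionary learning for image restoration,
    # in the spirit of the method above (not the authors' exact pipeline).
    import numpy as np
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)
    from sklearn.decomposition import MiniBatchDictionaryLearning

    def restore(image, patch_size=(8, 8), n_atoms=100):
        patches = extract_patches_2d(image, patch_size)
        X = patches.reshape(len(patches), -1)
        mean = X.mean(axis=1, keepdims=True)
        X = X - mean                                  # work on zero-mean patches
        dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                           max_iter=200,  # recent scikit-learn
                                           random_state=0).fit(X)
        codes = dico.transform(X)                     # sparse codes over the atoms
        recon = (codes @ dico.components_) + mean     # reassemble denoised patches
        return reconstruct_from_patches_2d(recon.reshape(patches.shape),
                                           image.shape)

    img = np.random.default_rng(5).random((32, 32))   # stand-in degraded image
    restored = restore(img)
    ```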

  17. Derivative spectrophotometric analysis of benzophenone (as an impurity) in phenytoin

    PubMed Central

    2011-01-01

    Three simple and rapid spectrophotometric methods were developed for detection and trace determination of benzophenone (the main impurity) in phenytoin bulk powder and pharmaceutical formulations. The first method, zero-crossing first derivative spectrophotometry, depends on measuring the first derivative trough values at 257.6 nm for benzophenone. The second method, zero-crossing third derivative spectrophotometry, depends on measuring the third derivative peak values at 263.2 nm. The third method, ratio first derivative spectrophotometry, depends on measuring the peak amplitudes of the first derivative of the ratio spectra (the spectra of benzophenone divided by the spectrum of 5.0 μg/mL phenytoin solution) at 272 nm. The calibration graphs were linear over the range of 1-10 μg/mL. The detection limits of the first and the third derivative methods were found to be 0.04 μg/mL and 0.11 μg/mL and the quantitation limits were 0.13 μg/mL and 0.34 μg/mL, respectively, while for the ratio derivative method, the detection limit was 0.06 μg/mL and the quantitation limit was 0.18 μg/mL. The proposed methods were applied successfully to the assay of the studied drug in phenytoin bulk powder and certain pharmaceutical preparations. The results were statistically compared to those obtained using a polarographic method and were found to be in good agreement. PMID:22152156
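
    The zero-crossing principle behind the first two methods can be illustrated in a few lines: the analyte's derivative signal is read at a wavelength where the interferent's derivative crosses zero, so the interferent contributes nothing there. The band centers and widths below are invented for the demonstration and do not reproduce the actual spectra.

    ```python
    # Toy illustration of zero-crossing first-derivative spectrophotometry.
    import numpy as np

    wl = np.linspace(220, 320, 1001)                  # wavelength grid, nm
    gauss = lambda c, s: np.exp(-0.5 * ((wl - c) / s) ** 2)
    interferent = gauss(258, 12)                      # e.g. a phenytoin-like band
    analyte = gauss(250, 10)                          # e.g. a benzophenone-like band

    d_interf = np.gradient(interferent, wl)
    d_mix = np.gradient(analyte + interferent, wl)    # derivative of the mixture

    # Zero-crossing of the interferent's derivative: sign change between samples.
    zc = np.where(np.diff(np.sign(d_interf)) != 0)[0][0]
    print(f"read analyte signal at {wl[zc]:.1f} nm:", d_mix[zc])
    ```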

  18. A computational method for detecting copy number variations using scale-space filtering

    PubMed Central

    2013-01-01

    Background As next-generation sequencing technology made rapid and cost-effective sequencing available, the importance of computational approaches in finding and analyzing copy number variations (CNVs) has been amplified. Furthermore, most genome projects need to accurately analyze sequences with fairly low-coverage read data. A method to detect the exact types and locations of CNVs from low-coverage read data is urgently needed. Results Here, we propose a new CNV detection method, CNV_SS, which uses scale-space filtering. The scale-space filtering is evaluated by applying Gaussian convolution to the read coverage data at various scales according to a given scaling parameter. Next, by differentiating twice and finding zero-crossing points, inflection points of the scale-space filtered read coverage data are calculated per scale. Then, the types and the exact locations of CNVs are obtained by analyzing the fingerprint map, the contours of zero-crossing points across scales. Conclusions The performance of CNV_SS showed that FNR and FPR stay in the range of 1.27% to 2.43% and 1.14% to 2.44%, respectively, even at a relatively low coverage (0.5× ≤ C ≤ 2×). CNV_SS also gave much more effective results than the conventional methods in terms of FNR, by at least 3.82% and at most 76.97%, even when the coverage level of the read data is low. CNV_SS source code is freely available from http://dblab.hallym.ac.kr/CNV SS/. PMID:23418726
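
    A minimal sketch of the scale-space step described above: smooth the read-coverage profile with Gaussians of increasing scale, differentiate twice, and record the zero-crossing (inflection) points per scale to build the fingerprint map. The toy coverage profile and scale values are illustrative.

    ```python
    # Scale-space zero-crossing sketch: Gaussian smoothing at several scales,
    # second derivative, then zero-crossings (inflection points) per scale.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def zero_crossing_contours(coverage, scales=(2, 4, 8, 16, 32)):
        contours = {}
        for s in scales:
            smoothed = gaussian_filter1d(coverage.astype(float), sigma=s)
            d2 = np.gradient(np.gradient(smoothed))
            zc = np.where(np.diff(np.sign(d2)) != 0)[0]   # inflection points
            contours[s] = zc
        return contours   # the "fingerprint map": zero-crossings per scale

    # Toy duplication: a region of doubled coverage in a flat background.
    coverage = np.r_[np.full(500, 30), np.full(200, 60), np.full(500, 30)]
    print(zero_crossing_contours(coverage))
    ```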

  19. Estimating the dose response relationship for occupational radiation exposure measured with minimum detection level.

    PubMed

    Xue, Xiaonan; Shore, Roy E; Ye, Xiangyang; Kim, Mimi Y

    2004-10-01

    Occupational exposures are often recorded as zero when the exposure is below the minimum detection level (BMDL). This can lead to an underestimation of the doses received by individuals and can lead to biased estimates of risk in occupational epidemiologic studies. The extent of the exposure underestimation is increased with the magnitude of the minimum detection level (MDL) and the frequency of monitoring. This paper uses multiple imputation methods to impute values for the missing doses due to BMDL. A Gibbs sampling algorithm is developed to implement the method, which is applied to two distinct scenarios: when dose information is available for each measurement (but BMDL is recorded as zero or some other arbitrary value), or when the dose information available represents the summation of a series of measurements (e.g., only yearly cumulative exposure is available but based on, say, weekly measurements). Then the average of the multiple imputed exposure realizations for each individual is used to obtain an unbiased estimate of the relative risk associated with exposure. Simulation studies are used to evaluate the performance of the estimators. As an illustration, the method is applied to a sample of historical occupational radiation exposure data from the Oak Ridge National Laboratory.
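
    As a hedged sketch of one piece of this approach, the snippet below imputes below-MDL doses by sampling from a lognormal dose model truncated above the MDL. The paper's Gibbs sampler additionally updates the model parameters and repeats the imputation multiple times; that outer loop is omitted here, and the distribution parameters are placeholders.

    ```python
    # One imputation pass for doses recorded as zero because they fell below
    # the minimum detection level (MDL), assuming a lognormal dose model.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    mdl = 0.1
    observed = rng.lognormal(mean=-1.5, sigma=0.8, size=1000)
    recorded = np.where(observed < mdl, 0.0, observed)     # BMDL stored as zero

    def impute_bmdl(recorded, mdl, mu, sigma, rng):
        out = recorded.copy()
        below = recorded == 0.0
        # Sample log-doses from a normal distribution truncated above log(MDL).
        b = (np.log(mdl) - mu) / sigma
        draws = stats.truncnorm.rvs(-np.inf, b, loc=mu, scale=sigma,
                                    size=below.sum(), random_state=rng)
        out[below] = np.exp(draws)
        return out

    imputed = impute_bmdl(recorded, mdl, mu=-1.5, sigma=0.8, rng=rng)
    ```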

  20. Tracking Objects with Networked Scattered Directional Sensors

    NASA Astrophysics Data System (ADS)

    Plarre, Kurt; Kumar, P. R.

    2007-12-01

    We study the problem of object tracking using highly directional sensors—sensors whose field of vision is a line or a line segment. A network of such sensors monitors a certain region of the plane. Sporadically, objects moving in straight lines and at a constant speed cross the region. A sensor detects an object when it crosses its line of sight, and records the time of the detection. No distance or angle measurements are available. The task of the sensors is to estimate the directions and speeds of the objects, and the sensor lines, which are unknown a priori. This estimation problem involves the minimization of a highly nonconvex cost function. To overcome this difficulty, we introduce an algorithm, which we call "adaptive basis algorithm." This algorithm is divided into three phases: in the first phase, the algorithm is initialized using data from six sensors and four objects; in the second phase, the estimates are updated as data from more sensors and objects are incorporated. The third phase is an optional coordinate transformation. The estimation is done in an "ad-hoc" coordinate system, which we call "adaptive coordinate system." When more information is available, for example, the location of six sensors, the estimates can be transformed to the "real-world" coordinate system. This constitutes the third phase.

  1. Threshold-selecting strategy for best possible ground state detection with genetic algorithms

    NASA Astrophysics Data System (ADS)

    Lässig, Jörg; Hoffmann, Karl Heinz

    2009-04-01

    Genetic algorithms are a standard heuristic to find states of low energy in complex state spaces as given by physical systems such as spin glasses, but also in combinatorial optimization. The paper considers the problem of selecting individuals in the current population in genetic algorithms for crossover. Many schemes have been considered in the literature as possible crossover selection strategies. We show for a large class of quality measures that the best possible probability distribution for selecting individuals in each generation of the algorithm execution is a rectangular distribution over the individuals sorted by their energy values. This means uniform probabilities have to be assigned to a group of the individuals with the lowest energy in the population, but probabilities equal to zero to individuals corresponding to energy values above a fixed cutoff, which is equal to a certain rank in the vector of states in the current population sorted by energy. The considered strategy is dubbed threshold selecting. The proof applies basic arguments of Markov chains and linear optimization and makes only a few assumptions on the underlying principles, and hence applies to a large class of algorithms.
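
    A minimal sketch of threshold selecting under these assumptions: parents are drawn uniformly from the k lowest-energy individuals and never from the rest. The population, energy function, and value of k are placeholders.

    ```python
    # Threshold selecting: uniform selection over the best-k individuals only.
    import numpy as np

    def threshold_select(population, energies, k, n_parents, rng):
        order = np.argsort(energies)          # sort individuals by energy
        pool = order[:k]                      # rectangular window: the best k
        return population[rng.choice(pool, size=n_parents)]

    rng = np.random.default_rng(0)
    pop = rng.standard_normal((50, 10))       # 50 individuals, 10 genes each
    energy = (pop ** 2).sum(axis=1)           # toy energy function
    parents = threshold_select(pop, energy, k=15, n_parents=2, rng=rng)
    ```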

  2. Signal processing techniques for damage detection with piezoelectric wafer active sensors and embedded ultrasonic structural radar

    NASA Astrophysics Data System (ADS)

    Yu, Lingyu; Bao, Jingjing; Giurgiutiu, Victor

    2004-07-01

    The embedded ultrasonic structural radar (EUSR) algorithm is developed for using a piezoelectric wafer active sensor (PWAS) array to detect defects within a large area of a thin-plate specimen. Signal processing techniques are used to extract the time of flight of the wave packets, and thereby to determine the location of the defects with the EUSR algorithm. In our research, the transient tone-burst wave propagation signals are generated and collected by the embedded PWAS. Then, with signal processing, the frequency contents of the signals and the time of flight of individual frequencies are determined. This paper starts with an introduction of the embedded ultrasonic structural radar algorithm. Then we describe the signal processing methods used to extract the time of flight of the wave packets. The signal processing methods used include wavelet denoising, cross correlation, and the Hilbert transform. Though the hardware can provide an averaging function to eliminate noise arising during signal collection, wavelet denoising is included to ensure better signal quality for applications in severe real-world environments. For better recognition of the time of flight, the cross correlation method is used. The Hilbert transform is applied to the signals after cross correlation in order to extract their envelope. Signal processing and EUSR are both implemented in a user-friendly graphical interface program in LabVIEW. We conclude with a description of our vision for applying EUSR signal analysis to structural health monitoring and embedded nondestructive evaluation. To this end, we envisage an automatic damage detection application utilizing embedded PWAS, EUSR, and advanced signal processing.
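
    A compact sketch of the time-of-flight extraction chain (cross-correlation followed by Hilbert-envelope peak picking) on a simulated tone burst; the carrier frequency, sampling rate, and noise level are illustrative, not the paper's experimental values.

    ```python
    # Time-of-flight by cross-correlation plus Hilbert envelope.
    import numpy as np
    from scipy.signal import correlate, hilbert

    fs = 10e6                                   # sampling rate (illustrative)
    t = np.arange(0, 20e-6, 1 / fs)
    burst = np.sin(2 * np.pi * 300e3 * t) * np.hanning(t.size)   # tone burst

    delay = 35e-6                               # true time of flight (simulated)
    n_total = int(100e-6 * fs)
    received = np.zeros(n_total)
    k0 = int(delay * fs)
    received[k0:k0 + burst.size] += burst
    received += 0.05 * np.random.default_rng(2).standard_normal(n_total)

    xc = correlate(received, burst, mode="full")
    env = np.abs(hilbert(xc))                   # envelope of the correlation
    lag = np.argmax(env) - (burst.size - 1)     # peak lag in samples
    print("estimated ToF: %.2f us" % (lag / fs * 1e6))
    ```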

  3. Biochemia Medica has started using the CrossCheck plagiarism detection software powered by iThenticate

    PubMed Central

    Šupak-Smolčić, Vesna; Šimundić, Ana-Maria

    2013-01-01

    In February 2013, Biochemia Medica joined CrossRef, which enabled us to implement the CrossCheck plagiarism detection service. Therefore, all manuscripts submitted to Biochemia Medica are now first assigned to a Research Integrity Editor (RIE) before the manuscript is sent for peer review. The RIE submits the text to CrossCheck analysis and is responsible for reviewing the results of the text similarity analysis. Based on the CrossCheck analysis results, the RIE subsequently provides a recommendation to the Editor-in-Chief (EIC) on whether the manuscript should be forwarded to peer review, corrected for suspect passages prior to peer review, or immediately rejected. The final decision on the manuscript, however, rests with the EIC. We hope that our new policy and manuscript processing algorithm will help us to further increase the overall quality of our Journal. PMID:23894858

  4. Small-aperture seismic array data processing using a representation of seismograms at zero-crossing points

    NASA Astrophysics Data System (ADS)

    Brokešová, Johana; Málek, Jiří

    2018-07-01

    A new method for representing seismograms by using zero-crossing points is described. This method is based on decomposing a seismogram into a set of quasi-harmonic components and, subsequently, on determining the precise zero-crossing times of these components. An analogous approach can be applied to determine extreme points that represent the zero-crossings of the first time derivative of the quasi-harmonics. Such zero-crossing and/or extreme point seismogram representation can be used successfully to reconstruct single-station seismograms, but the main application is to small-aperture array data analysis to which standard methods cannot be applied. The precise times of the zero-crossing and/or extreme points make it possible to determine precise time differences across the array used to retrieve the parameters of a plane wave propagating across the array, namely, its backazimuth and apparent phase velocity along the Earth's surface. The applicability of this method is demonstrated using two synthetic examples. In the real-data example from the Příbram-Háje array in central Bohemia (Czech Republic) for the Mw 6.4 Crete earthquake of October 12, 2013, this method is used to determine the phase velocity dispersion of both Rayleigh and Love waves. The resulting phase velocities are compared with those obtained by employing the seismic plane-wave rotation-to-translation relations. In this approach, the phase velocity is calculated by obtaining the amplitude ratios between the rotation and translation components. Seismic rotations are derived from the array data, for which the small aperture is not only an advantage but also an applicability condition.
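
    The core operation, determining precise zero-crossing times of a component, can be sketched as linear interpolation between the samples that bracket each sign change; this is a generic illustration, not the authors' exact estimator.

    ```python
    # Sub-sample zero-crossing times of a sampled (quasi-harmonic) component.
    import numpy as np

    def zero_crossing_times(x, t):
        i = np.where(np.sign(x[:-1]) * np.sign(x[1:]) < 0)[0]  # bracketing pairs
        # Linear interpolation: fraction of the interval where x passes zero.
        frac = x[i] / (x[i] - x[i + 1])
        return t[i] + frac * (t[i + 1] - t[i])

    fs = 100.0
    t = np.arange(0, 2, 1 / fs)
    x = np.sin(2 * np.pi * 1.3 * t + 0.4)
    print(zero_crossing_times(x, t))   # precise zero-crossing times
    ```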

  5. Application of Novel Software Algorithms to Spectral-Domain Optical Coherence Tomography for Automated Detection of Diabetic Retinopathy.

    PubMed

    Adhi, Mehreen; Semy, Salim K; Stein, David W; Potter, Daniel M; Kuklinski, Walter S; Sleeper, Harry A; Duker, Jay S; Waheed, Nadia K

    2016-05-01

    To present novel software algorithms applied to spectral-domain optical coherence tomography (SD-OCT) for automated detection of diabetic retinopathy (DR). Thirty-one diabetic patients (44 eyes) and 18 healthy, nondiabetic controls (20 eyes) who underwent volumetric SD-OCT imaging and fundus photography were retrospectively identified. A retina specialist independently graded DR stage. Trained automated software generated a retinal thickness score signifying macular edema and a cluster score signifying microaneurysms and/or hard exudates for each volumetric SD-OCT. Of 44 diabetic eyes, 38 had DR and six eyes did not have DR. Leave-one-out cross-validation using a linear discriminant at a missed-detection/false-alarm ratio of 3.00 yielded software sensitivity and specificity of 92% and 69%, respectively, for DR detection when compared to clinical assessment. Novel software algorithms applied to commercially available SD-OCT can successfully detect DR and may have potential as a viable screening tool for DR in the future. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:410-417.]. Copyright 2016, SLACK Incorporated.
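
    A sketch of the reported evaluation scheme, leave-one-out cross-validation of a linear discriminant on two per-eye scores, using synthetic stand-in features; the class means and spreads are invented, and the missed-detection/false-alarm operating point is not reproduced.

    ```python
    # Leave-one-out cross-validation of a linear discriminant on two
    # OCT-derived scores (thickness score, cluster score); data are synthetic.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0.0, 1, (20, 2)),      # controls
                   rng.normal(1.5, 1, (44, 2))])     # DR eyes
    y = np.r_[np.zeros(20), np.ones(44)]

    pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
    sens = (pred[y == 1] == 1).mean()
    spec = (pred[y == 0] == 0).mean()
    print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
    ```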

  6. Tracking problem for electromechanical system under influence of external perturbations

    NASA Astrophysics Data System (ADS)

    Kochetkov, Sergey A.; Krasnova, Svetlana A.; Utkin, Victor A.

    2017-01-01

    For electromechanical objects, new control algorithms (vortex algorithms) are developed on the basis of discontinuous functions. The distinctive feature of these algorithms is that they provide asymptotic convergence of the output variables to zero under the influence of unknown bounded disturbances of a prescribed class. The advantages of the proposed approach are demonstrated for a direct-current motor with permanent excitation. It is shown that internal variables of the system converge to the unknown bounded disturbances and guarantee asymptotic convergence of the output variables to zero.

  7. Innovative Acoustic Sensor Technologies for Leak Detection in Challenging Pipe Types

    DTIC Science & Technology

    2016-12-30

    consuming field surveys using sounders (listening sticks) that relied heavily upon operator skill or noise correlators that were tuned for finding leaks... installation and setup cost • annual service fee • periodic inspection deployed in a "lift and shift" survey using acoustic cross-correlation... the correlator, a zero reading is displayed and one of the sensors can be placed to evaluate the next pipe segment in the field survey.

  8. Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field.

    PubMed

    Ilyas, Muhammad; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-09-09

    Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors use zero-velocity updates (ZUPTs) to reduce drift in navigation solutions and estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors; in particular, heading error is not observable. Hence, the position estimates tend to drift even when cyclic ZUPTs are applied in the update steps of the Extended Kalman Filter (EKF). This urges the use of other motion constraints for pedestrian gait and any other valuable heading-error reduction information that is available. In this paper, we exploit two more motion-constraint scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called "virtual sensors"), though considerably reducing drift in PNS, still need an absolute heading reference. One common absolute heading estimation sensor is the magnetometer, which senses the Earth's magnetic field so that the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm called magnetic anomaly detection (MAD) and compensation is designed by incorporating only healthy magnetometer data in the EKF update step, to reduce drift in the zero-velocity-updated INS. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms.
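
    A hedged sketch of the gating idea behind magnetic anomaly detection: accept a magnetometer sample for the EKF update only if its magnitude is close to the expected Earth-field strength. The field value and tolerance are illustrative, and the published MAD algorithm may use additional tests.

    ```python
    # Simple magnetic-anomaly gate: a 3-axis magnetometer sample is treated as
    # "healthy" (usable in the EKF update) only if its magnitude stays close
    # to the local Earth-field strength. Thresholds are illustrative.
    import numpy as np

    EARTH_FIELD_UT = 50.0      # nominal local field magnitude, microtesla
    TOLERANCE_UT = 5.0         # allowed deviation before flagging an anomaly

    def is_healthy(mag_sample_ut):
        """Return True if the 3-axis magnetometer sample looks undistorted."""
        return abs(np.linalg.norm(mag_sample_ut) - EARTH_FIELD_UT) < TOLERANCE_UT

    print(is_healthy(np.array([30.0, 20.0, 33.0])))   # ~49 uT -> True
    print(is_healthy(np.array([80.0, 10.0, 5.0])))    # ~81 uT -> False
    ```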

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patrick, C. E.; Aliaga, L.; Bashyal, A.

    We present double-differential measurements of antineutrino charged-current quasielastic scattering in the MINERvA detector. This study improves on a previous single-differential measurement by using updated reconstruction algorithms and interaction models and provides a complete description of observed muon kinematics in the form of a double-differential cross section with respect to muon transverse and longitudinal momentum. We also include in our signal definition zero-meson final states arising from multinucleon interactions and from resonant pion production followed by pion absorption in the primary nucleus. We find that model agreement is considerably improved by a model tuned to MINERvA inclusive neutrino scattering data that incorporates nuclear effects such as weak nuclear screening and two-particle, two-hole enhancements.

  10. Deep 3D convolution neural network for CT brain hemorrhage classification

    NASA Astrophysics Data System (ADS)

    Jnawali, Kamal; Arbabshirani, Mohammad R.; Rao, Navalgund; Patel, Alpen A.

    2018-02-01

    Intracranial hemorrhage is a critical condition with a high mortality rate that is typically diagnosed based on head computed tomography (CT) images. Deep learning algorithms, in particular convolutional neural networks (CNNs), are becoming the methodology of choice in medical image analysis for a variety of applications such as computer-aided diagnosis and segmentation. In this study, we propose a fully automated deep learning framework which learns to detect brain hemorrhage based on cross-sectional CT images. The dataset for this work consists of 40,367 3D head CT studies (over 1.5 million 2D images) acquired retrospectively over a decade from multiple radiology facilities at Geisinger Health System. The proposed algorithm first extracts features using a 3D CNN and then detects brain hemorrhage using the logistic function as the last layer of the network. Finally, we created an ensemble of three different 3D CNN architectures to improve the classification accuracy. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve of the ensemble of three architectures was 0.87. These results are very promising considering that the head CT studies were not controlled for slice thickness, scanner type, study protocol or any other settings. Moreover, the proposed algorithm reliably detected various types of hemorrhage within the skull. This work is one of the first applications of 3D CNNs trained on a large dataset of cross-sectional medical images for detection of a critical radiological condition.

  11. Gas ultrasonic flow rate measurement through genetic-ant colony optimization based on the ultrasonic pulse received signal model

    NASA Astrophysics Data System (ADS)

    Hou, Huirang; Zheng, Dandan; Nie, Laixiao

    2015-04-01

    For gas ultrasonic flowmeters, the signals received by ultrasonic sensors are susceptible to noise interference. If signals are mingled with noise, a large error in flow measurement can be caused by mistaken triggering of the traditional double-threshold method. To solve this problem, genetic-ant colony optimization (GACO) based on the ultrasonic pulse received signal model is proposed. Furthermore, in consideration of the real-time performance of the flow measurement system, processing only the first three cycles of the received signals rather than the whole signal is proposed. Simulation results show that the GACO algorithm has the best estimation accuracy and anti-noise ability compared with the genetic algorithm, ant colony optimization, the double-threshold method and enveloped zero-crossing. Local convergence does not appear with the GACO algorithm until -10 dB. For the GACO algorithm, the convergence accuracy, convergence speed and the amount of computation are further improved when using the first three cycles (called GACO-3cycles). Experimental results involving actual received signals show that the accuracy of single-gas ultrasonic flow rate measurement can reach 0.5% with GACO-3cycles, which is better than with the double-threshold method.

  12. Algorithms for optimizing cross-overs in DNA shuffling.

    PubMed

    He, Lu; Friedman, Alan M; Bailey-Kellogg, Chris

    2012-03-21

    DNA shuffling generates combinatorial libraries of chimeric genes by stochastically recombining parent genes. The resulting libraries are subjected to large-scale genetic selection or screening to identify those chimeras with favorable properties (e.g., enhanced stability or enzymatic activity). While DNA shuffling has been applied quite successfully, it is limited by its homology-dependent, stochastic nature. Consequently, it is used only with parents of sufficient overall sequence identity, and provides no control over the resulting chimeric library. This paper presents efficient methods to extend the scope of DNA shuffling to handle significantly more diverse parents and to generate more predictable, optimized libraries. Our CODNS (cross-over optimization for DNA shuffling) approach employs polynomial-time dynamic programming algorithms to select codons for the parental amino acids, allowing for zero or a fixed number of conservative substitutions. We first present efficient algorithms to optimize the local sequence identity or the nearest-neighbor approximation of the change in free energy upon annealing, objectives that were previously optimized by computationally-expensive integer programming methods. We then present efficient algorithms for more powerful objectives that seek to localize and enhance the frequency of recombination by producing "runs" of common nucleotides either overall or according to the sequence diversity of the resulting chimeras. We demonstrate the effectiveness of CODNS in choosing codons and allocating substitutions to promote recombination between parents targeted in earlier studies: two GAR transformylases (41% amino acid sequence identity), two very distantly related DNA polymerases, Pol X and β (15%), and beta-lactamases of varying identity (26-47%). Our methods provide the protein engineer with a new approach to DNA shuffling that supports substantially more diverse parents, is more deterministic, and generates more predictable and more diverse chimeric libraries.

  13. Digitized Spiral Drawing: A Possible Biomarker for Early Parkinson's Disease.

    PubMed

    San Luciano, Marta; Wang, Cuiling; Ortega, Roberto A; Yu, Qiping; Boschung, Sarah; Soto-Valencia, Jeannie; Bressman, Susan B; Lipton, Richard B; Pullman, Seth; Saunders-Pullman, Rachel

    2016-01-01

    Pre-clinical markers of Parkinson's Disease (PD) are needed, and to be relevant in pre-clinical disease, they should be quantifiably abnormal in early disease as well. Handwriting is impaired early in PD and can be evaluated using computerized analysis of drawn spirals, capturing kinematic, dynamic, and spatial abnormalities and calculating indices that quantify motor performance and disability. Digitized spiral drawing correlates with motor scores and may be more sensitive in detecting early changes than subjective ratings. However, whether changes in spiral drawing are abnormal compared with controls and whether changes are detected in early PD are unknown. 138 PD subjects (50 with early PD) and 150 controls drew spirals on a digitizing tablet, generating x, y, z (pressure) data-coordinates and time. Derived indices corresponded to overall spiral execution (severity), shape and kinematic irregularity (second-order smoothness, first-order zero-crossing), tightness, mean speed and variability of spiral width. Adjusted linear mixed-effects models comparing these indices were fitted, and cross-validation was performed. Receiver operating characteristic analysis was applied to examine discriminative validity of combined indices. All indices were significantly different between PD cases and controls, except for zero-crossing. A model using all indices had high discriminative validity (sensitivity = 0.86, specificity = 0.81). Discriminative validity was maintained in patients with early PD. Spiral analysis accurately discriminates subjects with PD and early PD from controls, supporting a role as a promising quantitative biomarker. Further assessment is needed to determine whether spiral changes are PD-specific compared with other disorders and if present in pre-clinical PD.

  14. Digitized Spiral Drawing: A Possible Biomarker for Early Parkinson’s Disease

    PubMed Central

    San Luciano, Marta; Wang, Cuiling; Ortega, Roberto A.; Yu, Qiping; Boschung, Sarah; Soto-Valencia, Jeannie; Bressman, Susan B.; Lipton, Richard B.; Pullman, Seth; Saunders-Pullman, Rachel

    2016-01-01

    Introduction Pre-clinical markers of Parkinson’s Disease (PD) are needed, and to be relevant in pre-clinical disease, they should be quantifiably abnormal in early disease as well. Handwriting is impaired early in PD and can be evaluated using computerized analysis of drawn spirals, capturing kinematic, dynamic, and spatial abnormalities and calculating indices that quantify motor performance and disability. Digitized spiral drawing correlates with motor scores and may be more sensitive in detecting early changes than subjective ratings. However, whether changes in spiral drawing are abnormal compared with controls and whether changes are detected in early PD are unknown. Methods 138 PD subjects (50 with early PD) and 150 controls drew spirals on a digitizing tablet, generating x, y, z (pressure) data-coordinates and time. Derived indices corresponded to overall spiral execution (severity), shape and kinematic irregularity (second-order smoothness, first-order zero-crossing), tightness, mean speed and variability of spiral width. Adjusted linear mixed-effects models comparing these indices were fitted, and cross-validation was performed. Receiver operating characteristic analysis was applied to examine discriminative validity of combined indices. Results All indices were significantly different between PD cases and controls, except for zero-crossing. A model using all indices had high discriminative validity (sensitivity = 0.86, specificity = 0.81). Discriminative validity was maintained in patients with early PD. Conclusion Spiral analysis accurately discriminates subjects with PD and early PD from controls, supporting a role as a promising quantitative biomarker. Further assessment is needed to determine whether spiral changes are PD-specific compared with other disorders and if present in pre-clinical PD. PMID:27732597

  15. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany.

    PubMed

    Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun

    2015-09-01

    In health services and outcomes research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only gives more accurate, or at least comparable, estimation, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Recursive SVM biomarker selection for early detection of breast cancer in peripheral blood.

    PubMed

    Zhang, Fan; Kaufman, Howard L; Deng, Youping; Drabier, Renee

    2013-01-01

    Breast cancer is worldwide the second most common type of cancer after lung cancer. Traditional mammography and tissue microarrays have been studied for early cancer detection and cancer prediction. However, there is a need for more reliable diagnostic tools for early detection of breast cancer. This can be a challenge due to a number of factors and logistics. First, obtaining tissue biopsies can be difficult. Second, mammography may not detect small tumors, and is often unsatisfactory for younger women who typically have dense breast tissue. Lastly, breast cancer is not a single homogeneous disease but consists of multiple disease states, each arising from a distinct molecular mechanism and having a distinct clinical progression path, which makes the disease difficult to detect and predict in early stages. In this paper, we present a Support Vector Machine based on Recursive Feature Elimination and Cross Validation (SVM-RFE-CV) algorithm for early detection of breast cancer in peripheral blood and show how to use SVM-RFE-CV to model the classification and prediction problem of early detection of breast cancer in peripheral blood. The training set, consisting of 32 healthy and 33 cancer samples, and the testing set, consisting of 31 healthy and 34 cancer samples, were randomly separated from a dataset of peripheral blood of breast cancer downloaded from Gene Expression Omnibus. First, we identified the 42 differentially expressed biomarkers between "normal" and "cancer". Then, with SVM-RFE-CV we extracted 15 biomarkers that yield a zero cross-validation score. Lastly, we compared the classification and prediction performance of SVM-RFE-CV with that of SVM and SVM Recursive Feature Elimination (SVM-RFE). We found that 1) SVM-RFE-CV is suitable for analyzing noisy high-throughput microarray data, 2) it outperforms SVM-RFE in robustness to noise and in the ability to recover informative features, and 3) it can improve the prediction performance (area under the curve) on the testing data set from 0.5826 to 0.7879. Further pathway analysis showed that the biomarkers are associated with signaling, hemostasis, hormones, and the immune system, which is consistent with previous findings. Our prediction model can serve as a general model for biomarker discovery in early detection of other cancers. In the future, polymerase chain reaction (PCR) is planned for validation of the ability of these potential biomarkers for early detection of breast cancer.
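
    In the same spirit, here is a sketch using scikit-learn's RFECV around a linear SVM, a close analogue of SVM-RFE-CV though not the authors' exact code, on synthetic stand-ins for the 42-marker expression matrix.

    ```python
    # Recursive feature elimination with cross-validation around a linear SVM.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.feature_selection import RFECV
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(4)
    X = rng.standard_normal((65, 42))          # 65 samples x 42 candidate markers
    y = rng.integers(0, 2, 65)                 # normal vs. cancer labels
    X[y == 1, :15] += 1.0                      # make 15 markers informative

    selector = RFECV(SVC(kernel="linear"), step=1,
                     cv=StratifiedKFold(5), scoring="accuracy").fit(X, y)
    print("selected markers:", np.where(selector.support_)[0])
    ```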

  17. A High-Resolution Demodulation Algorithm for FBG-FP Static-Strain Sensors Based on the Hilbert Transform and Cross Third-Order Cumulant

    PubMed Central

    Huang, Wenzhu; Zhen, Tengkun; Zhang, Wentao; Zhang, Fusheng; Li, Fang

    2015-01-01

    Static strain can be detected by measuring the cross-correlation of reflection spectra from two fiber Bragg gratings (FBGs). However, the static-strain measurement resolution is limited by the dominant Gaussian noise source when using this traditional method. This paper presents a novel static-strain demodulation algorithm for FBG-based Fabry-Perot interferometers (FBG-FPs). The Hilbert transform is proposed for changing the Gaussian distribution of the two FBG-FPs’ reflection spectra, and a cross third-order cumulant is then applied to the Hilbert-transformed spectra to obtain a group of noise-suppressed signals from which the wavelength difference of the two FBG-FPs can be calculated accurately. The benefit of these processes is that Gaussian noise in the spectra can, in theory, be suppressed completely, so a higher resolution can be reached. In order to verify the precision and flexibility of this algorithm, a detailed theoretical model and a simulation analysis are given, and an experiment is implemented. As a result, a static-strain resolution of 0.9 nε under laboratory conditions is achieved, showing a higher resolution than the traditional cross-correlation method. PMID:25923938
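
    The noise-suppression idea rests on the fact that all third-order cumulants of zero-mean Gaussian noise vanish. Below is a hedged sketch of a cross third-order cumulant estimate applied to two Hilbert-transformed toy spectra; the lag range, spectra, and noise levels are illustrative assumptions, not the paper's setup.

    ```python
    # Cross third-order cumulant c(t1, t2) = E[x(n) y(n+t1) y(n+t2)] for
    # zero-mean x, y; Gaussian noise contributes nothing to it in expectation.
    import numpy as np
    from scipy.signal import hilbert

    def cross_third_order_cumulant(x, y, max_lag):
        x = x - x.mean()
        y = y - y.mean()
        n = min(x.size, y.size) - 2 * max_lag
        c = np.empty((2 * max_lag + 1, 2 * max_lag + 1))
        base = max_lag
        for i, t1 in enumerate(range(-max_lag, max_lag + 1)):
            for j, t2 in enumerate(range(-max_lag, max_lag + 1)):
                c[i, j] = np.mean(x[base:base + n]
                                  * y[base + t1:base + t1 + n]
                                  * y[base + t2:base + t2 + n])
        return c

    rng = np.random.default_rng(5)
    t = np.linspace(0, 1, 2000)
    s = np.exp(-0.5 * ((t - 0.5) / 0.02) ** 2)          # shared spectral peak
    x = np.abs(hilbert(s + 0.2 * rng.standard_normal(t.size)))
    y = np.abs(hilbert(s + 0.2 * rng.standard_normal(t.size)))
    c = cross_third_order_cumulant(x, y, max_lag=20)
    ```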

  18. A High-Resolution Demodulation Algorithm for FBG-FP Static-Strain Sensors Based on the Hilbert Transform and Cross Third-Order Cumulant.

    PubMed

    Huang, Wenzhu; Zhen, Tengkun; Zhang, Wentao; Zhang, Fusheng; Li, Fang

    2015-04-27

    Static strain can be detected by measuring the cross-correlation of reflection spectra from two fiber Bragg gratings (FBGs). However, the static-strain measurement resolution is limited by the dominant Gaussian noise source when using this traditional method. This paper presents a novel static-strain demodulation algorithm for FBG-based Fabry-Perot interferometers (FBG-FPs). The Hilbert transform is proposed for changing the Gaussian distribution of the two FBG-FPs' reflection spectra, and a cross third-order cumulant is then applied to the Hilbert-transformed spectra to obtain a group of noise-suppressed signals from which the wavelength difference of the two FBG-FPs can be calculated accurately. The benefit of these processes is that Gaussian noise in the spectra can, in theory, be suppressed completely, so a higher resolution can be reached. In order to verify the precision and flexibility of this algorithm, a detailed theoretical model and a simulation analysis are given, and an experiment is implemented. As a result, a static-strain resolution of 0.9 nε under laboratory conditions is achieved, showing a higher resolution than the traditional cross-correlation method.

  19. Target-type probability combining algorithms for multisensor tracking

    NASA Astrophysics Data System (ADS)

    Wigren, Torbjorn

    2001-08-01

    Algorithms for the handling of target-type information in an operational multi-sensor tracking system are presented. The paper discusses recursive target-type estimation, computation of crosses from passive data (strobe track triangulation), as well as the computation of the quality of the crosses for deghosting purposes. The focus is on Bayesian algorithms that operate in the discrete target-type probability space, and on the approximations introduced for computational complexity reduction. The centralized algorithms are able to fuse discrete data from a variety of sensors and information sources, including IFF equipment, ESMs, IRSTs, as well as flight envelopes estimated from track data. All algorithms are asynchronous and can be tuned to handle clutter, erroneous associations as well as missed and erroneous detections. A key to obtaining this ability is the inclusion of data forgetting, by a procedure for propagation of the target-type probability states between measurement time instances. Other important properties of the algorithms are their ability to handle ambiguous data and scenarios. The above aspects are illustrated in a simulation study. The simulation setup includes 46 air targets of 6 different types that are tracked by 5 airborne sensor platforms using ESMs and IRSTs as data sources.

  20. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm, for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low-RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing including spectral filtering, CFAR and knowledge-based acceptance testing is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single-scan performance, with a nominal real-time delay of less than one second between illumination and display.
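
    The M-association out of N-scan rule admits a very small sketch: a tentative track is confirmed once at least M of the last N scans produced an associated threshold crossing. The values of M and N below are illustrative.

    ```python
    # Toy M-out-of-N scan confirmation rule for tentative tracks.
    from collections import deque

    class MofNConfirmer:
        def __init__(self, m=3, n=5):
            self.m, self.hits = m, deque(maxlen=n)

        def update(self, associated: bool) -> bool:
            self.hits.append(associated)
            return sum(self.hits) >= self.m   # True -> declare a detection

    conf = MofNConfirmer(m=3, n=5)
    for scan_hit in [True, False, True, True, False]:
        print(conf.update(scan_hit))          # confirms on the 4th scan
    ```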

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyons, Brendan C.; Ferraro, Nathaniel M.; Paz-Soldan, Carlos A.

    In order to understand the effect of rotation on the plasma's response to three-dimensional magnetic perturbations, we perform a systematic scan of the zero-crossing of the rotation profile in a DIII-D ITER-similar shape equilibrium using linear, time-independent modeling with the M3D-C1 extended magnetohydrodynamics code. We confirm that the local resonant magnetic field generally increases as the rotation decreases at a rational surface. Multiple peaks in the resonant field are observed near rational surfaces, however, and the maximum resonant field does not always correspond to zero rotation at the surface. Furthermore, we show that non-resonant current can be driven at zero-crossings not aligned with rational surfaces if there is sufficient shear in the rotation profile there, leading to an amplification of near-resonant Fourier harmonics of the perturbed magnetic field and a decrease in the far-off-resonant harmonics. The quasilinear electromagnetic torque induced by this non-resonant plasma response provides drive to flatten the rotation, possibly allowing for increased transport in the pedestal by the destabilization of turbulent modes. In addition, this torque acts to drive the rotation zero-crossing to dynamically stable points near rational surfaces, which would allow for increased resonant penetration. By one or both of these mechanisms, this torque may play an important role in bifurcations into ELM suppression. Finally, we discuss how these changes to the plasma response could be detected by tokamak diagnostics. In particular, we show that the changes to the resonant field discussed here have a significant impact on the external perturbed magnetic field, which should be observable by magnetic sensors on the high-field side of tokamaks, but not on the low-field side. In addition, TRIP3D-MAFOT simulations show that none of the changes to the plasma response described here substantially affects the divertor footprint structure.

  2. A survey of noninteractive zero knowledge proof system and its applications.

    PubMed

    Wu, Huixin; Wang, Feng

    2014-01-01

    The zero knowledge proof system, which has received extensive attention since it was proposed, is an important branch of cryptography and computational complexity theory. Among these systems, the noninteractive zero knowledge proof system contains only one message, sent by the prover to the verifier. It is widely used in the construction of various types of cryptographic protocols and cryptographic algorithms because of its good privacy, authentication, and lower interactive complexity. This paper reviews and analyzes the basic principles of the noninteractive zero knowledge proof system, and summarizes the research progress achieved on the following aspects: the definition and related models of the noninteractive zero knowledge proof system, noninteractive zero knowledge proof systems for NP problems, noninteractive statistical and perfect zero knowledge, the connection between the noninteractive zero knowledge proof system, the interactive zero knowledge proof system, and zap, and the specific applications of the noninteractive zero knowledge proof system. This paper also points out future research directions.

  3. On the influence of zero-padding on the nonlinear operations in Quantitative Susceptibility Mapping

    PubMed Central

    Eskreis-Winkler, Sarah; Zhou, Dong; Liu, Tian; Gupta, Ajay; Gauthier, Susan A.; Wang, Yi; Spincemaille, Pascal

    2016-01-01

    Purpose Zero padding is a well-studied interpolation technique that improves image visualization without increasing image resolution. This interpolation is often performed as a last step before images are displayed on clinical workstations. Here, we seek to demonstrate the importance of zero padding before rather than after performing non-linear post-processing algorithms, such as Quantitative Susceptibility Mapping (QSM). To do so, we evaluate apparent spatial resolution, relative error and depiction of multiple sclerosis (MS) lesions on images that were zero padded prior to, in the middle of, and after the application of the QSM algorithm. Materials and Methods High resolution gradient echo (GRE) data were acquired on twenty MS patients, from which low resolution data were derived using k-space cropping. Pre-, mid-, and post-zero padded QSM images were reconstructed from these low resolution data by zero padding prior to field mapping, after field mapping, and after susceptibility mapping, respectively. Using high resolution QSM as the gold standard, apparent spatial resolution, relative error, and image quality of the pre-, mid-, and post-zero padded QSM images were measured and compared. Results Both the accuracy and apparent spatial resolution of the pre-zero padded QSM was higher than that of mid-zero padded QSM (p < 0.001; p < 0.001), which was higher than that of post-zero padded QSM (p < 0.001; p < 0.001). The image quality of pre-zero padded reconstructions was higher than that of mid- and post-zero padded reconstructions (p = 0.004; p < 0.001). Conclusion Zero padding of the complex GRE data prior to nonlinear susceptibility mapping improves image accuracy and apparent resolution compared to zero padding afterwards. It also provides better delineation of MS lesion geometry, which may improve lesion subclassification and disease monitoring in MS patients. PMID:27587225
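
    For reference, zero padding of complex data before reconstruction amounts to padding the centered k-space and inverse-transforming; a minimal sketch (with an illustrative padding factor and intensity-rescaling convention) follows.

    ```python
    # k-space zero padding of a complex image: pad the centered k-space with
    # zeros, then inverse-FFT to get the interpolated complex image.
    import numpy as np

    def zero_pad_kspace(img, factor=2):
        k = np.fft.fftshift(np.fft.fftn(img))             # centered k-space
        pad = [((factor - 1) * s // 2,) * 2 for s in img.shape]
        k_padded = np.pad(k, pad)                         # zeros around the center
        out = np.fft.ifftn(np.fft.ifftshift(k_padded))
        return out * factor ** img.ndim                   # keep intensity scaling

    rng = np.random.default_rng(6)
    low_res = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
    high_res = zero_pad_kspace(low_res)                   # 128 x 128 complex image
    ```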

  4. On the influence of zero-padding on the nonlinear operations in Quantitative Susceptibility Mapping.

    PubMed

    Eskreis-Winkler, Sarah; Zhou, Dong; Liu, Tian; Gupta, Ajay; Gauthier, Susan A; Wang, Yi; Spincemaille, Pascal

    2017-01-01

    Zero padding is a well-studied interpolation technique that improves image visualization without increasing image resolution. This interpolation is often performed as a last step before images are displayed on clinical workstations. Here, we seek to demonstrate the importance of zero padding before rather than after performing non-linear post-processing algorithms, such as Quantitative Susceptibility Mapping (QSM). To do so, we evaluate apparent spatial resolution, relative error and depiction of multiple sclerosis (MS) lesions on images that were zero padded prior to, in the middle of, and after the application of the QSM algorithm. High resolution gradient echo (GRE) data were acquired on twenty MS patients, from which low resolution data were derived using k-space cropping. Pre-, mid-, and post-zero padded QSM images were reconstructed from these low resolution data by zero padding prior to field mapping, after field mapping, and after susceptibility mapping, respectively. Using high resolution QSM as the gold standard, apparent spatial resolution, relative error, and image quality of the pre-, mid-, and post-zero padded QSM images were measured and compared. Both the accuracy and apparent spatial resolution of the pre-zero padded QSM was higher than that of mid-zero padded QSM (p<0.001; p<0.001), which was higher than that of post-zero padded QSM (p<0.001; p<0.001). The image quality of pre-zero padded reconstructions was higher than that of mid- and post-zero padded reconstructions (p=0.004; p<0.001). Zero padding of the complex GRE data prior to nonlinear susceptibility mapping improves image accuracy and apparent resolution compared to zero padding afterwards. It also provides better delineation of MS lesion geometry, which may improve lesion subclassification and disease monitoring in MS patients. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Semiautomated tremor detection using a combined cross-correlation and neural network approach

    USGS Publications Warehouse

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2013-01-01

    Despite observations of tectonic tremor in many locations around the globe, the emergent phase arrivals, low‒amplitude waveforms, and variable event durations make automatic detection a nontrivial task. In this study, we employ a new method to identify tremor in large data sets using a semiautomated technique. The method first reduces the data volume with an envelope cross‒correlation technique, followed by a Self‒Organizing Map (SOM) algorithm to identify and classify event types. The method detects tremor in an automated fashion after calibrating for a specific data set, hence we refer to it as being “semiautomated”. We apply the semiautomated detection algorithm to a newly acquired data set of waveforms from a temporary deployment of 13 seismometers near Cholame, California, from May 2010 to July 2011. We manually identify tremor events in a 3 week long test data set and compare to the SOM output and find a detection accuracy of 79.5%. Detection accuracy improves with increasing signal‒to‒noise ratios and number of available stations. We find detection completeness of 96% for tremor events with signal‒to‒noise ratios above 3 and optimal results when data from at least 10 stations are available. We compare the SOM algorithm to the envelope correlation method of Wech and Creager and find the SOM performs significantly better, at least for the data set examined here. Using the SOM algorithm, we detect 2606 tremor events with a cumulative signal duration of nearly 55 h during the 13 month deployment. Overall, the SOM algorithm is shown to be a flexible new method that utilizes characteristics of the waveforms to identify tremor from noise or other seismic signals.
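
    A hedged sketch of the SOM classification stage, using the third-party minisom package as a stand-in for the authors' unspecified implementation; the feature vectors and map size are placeholders.

    ```python
    # Self-Organizing Map clustering of per-event feature vectors (e.g.
    # envelope cross-correlation statistics); `minisom` is an assumed package.
    import numpy as np
    from minisom import MiniSom

    rng = np.random.default_rng(9)
    features = rng.standard_normal((500, 6))   # stand-in per-event features

    som = MiniSom(8, 8, input_len=6, sigma=1.5, learning_rate=0.5, random_seed=0)
    som.train_random(features, num_iteration=5000)

    # Events mapping to the same best-matching unit form one cluster; clusters
    # are then labeled (tremor / noise / other) from a manually graded subset.
    bmus = np.array([som.winner(f) for f in features])
    ```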

  6. Automated boundary detection of the optic disc and layer segmentation of the peripapillary retina in volumetric structural and angiographic optical coherence tomography.

    PubMed

    Zang, Pengxiao; Gao, Simon S; Hwang, Thomas S; Flaxel, Christina J; Wilson, David J; Morrison, John C; Huang, David; Li, Dengwang; Jia, Yali

    2017-03-01

    To improve optic disc boundary detection and peripapillary retinal layer segmentation, we propose an automated approach for structural and angiographic optical coherence tomography. The algorithm was performed on radial cross-sectional B-scans. The disc boundary was detected by searching for the position of Bruch's membrane opening, and retinal layer boundaries were detected using a dynamic programming-based graph search algorithm on each B-scan without the disc region. A comparison of the disc boundary using our method with that determined by manual delineation showed good accuracy, with an average Dice similarity coefficient ≥0.90 in healthy eyes and eyes with diabetic retinopathy and glaucoma. The layer segmentation accuracy in the same cases was on average less than one pixel (3.13 μm).

  7. Automated boundary detection of the optic disc and layer segmentation of the peripapillary retina in volumetric structural and angiographic optical coherence tomography

    PubMed Central

    Zang, Pengxiao; Gao, Simon S.; Hwang, Thomas S.; Flaxel, Christina J.; Wilson, David J.; Morrison, John C.; Huang, David; Li, Dengwang; Jia, Yali

    2017-01-01

    To improve optic disc boundary detection and peripapillary retinal layer segmentation, we propose an automated approach for structural and angiographic optical coherence tomography. The algorithm was performed on radial cross-sectional B-scans. The disc boundary was detected by searching for the position of Bruch’s membrane opening, and retinal layer boundaries were detected using a dynamic programming-based graph search algorithm on each B-scan without the disc region. A comparison of the disc boundary using our method with that determined by manual delineation showed good accuracy, with an average Dice similarity coefficient ≥0.90 in healthy eyes and eyes with diabetic retinopathy and glaucoma. The layer segmentation accuracy in the same cases was on average less than one pixel (3.13 μm). PMID:28663830

  8. Fire detection system using random forest classification for image sequences of complex background

    NASA Astrophysics Data System (ADS)

    Kim, Onecue; Kang, Dong-Joong

    2013-06-01

    We present a fire alarm system based on image processing that detects fire accidents in various environments. To reduce false alarms that frequently appeared in earlier systems, we combined image features including color, motion, and blinking information. We specifically define the color conditions of fires in hue, saturation and value, and RGB color space. Fire features are represented as intensity variation, color mean and variance, motion, and image differences. Moreover, blinking fire features are modeled by using crossing patches. We propose an algorithm that classifies patches into fire or nonfire areas by using random forest supervised learning. We design an embedded surveillance device made with acrylonitrile butadiene styrene housing for stable fire detection in outdoor environments. The experimental results show that our algorithm works robustly in complex environments and is able to detect fires in real time.
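
    A hedged sketch of the classification stage using scikit-learn follows. The feature summary below is illustrative of the kinds of statistics the abstract lists (color mean and variance, motion, temporal flicker), not the authors' exact feature vector, and the training data are random stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(patch_seq):
    """Summarize one candidate patch sequence, shape (frames, h, w, 3)."""
    f = patch_seq.astype(float)
    color_mean = f.mean(axis=(0, 1, 2))            # per-channel mean (3 values)
    color_var = f.var(axis=(0, 1, 2))              # per-channel variance (3 values)
    motion = np.abs(np.diff(f, axis=0)).mean()     # frame-difference energy
    blink = f.mean(axis=(1, 2, 3)).std()           # temporal flicker of brightness
    return np.concatenate([color_mean, color_var, [motion, blink]])

# X: (n_patches, 8) features, y: 1 = fire, 0 = non-fire, labeled offline.
X, y = np.random.rand(200, 8), np.random.randint(0, 2, 200)  # stand-in data
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
is_fire = clf.predict(X[:5])
```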

  9. Automated circumferential construction of first-order aqueous humor outflow pathways using spectral-domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Huang, Alex S.; Belghith, Akram; Dastiridou, Anna; Chopra, Vikas; Zangwill, Linda M.; Weinreb, Robert N.

    2017-06-01

    The purpose was to create a three-dimensional (3-D) model of circumferential aqueous humor outflow (AHO) in a living human eye with an automated detection algorithm for Schlemm's canal (SC) and first-order collector channels (CC) applied to spectral-domain optical coherence tomography (SD-OCT). Anterior segment SD-OCT scans from a subject were acquired circumferentially around the limbus. A Bayesian Ridge method was used to approximate the location of the SC on infrared confocal laser scanning ophthalmoscopic images with a cross multiplication tool developed to initiate SC/CC detection automated through a fuzzy hidden Markov Chain approach. Automatic segmentation of SC and initial CC's was manually confirmed by two masked graders. Outflow pathways detected by the segmentation algorithm were reconstructed into a 3-D representation of AHO. Overall, only <1% of images (5114 total B-scans) were ungradable. Automatic segmentation algorithm performed well with SC detection 98.3% of the time and <0.1% false positive detection compared to expert grader consensus. CC was detected 84.2% of the time with 1.4% false positive detection. 3-D representation of AHO pathways demonstrated variably thicker and thinner SC with some clear CC roots. Circumferential (360 deg), automated, and validated AHO detection of angle structures in the living human eye with reconstruction was possible.

  10. Development and Long-Term Verification of Stereo Vision Sensor System for Controlling Safety at Railroad Crossing

    NASA Astrophysics Data System (ADS)

    Hosotani, Daisuke; Yoda, Ikushi; Hishiyama, Yoshiyuki; Sakaue, Katsuhiko

    Many people are involved in accidents every year at railroad crossings, but there is no suitable sensor for detecting pedestrians. We are therefore developing a ubiquitous stereo vision based system for ensuring safety at railroad crossings. In this system, stereo cameras are installed at the corners and are pointed toward the center of the railroad crossing to monitor the passage of people. The system determines automatically and in real-time whether anyone or anything is inside the railroad crossing, and whether anyone remains in the crossing. The system can be configured to automatically switch over to a surveillance monitor or automatically connect to an emergency brake system in the event of trouble. We have developed an original stereovision device and installed the remote controlled experimental system applied human detection algorithm in the commercial railroad crossing. Then we store and analyze image data and tracking data throughout two years for standardization of system requirement specification.

  11. Improved Resolution and Reduced Clutter in Ultra-Wideband Microwave Imaging Using Cross-Correlated Back Projection: Experimental and Numerical Results

    PubMed Central

    Jacobsen, S.; Birkelund, Y.

    2010-01-01

    Microwave breast cancer detection is based on the dielectric contrast between healthy and malignant tissue. This radar-based imaging method involves illumination of the breast with an ultra-wideband pulse. Detection of tumors within the breast is achieved by some selected focusing technique. Image formation algorithms are tailored to enhance tumor responses and reduce early-time and late-time clutter associated with skin reflections and heterogeneity of breast tissue. In this contribution, we evaluate the performance of the so-called cross-correlated back projection imaging scheme by using a scanning system in phantom experiments. Supplementary numerical modeling based on commercial software is also presented. The phantom is synthetically scanned with a broadband elliptical antenna in a mono-static configuration. The respective signals are pre-processed by a data-adaptive RLS algorithm in order to remove artifacts caused by antenna reverberations and signal clutter. Successful detection of a 7 mm diameter cylindrical tumor immersed in a low permittivity medium was achieved in all cases. Selecting the widely used delay-and-sum (DAS) beamforming algorithm as a benchmark, we show that correlation based imaging methods improve the signal-to-clutter ratio by at least 10 dB and improve spatial resolution by reducing the imaged peak full-width half maximum (FWHM) by about 40–50%. PMID:21331362
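
    For reference, the delay-and-sum benchmark named in the abstract can be sketched in a few lines. The inputs here (antenna signals, per-point propagation delays, sampling rate) are hypothetical and the geometry and propagation model are simplified assumptions.

```python
import numpy as np

def das_image(signals, delays, fs):
    """Delay-and-sum beamforming over a grid of candidate focal points.

    `signals` is (n_antennas, n_samples) pre-processed radar data,
    `delays` is (n_points, n_antennas) round-trip delays in seconds
    computed from geometry and an assumed propagation speed, and `fs`
    is the sampling rate in Hz. The DAS intensity at a point is the
    energy of the delay-aligned sum across antennas.
    """
    n_points, n_ant = delays.shape
    idx = np.round(delays * fs).astype(int)          # delay in samples
    image = np.zeros(n_points)
    for p in range(n_points):
        aligned = [signals[a, idx[p, a]:] for a in range(n_ant)]
        n = min(len(s) for s in aligned)             # common valid length
        if n > 0:
            image[p] = np.sum(np.sum([s[:n] for s in aligned], axis=0) ** 2)
    return image
```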

  12. Improved resolution and reduced clutter in ultra-wideband microwave imaging using cross-correlated back projection: experimental and numerical results.

    PubMed

    Jacobsen, S; Birkelund, Y

    2010-01-01

    Microwave breast cancer detection is based on the dielectric contrast between healthy and malignant tissue. This radar-based imaging method involves illumination of the breast with an ultra-wideband pulse. Detection of tumors within the breast is achieved by some selected focusing technique. Image formation algorithms are tailored to enhance tumor responses and reduce early-time and late-time clutter associated with skin reflections and heterogeneity of breast tissue. In this contribution, we evaluate the performance of the so-called cross-correlated back projection imaging scheme by using a scanning system in phantom experiments. Supplementary numerical modeling based on commercial software is also presented. The phantom is synthetically scanned with a broadband elliptical antenna in a mono-static configuration. The respective signals are pre-processed by a data-adaptive RLS algorithm in order to remove artifacts caused by antenna reverberations and signal clutter. Successful detection of a 7 mm diameter cylindrical tumor immersed in a low permittivity medium was achieved in all cases. Selecting the widely used delay-and-sum (DAS) beamforming algorithm as a benchmark, we show that correlation based imaging methods improve the signal-to-clutter ratio by at least 10 dB and improve spatial resolution by reducing the imaged peak full-width half maximum (FWHM) by about 40-50%.

  13. A binary search approach to whole-genome data analysis.

    PubMed

    Brodsky, Leonid; Kogan, Simon; Benjacob, Eshel; Nevo, Eviatar

    2010-09-28

    A sequence analysis-oriented binary search-like algorithm was transformed into a sensitive and accurate analysis tool for processing whole-genome data. The advantage of the algorithm over previous methods is its ability to detect the margins of both short and long genome fragments, enriched by up-regulated signals, at equal accuracy. The score of an enriched genome fragment reflects the difference between the actual concentration of up-regulated signals in the fragment and the chromosome signal baseline. The "divide-and-conquer"-type algorithm detects a series of nonintersecting fragments of various lengths with locally optimal scores. The procedure is applied to detected fragments in a nested manner by recalculating the lower-than-baseline signals in the chromosome. The algorithm was applied to simulated whole-genome data, and its sensitivity/specificity were compared with those of several alternative algorithms. The algorithm was also tested with four biological tiling array datasets comprising Arabidopsis (i) expression and (ii) histone 3 lysine 27 trimethylation ChIP-on-chip datasets; Saccharomyces cerevisiae (iii) spliced intron data and (iv) chromatin remodeling factor binding sites. The results demonstrate the power of the algorithm in identifying both the short up-regulated fragments (such as exons and transcription factor binding sites) and the long, even moderately up-regulated, zones at their precise genome margins. The algorithm generates an accurate whole-genome landscape that could be used for cross-comparison of signals across the same genome in evolutionary and general genomic studies.

  14. Detecting PM2.5's Correlations between Neighboring Cities Using a Time-Lagged Cross-Correlation Coefficient.

    PubMed

    Wang, Fang; Wang, Lin; Chen, Yuming

    2017-08-31

    In order to investigate the time-dependent cross-correlations of fine particulate (PM2.5) series among neighboring cities in Northern China, in this paper, we propose a new cross-correlation coefficient, the time-lagged q-L dependent height cross-correlation coefficient (denoted by ρ_q(τ, L)), which incorporates the time-lag factor and the fluctuation amplitude information into the analogous height cross-correlation analysis coefficient. Numerical tests are performed to illustrate that the newly proposed coefficient ρ_q(τ, L) can be used to detect cross-correlations between two series with time lags and to identify the different ranges of fluctuations at which two series possess cross-correlations. Applying the new coefficient to analyze the time-dependent cross-correlations of PM2.5 series between Beijing and the three neighboring cities of Tianjin, Zhangjiakou, and Baoding, we find that time lags between the PM2.5 series with larger fluctuations are longer than those between PM2.5 series with smaller fluctuations. Our analysis also shows that cross-correlations between the PM2.5 series of two neighboring cities are significant and the time lags between two PM2.5 series of neighboring cities are significantly non-zero. These findings provide new scientific support for the view that air pollution in neighboring cities can affect one another not simultaneously but with a time lag.
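
    A stripped-down stand-in for the lag-scanning idea, without the q-L fluctuation weighting of the full coefficient, can be written as follows; the synthetic city series are stand-in data, and the sign of the best lag depends on the chosen convention.

```python
import numpy as np

def lagged_corr(x, y, max_lag):
    """Pearson correlation of two series at every lag in [-max_lag, max_lag].

    A simplified stand-in for the paper's rho_q(tau, L): it scans time lags
    and returns the lag of peak ordinary cross-correlation, omitting the
    fluctuation-amplitude weighting of the full coefficient.
    """
    lags = np.arange(-max_lag, max_lag + 1)
    r = []
    for tau in lags:
        if tau >= 0:
            a, b = x[tau:], y[:len(y) - tau]
        else:
            a, b = x[:tau], y[-tau:]
        r.append(np.corrcoef(a, b)[0, 1])
    r = np.array(r)
    return lags, r, lags[np.argmax(r)]

# Example: two neighboring-city PM2.5 series with an imposed ~6-step lag.
city_a = np.random.rand(500)
city_b = np.roll(city_a, 6) + 0.3 * np.random.rand(500)
lags, r, best = lagged_corr(city_a, city_b, max_lag=24)
```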

  15. Developing and evaluating a mobile driver fatigue detection network based on electroencephalograph signals

    PubMed Central

    Yin, Jinghai; Mu, Zhendong

    2016-01-01

    The rapid development of driver fatigue detection technology is of important significance for traffic safety. The authors’ main goals of this Letter are principally three: (i) A middleware architecture, defined as process unit (PU), which can communicate with personal electroencephalography (EEG) node (PEN) and cloud server (CS). The PU receives EEG signals from PEN, recognises the fatigue state of the driver, and transfers this information to CS. The CS sends notification messages to the surrounding vehicles. (ii) An android application for fatigue detection is built. The application can be used by the driver to detect the state of his/her fatigue based on EEG signals, and to warn neighbourhood vehicles. (iii) The detection algorithm for driver fatigue is applied based on fuzzy entropy. The idea of 10-fold cross-validation and a support vector machine are used for classification. Experimental results show that the average accuracy of detecting driver fatigue is about 95%, implying that the algorithm is valid for detecting the state of driver fatigue. PMID:28529761

  16. Developing and evaluating a mobile driver fatigue detection network based on electroencephalograph signals.

    PubMed

    Yin, Jinghai; Hu, Jianfeng; Mu, Zhendong

    2017-02-01

    The rapid development of driver fatigue detection technology is of important significance for traffic safety. The authors' main goals of this Letter are principally three: (i) A middleware architecture, defined as process unit (PU), which can communicate with personal electroencephalography (EEG) node (PEN) and cloud server (CS). The PU receives EEG signals from PEN, recognises the fatigue state of the driver, and transfers this information to CS. The CS sends notification messages to the surrounding vehicles. (ii) An android application for fatigue detection is built. The application can be used by the driver to detect the state of his/her fatigue based on EEG signals, and to warn neighbourhood vehicles. (iii) The detection algorithm for driver fatigue is applied based on fuzzy entropy. The idea of 10-fold cross-validation and a support vector machine are used for classification. Experimental results show that the average accuracy of detecting driver fatigue is about 95%, implying that the algorithm is valid for detecting the state of driver fatigue.
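
    The fuzzy entropy feature can be sketched as below. This is one common formulation (Chebyshev distance between baseline-removed templates with an exponential membership function), and the parameters m, r, and n are typical defaults rather than the Letter's settings.

```python
import numpy as np

def fuzzy_entropy(u, m=2, r=0.2, n=2):
    """Fuzzy entropy (FuzzyEn) of a 1-D signal, one common formulation.

    Embedding dimension m, tolerance r (often 0.2 * signal std), and fuzzy
    exponent n. Template similarity decays as exp(-(d/r)**n) with their
    Chebyshev distance d; FuzzyEn = ln(phi_m) - ln(phi_{m+1}).
    """
    u = np.asarray(u, dtype=float)

    def phi(m):
        # All length-m templates, each with its own mean removed.
        x = np.array([u[i:i + m] for i in range(len(u) - m)])
        x = x - x.mean(axis=1, keepdims=True)
        # Pairwise Chebyshev distances, then fuzzy similarity degrees.
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        sim = np.exp(-((d / r) ** n))
        np.fill_diagonal(sim, 0.0)                 # exclude self-matches
        return sim.sum() / (len(x) * (len(x) - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))

# Example on stand-in data: more irregular signals yield higher FuzzyEn.
sig = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.randn(1000)
fe = fuzzy_entropy(sig, m=2, r=0.2 * sig.std())
```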

  17. Light-Actuated Micromechanical Relays for Zero-Power Infrared Detection

    DTIC Science & Technology

    2017-03-01

    Light-Actuated Micromechanical Relays for Zero-Power Infrared Detection Zhenyun Qian, Sungho Kang, Vageeswar Rajaram, Cristian Cassella, Nicol E...near-zero power infrared (IR) detection. Differently from any existing switching element, the proposed LMR relies on a plasmonically-enhanced...chip enabling the monolithic fabrication of multiple LMRs connected together to form a logic topology suitable for the detection of specific

  18. Bayesian longitudinal segmentation of hippocampal substructures in brain MRI using subject-specific atlases

    PubMed Central

    Iglesias, Juan Eugenio; Van Leemput, Koen; Augustinack, Jean; Insausti, Ricardo; Fischl, Bruce; Reuter, Martin

    2016-01-01

    The hippocampal formation is a complex, heterogeneous structure that consists of a number of distinct, interacting subregions. Atrophy of these subregions is implicated in a variety of neurodegenerative diseases, most prominently in Alzheimer’s disease (AD). Thanks to the increasing resolution of MR images and computational atlases, automatic segmentation of hippocampal subregions is becoming feasible in MRI scans. Here we introduce a generative model for dedicated longitudinal segmentation that relies on subject-specific atlases. The segmentations of the scans at the different time points are jointly computed using Bayesian inference. All time points are treated the same to avoid processing bias. We evaluate this approach using over 4,700 scans from two publicly available datasets (ADNI and MIRIAD). In test-retest reliability experiments, the proposed method yielded significantly lower volume differences and significantly higher Dice overlaps than the cross-sectional approach for nearly every subregion (average volume difference across subregions: 4.5% vs. 6.5%; Dice overlap: 81.8% vs. 75.4%). The longitudinal algorithm also demonstrated increased sensitivity to group differences: in MIRIAD (69 subjects: 46 with AD and 23 controls), it found differences in atrophy rates between AD and controls that the cross-sectional method could not detect in a number of subregions: right parasubiculum, left and right presubiculum, right subiculum, left dentate gyrus, left CA4, left HATA and right tail. In ADNI (836 subjects: 369 with AD, 215 with early cognitive impairment – eMCI – and 252 controls), all methods found significant differences between AD and controls, but the proposed longitudinal algorithm detected differences between controls and eMCI and differences between eMCI and AD that the cross-sectional method could not find: left presubiculum, right subiculum, left and right parasubiculum, left and right HATA. Moreover, many of the differences that the cross-sectional method already found were detected with higher significance. The presented algorithm will be made available as part of the open-source neuroimaging package FreeSurfer. PMID:27426838

  19. Characterization of preferential flow paths between boreholes in fractured rock using a nanoscale zero-valent iron tracer test

    NASA Astrophysics Data System (ADS)

    Chuang, Po-Yu; Chia, Yeeping; Liou, Ya-Hsuan; Teng, Mao-Hua; Liu, Ching-Yi; Lee, Tsai-Ping

    2016-11-01

    Recent advances in borehole geophysical techniques have improved characterization of cross-hole fracture flow. The direct detection of preferential flow paths in fractured rock, however, remains to be resolved. In this study, a novel approach using nanoscale zero-valent iron (nZVI or 'nano-iron') as a tracer was developed for detecting fracture flow paths directly. Generally, only a few rock fractures are permeable while most are much less permeable. A heat-pulse flowmeter can be used to detect changes in flow velocity for delineating permeable fracture zones in the borehole and providing the design basis for the tracer test. When nano-iron particles are released in an injection well, they can migrate through the connecting permeable fracture and be attracted to a magnet array when arriving in an observation well. Such an attraction of incoming iron nanoparticles by the magnet can provide quantitative information for locating the position of the tracer inlet. A series of field experiments were conducted in two wells in fractured rock at a hydrogeological research station in Taiwan, to test the cross-hole migration of the nano-iron tracer through permeable connected fractures. The fluid conductivity recorded in the observation well confirmed the arrival of the injected nano-iron slurry. All of the iron nanoparticles attracted to the magnet array in the observation well were found at the depth of a permeable fracture zone delineated by the flowmeter. This study has demonstrated that integrating the nano-iron tracer test with flowmeter measurement has the potential to characterize preferential flow paths in fractured rock.

  20. Simple immunoassay for detection of PCBs in transformer oil.

    PubMed

    Glass, Thomas R; Ohmura, Naoya; Taemi, Yukihiro; Joh, Takashi

    2005-07-01

    A rapid and inexpensive procedure to detect polychlorinated biphenyls (PCBs) in transformer oil is needed to facilitate identification and removal of PCB contaminated transformers. Here we describe a simple two-step liquid-liquid extraction using acidic dimethyl sulfoxide in conjunction with an immunoassay for detecting PCBs in transformer oil. The process described is faster and simpler than any previous immunoassay while maintaining comparable detection limit and false negative rate. Cross reactivity data, characterizing the immunoassay response to the four Kanechlor technical mixtures of PCBs in oil, are presented. Forty-five used transformer oil samples were analyzed by gas chromatography-high-resolution mass spectrometry and were also evaluated using the immunoassay protocol developed. Results presented show zero false negatives at a 1.4 ppm nominal cutoff for the transformer oils analyzed.

  1. Automated Detection of Atrial Fibrillation Based on Time-Frequency Analysis of Seismocardiograms.

    PubMed

    Hurnanen, Tero; Lehtonen, Eero; Tadi, Mojtaba Jafari; Kuusela, Tom; Kiviniemi, Tuomas; Saraste, Antti; Vasankari, Tuija; Airaksinen, Juhani; Koivisto, Tero; Pankaala, Mikko

    2017-09-01

    In this paper, a novel method to detect atrial fibrillation (AFib) from a seismocardiogram (SCG) is presented. The proposed method is based on linear classification of the spectral entropy and a heart rate variability index computed from the SCG. The performance of the developed algorithm is demonstrated on data gathered from 13 patients in clinical setting. After motion artifact removal, in total 119 min of AFib data and 126 min of sinus rhythm data were considered for automated AFib detection. No other arrhythmias were considered in this study. The proposed algorithm requires no direct heartbeat peak detection from the SCG data, which makes it tolerant against interpersonal variations in the SCG morphology, and noise. Furthermore, the proposed method relies solely on the SCG and needs no complementary electrocardiography to be functional. For the considered data, the detection method performs well even on relatively low quality SCG signals. Using a majority voting scheme that takes five randomly selected segments from a signal and classifies these segments using the proposed algorithm, we obtained an average true positive rate of [Formula: see text] and an average true negative rate of [Formula: see text] for detecting AFib in leave-one-out cross-validation. This paper facilitates adoption of microelectromechanical sensor based heart monitoring devices for arrhythmia detection.
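
    The spectral entropy feature named in the abstract is straightforward to sketch; the band limits and windowing below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def spectral_entropy(x, fs, band=(1.0, 30.0)):
    """Normalized Shannon entropy of the power spectrum in a frequency band.

    A flat, noise-like spectrum (as during atrial fibrillation) gives a
    value near 1; a regular rhythm dominated by a few spectral peaks gives
    a lower value.
    """
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    keep = (freqs >= band[0]) & (freqs <= band[1])
    p = psd[keep] / psd[keep].sum()                  # spectral probability mass
    return -np.sum(p * np.log(p + 1e-12)) / np.log(p.size)
```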

  2. Algorithm for fuel conservative horizontal capture trajectories

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Erzberger, H.

    1981-01-01

    A real-time algorithm for computing constant-altitude, fuel-conservative approach trajectories for aircraft is described. The characteristics of the computed trajectories were chosen to approximate the extremal trajectories obtained from the optimal control solution to the problem; the fuel difference between the real-time algorithm and the extremals was only 0.5 to 2 percent, in favor of the extremals. The trajectories may start at any initial position, heading, and speed and end at any other final position, heading, and speed. They consist of straight lines and a series of circular arcs of varying radius to approximate constant bank-angle decelerating turns. Throttle control is maximum thrust, nominal thrust, or zero thrust. Bank-angle control is either zero or approximately 30 deg.

  3. The BMPix and PEAK Tools: New Methods for Automated Laminae Recognition and Counting - Application to Glacial Varves From Antarctic Marine Sediment

    NASA Astrophysics Data System (ADS)

    Weber, M. E.; Reichelt, L.; Kuhn, G.; Thurow, J. W.; Ricken, W.

    2009-12-01

    We present software-based tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray-scale curves from images at ultrahigh (pixel) resolution. The PEAK tool uses the gray-scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a Gaussian smoothed gray-scale curve. The zero-crossing algorithm counts every positive and negative halfway-passage of the gray-scale curve through a wide moving average. Hence, the record is separated into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the gray-scale curve into its frequency components, before positive and negative passages are counted. We applied the new methods successfully to tree rings and to well-dated and already manually counted marine varves from Saanich Inlet before we adapted the tools to the rather complex marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that the laminations from three Weddell Sea sites represent true varves that were deposited on sediment ridges over several millennia during the last glacial maximum (LGM). There are apparently two seasonal layers of terrigenous composition, a coarser-grained bright layer, and a finer-grained dark layer. The new tools offer several advantages over previous tools. The counting procedures are based on a moving average generated from gray-scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Since PEAK associates counts with a specific depth, the thickness of each year or each season is also measured, which is an important prerequisite for later spectral analysis. Since all information required to conduct the analysis is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
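
    The zero-crossing algorithm described above reduces to counting sign changes of the gray-scale curve around a wide moving average. A minimal sketch of that core step (window length is an illustrative parameter):

```python
import numpy as np

def count_layers(gray, window):
    """Zero-crossing lamina counter, in the spirit of the PEAK tool.

    Subtract a wide moving average from the gray-scale curve and count
    sign changes of the residual; each passage marks a transition between
    a bright and a dark (seasonal) layer.
    """
    kernel = np.ones(window) / window
    baseline = np.convolve(gray, kernel, mode="same")   # wide moving average
    resid = gray - baseline
    signs = np.sign(resid)
    signs[signs == 0] = 1                               # treat exact zeros as positive
    crossings = np.nonzero(np.diff(signs))[0]           # indices of sign changes
    # Two crossings per bright/dark couplet, so crossings / 2 approximates years.
    return len(crossings), crossings
```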

  4. Decentralized indirect methods for learning automata games.

    PubMed

    Tilak, Omkar; Martin, Ryan; Mukhopadhyay, Snehasis

    2011-10-01

    We discuss the application of indirect learning methods in zero-sum and identical payoff learning automata games. We propose a novel decentralized version of the well-known pursuit learning algorithm. Such a decentralized algorithm has significant computational advantages over its centralized counterpart. The theoretical study of such a decentralized algorithm requires the analysis to be carried out in a nonstationary environment. We use a novel bootstrapping argument to prove the convergence of the algorithm. To our knowledge, this is the first time that such analysis has been carried out for zero-sum and identical payoff games. Extensive simulation studies are reported, which demonstrate the proposed algorithm's fast and accurate convergence in a variety of game scenarios. We also introduce the framework of partial communication in the context of identical payoff games of learning automata. In such games, the automata may not communicate with each other or may communicate selectively. This comprehensive framework has the capability to model both centralized and decentralized games discussed in this paper.

  5. Real-time implementation of electromyogram pattern recognition as a control command of man-machine interface.

    PubMed

    Chang, G C; Kang, W J; Luh, J J; Cheng, C K; Lai, J S; Chen, J J; Kuo, T S

    1996-10-01

    The purpose of this study was to develop a real-time electromyogram (EMG) discrimination system to provide control commands for man-machine interface applications. A host computer with a plug-in data acquisition and processing board containing a TMS320 C31 floating-point digital signal processor was used to attain real-time EMG classification. Two-channel EMG signals were collected by two pairs of surface electrodes located bilaterally between the sternocleidomastoid and the upper trapezius. Five motions of the neck and shoulders were discriminated for each subject. The zero-crossing rate was employed to detect the onset of muscle contraction. The cepstral coefficients, derived from autoregressive coefficients and estimated by a recursive least squares algorithm, were used as the recognition features. These features were then discriminated using a modified maximum likelihood distance classifier. The total response time of this EMG discrimination system was about 0.17 s. Four able-bodied and two C5/6 quadriplegic subjects took part in the experiment, and achieved a 95% mean recognition rate in discriminating between the five specific motions. The response time and the reliability of recognition indicate that this system has the potential to discriminate body motions for man-machine interface applications.
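
    A hedged sketch of zero-crossing-rate onset detection follows; the frame length and threshold are illustrative defaults, not the study's values.

```python
import numpy as np

def zcr_onset(emg, fs, frame_ms=50, zcr_thresh=0.05):
    """Detect muscle-contraction onset from the frame-wise zero-crossing rate.

    The EMG is split into short frames, the fraction of sign changes per
    frame is computed, and the first frame whose rate exceeds the threshold
    marks the onset. Returns the onset time in seconds, or None.
    """
    frame = int(fs * frame_ms / 1000)
    centered = emg - np.mean(emg)
    for i in range(len(centered) // frame):
        seg = centered[i * frame:(i + 1) * frame]
        rate = np.mean(np.abs(np.diff(np.sign(seg))) > 0)  # fraction of crossings
        if rate > zcr_thresh:
            return i * frame / fs
    return None
```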

  6. Sampling command generator corrects for noise and dropouts in recorded data

    NASA Technical Reports Server (NTRS)

    Anderson, T. O.

    1973-01-01

    Generator measures period between zero crossings of reference signal and accepts as correct timing points only those zero crossings which occur acceptably close to nominal time predicted from last accepted command. Unidirectional crossover points are used exclusively so errors from analog nonsymmetry of crossover detector are avoided.
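
    The acceptance rule can be sketched as follows, assuming a list of positive-going crossing times as input; the tolerance and period-update scheme are illustrative, not the hardware's actual logic.

```python
def accept_crossings(times, tol=0.1):
    """Keep only zero crossings close to the instant predicted from the
    last accepted crossing, a sketch of the generator's acceptance rule.

    `times` are detected positive-going crossing times (at least two,
    which bootstrap the period estimate). A crossing is accepted only if
    it lies within `tol` periods of the predicted instant; dropouts are
    skipped by advancing the prediction in whole periods.
    """
    accepted = [times[0], times[1]]
    period = times[1] - times[0]
    for t in times[2:]:
        predicted = accepted[-1] + period
        # Skip over dropouts: advance the prediction by whole periods.
        while t > predicted + tol * period:
            predicted += period
        if abs(t - predicted) <= tol * period:       # acceptably close
            # Re-estimate the period, averaging over any skipped cycles.
            cycles = round((t - accepted[-1]) / period)
            period = (t - accepted[-1]) / cycles
            accepted.append(t)
        # else: treat the crossing as noise and wait for the next one
    return accepted
```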

  7. Automated detection of retinal layers from OCT spectral domain images of healthy eyes

    NASA Astrophysics Data System (ADS)

    Giovinco, Gaspare; Savastano, Maria Cristina; Ventre, Salvatore; Tamburrino, Antonello

    2015-06-01

    Optical coherence tomography (OCT) has become one of the most relevant diagnostic tools for retinal diseases. Besides being a non-invasive technique, one distinguished feature is its unique capability of providing (in vivo) cross-sectional view of the retina. Specifically, OCT images show the retinal layers. From the clinical point of view, the identification of the retinal layers opens new perspectives to study the correlation between morphological and functional aspects of the retinal tissue. The main contribution of this paper is a new method/algorithm for the automated segmentation of cross-sectional images of the retina of healthy eyes, obtained by means of spectral domain optical coherence tomography (SD-OCT). Specifically, the proposed segmentation algorithm provides the automated detection of different retinal layers. Tests on experimental SD-OCT scans performed by three different instruments/manufacturers have been successfully carried out and compared to a manual segmentation made by an independent ophthalmologist, showing the generality and the effectiveness of the proposed method.

  8. Automated detection of retinal layers from OCT spectral-domain images of healthy eyes

    NASA Astrophysics Data System (ADS)

    Giovinco, Gaspare; Savastano, Maria Cristina; Ventre, Salvatore; Tamburrino, Antonello

    2015-12-01

    Optical coherence tomography (OCT) has become one of the most relevant diagnostic tools for retinal diseases. Besides being a non-invasive technique, one distinguished feature is its unique capability of providing (in vivo) cross-sectional view of the retina. Specifically, OCT images show the retinal layers. From the clinical point of view, the identification of the retinal layers opens new perspectives to study the correlation between morphological and functional aspects of the retinal tissue. The main contribution of this paper is a new method/algorithm for the automated segmentation of cross-sectional images of the retina of healthy eyes, obtained by means of spectral-domain optical coherence tomography (SD-OCT). Specifically, the proposed segmentation algorithm provides the automated detection of different retinal layers. Tests on experimental SD-OCT scans performed by three different instruments/manufacturers have been successfully carried out and compared to a manual segmentation made by an independent ophthalmologist, showing the generality and the effectiveness of the proposed method.

  9. Automated quantitative 3D analysis of aorta size, morphology, and mural calcification distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurugol, Sila; Come, Carolyn E.; Diaz, Alejandro A.

    Purpose: The purpose of this work is to develop a fully automated pipeline to compute aorta morphology and calcification measures in large cohorts of CT scans that can be used to investigate the potential of these measures as imaging biomarkers of cardiovascular disease. Methods: The first step of the automated pipeline is aorta segmentation. The algorithm the authors propose first detects an initial aorta boundary by exploiting cross-sectional circularity of aorta in axial slices and aortic arch in reformatted oblique slices. This boundary is then refined by a 3D level-set segmentation that evolves the boundary to the location of nearby edges. The authors then detect the aortic calcifications with thresholding and filter out the false positive regions due to nearby high intensity structures based on their anatomical location. The authors extract the centerline and oblique cross sections of the segmented aortas and compute the aorta morphology and calcification measures of the first 2500 subjects from COPDGene study. These measures include volume and number of calcified plaques and measures of vessel morphology such as average cross-sectional area, tortuosity, and arch width. Results: The authors computed the agreement between the algorithm and expert segmentations on 45 CT scans and obtained a closest point mean error of 0.62 ± 0.09 mm and a Dice coefficient of 0.92 ± 0.01. The calcification detection algorithm resulted in an improved true positive detection rate of 0.96 compared to previous work. The measurements of aorta size agreed with the measurements reported in previous work. The initial results showed associations of aorta morphology with calcification and with aging. These results may indicate aorta stiffening and unwrapping with calcification and aging. Conclusions: The authors have developed an objective tool to assess aorta morphology and aortic calcium plaques on CT scans that may be used to provide information about the presence of cardiovascular disease and its clinical impact in smokers.

  10. Automated quantitative 3D analysis of aorta size, morphology, and mural calcification distributions.

    PubMed

    Kurugol, Sila; Come, Carolyn E; Diaz, Alejandro A; Ross, James C; Kinney, Greg L; Black-Shinn, Jennifer L; Hokanson, John E; Budoff, Matthew J; Washko, George R; San Jose Estepar, Raul

    2015-09-01

    The purpose of this work is to develop a fully automated pipeline to compute aorta morphology and calcification measures in large cohorts of CT scans that can be used to investigate the potential of these measures as imaging biomarkers of cardiovascular disease. The first step of the automated pipeline is aorta segmentation. The algorithm the authors propose first detects an initial aorta boundary by exploiting cross-sectional circularity of aorta in axial slices and aortic arch in reformatted oblique slices. This boundary is then refined by a 3D level-set segmentation that evolves the boundary to the location of nearby edges. The authors then detect the aortic calcifications with thresholding and filter out the false positive regions due to nearby high intensity structures based on their anatomical location. The authors extract the centerline and oblique cross sections of the segmented aortas and compute the aorta morphology and calcification measures of the first 2500 subjects from COPDGene study. These measures include volume and number of calcified plaques and measures of vessel morphology such as average cross-sectional area, tortuosity, and arch width. The authors computed the agreement between the algorithm and expert segmentations on 45 CT scans and obtained a closest point mean error of 0.62 ± 0.09 mm and a Dice coefficient of 0.92 ± 0.01. The calcification detection algorithm resulted in an improved true positive detection rate of 0.96 compared to previous work. The measurements of aorta size agreed with the measurements reported in previous work. The initial results showed associations of aorta morphology with calcification and with aging. These results may indicate aorta stiffening and unwrapping with calcification and aging. The authors have developed an objective tool to assess aorta morphology and aortic calcium plaques on CT scans that may be used to provide information about the presence of cardiovascular disease and its clinical impact in smokers.

  11. Automated quantitative 3D analysis of aorta size, morphology, and mural calcification distributions

    PubMed Central

    Kurugol, Sila; Come, Carolyn E.; Diaz, Alejandro A.; Ross, James C.; Kinney, Greg L.; Black-Shinn, Jennifer L.; Hokanson, John E.; Budoff, Matthew J.; Washko, George R.; San Jose Estepar, Raul

    2015-01-01

    Purpose: The purpose of this work is to develop a fully automated pipeline to compute aorta morphology and calcification measures in large cohorts of CT scans that can be used to investigate the potential of these measures as imaging biomarkers of cardiovascular disease. Methods: The first step of the automated pipeline is aorta segmentation. The algorithm the authors propose first detects an initial aorta boundary by exploiting cross-sectional circularity of aorta in axial slices and aortic arch in reformatted oblique slices. This boundary is then refined by a 3D level-set segmentation that evolves the boundary to the location of nearby edges. The authors then detect the aortic calcifications with thresholding and filter out the false positive regions due to nearby high intensity structures based on their anatomical location. The authors extract the centerline and oblique cross sections of the segmented aortas and compute the aorta morphology and calcification measures of the first 2500 subjects from COPDGene study. These measures include volume and number of calcified plaques and measures of vessel morphology such as average cross-sectional area, tortuosity, and arch width. Results: The authors computed the agreement between the algorithm and expert segmentations on 45 CT scans and obtained a closest point mean error of 0.62 ± 0.09 mm and a Dice coefficient of 0.92 ± 0.01. The calcification detection algorithm resulted in an improved true positive detection rate of 0.96 compared to previous work. The measurements of aorta size agreed with the measurements reported in previous work. The initial results showed associations of aorta morphology with calcification and with aging. These results may indicate aorta stiffening and unwrapping with calcification and aging. Conclusions: The authors have developed an objective tool to assess aorta morphology and aortic calcium plaques on CT scans that may be used to provide information about the presence of cardiovascular disease and its clinical impact in smokers. PMID:26328995

  12. Automatic processing of induced events in the geothermal reservoirs Landau and Insheim, Germany

    NASA Astrophysics Data System (ADS)

    Olbert, Kai; Küperkoch, Ludger; Meier, Thomas

    2016-04-01

    Induced events can pose a risk to local infrastructure that needs to be understood and evaluated. They also represent a chance to learn more about the reservoir behavior and characteristics. Prior to the analysis, the waveform data must be processed consistently and accurately to avoid erroneous interpretations. In the framework of the MAGS2 project, an automatic off-line event detection and a phase onset time determination algorithm are applied to induced seismic events in the geothermal systems in Landau and Insheim, Germany. The off-line detection algorithm is based on a cross-correlation of continuous data from the local seismic network with master events. It distinguishes between events from different reservoirs as well as within the individual reservoirs. Furthermore, it provides location and magnitude estimates. Data from 2007 to 2014 are processed and compared with other detections using the SeisComp3 cross-correlation detector and an STA/LTA detector. The detected events are analyzed for spatial or temporal clustering. Furthermore, the number of events is compared to the existing detection lists. The automatic phase picking algorithm combines an AR-AIC approach with a cost function to find precise P1- and S1-phase onset times which can be used for localization and tomography studies. 800 induced events are processed, yielding 5000 P1- and 6000 S1-picks. The phase onset times show high precision, with mean residuals relative to manual phase picks of 0 s (P1) to 0.04 s (S1) and standard deviations below ±0.05 s. The resulting automatic picks are used to relocate a selected number of events to evaluate influences on the location precision.
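
    The STA/LTA benchmark mentioned above is simple to sketch; the window lengths and trigger threshold below are generic defaults, not the study's parameters.

```python
import numpy as np

def sta_lta_trigger(x, fs, sta_s=0.5, lta_s=10.0, on=3.5):
    """Classic STA/LTA event detector on a single trace.

    The short-term average (STA) of signal power responds quickly to an
    arriving event, while the long-term average (LTA) tracks the noise
    level; samples where STA/LTA exceeds the `on` threshold are triggers.
    """
    nsta, nlta = int(sta_s * fs), int(lta_s * fs)
    power = x.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(power)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta        # short-term average
    lta = (csum[nlta:] - csum[:-nlta]) / nlta        # long-term average
    n = min(len(sta), len(lta))
    # Align both averages so each ends at the same sample.
    ratio = sta[-n:] / np.maximum(lta[:n], 1e-12)
    triggers = np.nonzero(ratio > on)[0]             # indices above threshold
    return ratio, triggers
```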

  13. Stochastic derivative-free optimization using a trust region framework

    DOE PAGES

    Larson, Jeffrey; Billups, Stephen C.

    2016-02-17

    This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f¯. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. As a result, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.

  14. A Computational Framework for High-Throughput Isotopic Natural Abundance Correction of Omics-Level Ultra-High Resolution FT-MS Datasets

    PubMed Central

    Carreer, William J.; Flight, Robert M.; Moseley, Hunter N. B.

    2013-01-01

    New metabolomics applications of ultra-high resolution and accuracy mass spectrometry can provide thousands of detectable isotopologues, with the number of potentially detectable isotopologues increasing exponentially with the number of stable isotopes used in newer isotope tracing methods like stable isotope-resolved metabolomics (SIRM) experiments. This huge increase in usable data requires software capable of correcting the large number of isotopologue peaks resulting from SIRM experiments in a timely manner. We describe the design of a new algorithm and software system capable of handling these high volumes of data, while including quality control methods for maintaining data quality. We validate this new algorithm against a previous single isotope correction algorithm in a two-step cross-validation. Next, we demonstrate the algorithm and correct for the effects of natural abundance for both 13C and 15N isotopes on a set of raw isotopologue intensities of UDP-N-acetyl-D-glucosamine derived from a 13C/15N-tracing experiment. Finally, we demonstrate the algorithm on a full omics-level dataset. PMID:24404440

  15. A dynamical approach in exploring the unknown mass in the Solar system using pulsar timing arrays

    NASA Astrophysics Data System (ADS)

    Guo, Y. J.; Lee, K. J.; Caballero, R. N.

    2018-04-01

    The error in the Solar system ephemeris will lead to dipolar correlations in the residuals of a pulsar timing array for widely separated pulsars. In this paper, we utilize such correlated signals, and construct a Bayesian data-analysis framework to detect the unknown mass in the Solar system and to measure the orbital parameters. The algorithm is designed to calculate the waveform of the induced pulsar-timing residuals due to the unmodelled objects following Keplerian orbits in the Solar system. The algorithm incorporates a Bayesian-analysis suite used to simultaneously analyse the pulsar-timing data of multiple pulsars to search for coherent waveforms, evaluate the detection significance of unknown objects, and to measure their parameters. When the object is not detectable, our algorithm can be used to place upper limits on the mass. The algorithm is verified using simulated data sets, and cross-checked with analytical calculations. We also investigate the capability of future pulsar-timing-array experiments in detecting the unknown objects. We expect that future pulsar-timing data can limit the unknown massive objects in the Solar system to be lighter than 10^-11 to 10^-12 M⊙, or measure the mass of the Jovian system to a fractional precision of 10^-8 to 10^-9.

  16. Real-time envelope cross-correlation detector: application to induced seismicity in the Insheim and Landau deep geothermal reservoirs

    NASA Astrophysics Data System (ADS)

    Vasterling, Margarete; Wegler, Ulrich; Becker, Jan; Brüstle, Andrea; Bischoff, Monika

    2017-01-01

    We develop and test a real-time envelope cross-correlation detector for use in seismic response plans to mitigate the hazard of induced seismicity. The incoming seismological data are cross-correlated in real-time with a set of previously recorded master events. For robustness against small changes in the earthquake source locations or in the focal mechanisms, we cross-correlate the envelopes of the seismograms rather than the seismograms themselves. Two sequenced detection conditions are implemented: after passing a single trace cross-correlation condition, a network cross-correlation is calculated taking amplitude ratios between stations into account. Besides detecting the earthquake and assigning it to the respective reservoir, real-time magnitudes are important for seismic response plans. We estimate the magnitudes of induced microseismicity using the relative amplitudes between master event and detected event. The real-time detector is implemented as a SeisComP3 module. We carry out offline and online performance tests using seismic monitoring data of the Insheim and Landau geothermal power plants (Upper Rhine Graben, Germany), also including blasts from a nearby quarry. The comparison of the automatic real-time catalogue with a manually processed catalogue shows that, with the implemented parameters, events are always correctly assigned to the respective reservoir (4 km distance between reservoirs) or the quarry (8 km and 10 km distance, respectively, from the reservoirs). The real-time catalogue achieves a magnitude of completeness around 0.0. Four per cent of the events assigned to the Insheim reservoir and zero per cent of the Landau events are misdetections. All wrong detections are local tectonic events, whereas none are caused by seismic noise.
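
    The heart of such a detector is correlating envelopes rather than raw seismograms. A minimal sketch using scipy's Hilbert transform follows; the network condition, amplitude ratios, and magnitude estimation are not reproduced.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_xcorr(trace, master):
    """Peak normalized cross-correlation between waveform envelopes.

    Correlating smooth Hilbert envelopes instead of raw seismograms keeps
    the detector robust to small changes in source location or focal
    mechanism. Returns the peak correlation value and its lag in samples.
    """
    def env(x):
        e = np.abs(hilbert(x - np.mean(x)))          # instantaneous envelope
        return (e - e.mean()) / (e.std() * np.sqrt(len(e)))

    a, b = env(trace), env(master)
    cc = np.correlate(a, b, mode="full")             # scan all relative lags
    k = int(np.argmax(cc))
    return cc[k], k - (len(b) - 1)

# Usage: declare a single-trace detection when the peak exceeds a threshold.
# peak, lag = envelope_xcorr(continuous_window, master_waveform)
```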

  17. Periodicity in the autocorrelation function as a mechanism for regularly occurring zero crossings or extreme values of a Gaussian process.

    PubMed

    Wilson, Lorna R M; Hopcraft, Keith I

    2017-12-01

    The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.

  18. Periodicity in the autocorrelation function as a mechanism for regularly occurring zero crossings or extreme values of a Gaussian process

    NASA Astrophysics Data System (ADS)

    Wilson, Lorna R. M.; Hopcraft, Keith I.

    2017-12-01

    The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.
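
    A numerical experiment in this spirit can be set up by spectrally synthesizing a Gaussian process whose power spectrum peaks at a chosen frequency (i.e., whose autocorrelation is an oscillatory decay) and tallying the zero-crossing intervals. The spectral shape and parameter values below are illustrative choices, not the paper's.

```python
import numpy as np

def simulate_and_count(n=2**16, f0=0.05, decay=0.002, seed=1):
    """Simulate a Gaussian process with a periodically modulated
    autocorrelation and measure its zero-crossing interval statistics.

    White Gaussian noise is shaped by the square root of a Lorentzian-type
    spectrum peaked at f0, which corresponds to a damped-cosine
    autocorrelation; crossings are then read off as sign changes.
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n)
    psd = 1.0 / (decay**2 + (freqs - f0) ** 2)       # spectral peak at f0
    spec = np.sqrt(psd) * (rng.standard_normal(len(freqs))
                           + 1j * rng.standard_normal(len(freqs)))
    x = np.fft.irfft(spec, n)
    x -= x.mean()
    crossings = np.nonzero(np.diff(np.sign(x)))[0]   # zero-crossing indices
    intervals = np.diff(crossings)
    return intervals.mean(), intervals.var()

# Sharper spectral peaks (smaller decay) push the crossing intervals toward
# the deterministic, clock-like limit; broader peaks toward the random one.
```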

  19. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    NASA Astrophysics Data System (ADS)

    Afik, Eldad

    2015-09-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, such as fluid dynamics in microfluidic devices, bacterial taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when near to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% with a false-detection rate of only 1%.
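
    The accumulator idea of the circle Hough transform, from which the algorithm descends, can be sketched for a single radius; the paper's detector adds the classification step, multi-radius search, and maxima handling not shown here.

```python
import numpy as np

def hough_circle(edges, radius):
    """Single-radius circle Hough accumulator.

    `edges` is a boolean edge map; each edge pixel votes for every center
    lying at distance `radius` from it, so local maxima of the returned
    accumulator are candidate ring centers.
    """
    h, w = edges.shape
    acc = np.zeros((h, w))
    theta = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    dy = np.round(radius * np.sin(theta)).astype(int)
    dx = np.round(radius * np.cos(theta)).astype(int)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy, cx = y + dy, x + dx
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        acc[cy[ok], cx[ok]] += 1                     # vote for possible centers
    return acc
```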

  20. Neutron-rich nuclei produced at zero degrees in damped collisions induced by a beam of 18O on a 238U target

    NASA Astrophysics Data System (ADS)

    Stefan, I.; Fornal, B.; Leoni, S.; Azaiez, F.; Portail, C.; Thomas, J. C.; Karpov, A. V.; Ackermann, D.; Bednarczyk, P.; Blumenfeld, Y.; Calinescu, S.; Chbihi, A.; Ciemala, M.; Cieplicka-Oryńczak, N.; Crespi, F. C. L.; Franchoo, S.; Hammache, F.; Iskra, Ł. W.; Jacquot, B.; Janssens, R. V. F.; Kamalou, O.; Lauritsen, T.; Lewitowicz, M.; Olivier, L.; Lukyanov, S. M.; Maccormick, M.; Maj, A.; Marini, P.; Matea, I.; Naumenko, M. A.; de Oliveira Santos, F.; Petrone, C.; Penionzhkevich, Yu. E.; Rotaru, F.; Savajols, H.; Sorlin, O.; Stanoiu, M.; Szpak, B.; Tarasov, O. B.; Verney, D.

    2018-04-01

    Cross sections and corresponding momentum distributions have been measured for the first time at zero degrees for the exotic nuclei obtained from a beam of 18O at 8.5 MeV/A impinging on a 1 mg/cm^2 238U target. Sizable cross sections were found for the production of exotic species arising from neutron transfer and proton removal from the projectile. Comparisons of experimental results with calculations based on deep-inelastic reaction models, taking into account the particle evaporation process, indicate that zero degrees is a scattering angle at which the differential reaction cross section for production of exotic nuclei is at its maximum. This result is important in view of the new generation of zero-degree spectrometers under construction, such as the S3 separator at GANIL.

  1. Use of zerotree coding in a high-speed pyramid image multiresolution decomposition

    NASA Astrophysics Data System (ADS)

    Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo

    1995-03-01

    A Zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of the ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N^2 different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations, and as a consequence, it can be very easily implemented with VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the different quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZTs further compresses the already compressed image by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmitting those that form all-zero branches. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.

  2. Indirect learning control for nonlinear dynamical systems

    NASA Technical Reports Server (NTRS)

    Ryu, Yeong Soon; Longman, Richard W.

    1993-01-01

    In a previous paper, learning control algorithms were developed based on adaptive control ideas for linear time-variant systems. The learning control methods were shown to have certain advantages over their adaptive control counterparts, such as the ability to produce zero tracking error in time varying systems, and the ability to eliminate repetitive disturbances. In recent years, certain adaptive control algorithms have been developed for multi-body dynamic systems such as robots, with global guaranteed convergence to zero tracking error for the nonlinear system equations. In this paper we study the relationship between such adaptive control methods designed for this specific class of nonlinear systems, and the learning control problem for such systems, seeking to converge to zero tracking error in following a specific command repeatedly, starting from the same initial conditions each time. The extension of these methods from the adaptive control problem to the learning control problem is seen to be trivial. The advantages and disadvantages of using learning control based on such adaptive control concepts for nonlinear systems, and the use of other currently available learning control algorithms are discussed.

  3. McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Stedry, M.H.

    1994-07-01

    McSKY evaluates skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygon cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte-Carlo algorithm to evaluate transport through source shields and an integral line source to describe photon transport through the atmosphere. The source energy must be between 0.02 and 100 MeV. For heavily shielded sources with energies above 20 MeV, McSKY results must be used cautiously, especially at detector locations near the source.

  4. Automated detection of Martian water ice clouds: the Valles Marineris

    NASA Astrophysics Data System (ADS)

    Ogohara, Kazunori; Munetomo, Takafumi; Hatanaka, Yuji; Okumura, Susumu

    2016-10-01

    We need to extract water ice clouds from the large number of Mars images in order to reveal spatial and temporal variations in water ice cloud occurrence and to understand the climatology of water ice clouds meteorologically. However, the visible images acquired by Mars orbiters over several years are too numerous to inspect visually, even when the inspection is limited to one region. An automated detection algorithm for Martian water ice clouds is therefore necessary for collecting ice cloud images efficiently; in addition, it may reveal new aspects of the spatial and temporal variations of water ice clouds of which we have not been aware. We present a method for automatically evaluating the presence of Martian water ice clouds using difference images and cross-correlation distributions calculated from blue-band images of the Valles Marineris obtained by the Mars Orbiter Camera onboard the Mars Global Surveyor (MGS/MOC). We derived one subtracted image and one cross-correlation distribution from two reflectance images. The difference between the maximum and the average, the variance, the kurtosis, and the skewness of the subtracted image were calculated, as were those of the cross-correlation distribution. These eight statistics were used as feature vectors for training a Support Vector Machine, whose generalization ability was tested using 10-fold cross-validation. The F-measure and accuracy tended to be approximately 0.8 when the maximum of the normalized reflectance and the difference between the maximum and the average of the cross-correlation were chosen as features. In the course of developing the detection algorithm, we found many cases in which the Valles Marineris became clearly brighter than adjacent areas in the blue band. It is at present unclear whether a bright Valles Marineris indicates the occurrence of water ice clouds inside the Valles Marineris. Subtracted images showing a bright Valles Marineris were therefore excluded from the detection of water ice clouds.

  5. Evaluation of stochastic differential equation approximation of ion channel gating models.

    PubMed

    Bruce, Ian C

    2009-04-01

    Fox and Lu derived an algorithm based on stochastic differential equations for approximating the kinetics of ion channel gating that is simpler and faster than "exact" algorithms for simulating Markov process models of channel gating. However, the approximation may not be sufficiently accurate to predict statistics of action potential generation in some cases. The objective of this study was to develop a framework for analyzing the inaccuracies and determining their origin. Simulations of a patch of membrane with voltage-gated sodium and potassium channels were performed using an exact algorithm for the kinetics of channel gating and the approximate algorithm of Fox & Lu. The Fox & Lu algorithm assumes that channel gating particle dynamics have a stochastic term that is uncorrelated, zero-mean Gaussian noise, whereas the results of this study demonstrate that in many cases the stochastic term in the Fox & Lu algorithm should be correlated and non-Gaussian noise with a non-zero mean. The results indicate that: (i) the source of the inaccuracy is that the Fox & Lu algorithm does not adequately describe the combined behavior of the multiple activation particles in each sodium and potassium channel, and (ii) the accuracy does not improve with increasing numbers of channels.
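
    As a loose illustration of this class of approximation, and not the authors' code, the following sketch integrates a single Langevin-style gating variable: deterministic first-order kinetics plus zero-mean Gaussian noise whose variance shrinks with the number of channels. The rate constants, channel count, and time step are arbitrary illustrative values.

    ```python
    # Hedged sketch of a Langevin-style gating approximation (illustrative rates).
    import numpy as np

    def simulate_gate(alpha, beta, n_channels, dt=1e-5, steps=20000, seed=7):
        """Euler-Maruyama integration of one gating fraction x in [0, 1]."""
        rng = np.random.default_rng(seed)
        x = alpha / (alpha + beta)               # start at the steady state
        out = np.empty(steps)
        for i in range(steps):
            drift = alpha * (1 - x) - beta * x   # deterministic kinetics
            noise_var = (alpha * (1 - x) + beta * x) / n_channels
            x += drift * dt + np.sqrt(noise_var * dt) * rng.standard_normal()
            x = min(max(x, 0.0), 1.0)            # keep the fraction in [0, 1]
            out[i] = x
        return out

    trace = simulate_gate(alpha=100.0, beta=400.0, n_channels=1000)
    print(trace.mean(), trace.std())             # mean near 0.2, small fluctuations
    ```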

  6. Cancellation Mechanism for Dark-Matter-Nucleon Interaction.

    PubMed

    Gross, Christian; Lebedev, Oleg; Toma, Takashi

    2017-11-10

    We consider a simple Higgs portal dark-matter model, where the standard model is supplemented with a complex scalar whose imaginary part plays the role of weakly interacting massive particle dark matter (DM). We show that the direct DM detection cross section vanishes at the tree level and zero momentum transfer due to a cancellation by virtue of a softly broken symmetry. This cancellation is operative for any mediator masses. As a result, our electroweak-scale dark matter satisfies all of the phenomenological constraints quite naturally.

  7. Detecting diffusion-diffraction patterns in size distribution phantoms using double-pulsed field gradient NMR: Theory and experiments.

    PubMed

    Shemesh, Noam; Ozarslan, Evren; Basser, Peter J; Cohen, Yoram

    2010-01-21

    NMR observable nuclei undergoing restricted diffusion within confining pores are important reporters for microstructural features of porous media including, inter alia, biological tissues, emulsions and rocks. Diffusion NMR, and especially the single-pulsed field gradient (s-PFG) methodology, is one of the most important noninvasive tools for studying such opaque samples, enabling extraction of important microstructural information from diffusion-diffraction phenomena. However, when the pores are not monodisperse and are characterized by a size distribution, the diffusion-diffraction patterns disappear from the signal decay, and the relevant microstructural information is mostly lost. A recent theoretical study predicted that the diffusion-diffraction patterns in double-PFG (d-PFG) experiments have unique characteristics, such as zero-crossings, that make them more robust with respect to size distributions. In this study, we theoretically compared the signal decay arising from diffusion in isolated cylindrical pores characterized by lognormal size distributions in both s-PFG and d-PFG methodologies, using a recently presented general framework for treating diffusion in NMR experiments. We showed the gradual loss of diffusion-diffraction patterns with broadening size distributions in s-PFG, and the robustness of the zero-crossings in d-PFG even for very large standard deviations of the size distribution. We then performed s-PFG and d-PFG experiments on well-controlled size distribution phantoms in which the ground truth is known a priori. We showed that the microstructural information, as manifested in the diffusion-diffraction patterns, is lost in the s-PFG experiments, whereas in d-PFG experiments the zero-crossings of the signal persist, from which relevant microstructural information can be extracted. This study provides a proof of concept that d-PFG may be useful in obtaining important microstructural features in samples characterized by size distributions.

  8. Evolution of genuine cross-correlation strength of focal onset seizures.

    PubMed

    Müller, Markus F; Baier, Gerold; Jiménez, Yurytzy López; Marín García, Arlex O; Rummel, Christian; Schindler, Kaspar

    2011-10-01

    To quantify the evolution of genuine zero-lag cross-correlations of focal onset seizures, we apply a recently introduced multivariate measure to broad band and to narrow-band EEG data. For frequency components below 12.5 Hz, the strength of genuine cross-correlations decreases significantly during the seizure and the immediate postseizure period, while higher frequency bands show a tendency of elevated cross-correlations during the same period. We conclude that in terms of genuine zero-lag cross-correlations, the electrical brain activity as assessed by scalp electrodes shows a significant spatial fragmentation, which might promote seizure offset.

  9. Using trend templates in a neonatal seizure algorithm improves detection of short seizures in a foetal ovine model.

    PubMed

    Zwanenburg, Alex; Andriessen, Peter; Jellema, Reint K; Niemarkt, Hendrik J; Wolfs, Tim G A M; Kramer, Boris W; Delhaas, Tammo

    2015-03-01

    Seizures below one minute in duration are difficult to assess correctly using seizure detection algorithms. We aimed to improve neonatal detection algorithm performance for short seizures through the use of trend templates for seizure onset and end. Bipolar EEG was recorded in a transiently asphyxiated ovine model at 0.7 gestational age, a common experimental model for studying brain development in humans of 30-34 weeks of gestation. Transient asphyxia led to electrographic seizures within 6-8 h. A total of 3159 seizures, 2386 of them shorter than one minute, were annotated in 1976 hour-long EEG recordings from 17 foetal lambs. To capture EEG characteristics, five features sensitive to seizures were calculated and used to derive trend information. Feature values and trend information were used as input for support vector machine classification and subsequently post-processed. Performance metrics, calculated after post-processing, were compared between analyses with and without trend information. Detector performance was assessed after five-fold cross-validation conducted ten times with random splits. The use of trend templates for seizure onset and end in a neonatal seizure detection algorithm significantly improves the correct detection of short seizures using two-channel EEG recordings, from 54.3% (52.6-56.1) to 59.5% (58.5-59.9) at an FDR of 2.0 (median (range); p < 0.001, Wilcoxon signed rank test). Using trend templates might therefore aid the detection of short seizures by EEG monitoring in the NICU.

  10. A Survey of Noninteractive Zero Knowledge Proof System and Its Applications

    PubMed Central

    Wu, Huixin; Wang, Feng

    2014-01-01

    The zero knowledge proof system, which has received extensive attention since it was proposed, is an important branch of cryptography and computational complexity theory. Among these systems, a noninteractive zero knowledge proof system contains only one message, sent by the prover to the verifier. It is widely used in the construction of various types of cryptographic protocols and cryptographic algorithms because of its good privacy, authentication, and low interactive complexity. This paper reviews and analyzes the basic principles of noninteractive zero knowledge proof systems and summarizes the research progress achieved on the following aspects: the definition and related models of noninteractive zero knowledge proof systems; noninteractive zero knowledge proofs for NP problems; noninteractive statistical and perfect zero knowledge; the connections between noninteractive zero knowledge proof systems, interactive zero knowledge proof systems, and zap; and specific applications of noninteractive zero knowledge proof systems. This paper also points out future research directions. PMID:24883407

  11. Detecting trace components in liquid chromatography/mass spectrometry data sets with two-dimensional wavelets

    NASA Astrophysics Data System (ADS)

    Compton, Duane C.; Snapp, Robert R.

    2007-09-01

    TWiGS (two-dimensional wavelet transform with generalized cross validation and soft thresholding) is a novel algorithm for denoising liquid chromatography-mass spectrometry (LC-MS) data for use in "shot-gun" proteomics. Proteomics, the study of all proteins in an organism, is an emerging field that has already proven successful for drug and disease discovery in humans. There are a number of constraints that limit the effectiveness of LC-MS for shot-gun proteomics, where the chemical signals are typically weak and the data sets are computationally large. Most algorithms suffer greatly from researcher-driven bias, making the results irreproducible and unusable by other laboratories. We thus introduce a new algorithm, TWiGS, that removes electrical (additive white) and chemical noise from LC-MS data sets. TWiGS is developed as a true two-dimensional algorithm, which operates in the time-frequency domain and minimizes the amount of researcher bias. It is based on the traditional discrete wavelet transform (DWT), which allows for fast and reproducible analysis. The separable two-dimensional DWT decomposition is paired with generalized cross validation and soft thresholding. The choice of wavelet (Haar, Coiflet-6, or Daubechies-4) and the number of decomposition levels are determined from observed experimental results. Using a synthetic LC-MS data model, TWiGS accurately retains key characteristics of the peaks in both the time and m/z domains, and can detect peaks from noise of the same intensity. TWiGS is applied to angiotensin I and II samples run on a LC-ESI-TOF-MS (liquid chromatography-electrospray ionization time-of-flight mass spectrometer) to demonstrate its utility for the detection of low-lying peaks obscured by noise.
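
    A minimal sketch of 2-D wavelet soft-threshold denoising in the spirit of TWiGS; note that the universal threshold below stands in for the paper's generalized cross-validation step, and the test image and all names are illustrative.

    ```python
    # Hedged sketch: separable 2-D DWT + soft thresholding (not the TWiGS code).
    import numpy as np
    import pywt

    def denoise2d(img, wavelet="db4", level=3):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        # Estimate the noise sigma from the finest diagonal detail band.
        sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(img.size))   # universal threshold
        out = [coeffs[0]]                             # keep the approximation band
        for (cH, cV, cD) in coeffs[1:]:
            out.append(tuple(pywt.threshold(c, thr, mode="soft")
                             for c in (cH, cV, cD)))
        # Crop in case the reconstruction is a pixel larger than the input.
        return pywt.waverec2(out, wavelet)[:img.shape[0], :img.shape[1]]

    rng = np.random.default_rng(1)
    clean = np.outer(np.hanning(64), np.hanning(64))  # one smooth 2-D "peak"
    noisy = clean + 0.05 * rng.standard_normal(clean.shape)
    print(np.abs(denoise2d(noisy) - clean).mean() < np.abs(noisy - clean).mean())
    ```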

  12. A-Track: A new approach for detection of moving objects in FITS images

    NASA Astrophysics Data System (ADS)

    Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.

    2016-10-01

    We have developed a fast, open-source, cross-platform pipeline, called A-Track, for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The pipeline is coded in Python 3. Moving objects are detected using a modified line detection algorithm, called MILD. We tested the pipeline on astronomical data acquired by an SI-1100 CCD with a 1-meter telescope. We found that A-Track performs very well in terms of detection efficiency, stability, and processing time. The code is hosted on GitHub under the GNU GPL v3 license.

  13. Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field

    PubMed Central

    Ilyas, Muhammad; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors use zero-velocity updates (ZUPTs) to reduce drift in navigation solutions and estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors; in particular, heading error is not observable, so position estimates tend to drift even when cyclic ZUPTs are applied in the update steps of the Extended Kalman Filter (EKF). This urges the use of other motion constraints for pedestrian gait, and of any other valuable heading information that is available. In this paper, we exploit two additional motion-constraint scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called a "virtual sensor"), though considerably reducing drift in the PNS, still need an absolute heading reference. One common absolute heading sensor is the magnetometer, which senses the Earth's magnetic field so that the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm called magnetic anomaly detection (MAD) and compensation is designed to incorporate only healthy magnetometer data in the EKF update step, reducing drift in the zero-velocity-updated INS. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms. PMID:27618056
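
    One simple health test for magnetometer samples, offered as a hedged sketch rather than the paper's MAD algorithm: accept a sample for the heading update only when its magnitude stays near the assumed local Earth-field strength. The field norm and tolerance below are assumptions.

    ```python
    # Hedged sketch of a magnetic-anomaly gate for EKF heading updates.
    import numpy as np

    EARTH_FIELD_UT = 50.0   # assumed local field magnitude, microtesla
    TOLERANCE_UT = 5.0      # reject samples deviating more than this

    def is_healthy(mag_sample_ut):
        """True if the 3-axis magnetometer sample looks undistorted."""
        return abs(np.linalg.norm(mag_sample_ut) - EARTH_FIELD_UT) < TOLERANCE_UT

    samples = np.array([[30.0, 10.0, 38.0],    # |B| ~ 49.4 uT -> healthy
                        [80.0, 10.0, 38.0]])   # |B| ~ 89.1 uT -> anomalous
    print([is_healthy(s) for s in samples])    # [True, False]
    ```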

  14. Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation.

    PubMed

    Zana, F; Klein, J C

    2001-01-01

    This paper presents an algorithm based on mathematical morphology and curvature evaluation for the detection of vessel-like patterns in a noisy environment. Such patterns are very common in medical images. Vessel detection is interesting for the computation of parameters related to blood flow, and the tree-like geometry of vessels makes them a usable feature for registration between images that can be of a different nature. In order to define vessel-like patterns, segmentation is performed with respect to a precise model: we define a vessel as a bright, piecewise-connected, locally linear pattern. Mathematical morphology is very well adapted to this description; however, other patterns also fit such a morphological description. In order to differentiate vessels from analogous background patterns, a cross-curvature evaluation is performed: vessels are separated out because they have a specific Gaussian-like profile whose curvature varies smoothly along the vessel. The detection algorithm that derives directly from this modeling is based on four steps: (1) noise reduction; (2) linear pattern with Gaussian-like profile improvement; (3) cross-curvature evaluation; (4) linear filtering. We present its theoretical background and illustrate it on real images of various natures, then evaluate its robustness and its accuracy with respect to noise.
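
    The sketch below illustrates only the morphological portion of such a pipeline: a supremum of grey-level openings with linear structuring elements at several orientations, which preserves bright, locally linear patterns. The structuring-element length, angle set, and toy image are assumptions, not the authors' parameters.

    ```python
    # Hedged sketch: supremum of openings with rotated linear structuring elements.
    import numpy as np
    from scipy import ndimage

    def linear_footprint(length, angle_deg):
        """Binary line segment of a given length and orientation."""
        fp = np.zeros((length, length), bool)
        c = length // 2
        d = np.arange(length) - c
        rows = np.round(c + d * np.sin(np.radians(angle_deg))).astype(int)
        cols = np.round(c + d * np.cos(np.radians(angle_deg))).astype(int)
        fp[rows, cols] = True
        return fp

    def sup_of_openings(img, length=15, n_angles=12):
        angles = np.arange(n_angles) * (180.0 / n_angles)
        return np.max([ndimage.grey_opening(img, footprint=linear_footprint(length, a))
                       for a in angles], axis=0)

    rng = np.random.default_rng(6)
    img = rng.random((64, 64)) * 0.1
    img[32, :] += 1.0                              # one bright horizontal "vessel"
    print((sup_of_openings(img)[32] > 0.5).all())  # the vessel survives the openings
    ```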

  15. Analysis of statistical and standard algorithms for detecting muscle onset with surface electromyography.

    PubMed

    Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A

    2017-01-01

    The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope, and sample entropy were the established methods; general time-series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold-standard onset, and their 95% confidence intervals were evaluated. The top algorithms were all Bayesian changepoint analysis iterations in which the parameter of the prior (p0) was zero. The best-performing Bayesian algorithms were p0 = 0 with a posterior probability for onset determination of 60-90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms performs equally well when the time series has multiple bursts of muscle activity.
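
    As a hedged sketch of the changepoint idea (not the authors' exact model or prior), the code below computes a single-changepoint posterior under a Gaussian mean/variance-shift model with a flat prior over onset locations.

    ```python
    # Hedged sketch: single-changepoint posterior for a simulated EMG onset.
    import numpy as np

    def segment_loglik(x):
        """Gaussian log-likelihood of a segment at its MLE mean and variance."""
        n = len(x)
        var = x.var() + 1e-12
        return -0.5 * n * (np.log(2 * np.pi * var) + 1.0)

    def changepoint_posterior(x):
        ks = np.arange(2, len(x) - 2)   # keep both segments at least 2 samples
        ll = np.array([segment_loglik(x[:k]) + segment_loglik(x[k:]) for k in ks])
        post = np.exp(ll - ll.max())    # flat prior over candidate onsets
        return ks, post / post.sum()

    rng = np.random.default_rng(2)
    quiet = 0.05 * rng.standard_normal(200)          # baseline EMG noise
    active = 0.5 * rng.standard_normal(200)          # burst of muscle activity
    ks, post = changepoint_posterior(np.concatenate([quiet, active]))
    print("onset estimate:", ks[post.argmax()])      # close to sample 200
    ```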

  16. Automated circumferential construction of first-order aqueous humor outflow pathways using spectral-domain optical coherence tomography.

    PubMed

    Huang, Alex S; Belghith, Akram; Dastiridou, Anna; Chopra, Vikas; Zangwill, Linda M; Weinreb, Robert N

    2017-06-01

    The purpose was to create a three-dimensional (3-D) model of circumferential aqueous humor outflow (AHO) in a living human eye with an automated detection algorithm for Schlemm's canal (SC) and first-order collector channels (CC) applied to spectral-domain optical coherence tomography (SD-OCT). Anterior-segment SD-OCT scans from a subject were acquired circumferentially around the limbus. A Bayesian Ridge method was used to approximate the location of the SC on infrared confocal laser scanning ophthalmoscopic images, with a cross-multiplication tool developed to initiate SC/CC detection, automated through a fuzzy hidden Markov chain approach. Automatic segmentation of SC and initial CCs was manually confirmed by two masked graders. Outflow pathways detected by the segmentation algorithm were reconstructed into a 3-D representation of AHO. Overall, <1% of images (5114 total B-scans) were ungradable. The automatic segmentation algorithm performed well, detecting SC 98.3% of the time with <0.1% false positive detections compared to expert grader consensus; CCs were detected 84.2% of the time with 1.4% false positive detections. The 3-D representation of AHO pathways demonstrated variably thicker and thinner SC with some clear CC roots. Circumferential (360 deg), automated, and validated detection of AHO angle structures in the living human eye with reconstruction was possible.

  17. A Survey on Underwater Acoustic Sensor Network Routing Protocols.

    PubMed

    Li, Ning; Martínez, José-Fernán; Meneses Chaus, Juan Manuel; Eckert, Martina

    2016-03-22

    Underwater acoustic sensor networks (UASNs) have become more and more important in ocean exploration applications, such as ocean monitoring, pollution detection, ocean resource management, underwater device maintenance, etc. In underwater acoustic sensor networks, since the routing protocol guarantees reliable and effective data transmission from the source node to the destination node, routing protocol design is an attractive topic for researchers, and many routing algorithms have been proposed in recent years. To present the current state of development of UASN routing protocols, we review herein the UASN routing protocol designs reported in recent years. In this paper, all the routing protocols have been classified into different groups according to their characteristics and routing algorithms, such as the non-cross-layer design routing protocol, the traditional cross-layer design routing protocol, and the intelligent algorithm based routing protocol. This is also the first paper that introduces intelligent algorithm-based UASN routing protocols. In addition, in this paper, we investigate the development trends of UASN routing protocols, which can provide researchers with clear and direct insights for further research.

  18. A Survey on Underwater Acoustic Sensor Network Routing Protocols

    PubMed Central

    Li, Ning; Martínez, José-Fernán; Meneses Chaus, Juan Manuel; Eckert, Martina

    2016-01-01

    Underwater acoustic sensor networks (UASNs) have become more and more important in ocean exploration applications, such as ocean monitoring, pollution detection, ocean resource management, underwater device maintenance, etc. In underwater acoustic sensor networks, since the routing protocol guarantees reliable and effective data transmission from the source node to the destination node, routing protocol design is an attractive topic for researchers, and many routing algorithms have been proposed in recent years. To present the current state of development of UASN routing protocols, we review herein the UASN routing protocol designs reported in recent years. In this paper, all the routing protocols have been classified into different groups according to their characteristics and routing algorithms, such as the non-cross-layer design routing protocol, the traditional cross-layer design routing protocol, and the intelligent algorithm based routing protocol. This is also the first paper that introduces intelligent algorithm-based UASN routing protocols. In addition, in this paper, we investigate the development trends of UASN routing protocols, which can provide researchers with clear and direct insights for further research. PMID:27011193

  19. Computing sparse derivatives and consecutive zeros problem

    NASA Astrophysics Data System (ADS)

    Chandra, B. V. Ravi; Hossain, Shahadat

    2013-02-01

    We describe a substitution-based sparse Jacobian matrix determination method using algorithmic differentiation. Utilizing the a priori known sparsity pattern, a compression scheme is determined using graph coloring. The "compressed pattern" of the Jacobian matrix is then reordered into a form suitable for computation by substitution. We show that the column reordering of the compressed pattern matrix (so as to align the zero entries into consecutive locations in each row) can be viewed as a variant of the traveling salesman problem. Preliminary computational results show that on the test problems the performance of nearest-neighbor-type heuristic algorithms is highly encouraging.
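
    One plausible reading of the nearest-neighbor heuristic, offered as an illustrative sketch rather than the authors' method: greedily chain columns of the compressed pattern so that columns with a small Hamming distance end up adjacent.

    ```python
    # Hedged sketch: nearest-neighbour column ordering by Hamming distance.
    import numpy as np

    def nn_column_order(pattern):
        ncols = pattern.shape[1]
        unvisited = set(range(1, ncols))
        order = [0]                                   # start the tour at column 0
        while unvisited:
            last = pattern[:, order[-1]]
            nxt = min(unvisited,
                      key=lambda j: np.count_nonzero(pattern[:, j] != last))
            unvisited.remove(nxt)
            order.append(nxt)
        return order

    pattern = np.array([[1, 0, 1, 0],
                        [1, 0, 1, 0],
                        [0, 1, 0, 1]])
    print(nn_column_order(pattern))   # similar columns become adjacent, e.g. [0, 2, 1, 3]
    ```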

  20. Waveform Similarity Analysis: A Simple Template Comparing Approach for Detecting and Quantifying Noisy Evoked Compound Action Potentials.

    PubMed

    Potas, Jason Robert; de Castro, Newton Gonçalves; Maddess, Ted; de Souza, Marcio Nogueira

    2015-01-01

    Experimental electrophysiological assessment of evoked responses from regenerating nerves is challenging due to the typical complex response of events dispersed over various latencies and poor signal-to-noise ratio. Our objective was to automate the detection of compound action potential events and derive their latencies and magnitudes using a simple cross-correlation template comparison approach. For this, we developed an algorithm called Waveform Similarity Analysis. To test the algorithm, challenging signals were generated in vivo by stimulating sural and sciatic nerves, whilst recording evoked potentials at the sciatic nerve and tibialis anterior muscle, respectively, in animals recovering from sciatic nerve transection. Our template for the algorithm was generated based on responses evoked from the intact side. We also simulated noisy signals and examined the output of the Waveform Similarity Analysis algorithm with imperfect templates. Signals were detected and quantified using Waveform Similarity Analysis, which was compared to event detection, latency and magnitude measurements of the same signals performed by a trained observer, a process we called Trained Eye Analysis. The Waveform Similarity Analysis algorithm could successfully detect and quantify simple or complex responses from nerve and muscle compound action potentials of intact or regenerated nerves. Incorrectly specifying the template outperformed Trained Eye Analysis for predicting signal amplitude, but produced consistent latency errors for the simulated signals examined. Compared to the trained eye, Waveform Similarity Analysis is automatic, objective, does not rely on the observer to identify and/or measure peaks, and can detect small clustered events even when signal-to-noise ratio is poor. Waveform Similarity Analysis provides a simple, reliable and convenient approach to quantify latencies and magnitudes of complex waveforms and therefore serves as a useful tool for studying evoked compound action potentials in neural regeneration studies.
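
    A minimal sketch of the underlying template-comparison idea, assuming a known template, cross-correlation for latency, and a least-squares scale for magnitude; the names and signal parameters are illustrative, not the authors' implementation.

    ```python
    # Hedged sketch: template comparison by cross-correlation on a noisy sweep.
    import numpy as np

    def detect_event(signal, template):
        t = template - template.mean()
        xc = np.correlate(signal - signal.mean(), t, mode="valid")
        lag = int(xc.argmax())                      # latency of the best match
        seg = signal[lag:lag + len(template)]
        scale = seg @ t / (t @ t)                   # least-squares amplitude
        return lag, scale

    rng = np.random.default_rng(3)
    template = np.sin(np.linspace(0, np.pi, 40))    # idealized compound response
    sweep = 0.2 * rng.standard_normal(500)
    sweep[180:220] += 1.5 * template                # event at latency 180
    print(detect_event(sweep, template))            # approximately (180, 1.5)
    ```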

  1. Waveform Similarity Analysis: A Simple Template Comparing Approach for Detecting and Quantifying Noisy Evoked Compound Action Potentials

    PubMed Central

    Potas, Jason Robert; de Castro, Newton Gonçalves; Maddess, Ted; de Souza, Marcio Nogueira

    2015-01-01

    Experimental electrophysiological assessment of evoked responses from regenerating nerves is challenging due to the typical complex response of events dispersed over various latencies and poor signal-to-noise ratio. Our objective was to automate the detection of compound action potential events and derive their latencies and magnitudes using a simple cross-correlation template comparison approach. For this, we developed an algorithm called Waveform Similarity Analysis. To test the algorithm, challenging signals were generated in vivo by stimulating sural and sciatic nerves, whilst recording evoked potentials at the sciatic nerve and tibialis anterior muscle, respectively, in animals recovering from sciatic nerve transection. Our template for the algorithm was generated based on responses evoked from the intact side. We also simulated noisy signals and examined the output of the Waveform Similarity Analysis algorithm with imperfect templates. Signals were detected and quantified using Waveform Similarity Analysis, which was compared to event detection, latency and magnitude measurements of the same signals performed by a trained observer, a process we called Trained Eye Analysis. The Waveform Similarity Analysis algorithm could successfully detect and quantify simple or complex responses from nerve and muscle compound action potentials of intact or regenerated nerves. Incorrectly specifying the template outperformed Trained Eye Analysis for predicting signal amplitude, but produced consistent latency errors for the simulated signals examined. Compared to the trained eye, Waveform Similarity Analysis is automatic, objective, does not rely on the observer to identify and/or measure peaks, and can detect small clustered events even when signal-to-noise ratio is poor. Waveform Similarity Analysis provides a simple, reliable and convenient approach to quantify latencies and magnitudes of complex waveforms and therefore serves as a useful tool for studying evoked compound action potentials in neural regeneration studies. PMID:26325291

  2. 16 CFR Figure 5 to Subpart A of... - Zero Reference Point Related to Detecting Plane

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    16 CFR Part 1209, Subpart A, Figure 5: Zero Reference Point Related to Detecting Plane (Commercial Practices, 2013 edition; Consumer Product Safety Commission).

  3. 16 CFR Figure 5 to Subpart A of... - Zero Reference Point Related to Detecting Plane

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    16 CFR Part 1209, Subpart A, Figure 5: Zero Reference Point Related to Detecting Plane (Commercial Practices, 2012 edition; Consumer Product Safety Commission).

  4. 16 CFR Figure 5 to Subpart A of... - Zero Reference Point Related to Detecting Plane

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    16 CFR Part 1209, Subpart A, Figure 5: Zero Reference Point Related to Detecting Plane (Commercial Practices, 2014 edition; Consumer Product Safety Commission).

  5. A State-Space Approach to Optimal Level-Crossing Prediction for Linear Gaussian Processes

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander

    2009-01-01

    In many complex engineered systems, the ability to give an alarm prior to impending critical events is of great importance. These critical events may have varying degrees of severity, and in fact they may occur during normal system operation. In this article, we investigate approximations to theoretically optimal methods of designing alarm systems for the prediction of level-crossings by a zero-mean stationary linear dynamic system driven by Gaussian noise. An optimal alarm system is designed to elicit the fewest false alarms for a fixed detection probability. This work introduces the use of Kalman filtering in tandem with the optimal level-crossing problem. It is shown that there is a negligible loss in overall accuracy when using approximations to the theoretically optimal predictor, with the advantage of greatly reduced computational complexity.

  6. Watch-Dog: Detecting Self-Harming Activities From Wrist Worn Accelerometers.

    PubMed

    Bharti, Pratool; Panwar, Anurag; Gopalakrishna, Ganesh; Chellappan, Sriram

    2018-05-01

    In a 2012 survey, in the United States alone, there were more than 35,000 reported suicides, with approximately 1800 of them being psychiatric inpatients. Recent Centers for Disease Control and Prevention (CDC) reports indicate an upward trend in these numbers. In psychiatric facilities, staff perform intermittent or continuous observation of patients manually in order to prevent such tragedies, but studies show that these measures are insufficient and also consume staff time and resources. In this paper, we present the Watch-Dog system to address the problem of detecting self-harming activities attempted by inpatients in clinical settings. Watch-Dog comprises three key components: data sensed by tiny accelerometer sensors worn on the wrists of subjects; an efficient algorithm to classify whether a user is active versus dormant (i.e., performing a physical activity versus not performing any activity); and a novel decision selection algorithm based on random forests and continuity indices for fine-grained activity classification. With data acquired from 11 subjects performing a series of activities (both self-harming and otherwise), Watch-Dog achieves a classification accuracy of , , and for same-user 10-fold cross-validation, cross-user 10-fold cross-validation, and cross-user leave-one-out evaluation, respectively. We believe that the problem addressed in this paper is practical, important, and timely. We also believe that our proposed system is practically deployable; related discussions are provided in this paper.

  7. Quantitative Analysis of Clopidogrel Bisulphate and Aspirin by First Derivative Spectrophotometric Method in Tablets

    PubMed Central

    Game, Madhuri D.; Gabhane, K. B.; Sakarkar, D. M.

    2010-01-01

    A simple, accurate and precise spectrophotometric method has been developed for simultaneous estimation of clopidogrel bisulphate and aspirin by employing the first-order derivative zero-crossing method. The first-order derivative absorbance at 232.5 nm (zero-cross point of aspirin) was used for clopidogrel bisulphate, and that at 211.3 nm (zero-cross point of clopidogrel bisulphate) for aspirin. Both drugs obeyed linearity in the concentration range of 5.0 μg/ml to 25.0 μg/ml (correlation coefficient r² < 1). No interference was found between the two determined constituents or with the matrix. The method was validated statistically, and recovery studies were carried out to confirm its accuracy. PMID:21969765
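
    The toy spectrum below illustrates why the method works: at the wavelength where the derivative of one component crosses zero, the derivative of the mixture depends only on the other component. The Gaussian bands and wavelengths are synthetic stand-ins, not the drugs' actual spectra.

    ```python
    # Hedged sketch: first-derivative zero-crossing quantitation on toy spectra.
    import numpy as np

    wl = np.linspace(200, 260, 601)                      # wavelength grid, nm
    band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
    A, B = band(225, 8.0), band(238, 8.0)                # two overlapping bands

    dA, dB = np.gradient(A, wl), np.gradient(B, wl)
    zc = np.argmin(np.abs(dA))                           # zero-cross of A's derivative
    for cB in (1.0, 2.0, 3.0):                           # vary B at fixed A
        mix = 1.0 * A + cB * B
        print(cB, np.gradient(mix, wl)[zc] / dB[zc])     # recovers cB each time
    ```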

  8. Elastic positron scattering by C₂H₂: Differential cross sections and virtual state formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carvalho, Claudia R.C. de; Varella, Marcio T. do N; Lima, Marco A.P.

    2003-12-01

    We present calculated elastic differential cross sections for positron-acetylene scattering, obtained by using the Schwinger multichannel method. Our results are in very good agreement with the quasielastic experimental data of Kauppila et al. [Nucl. Instrum. Meth. Phys. Res. B 192, 162 (2002)]. We also discuss the existence of a virtual state (zero-energy resonance) in e⁺-C₂H₂ collisions, based on the behavior of the integral cross section and of the s-wave phase shift. As expected, the fixed-nuclei cross section and annihilation parameter (Z_eff) present the same energy dependence at very low impact energies. As the virtual state energy approaches zero, the magnitudes of both the cross section and Z_eff are extremely enhanced (at zero impact energy). The possibility of shifting from a low-lying virtual state to a shallow bound state is not expected to significantly affect room-temperature annihilation rates.

  9. Lifted worm algorithm for the Ising model

    NASA Astrophysics Data System (ADS)

    Elçi, Eren Metin; Grimm, Jens; Ding, Lijie; Nasrawi, Abrahim; Garoni, Timothy M.; Deng, Youjin

    2018-04-01

    We design an irreversible worm algorithm for the zero-field ferromagnetic Ising model by using the lifting technique. We study the dynamic critical behavior of an energylike observable on both the complete graph and toroidal grids, and compare our findings with reversible algorithms such as the Prokof'ev-Svistunov worm algorithm. Our results show that the lifted worm algorithm improves the dynamic exponent of the energylike observable on the complete graph and leads to a significant constant improvement on toroidal grids.

  10. Multi-Parent Clustering Algorithms from Stochastic Grammar Data Models

    NASA Technical Reports Server (NTRS)

    Mjolsness, Eric; Castano, Rebecca; Gray, Alexander

    1999-01-01

    We introduce a statistical data model and an associated optimization-based clustering algorithm which allows data vectors to belong to zero, one or several "parent" clusters. For each data vector the algorithm makes a discrete decision among these alternatives. Thus, a recursive version of this algorithm would place data clusters in a Directed Acyclic Graph rather than a tree. We test the algorithm with synthetic data generated according to the statistical data model. We also illustrate the algorithm using real data from large-scale gene expression assays.

  11. Vital sign sensing method based on EMD in terahertz band

    NASA Astrophysics Data System (ADS)

    Xu, Zhengwu; Liu, Tong

    2014-12-01

    Non-contact detection of respiration and heartbeat rates could be applied to find survivors trapped in disasters or to remotely monitor the respiration and heartbeat of a patient. This study presents an improved algorithm that extracts the respiration and heartbeat rates of humans by utilizing terahertz radar; it further lessens the effects of noise, suppresses the cross-term, and enhances the detection accuracy. A human target echo model for the terahertz radar is first presented. Combining the over-sampling method, a low-pass filter, and Empirical Mode Decomposition improves the signal-to-noise ratio. The smoothed pseudo Wigner-Ville distribution time-frequency technique and the centroid of the spectrogram are used to estimate the instantaneous velocity of the target's cardiopulmonary motion. The down-sampling method is adopted to prevent serious distortion. Finally, a second time-frequency analysis is applied to the centroid curve to extract the respiration and heartbeat rates of the individual. Simulation results show that, compared with the previously presented vital sign sensing method, the improved algorithm enhances the signal-to-noise ratio to 1 dB with a detection accuracy of 80%. The improved algorithm is an effective approach for the detection of respiration and heartbeat signals in a complicated environment.

  12. Absence of asymptomatic cases of malaria in a historically endemic indigenous locality of the Department of Caaguazú, Paraguay: moving toward elimination.

    PubMed

    Barrios, Eugenia Duarte de; Russomando, Graciela; Puerto, Florencia Del

    2016-01-01

    Paraguay was among the 16 countries that reported zero indigenous malaria cases in 2014. A cross-sectional observational descriptive study was performed in 100 adults from Santa Teresa, Paraguay. Parasite detection was carried out using seminested multiplex polymerase chain reaction (PCR) and microscopy. Among the participants, 44% were female and 56% were male, and 89% had a malaria history. No parasites were detected with either of the methods. There were no asymptomatic cases in Santa Teresa, and this finding is very promising. A longitudinal study should be performed to confirm that there are no asymptomatic cases in this locality.

  13. Analyzing powers in the three-body break-up reactions from polarized ³He + ²H

    NASA Astrophysics Data System (ADS)

    Okumuşoǧlu, Nazmi T.; Basak, A. K.; Blyth, C. O.

    1980-11-01

    Analyzing powers and cross sections of the ²H(³He, pp)³H, ²H(³He, pt)¹H, and ²H(³He, p³He)n reactions, measured with a polarized ³He beam, have been determined as a function of the energy of the detected proton. Two of the outgoing particles were identified and detected in coincidence in several forward-angle geometries. The analyzing powers found were generally small but non-zero; however, at kinematical conditions favoring the final-state interactions between proton and triton (or neutron and helion), large values up to 0.4 were found. The results are discussed with respect to the level structure of ⁴He.

  14. Online Solution of Two-Player Zero-Sum Games for Continuous-Time Nonlinear Systems With Completely Unknown Dynamics.

    PubMed

    Fu, Yue; Chai, Tianyou

    2016-12-01

    Regarding two-player zero-sum games of continuous-time nonlinear systems with completely unknown dynamics, this paper presents an online adaptive algorithm for learning the Nash equilibrium solution, i.e., the optimal policy pair. First, for known systems, the simultaneous policy updating algorithm (SPUA) is reviewed. A new analytical method to prove the convergence is presented. Then, based on the SPUA, without using a priori knowledge of any system dynamics, an online algorithm is proposed to simultaneously learn in real time either the minimal nonnegative solution of the Hamilton-Jacobi-Isaacs (HJI) equation or the generalized algebraic Riccati equation for linear systems as a special case, along with the optimal policy pair. The approximate solution to the HJI equation and the admissible policy pair is reexpressed by the approximation theorem. The unknown constants or weights of each are identified simultaneously by resorting to the recursive least square method. The convergence of the online algorithm to the optimal solutions is provided. A practical online algorithm is also developed. Simulation results illustrate the effectiveness of the proposed method.

  15. Improved multivariate polynomial factoring algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P.S.

    1978-10-01

    A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem, and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described; basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included.

  16. Synchronous Controlled Switching by VCB with Electromagnetic Operation Mechanism

    NASA Astrophysics Data System (ADS)

    Horinouchi, Katsuhiko; Tsukima, Mitsuru; Tohya, Nobumoto; Inoue, Ryuuichi; Sasao, Hiroyuki

    Synchronously controlled switching to suppress the transient overvoltage and overcurrent that result when circuit breakers on medium-voltage systems are closed is described. First, it is found by simulation that if the closing time is synchronously controlled so that the contacts of the circuit breaker close completely at the instant when the voltage across the contacts at each of the three individual phases is zero, the resulting overvoltage and overcurrent are significantly suppressed compared to conventional three-phase simultaneous closing. Next, an algorithm for determining the closing timing, based on a voltage-zero waveform forecast from voltage sampling data, is presented. Finally, a 22 kV synchronous closing experiment utilizing a controller that implements the algorithm and a VCB with an electromagnetic operation mechanism is presented. The VCB was successfully closed at the zero point within a tolerance of 200 microseconds.
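
    A hedged sketch of the zero-forecasting step: fit a sinusoid of known line frequency to voltage samples by linear least squares, then solve for the next zero crossing. The 50 Hz frequency, sampling rate, and noise level are assumptions, not values from the paper.

    ```python
    # Hedged sketch: forecast the next voltage zero from noisy samples.
    import numpy as np

    F = 50.0                                       # assumed line frequency, Hz
    t = np.arange(0, 0.04, 1e-4)                   # two cycles of samples
    rng = np.random.default_rng(4)
    v = 100 * np.sin(2 * np.pi * F * t + 0.7) + rng.standard_normal(t.size)

    # v ~ a*sin(wt) + b*cos(wt) is linear in (a, b).
    w = 2 * np.pi * F
    X = np.column_stack([np.sin(w * t), np.cos(w * t)])
    a, b = np.linalg.lstsq(X, v, rcond=None)[0]
    phase = np.arctan2(b, a)                       # v = R*sin(w*t + phase)
    # Zeros occur where w*t + phase = k*pi; pick the first one after the data end.
    k = np.ceil((w * t[-1] + phase) / np.pi)
    t_zero = (k * np.pi - phase) / w
    print(f"next zero crossing at {t_zero * 1e3:.3f} ms")
    ```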

  17. Improvement of correlation-based centroiding methods for point source Shack-Hartmann wavefront sensor

    NASA Astrophysics Data System (ADS)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2018-03-01

    This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.

  18. Zero-crossing sampling of Fourier-transform interferograms and spectrum reconstruction using the real-zero interpolation method.

    PubMed

    Minami, K; Kawata, S; Minami, S

    1992-10-10

    The real-zero interpolation method is applied to a Fourier-transform infrared (FT-IR) interferogram. With this method an interferogram is reconstructed from its zero-crossing information only, without the use of a long-word analog-to-digital converter. We installed a phase-locked loop circuit into an FT-IR spectrometer for oversampling the interferogram. Infrared absorption spectra of polystyrene and Mylar films were measured as binary interferograms by the FT-IR spectrometer, which was equipped with the developed circuits, and their Fourier spectra were successfully reconstructed. The relationship of the oversampling ratio to the dynamic range of the reconstructed interferogram was evaluated through computer simulations. We also discuss the problems of this method for practical applications.
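
    Only the information-gathering step is sketched below: extracting sub-sample zero-crossing instants from an oversampled waveform by linear interpolation between samples of opposite sign. The reconstruction step (real-zero interpolation proper) is not shown, and the toy interferogram is an assumption.

    ```python
    # Hedged sketch: sub-sample zero-crossing instants of an oversampled signal.
    import numpy as np

    def crossing_times(t, x):
        idx = np.nonzero(np.signbit(x[:-1]) != np.signbit(x[1:]))[0]
        frac = x[idx] / (x[idx] - x[idx + 1])        # linear interpolation
        return t[idx] + frac * (t[idx + 1] - t[idx])

    t = np.linspace(0, 1, 2000)                      # heavy oversampling
    x = np.cos(2 * np.pi * 5 * t) * np.exp(-3 * t)   # toy damped interferogram
    print(crossing_times(t, x)[:4])                  # near 0.05, 0.15, 0.25, ...
    ```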

  19. Predicting crash-relevant violations at stop sign-controlled intersections for the development of an intersection driver assistance system.

    PubMed

    Scanlon, John M; Sherony, Rini; Gabler, Hampton C

    2016-09-01

    Intersection crashes resulted in over 5,000 fatalities in the United States in 2014. Intersection Advanced Driver Assistance Systems (I-ADAS) are active safety systems that seek to help drivers safely traverse intersections. I-ADAS uses onboard sensors to detect oncoming vehicles and, in the event of an imminent crash, can either alert the driver or take autonomous evasive action. The objective of this study was to develop and evaluate a predictive model for detecting whether a stop sign violation was imminent. Passenger vehicle intersection approaches were extracted from a data set of typical driver behavior (100-Car Naturalistic Driving Study) and violations (event data recorders downloaded from real-world crashes) and were assigned weighting factors based on real-world frequency. A k-fold cross-validation procedure was then used to develop and evaluate three hypothetical stop sign warning algorithms (early, intermediate, and delayed) for detecting an impending violation during the intersection approach. Violation detection models were developed using logistic regression models that evaluate the likelihood of a violation at various locations along the intersection approach. Two potential indicators of driver intent to stop, the required deceleration parameter (RDP) and brake application, were used to develop the predictive models. The earliest violation detection opportunity was then evaluated for each detection algorithm in order to (1) evaluate violation detection accuracy and (2) compare braking demand with maximum braking capabilities. A total of 38 violating and 658 nonviolating approaches were used in the analysis. All three algorithms were able to detect a violation at some point during the intersection approach. The early detection algorithm, as designed, detected violations earlier than the other algorithms during the intersection approach but gave false alarms for 22.3% of approaches. In contrast, the delayed detection algorithm sacrificed some detection time but reduced false alarms to only 3.3% of all nonviolating approaches. Given good surface conditions (maximum braking capability = 0.8 g) and maximum effort, most drivers (55.3-71.1%) would be able to stop the vehicle regardless of the detection algorithm. However, given poor surface conditions (maximum braking capability = 0.4 g), few drivers (10.5-26.3%) would be able to stop the vehicle. Automatic emergency braking (AEB) would allow for braking prior to driver reaction; if equipped with an AEB system, the results suggest that, even for the poor surface conditions scenario, over one half (55.3-65.8%) of the vehicles could have been stopped. This study demonstrates the potential of I-ADAS to incorporate a stop sign violation detection algorithm. Repeating the analysis on a larger, more extensive data set will allow for the development of a more comprehensive algorithm to further validate the findings.
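
    As a hedged sketch, one common formulation of the required deceleration parameter is the constant deceleration, in units of g, needed to stop at the stop bar from the current speed; the abstract does not define RDP explicitly, and the numbers and threshold below are illustrative.

    ```python
    # Hedged sketch of an RDP-style feature for violation detection.
    G = 9.81  # gravitational acceleration, m/s^2

    def rdp(speed_ms, dist_to_stop_bar_m):
        """Required deceleration to stop at the stop bar, as a fraction of g."""
        return speed_ms ** 2 / (2 * dist_to_stop_bar_m * G)

    # A 15 m/s approach at 20 m from the stop bar needs ~0.57 g, already beyond
    # a comfortable ~0.4 g braking level, so a warning may be warranted.
    print(f"RDP = {rdp(15.0, 20.0):.2f} g")
    ```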

  20. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems: Applications to Machine Learning and Computer Vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.

  1. Feasibility Study of a Generalized Framework for Developing Computer-Aided Detection Systems-a New Paradigm.

    PubMed

    Nemoto, Mitsutaka; Hayashi, Naoto; Hanaoka, Shouhei; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu

    2017-10-01

    We propose a generalized framework for developing computer-aided detection (CADe) systems whose characteristics depend only on those of the training dataset. The purpose of this study is to show the feasibility of the framework. Two different CADe systems were experimentally developed with a prototype of the framework, using different training datasets. The CADe systems include four components: preprocessing, candidate area extraction, candidate detection, and candidate classification. Four pretrained algorithms with dedicated optimization/setting methods corresponding to the respective components were prepared in advance. The pretrained algorithms were sequentially trained in the order of processing of the components. In this study, two different datasets, brain MRA with cerebral aneurysms and chest CT with lung nodules, were collected to develop two different types of CADe systems within the framework. The performance of the developed CADe systems was evaluated by threefold cross-validation. The CADe systems for detecting cerebral aneurysms in brain MRAs and for detecting lung nodules in chest CTs were successfully developed using the respective datasets. The framework was shown to be feasible by the successful development of the two different types of CADe systems. The feasibility of this framework shows promise for a new paradigm in the development of CADe systems: development of CADe systems without any lesion-specific algorithm design.

  2. Absolute phase estimation: adaptive local denoising and global unwrapping.

    PubMed

    Bioucas-Dias, Jose; Katkovnik, Vladimir; Astola, Jaakko; Egiazarian, Karen

    2008-10-10

    The paper attacks absolute phase estimation with a two-step approach: the first step applies an adaptive local denoising scheme to the modulo-2π noisy phase; the second step applies a robust phase unwrapping algorithm to the denoised modulo-2π phase obtained in the first step. The adaptive local modulo-2π phase denoising is a new algorithm based on local polynomial approximations. The zero-order and first-order approximations of the phase are calculated in sliding windows of varying size. The zero-order approximation is used for pointwise adaptive window size selection, whereas the first-order approximation is used to filter the phase in the obtained windows. For phase unwrapping, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [IEEE Trans. Image Process. 16, 698 (2007)] to the denoised wrapped phase. Simulations give evidence that the proposed algorithm yields state-of-the-art performance, enabling strong noise attenuation while preserving image details.

  3. The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic.

    PubMed

    Li, Ning; Martínez, José-Fernán; Hernández Díaz, Vicente

    2015-08-10

    Recently, cross-layer design for wireless sensor network communication protocols has become more and more important and popular. Considering the disadvantages of traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters' dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion: to obtain a balanced solution, a parameter whose dispersion is large receives a small weight, and vice versa. In order to compare it with traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle multiple constraints without increasing the complexity of the algorithm and achieves the most balanced performance in selecting the next-hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm can adapt effectively to dynamic changes of the network conditions and topology.

  4. The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic

    PubMed Central

    Li, Ning; Martínez, José-Fernán; Díaz, Vicente Hernández

    2015-01-01

    Recently, cross-layer design for wireless sensor network communication protocols has become more and more important and popular. Considering the disadvantages of traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters' dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion: to obtain a balanced solution, a parameter whose dispersion is large receives a small weight, and vice versa. In order to compare it with traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle multiple constraints without increasing the complexity of the algorithm and achieves the most balanced performance in selecting the next-hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm can adapt effectively to dynamic changes of the network conditions and topology. PMID:26266412

  5. Effect of window length on performance of the elbow-joint angle prediction based on electromyography

    NASA Astrophysics Data System (ADS)

    Triwiyanto; Wahyunggoro, Oyas; Adi Nugroho, Hanung; Herianto

    2017-05-01

    High performance of elbow-joint angle prediction is essential for the development of devices based on electromyography (EMG) control. The performance of the prediction depends on feature extraction parameters such as the window length. In this paper, we evaluated the effect of the window length on the performance of elbow-joint angle prediction. The prediction algorithm consists of a zero-crossing feature extraction stage and a second-order Butterworth low-pass filter. The feature was extracted from the EMG signal using varying window lengths. The EMG signal was collected from the biceps muscle while the elbow was moved in flexion and extension. The subject performed the elbow motion holding a 1-kg load and moved the elbow with different periods (12 seconds, 8 seconds and 6 seconds). The results indicated that the window length affected the performance of the prediction. A window length of 250 yielded the best performance of the prediction algorithm, with (mean ± SD) root mean square error = 5.68% ± 1.53% and Pearson's correlation = 0.99 ± 0.0059.
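
    A minimal sketch of the zero-crossing feature over sliding windows: count sign changes per window, with a small deadband to suppress noise jitter. The 250 window length echoes the result quoted above (assumed here to mean 250 samples); the sampling rate, deadband, and toy signal are assumptions.

    ```python
    # Hedged sketch: zero-crossing count as a windowed EMG feature.
    import numpy as np

    def zero_crossings(emg, window=250, deadband=0.01):
        feats = []
        for start in range(0, len(emg) - window + 1, window):
            seg = emg[start:start + window]
            sign_change = np.diff(np.signbit(seg).astype(np.int8)) != 0
            big_enough = np.abs(np.diff(seg)) > deadband   # suppress tiny jitter
            feats.append(int(np.sum(sign_change & big_enough)))
        return np.array(feats)

    rng = np.random.default_rng(5)
    t = np.arange(2000) / 1000.0                            # 1 kHz for 2 s
    emg = np.sin(2 * np.pi * 40 * t) * (0.2 + 0.8 * (t > 1.0))  # activity grows
    print(zero_crossings(emg + 0.005 * rng.standard_normal(t.size)))
    ```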

  6. A robust algorithm for automated target recognition using precomputed radar cross sections

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2004-09-01

    Passive radar is an emerging technology that offers a number of unique benefits, including covert operation. Many such systems are already capable of detecting and tracking aircraft. The goal of this work is to develop a robust algorithm for adding automated target recognition (ATR) capabilities to existing passive radar systems. In previous papers, we proposed conducting ATR by comparing the precomputed radar cross section (RCS) of known targets to that of detected targets. To make the precomputed RCS as accurate as possible, a coordinated flight model is used to estimate aircraft orientation. Once the aircraft's position and orientation are known, it is possible to determine the incident and observed angles on the aircraft, relative to the transmitter and receiver. This makes it possible to extract the appropriate RCS from our simulated database. This RCS is then scaled to account for propagation losses and the receiver's antenna gain. A Rician likelihood model compares these expected signals from different targets to the received target profile. We have previously employed Monte Carlo runs to gauge the probability of error in the ATR algorithm; however, generation of a statistically significant set of Monte Carlo runs is computationally intensive. As an alternative, we derive the relative entropy (also known as the Kullback-Leibler distance) between two Rician distributions. Since the probability of Type II error in our hypothesis testing problem can be expressed as a function of the relative entropy via Stein's Lemma, this provides a computationally efficient method for determining an upper bound on the algorithm's performance. It also provides great insight into the types of classification errors we can expect. This paper compares the numerically approximated probability of Type II error with the results obtained from a set of Monte Carlo runs.

  7. Symmetrical group theory for mathematical complexity reduction of digital holograms

    NASA Astrophysics Data System (ADS)

    Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.

    2017-10-01

    This work presents the use of mathematical group theory, through an algorithm, to reduce the multiplicative computational complexity of creating digital holograms. An object is considered as a set of point sources, using the mathematical symmetry properties of both the kernel of the Fresnel integral and the image, where the image is modeled using group theory. The algorithm has multiplicative complexity equal to zero and additive complexity of (k - 1) × N for sparse matrices and binary images, where k is the number of nonzero pixels and N is the total number of points in the image.

  8. Robust Adaptive Dynamic Programming of Two-Player Zero-Sum Games for Continuous-Time Linear Systems.

    PubMed

    Fu, Yue; Fu, Jun; Chai, Tianyou

    2015-12-01

    In this brief, an online robust adaptive dynamic programming algorithm is proposed for two-player zero-sum games of continuous-time unknown linear systems with matched uncertainties, which are functions of system outputs and states of a completely unknown exosystem. The online algorithm is developed using the policy iteration (PI) scheme with only one iteration loop. A new analytical method is proposed for convergence proof of the PI scheme. The sufficient conditions are given to guarantee globally asymptotic stability and suboptimal property of the closed-loop system. Simulation studies are conducted to illustrate the effectiveness of the proposed method.

  9. Mutual Information in Frequency and Its Application to Measure Cross-Frequency Coupling in Epilepsy

    NASA Astrophysics Data System (ADS)

    Malladi, Rakesh; Johnson, Don H.; Kalamangalam, Giridhar P.; Tandon, Nitin; Aazhang, Behnaam

    2018-06-01

    We define a metric, mutual information in frequency (MI-in-frequency), to detect and quantify the statistical dependence between different frequency components in data, referred to as cross-frequency coupling, and apply it to electrophysiological recordings from the brain to infer cross-frequency coupling. The current metrics used to quantify cross-frequency coupling in neuroscience cannot detect whether two frequency components in non-Gaussian brain recordings are statistically independent or not. Our MI-in-frequency metric, based on Shannon's mutual information between the Cramér representations of stochastic processes, overcomes this shortcoming and can detect statistical dependence in frequency between non-Gaussian signals. We then describe two data-driven estimators of MI-in-frequency, one based on kernel density estimation and the other based on the nearest-neighbor algorithm, and validate their performance on simulated data. We then use MI-in-frequency to estimate mutual information between two data streams that are dependent across time, without making any parametric model assumptions. Finally, we use the MI-in-frequency metric to investigate cross-frequency coupling in the seizure onset zone from electrocorticographic recordings during seizures. The inferred cross-frequency coupling characteristics are essential to optimize the spatial and spectral parameters of electrical-stimulation-based treatments of epilepsy.

  10. Automatic detection of multi-level acetowhite regions in RGB color images of the uterine cervix

    NASA Astrophysics Data System (ADS)

    Lange, Holger

    2005-04-01

    Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method used to detect cancer precursors and cancer of the uterine cervix, whereby a physician (colposcopist) visually inspects the metaplastic epithelium on the cervix for certain distinctly abnormal morphologic features. A contrast agent, a 3-5% acetic acid solution, is used, causing abnormal and metaplastic epithelia to turn white. The colposcopist considers diagnostic features such as the acetowhite, blood vessel structure, and lesion margin to derive a clinical diagnosis. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD, a complex image analysis system that at its core assesses the same visual features as used by colposcopists. The acetowhite feature has been identified as one of the most important individual predictors of lesion severity. Here, we present the details and preliminary results of a multi-level acetowhite region detection algorithm for RGB color images of the cervix, including the detection of the anatomic features: cervix, os and columnar region, which are used for the acetowhite region detection. The RGB images are assumed to be glare free, either obtained by cross-polarized image acquisition or glare removal pre-processing. The basic approach of the algorithm is to extract a feature image from the RGB image that provides a good acetowhite to cervix background ratio, to segment the feature image using novel pixel grouping and multi-stage region-growing algorithms that provide region segmentations with different levels of detail, to extract the acetowhite regions from the region segmentations using a novel region selection algorithm, and then finally to extract the multi-levels from the acetowhite regions using multiple thresholds. The performance of the algorithm is demonstrated using human subject data.

  11. Detection of nasopharyngeal cancer using confocal Raman spectroscopy and genetic algorithm technique

    NASA Astrophysics Data System (ADS)

    Li, Shao-Xin; Chen, Qiu-Yan; Zhang, Yan-Jiao; Liu, Zhi-Ming; Xiong, Hong-Lian; Guo, Zhou-Yi; Mai, Hai-Qiang; Liu, Song-Hao

    2012-12-01

    Raman spectroscopy (RS) and a genetic algorithm (GA) were applied to distinguish nasopharyngeal cancer (NPC) from normal nasopharyngeal tissue. A total of 225 Raman spectra were acquired from 120 tissue sites of 63 nasopharyngeal patients: 56 Raman spectra from normal tissue and 169 Raman spectra from NPC tissue. A GA integrated with linear discriminant analysis (LDA) was developed to differentiate NPC and normal tissue according to spectral variables in the selected regions of 792-805, 867-880, 996-1009, 1086-1099, 1288-1304, 1663-1670, and 1742-1752 cm-1, related to proteins, nucleic acids and lipids of tissue. The GA-LDA algorithm with the leave-one-out cross-validation method provides a sensitivity of 69.2% and specificity of 100%. These results are better than those of principal component analysis applied to the same Raman dataset of nasopharyngeal tissue, which gives a sensitivity of 63.3% and specificity of 94.6%. This demonstrates that Raman spectroscopy combined with the GA-LDA diagnostic algorithm has enormous potential to detect and diagnose nasopharyngeal cancer.
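    The GA search itself is involved, but the LDA-plus-leave-one-out validation step maps directly onto standard tooling. A hedged scikit-learn sketch, with random stand-in data in place of the GA-selected Raman band features:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# X would hold intensities in the GA-selected wavenumber bands; here we use
# random stand-in data, and the GA band search itself is omitted.
rng = np.random.default_rng(1)
X = rng.normal(size=(225, 7))         # 225 spectra, 7 selected band features
y = rng.integers(0, 2, size=225)      # 0 = normal, 1 = NPC (stand-in labels)

# Leave-one-out cross-validated predictions from the LDA classifier.
pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```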

  12. Spectral areas and ratios classifier algorithm for pancreatic tissue classification using optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Chandra, Malavika; Scheiman, James; Simeone, Diane; McKenna, Barbara; Purdy, Julianne; Mycek, Mary-Ann

    2010-01-01

    Pancreatic adenocarcinoma is one of the leading causes of cancer death, in part because of the inability of current diagnostic methods to reliably detect early-stage disease. We present the first assessment of the diagnostic accuracy of algorithms developed for pancreatic tissue classification using data from fiber-optic probe-based bimodal optical spectroscopy, a real-time approach that would be compatible with minimally invasive diagnostic procedures for early cancer detection in the pancreas. A total of 96 fluorescence and 96 reflectance spectra are considered from 50 freshly excised tissue sites, including human pancreatic adenocarcinoma, chronic pancreatitis (inflammation), and normal tissues, on nine patients. Classification algorithms using linear discriminant analysis are developed to distinguish among tissues, and leave-one-out cross-validation is employed to assess the classifiers' performance. The spectral areas and ratios classifier (SpARC) algorithm employs a combination of reflectance and fluorescence data and has the best performance, with sensitivity, specificity, negative predictive value, and positive predictive value for correctly identifying adenocarcinoma being 85, 89, 92, and 80%, respectively.

  13. Implementation of several mathematical algorithms to breast tissue density classification

    NASA Astrophysics Data System (ADS)

    Quintana, C.; Redondo, M.; Tirao, G.

    2014-02-01

    The accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics, since dense breast tissue can hide lesions, causing cancer to be detected at later stages. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. This paper presents the implementation and performance of different mathematical algorithms designed to standardize the categorization of mammographic images according to the American College of Radiology classifications. These mathematical techniques are based on calculations of intrinsic properties and on comparison with an ideal homogeneous image, using joint entropy, mutual information, normalized cross correlation and the index Q as categorization parameters. The algorithms were evaluated on 100 cases from the mammographic data sets provided by the Ministerio de Salud de la Provincia de Córdoba, Argentina—Programa de Prevención del Cáncer de Mama (Department of Public Health, Córdoba, Argentina, Breast Cancer Prevention Program). The obtained breast classifications were compared with expert medical diagnoses, showing good performance. The implemented algorithms revealed high potential to classify breasts into tissue density categories.
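    Two of the named categorization parameters, normalized cross correlation and mutual information, can be sketched in a few lines of numpy; the histogram binning and the near-homogeneous reference below are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def mutual_information(a, b, bins=64):
    """Mutual information from a joint grey-level histogram (a common
    estimator; the paper's exact binning is not given in the abstract)."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])))

# Stand-in mammogram patch vs. a near-homogeneous reference image.
rng = np.random.default_rng(2)
img = rng.normal(0.5, 0.1, (128, 128))
ref = np.full_like(img, img.mean()) + rng.normal(0, 1e-3, img.shape)
print(ncc(img, ref), mutual_information(img, ref))
```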

  14. Generation of three-dimensional Delaunay meshes from weakly structured and inconsistent data

    NASA Astrophysics Data System (ADS)

    Garanzha, V. A.; Kudryavtseva, L. N.

    2012-03-01

    A method is proposed for the generation of three-dimensional tetrahedral meshes from incomplete, weakly structured, and inconsistent data describing a geometric model. The method is based on the construction of a piecewise smooth scalar function defining the body so that its boundary is the zero isosurface of the function. Such an implicit description of three-dimensional domains can be defined analytically or can be constructed from a cloud of points, a set of cross sections, or a "soup" of individual vertices, edges, and faces. By applying Boolean operations over domains, simple primitives can be combined with reconstruction results to produce complex geometric models without resorting to specialized software. Sharp edges and conical vertices on the domain boundary are reproduced automatically without using special algorithms.

  15. Bayesian longitudinal segmentation of hippocampal substructures in brain MRI using subject-specific atlases.

    PubMed

    Iglesias, Juan Eugenio; Van Leemput, Koen; Augustinack, Jean; Insausti, Ricardo; Fischl, Bruce; Reuter, Martin

    2016-11-01

    The hippocampal formation is a complex, heterogeneous structure that consists of a number of distinct, interacting subregions. Atrophy of these subregions is implicated in a variety of neurodegenerative diseases, most prominently in Alzheimer's disease (AD). Thanks to the increasing resolution of MR images and computational atlases, automatic segmentation of hippocampal subregions is becoming feasible in MRI scans. Here we introduce a generative model for dedicated longitudinal segmentation that relies on subject-specific atlases. The segmentations of the scans at the different time points are jointly computed using Bayesian inference. All time points are treated the same to avoid processing bias. We evaluate this approach using over 4700 scans from two publicly available datasets (ADNI and MIRIAD). In test-retest reliability experiments, the proposed method yielded significantly lower volume differences and significantly higher Dice overlaps than the cross-sectional approach for nearly every subregion (average across subregions: 4.5% vs. 6.5%; Dice overlap: 81.8% vs. 75.4%). The longitudinal algorithm also demonstrated increased sensitivity to group differences: in MIRIAD (69 subjects: 46 with AD and 23 controls), it found differences in atrophy rates between AD and controls that the cross-sectional method could not detect in a number of subregions: right parasubiculum, left and right presubiculum, right subiculum, left dentate gyrus, left CA4, left HATA and right tail. In ADNI (836 subjects: 369 with AD, 215 with early mild cognitive impairment (eMCI) and 252 controls), all methods found significant differences between AD and controls, but the proposed longitudinal algorithm detected differences between controls and eMCI and differences between eMCI and AD that the cross-sectional method could not find: left presubiculum, right subiculum, left and right parasubiculum, left and right HATA. Moreover, many of the differences that the cross-sectional method already found were detected with higher significance. The presented algorithm will be made available as part of the open-source neuroimaging package FreeSurfer. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problems of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on a parallax constraint and clustering analysis is proposed. Firstly, the Harris corner detection algorithm is used to extract the feature points of the two images. Secondly, a Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes the feature point pairs with obvious errors introduced in the approximate matching process. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final feature point matching result, realizing fast and accurate image registration. The experimental results show that the proposed image registration algorithm can improve the accuracy of image matching while ensuring real-time performance.
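    A hedged sketch of the parallax-constraint filtering step: tentative matches whose displacement vectors fall outside the dominant K-means cluster are discarded before RANSAC. The cluster count and the "keep the largest cluster" rule are assumptions, since the abstract does not spell out the exact constraint.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def filter_matches_by_parallax(pts1, pts2, n_clusters=3):
    """Cluster the displacement vectors of tentative matches and keep only
    the dominant cluster, assuming correct matches share a consistent
    parallax while gross mismatches scatter (simplified reading of the
    algorithm described above)."""
    disp = pts2 - pts1                                  # (N, 2) displacements
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(disp)
    keep = labels == np.bincount(labels).argmax()       # largest cluster wins
    return pts1[keep], pts2[keep]

# pts1, pts2: tentative NCC matches (random stand-in data here).
rng = np.random.default_rng(3)
pts1 = rng.uniform(0, 500, (200, 2)).astype(np.float32)
pts2 = pts1 + np.float32([12, 3]) + rng.normal(0, 1, (200, 2)).astype(np.float32)
p1, p2 = filter_matches_by_parallax(pts1, pts2)
H, inliers = cv2.findHomography(p1, p2, cv2.RANSAC, 3.0)  # final RANSAC step
print(H)
```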

  17. Determining the bias and variance of a deterministic finger-tracking algorithm.

    PubMed

    Morash, Valerie S; van der Velden, Bas H M

    2016-06-01

    Finger tracking has the potential to expand haptic research and applications, as eye tracking has done in vision research. In research applications, it is desirable to know the bias and variance associated with a finger-tracking method. However, assessing the bias and variance of a deterministic method is not straightforward. Multiple measurements of the same finger position data will not produce different results, implying zero variance. Here, we present a method of assessing deterministic finger-tracking variance and bias through comparison to a non-deterministic measure. A proof-of-concept is presented using a video-based finger-tracking algorithm developed for the specific purpose of tracking participant fingers during a psychological research study. The algorithm uses ridge detection on videos of the participant's hand, and estimates the location of the right index fingertip. The algorithm was evaluated using data from four participants, who explored tactile maps using only their right index finger and all right-hand fingers. The algorithm identified the index fingertip in 99.78% of one-finger video frames and 97.55% of five-finger video frames. Although the algorithm produced slightly biased and more dispersed estimates relative to a human coder, these differences (x = 0.08 cm, y = 0.04 cm) and standard deviations (σx = 0.16 cm, σy = 0.21 cm) were small compared to the size of a fingertip (1.5-2.0 cm). Some example finger-tracking results are provided where corrections are made using the bias and variance estimates.

  18. Category-Specific Comparison of Univariate Alerting Methods for Biosurveillance Decision Support

    PubMed Central

    Elbert, Yevgeniy; Hung, Vivian; Burkom, Howard

    2013-01-01

    Objective: For a multi-source decision support application, we sought to match univariate alerting algorithms to surveillance data types to optimize detection performance.

    Introduction: Temporal alerting algorithms commonly used in syndromic surveillance systems are often adjusted for data features such as cyclic behavior but are subject to overfitting or misspecification errors when applied indiscriminately. In a project for the Armed Forces Health Surveillance Center to enable multivariate decision support, we obtained 4.5 years of outpatient, prescription and laboratory test records from all US military treatment facilities. A proof-of-concept project phase produced 16 events with multiple-evidence corroboration for comparison of alerting algorithms for detection performance. We used representative streams from each data source to compare the sensitivity of six algorithms to injected spikes, and we used all data streams from the 16 known events to compare them for detection timeliness.

    Methods: The six methods compared were: (1) the Holt-Winters generalized exponential smoothing method; (2) automated choice between daily methods, regression and an exponentially weighted moving average (EWMA); (3) an adaptive daily Shewhart-type chart; (4) an adaptive one-sided daily CUSUM; (5) an EWMA applied to 7-day means with a trend correction; and (6) a 7-day temporal scan statistic.

    Sensitivity testing: We conducted comparative sensitivity testing for categories of time series with similar scales and seasonal behavior. We added multiples of the standard deviation of each time series as single-day injects in separate algorithm runs. For each candidate method, we then used as a sensitivity measure the proportion of these runs for which the output of each algorithm was below alerting thresholds estimated empirically for each algorithm using simulated data streams. We identified the algorithm(s) whose sensitivity was most consistently high for each data category. For each syndromic query applied to each data source (outpatient, lab test orders, and prescriptions), 502 authentic time series were derived, one for each reporting treatment facility. Data categories were selected in order to group time series with similar expected algorithm performance: Median > 10; 0 < Median ≤ 10; Median = 0; lag-7 autocorrelation coefficient ≥ 0.2; lag-7 autocorrelation coefficient < 0.2.

    Timeliness testing: For the timeliness testing, we avoided the artificiality of simulated signals by measuring alerting detection delays in the 16 corroborated outbreaks. The multiple time series from these events gave a total of 141 time series with outbreak intervals for timeliness testing. The following measures were computed to quantify timeliness of detection: Median Detection Delay, the median number of days to detect the outbreak; and Penalized Mean Detection Delay, the mean number of days to detect the outbreak, with outbreak misses penalized as 1 day plus the maximum detection time.

    Results: Based on the injection results, the Holt-Winters algorithm was most sensitive among time series with positive medians. The adaptive CUSUM and the Shewhart methods were most sensitive for data streams with median zero. Table 1 provides timeliness results using the 141 outbreak-associated streams on sparse (Median = 0) and non-sparse data categories.

    Table 1. Detection delay (days) by data median:

                                  Holt-Winters  Regression/EWMA  Adaptive Shewhart  Adaptive CUSUM  7-day Trend-adj. EWMA  7-day Temporal Scan
    Median = 0  Median                 3               2                 4                2                 4.5                    2
    Median = 0  Penalized mean         7.2             7                 6.6              6.2               7.3                    7.6
    Median > 0  Median                 2               2                 2.5              2                 6                      4
    Median > 0  Penalized mean         6.1             7                 7.2              7.1               7.7                    6.6

    The Holt-Winters method was again superior for non-sparse data. For data with Median = 0, the adaptive CUSUM was superior at a daily false-alarm probability of 0.01, but the Shewhart method was timelier at more liberal thresholds.

    Conclusions: Both kinds of detection performance analysis showed the method based on Holt-Winters exponential smoothing to be superior on non-sparse time series with day-of-week effects. The adaptive CUSUM and Shewhart methods proved optimal on sparse data and data without weekly patterns.
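    As an illustration of one of the compared detector families, here is a minimal one-sided CUSUM over standardized daily counts with a trailing-window baseline; the reference value k, threshold h and baseline length are illustrative choices, not the study's tuned parameters.

```python
import numpy as np

def adaptive_cusum(counts, k=0.5, h=4.0, baseline=28):
    """One-sided CUSUM on standardized daily counts: alarm when the
    cumulative positive drift exceeds h. Baseline mean/SD come from a
    trailing window, a simplified stand-in for the adaptive variant."""
    counts = np.asarray(counts, dtype=float)
    s, alarms = 0.0, []
    for t in range(baseline, len(counts)):
        hist = counts[t - baseline:t]
        z = (counts[t] - hist.mean()) / (hist.std() + 1e-9)
        s = max(0.0, s + z - k)
        if s > h:
            alarms.append(t)
            s = 0.0                      # reset after an alarm
    return alarms

rng = np.random.default_rng(4)
series = rng.poisson(0.3, 200)           # sparse, median-zero daily stream
series[150:154] += 5                     # injected outbreak signal
print(adaptive_cusum(series))
```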

  19. Mass detection with digitized screening mammograms by using Gabor features

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Agyepong, Kwabena

    2007-03-01

    Breast cancer is the leading cancer among American women. The current lifetime risk of developing breast cancer is 13.4% (one in seven). Mammography is the most effective technology presently available for breast cancer screening. With digital mammograms, computer-aided detection (CAD) has proven to be a useful tool for radiologists. In this paper, we focus on mass detection; masses are a common category of breast cancer findings alongside calcifications and architectural distortion. We propose a new mass detection algorithm utilizing Gabor filters, termed "Gabor Mass Detection" (GMD). There are three steps in the GMD algorithm: (1) preprocessing, (2) generating alarms and (3) classification (reducing false alarms). Down-sampling, quantization, denoising and enhancement are done in the preprocessing step. Then a total of 30 Gabor-filtered images (6 bands by 5 orientations) are produced. Alarm segments are generated by thresholding four Gabor images of full orientations (Stage-I classification) with image-dependent thresholds computed via histogram analysis. Next, a set of edge histogram descriptors (EHD) is extracted from 24 Gabor images (6 by 4) for use in Stage-II classification. After clustering the EHD features with the fuzzy C-means clustering method, a k-nearest neighbor classifier is used to reduce the number of false alarms. We analyzed 431 digitized mammograms (159 normal images vs. 272 cancerous images, from the DDSM project, University of South Florida) with the proposed GMD algorithm, and a ten-fold cross-validation was used for testing the GMD algorithm on the available data. The GMD performance is as follows: sensitivity (true positive rate) = 0.88 at 1.25 false positives per image (FPI), and the area under the ROC curve = 0.83. The overall performance of the GMD algorithm is satisfactory, and the accuracy of locating masses (highlighting the boundaries of suspicious areas) is relatively high. Furthermore, the GMD algorithm can successfully detect early-stage malignant masses (with small values of Assessment and low Subtlety). In addition, Gabor-filtered images are used in both stages of classification, which greatly simplifies the GMD algorithm.
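    A minimal sketch of the 6-band-by-5-orientation Gabor decomposition that the GMD algorithm builds on, using OpenCV; the kernel size and band spacing are illustrative guesses, not the paper's values.

```python
import numpy as np
import cv2

def gabor_bank(image, n_scales=6, n_orients=5):
    """Filter an image with a 6-band x 5-orientation Gabor bank, mirroring
    the 30-image decomposition described above (parameters illustrative)."""
    responses = []
    for s in range(n_scales):
        lambd = 4.0 * (2 ** (s / 2.0))              # wavelength per band
        for o in range(n_orients):
            theta = o * np.pi / n_orients           # orientation in radians
            k = cv2.getGaborKernel((31, 31), sigma=0.56 * lambd,
                                   theta=theta, lambd=lambd, gamma=0.5, psi=0)
            responses.append(cv2.filter2D(image, cv2.CV_32F, k))
    return np.stack(responses)                      # (30, H, W) response stack

img = np.random.rand(256, 256).astype(np.float32)   # stand-in mammogram patch
print(gabor_bank(img).shape)
```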

  20. The digital step edge

    NASA Technical Reports Server (NTRS)

    Haralick, R. M.

    1982-01-01

    The facet model was used to accomplish step edge detection. The essence of the facet model is that any analysis made on the basis of the pixel values in some neighborhood has its final authoritative interpretation relative to the underlying grey tone intensity surface of which the neighborhood pixel values are observed noisy samples. Pixels that are part of regions have simple grey tone intensity surfaces over their areas. Pixels that have an edge in them have complex grey tone intensity surfaces over their areas. Specifically, an edge moves through a pixel only if there is some point in the pixel's area having a zero crossing of the second directional derivative taken in the direction of a non-zero gradient at the pixel's center. To determine whether or not a pixel should be marked as a step edge pixel, its underlying grey tone intensity surface was estimated on the basis of the pixels in its neighborhood.
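    The facet model fits a polynomial surface per neighborhood; as a simplified stand-in, the same zero-crossing test can be illustrated with a Laplacian-of-Gaussian operator gated by gradient magnitude (this is not Haralick's exact facet-model fit):

```python
import numpy as np
from scipy import ndimage

def log_zero_crossings(image, sigma=2.0, grad_thresh=0.0):
    """Mark pixels where the Laplacian of Gaussian changes sign and the
    gradient is non-negligible -- a standard zero-crossing edge detector
    in the spirit of the second-directional-derivative test above."""
    img = image.astype(float)
    log = ndimage.gaussian_laplace(img, sigma)
    zc = np.zeros(img.shape, dtype=bool)
    # Sign change against the right and lower neighbour.
    zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    gy, gx = np.gradient(ndimage.gaussian_filter(img, sigma))
    return zc & (np.hypot(gx, gy) > grad_thresh)

img = np.zeros((64, 64)); img[:, 32:] = 1.0      # vertical step edge
print(np.count_nonzero(log_zero_crossings(img, grad_thresh=1e-3)))
```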

  1. Research on Debonding Defects in Thermal Barrier Coatings Structure by Thermal-Wave Radar Imaging (TWRI)

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Liu, Junyan; Mohummad, Oliullah; Wang, Yang

    2018-06-01

    In this paper, thermal-wave radar imaging (TWRI) is introduced to detect debonding defects in SiC-coated Ni-based superalloy plates. A linear frequency-modulated (chirp) signal, which has a large time-bandwidth product, is used as the excitation signal. Artificial debonding defects in the SiC coating are excited by a laser beam whose intensity is modulated by the chirp signal. Cross-correlation and chirp lock-in algorithms are introduced to extract the thermal-wave signal characteristics. Comparative experiments between the TWRI reflection and transmission modes were carried out, and further experiments investigated the influence of laser power density, chirp period, and excitation frequency. The experimental results illustrate that the chirp lock-in phase has a better detection capability than the other characteristic parameters, and that TWRI can effectively detect simulated debonding defects in SiC-coated Ni-based superalloy plates.
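    The pulse-compression idea behind TWRI can be sketched with scipy: cross-correlating the measured response against the chirp reference concentrates the thermal-wave energy at a lag that encodes the transit delay. The delayed, attenuated response below is synthetic stand-in data, not an experimental signal, and the sweep parameters are illustrative.

```python
import numpy as np
from scipy import signal

fs, T = 10_000.0, 1.0
t = np.arange(0, T, 1 / fs)
excitation = signal.chirp(t, f0=0.1, t1=T, f1=10.0)   # LFM (chirp) reference

# Hypothetical measured response: delayed, attenuated chirp plus noise,
# standing in for the photothermal signal from a debonded region.
delay = 0.12
response = 0.3 * signal.chirp(np.clip(t - delay, 0, None), 0.1, T, 10.0)
response += 0.05 * np.random.default_rng(5).normal(size=t.size)

# Cross-correlation pulse compression: the peak lag estimates the delay,
# which in turn encodes the defect depth.
xc = signal.correlate(response, excitation, mode="full")
lags = signal.correlation_lags(response.size, excitation.size, mode="full")
print("estimated delay [s]:", lags[np.argmax(xc)] / fs)
```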

  2. An efficient algorithm for planar drawing of RNA structures with pseudoknots of any type.

    PubMed

    Byun, Yanga; Han, Kyungsook

    2016-06-01

    An RNA pseudoknot is a tertiary structural element in which bases of a loop pair with complementary bases outside the loop. A drawing of an RNA secondary structure is a tree, but a drawing of RNA pseudoknots is a graph that has an inner cycle within a pseudoknot and possibly outer cycles formed between the pseudoknot and other structural elements. Visualizing a large-scale RNA structure with pseudoknots as a planar drawing is challenging because a planar drawing of an RNA structure requires both the pseudoknots and the entire structure enclosing them to be embedded into a plane without overlapping or crossing. This paper presents an efficient heuristic algorithm for visualizing a pseudoknotted RNA structure as a planar drawing. The algorithm consists of several parts: finding crossing stems and page-mapping the stems, laying out stem-loops and pseudoknots, and detecting overlap between structural elements and resolving it. Unlike previous algorithms, our algorithm generates a planar drawing for a large RNA structure with pseudoknots of any type and provides a bracket view of the structure. It generates a compact and aesthetic structure graph for a large pseudoknotted RNA structure in O([Formula: see text]) time, where n is the number of stems of the RNA structure.

  3. Towards Real-Time Maneuver Detection: Automatic State and Dynamics Estimation with the Adaptive Optimal Control Based Estimator

    NASA Astrophysics Data System (ADS)

    Lubey, D.; Scheeres, D.

    Tracking objects in Earth orbit is fraught with complications. This is due to the large population of orbiting spacecraft and debris that continues to grow, passive (i.e. no direct communication) and data-sparse observations, and the presence of maneuvers and dynamics mismodeling. Accurate orbit determination in this environment requires an algorithm to capture both a system's state and its state dynamics in order to account for mismodelings. Previous studies by the authors yielded an algorithm called the Optimal Control Based Estimator (OCBE) - an algorithm that simultaneously estimates a system's state and optimal control policies that represent dynamic mismodeling in the system for an arbitrary orbit-observer setup. The stochastic properties of these estimated controls are then used to determine the presence of mismodelings (maneuver detection), as well as characterize and reconstruct the mismodelings. The purpose of this paper is to develop the OCBE into an accurate real-time orbit tracking and maneuver detection algorithm by automating the algorithm and removing its linear assumptions. This results in a nonlinear adaptive estimator. In its original form the OCBE had a parameter called the assumed dynamic uncertainty, which is selected by the user with each new measurement to reflect the level of dynamic mismodeling in the system. This human-in-the-loop approach precludes real-time application to orbit tracking problems due to their complexity. This paper focuses on the Adaptive OCBE, a version of the estimator where the assumed dynamic uncertainty is chosen automatically with each new measurement using maneuver detection results to ensure that state uncertainties are properly adjusted to account for all dynamic mismodelings. The paper also focuses on a nonlinear implementation of the estimator. Originally, the OCBE was derived from a nonlinear cost function then linearized about a nominal trajectory, which is assumed to be ballistic (i.e. the nominal optimal control policy is zero for all times). In this paper, we relax this assumption on the nominal trajectory in order to allow for controlled nominal trajectories. This allows the estimator to be iterated to obtain a more accurate nonlinear solution for both the state and control estimates. Beyond these developments to the estimator, this paper also introduces a modified distance metric for maneuver detection. The original metric used in the OCBE only accounted for the estimated control and its uncertainty. This new metric accounts for measurement deviation and a priori state deviations, such that it accounts for all three major forms of uncertainty in orbit determination. This allows the user to understand the contributions of each source of uncertainty toward the total system mismodeling so that the user can properly account for them. Together these developments create an accurate orbit determination algorithm that is automated, robust to mismodeling, and capable of detecting and reconstructing the presence of mismodeling. These qualities make this algorithm a good foundation from which to approach the problem of real-time maneuver detection and reconstruction for Space Situational Awareness applications. This is further strengthened by the algorithm's general formulation that allows it to be applied to problems with an arbitrary target and observer.

  4. Analysis of statistical and standard algorithms for detecting muscle onset with surface electromyography

    PubMed Central

    Tweedell, Andrew J.; Haynes, Courtney A.

    2017-01-01

    The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms for determining muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope, and sample entropy were the established methods evaluated, while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold-standard onset and evaluated with 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations in which the prior parameter (p0) was zero. The best-performing Bayesian algorithms used p0 = 0 and a posterior probability for onset determination of 60–90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine whether this class of algorithms performs equally well when the time series has multiple bursts of muscle activity. PMID:28489897
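    A full Bayesian changepoint posterior is beyond a few lines, but the core onset idea, a single changepoint where the signal variance shifts, can be sketched with a two-segment Gaussian maximum-likelihood scan. This is a simplified, non-Bayesian stand-in for the winning method, not the study's algorithm.

```python
import numpy as np

def single_onset_changepoint(x, min_seg=30):
    """Maximum-likelihood single changepoint under a two-segment Gaussian
    model with a variance shift (EMG onset raises the signal variance)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_t, best_cost = None, np.inf
    for t in range(min_seg, n - min_seg):
        v1 = x[:t].var() + 1e-12       # pre-onset variance
        v2 = x[t:].var() + 1e-12       # post-onset variance
        cost = t * np.log(v1) + (n - t) * np.log(v2)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

rng = np.random.default_rng(6)
emg = np.concatenate([rng.normal(0, 0.05, 400),   # baseline
                      rng.normal(0, 0.50, 400)])  # active muscle
print("estimated onset sample:", single_onset_changepoint(emg))  # ~400
```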

  5. A mobile phone based tool to identify symptoms of common childhood diseases in Ghana: development and evaluation of the integrated clinical algorithm in a cross-sectional study.

    PubMed

    Franke, Konstantin H; Krumkamp, Ralf; Mohammed, Aliyu; Sarpong, Nimako; Owusu-Dabo, Ellis; Brinkel, Johanna; Fobil, Julius N; Marinovic, Axel Bonacic; Asihene, Philip; Boots, Mark; May, Jürgen; Kreuels, Benno

    2018-03-27

    The aim of this study was the development and evaluation of an algorithm-based diagnosis-tool, applicable on mobile phones, to support guardians in providing appropriate care to sick children. The algorithm was developed on the basis of the Integrated Management of Childhood Illness (IMCI) guidelines and evaluated at a hospital in Ghana. Two hundred and thirty-seven guardians applied the tool to assess their child's symptoms. Data recorded by the tool and health records completed by a physician were compared in terms of symptom detection, disease assessment and treatment recommendation. To compare both assessments, Kappa statistics and predictive values were calculated. The tool detected the symptoms of cough, fever, diarrhoea and vomiting with good agreement to the physicians' findings (kappa = 0.64; 0.59; 0.57 and 0.42 respectively). The disease assessment barely coincided with the physicians' findings. The tool's treatment recommendation correlated with the physicians' assessments in 93 out of 237 cases (39.2% agreement, kappa = 0.11), but underestimated a child's condition in only seven cases (3.0%). The algorithm-based tool achieved reliable symptom detection and treatment recommendations were administered conformably to the physicians' assessment. Testing in domestic environment is envisaged.

  6. Alginate cryogel based glucose biosensor

    NASA Astrophysics Data System (ADS)

    Fatoni, Amin; Windy Dwiasi, Dian; Hermawan, Dadan

    2016-02-01

    Cryogel is a macroporous structure that provides a large surface area for biomolecule immobilization. In this work, an alginate cryogel-based biosensor was developed to detect glucose. The cryogel was prepared from alginate cross-linked by calcium chloride at sub-zero temperature. The porous structure was grown in a 100 μL micropipette tip with glucose oxidase enzyme entrapped inside the cryogel. Glucose detection was based on the colour change of a redox indicator, potassium permanganate, caused by the hydrogen peroxide produced from the conversion of glucose. The results showed a porous alginate cryogel structure with pore diameters of 20-50 μm. The developed glucose biosensor showed a linear response for glucose detection from 1.0 to 5.0 mM, with a regression of y = 0.01x + 0.02 and an R2 of 0.994. Furthermore, the biosensor showed high operational stability over up to 10 successive glucose detections.

  7. Classification of hyperbolic singularities of rank zero of integrable Hamiltonian systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oshemkov, Andrey A

    2010-10-06

    A complete invariant is constructed that solves the problem of semilocal classification of saddle singularities of integrable Hamiltonian systems. Namely, a certain combinatorial object (an f_n-graph) is associated with every nondegenerate saddle singularity of rank zero; as a result, the problem of semilocal classification of saddle singularities of rank zero is reduced to the problem of enumerating the f_n-graphs. This enables us to describe a simple algorithm for obtaining the lists of saddle singularities of rank zero for a given number of degrees of freedom and a given complexity.

  8. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles, respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindle detection methods.

  9. Method and apparatus for determining viscosity

    DOEpatents

    Chu, Benjamin; Dhadwal, Harbans S.

    1990-01-01

    A capillary viscometer is provided which includes a fiber-optic probe and a phototransistor which produces an output signal as a liquid meniscus falls through the field of view of a detecting fiber bundle. An analog circuit is employed for receiving the signal and starting or stopping a digital counter in response thereto. The circuit includes first and second differentiators and a zero detection portion for detecting zero value outputs from the second differentiator. The counter is started or stopped upon the generation of a triggering pulse at the time such zero value is detected.
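    A digital rendering of the described analogue chain, two differentiations followed by zero detection on the second derivative, might look as follows; the sigmoid test signal stands in for the photodetector output as the meniscus passes the fibre bundle, and this is a sketch rather than the patented circuit.

```python
import numpy as np

def meniscus_trigger(v, fs):
    """Differentiate the detector signal twice and report the time where
    the second derivative crosses zero at the steepest part of the
    transition (the inflection as the meniscus passes the probe)."""
    d1 = np.gradient(v) * fs                 # first differentiator
    d2 = np.gradient(d1) * fs                # second differentiator
    zc = np.nonzero(np.signbit(d2[:-1]) != np.signbit(d2[1:]))[0]
    best = zc[np.argmax(np.abs(d1[zc]))]     # main transition, not noise
    return best / fs

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
v = 1 / (1 + np.exp(-(t - 0.4) * 80))        # meniscus passing at t = 0.4 s
print("trigger at t =", meniscus_trigger(v, fs), "s")
```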

  10. Automated measurement of stent strut coverage in intravascular optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Ahn, Chi Young; Kim, Byeong-Keuk; Hong, Myeong-Ki; Jang, Yangsoo; Heo, Jung; Joo, Chulmin; Seo, Jin Keun

    2015-02-01

    Optical coherence tomography (OCT) is a non-invasive, cross-sectional imaging modality that has become a prominent imaging method in percutaneous intracoronary intervention. We present an automated detection algorithm for stent strut coordinates and coverage in OCT images. The algorithm for stent strut detection is composed of a coordinate transformation from the polar to the Cartesian domain and the application of second-derivative operators in the radial and circumferential directions. Local region-based active contouring was employed to detect lumen boundaries. We applied the method to OCT pullback images acquired from human patients in vivo to quantitatively measure stent strut coverage. Validation studies against manual expert assessments demonstrated high Pearson's coefficients (R = 0.99) for the stent strut coordinates, with no significant bias. An average Hausdorff distance of < 120 μm was obtained for vessel border detection. Quantitative comparison of the stent strut to vessel wall distance found a bias of < 12.3 μm and a 95% confidence of < 110 μm.

  11. Joint Optimization of Receiver Placement and Illuminator Selection for a Multiband Passive Radar Network.

    PubMed

    Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin

    2017-06-14

    The performance of a passive radar network can be greatly improved by an optimal network structure. Generally, radar network structure optimization consists of two aspects, namely placing receivers in suitable locations and selecting appropriate illuminators. The present study investigates the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key to the bisection algorithm lies in solving the partition set covering problem (PSCP), which is handled by a hybrid algorithm that couples convex optimization with a greedy dropping algorithm. Finally, the performance of the proposed algorithm is validated via numerical simulations.

  12. MUSIC Algorithms for Rebar Detection

    NASA Astrophysics Data System (ADS)

    Leone, G.; Solimene, R.

    2012-04-01

    In this contribution we consider the problem of detecting and localizing scatterers that are small in cross section with respect to the wavelength, from their scattered field, once a known incident field has interrogated the scene in which they reside. A pertinent applicative context is rebar detection within concrete pillars. In that case, the scatterers to be detected are the rebars themselves or voids due to their absence. In both cases, as the scatterers have point-like support, a subspace projection method can be conveniently exploited [1]. However, as the field scattered by rebars is stronger than that due to voids, the latter can be expected to be difficult to detect. In order to circumvent this problem, in this contribution we adopt a two-step MUltiple SIgnal Classification (MUSIC) detection algorithm. In particular, the first stage aims at detecting rebars. Once the rebars are detected, their positions are exploited to update the Green's function, and a further detection scheme is then run to locate the voids; in this second stage, the background medium also encompasses the rebars. The analysis is conducted numerically for a simplified two-dimensional scalar scattering geometry. More in detail, as is usual in MUSIC algorithms, a multi-view/multi-static single-frequency configuration is considered [2]. [1] Baratonia, G. Leone, R. Pierri, R. Solimene, "Fault Detection in Grid Scattering by a Time-Reversal MUSIC Approach," Proc. of ICEAA 2011, Turin, 2011. [2] E. A. Marengo, F. K. Gruber, "Subspace-Based Localization and Inverse Scattering of Multiply Scattering Point Targets," EURASIP Journal on Advances in Signal Processing, Article ID 17342, 16 pages (2007).

  13. Crossing statistics of laser light scattered through a nanofluid.

    PubMed

    Arshadi Pirlar, M; Movahed, S M S; Razzaghi, D; Karimzadeh, R

    2017-09-01

    In this paper, we investigate the crossing statistics of speckle patterns formed in the Fresnel diffraction region by a laser beam scattered through a nanofluid. We extend zero-crossing statistics to assess the dynamical properties of the nanofluid. Based on the joint probability density function of the laser beam fluctuation and its time derivative, the theoretical frameworks for the Gaussian and non-Gaussian regimes are revisited. We count the number of crossings not only at the zero level but at all available thresholds to determine the average speed of the moving particles. Because crossing statistics are determined in a probabilistic framework, Gaussianity is not assumed a priori; therefore, even in the presence of deviations from Gaussian fluctuations, this modified approach is capable of computing relevant quantities, such as the mean speed, more precisely. A generalized total crossing, representing the weighted summation of crossings over all thresholds, is introduced to quantify small deviations from Gaussian statistics. This criterion can also suppress the contribution of noise and trends so as to infer reliable physical quantities. The characteristic time scale for successive crossings at a given threshold is defined. In our experimental setup, we find that increasing the sample temperature leads to greater consistency between the Gaussian and perturbative non-Gaussian predictions. The maximum number of crossings does not necessarily occur at the mean level, indicating that levels other than zero should be taken into account to achieve more accurate assessments.
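    Counting up-crossings at a sweep of thresholds, rather than at the zero level only, is straightforward to sketch; the uniform weighting in the final sum is an illustrative choice, since the paper's generalized-total-crossing weights are not given in the abstract.

```python
import numpy as np

def crossing_counts(x, levels):
    """Number of up-crossings of each threshold level, the generalization
    of zero-crossing statistics used above."""
    x = np.asarray(x, dtype=float)
    return np.array([np.count_nonzero((x[:-1] < L) & (x[1:] >= L))
                     for L in levels])

rng = np.random.default_rng(7)
intensity = rng.normal(0, 1, 100_000)        # stand-in speckle intensity trace
levels = np.linspace(-3, 3, 25)
counts = crossing_counts(intensity, levels)
# For Gaussian fluctuations the crossing count peaks near the mean level;
# a uniformly weighted total over all levels stands in for the paper's
# "generalized total crossing" statistic.
print("peak level:", levels[counts.argmax()], "total crossings:", counts.sum())
```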

  14. Real Diffusion-Weighted MRI Enabling True Signal Averaging and Increased Diffusion Contrast

    PubMed Central

    Eichner, Cornelius; Cauley, Stephen F; Cohen-Adad, Julien; Möller, Harald E; Turner, Robert; Setsompop, Kawin; Wald, Lawrence L

    2015-01-01

    This project aims to characterize the impact of underlying noise distributions on diffusion-weighted imaging. The noise floor is a well-known problem for traditional magnitude-based diffusion-weighted MRI (dMRI) data, leading to biased diffusion model fits and inaccurate signal averaging. Here, we introduce a total-variation-based algorithm to eliminate shot-to-shot phase variations of complex-valued diffusion data with the intention to extract real-valued dMRI datasets. The obtained real-valued diffusion data are no longer superimposed by a noise floor but instead by a zero-mean Gaussian noise distribution, yielding dMRI data without signal bias. We acquired high-resolution dMRI data with strong diffusion weighting and, thus, low signal-to-noise ratio. Both the extracted real-valued and traditional magnitude data were compared regarding signal averaging, diffusion model fitting and accuracy in resolving crossing fibers. Our results clearly indicate that real-valued diffusion data enables idealized conditions for signal averaging. Furthermore, the proposed method enables unbiased use of widely employed linear least squares estimators for model fitting and demonstrates an increased sensitivity to detect secondary fiber directions with reduced angular error. The use of phase-corrected, real-valued data for dMRI will therefore help to clear the way for more detailed and accurate studies of white matter microstructure and structural connectivity on a fine scale. PMID:26241680

  15. A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm

    DOE PAGES

    Lehe, Remi; Kirchen, Manuel; Andriyash, Igor A.; ...

    2016-02-17

    We propose a spectral Particle-In-Cell (PIC) algorithm that is based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. The algorithm is benchmarked in several situations that are of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm, including the zero-order numerical Cherenkov effect.

  16. Reconstructing Buildings with Discontinuities and Roof Overhangs from Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Meissner, H.; Dahlke, D.

    2017-05-01

    This paper proposes a two-stage method for the reconstruction of city buildings with discontinuities and roof overhangs from oriented nadir and oblique aerial images. To model the structures the input data is transformed into a dense point cloud, segmented and filtered with a modified marching cubes algorithm to reduce the positional noise. Assuming a monolithic building the remaining vertices are initially projected onto a 2D grid and passed to RANSAC-based regression and topology analysis to geometrically determine finite wall, ground and roof planes. If this should fail due to the presence of discontinuities the regression will be repeated on a 3D level by traversing voxels within the regularly subdivided bounding box of the building point set. For each cube a planar piece of the current surface is approximated and expanded. The resulting segments get mutually intersected yielding both topological and geometrical nodes and edges. These entities will be eliminated if their distance-based affiliation to the defining point sets is violated, leaving a consistent building hull including its structural breaks. To add the roof overhangs the computed polygonal meshes are projected onto the digital surface model derived from the point cloud. Their shapes are offset equally along the edge normals with subpixel accuracy by detecting the zero-crossings of the second-order directional derivative in the gradient direction of the height bitmap and translated back into world space to become a component of the building. As soon as the reconstructed objects are finished the aerial images are further used to generate a compact texture atlas for visualization purposes. An optimized atlas bitmap is generated that allows perspective-correct multi-source texture mapping without prior rectification, using a partially parallel placement algorithm. Moreover, the texture atlases undergo object-based image analysis (OBIA) to detect window areas, which are reintegrated into the building models. To evaluate the performance of the proposed method, a proof-of-concept test on sample structures obtained from real-world data of Heligoland/Germany has been conducted. It revealed good reconstruction accuracy in comparison to the cadastral map, a speed-up in texture atlas optimization and visually attractive render results.

  17. Behaviour of fractional loop delay zero crossing digital phase locked loop (FR-ZCDPLL)

    NASA Astrophysics Data System (ADS)

    Nasir, Qassim

    2018-01-01

    This article analyses the performance of the first-order zero-crossing digital phase-locked loop when a fractional loop delay is added to the loop (FR-ZCDPLL). The non-linear dynamics of the loop are presented, analysed and examined through its bifurcation behaviour. Numerical simulation of the loop is conducted to verify the mathematical analysis of the loop operation. The simulation results show that the proposed FR-ZCDPLL enhances performance compared to the conventional zero-crossing DPLL in terms of a wider lock range, a wider capture range and a larger stable operating region. In addition, extensive simulations were conducted to find the optimum loop parameters for different operating conditions. The addition of the fractional loop delay network to the conventional loop also reduces the phase jitter and its variance, especially when the signal-to-noise ratio is low.
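    The conventional first-order ZCDPLL reduces to a one-dimensional phase-error map, which makes the reported bifurcation behaviour easy to reproduce; the sketch below iterates the standard textbook map and does not include the article's fractional-delay extension.

```python
import numpy as np

def zcdpll_phase_error(K, offset=0.3, n=200, phi0=1.0):
    """Iterate the first-order ZCDPLL phase-error map
    phi[k+1] = phi[k] + offset - K*sin(phi[k]) (standard textbook model;
    `offset` plays the role of a normalized frequency offset)."""
    phi = np.empty(n)
    phi[0] = phi0
    for k in range(n - 1):
        phi[k + 1] = phi[k] + offset - K * np.sin(phi[k])
    return phi

# Locked for moderate gain, no longer converging for large gain: the fixed
# point phi* = arcsin(offset/K) loses stability as K grows, which is the
# bifurcation route the article studies.
print(zcdpll_phase_error(K=1.0)[-3:])    # settles near arcsin(0.3) ~ 0.305
print(zcdpll_phase_error(K=2.9)[-3:])    # oscillates / bifurcates
```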

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, Joshua Daniel; Carr, Christina; Pettit, Erin C.

    We apply a fully autonomous icequake detection methodology to a single day of high-sample-rate (200 Hz) seismic network data recorded from the terminus of Taylor Glacier, Antarctica, that temporally coincided with a brine release episode near Blood Falls (May 13, 2014). We demonstrate a statistically validated procedure to assemble waveforms triggered by icequakes into populations of clusters linked by intra-event waveform similarity. Our processing methodology implements a noise-adaptive power detector coupled with a complete-linkage clustering algorithm and a noise-adaptive correlation detector. This detector chain reveals a population of 20 multiplet sequences that includes ~150 icequakes and produces zero false alarms on the concurrent, diurnally variable noise. Our results are very promising for identifying changes in background seismicity associated with the presence or absence of brine release episodes. We thereby suggest that our methodology could be applied to longer time periods to establish a brine-release monitoring program for Blood Falls based on icequake detections.

  19. Measurement of the Total Cross Section of Uranium-Uranium Collisions at √s_NN = 192.8 GeV

    NASA Astrophysics Data System (ADS)

    Baltz, A. J.; Fischer, W.; Blaskiewicz, M.; Gassner, D.; Drees, K. A.; Luo, Y.; Minty, M.; Thieberger, P.; Wilinski, M.; Pshenichnov, I. A.

    2014-03-01

    The total cross section of Uranium-Uranium collisions at √s_NN = 192.8 GeV has been measured to be 515 ± 13 (stat) ± 22 (sys) barn, which agrees with the calculated theoretical value of 487.3 barn within experimental error. That this total cross section is more than an order of magnitude larger than the geometric ion-ion cross section is primarily due to Bound-Free Pair Production (BFPP) and Electro-Magnetic Dissociation (EMD). Nearly all beam losses were due to geometric, BFPP and EMD collisions. This allowed the determination of the total cross section from the measured beam loss rates and luminosity. The beam loss rate is calculated from a time-dependent measurement of the total beam intensity. The luminosity is measured via the detection of neutron pairs in time-coincidence in the Zero Degree Calorimeters. Apart from a general interest in verifying the calculations experimentally, an accurate prediction of the losses created in heavy ion collisions is of practical interest for the LHC, where collision products have the potential to quench cryogenically cooled magnets.
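    The inference step in formula form: schematically, the collision-induced beam loss rate ties the cross section to the measured luminosity (the full analysis must also apportion losses between the two beams and separate out non-collision losses; this is only the skeleton of the relation).

```latex
% Schematic relation assumed here: collisions remove beam particles, so the
% collision-attributable loss rate R_loss and the luminosity L give
\sigma_{\mathrm{tot}} = \frac{R_{\mathrm{loss}}}{\mathcal{L}},
\qquad
R_{\mathrm{loss}} = -\left.\frac{dN}{dt}\right|_{\mathrm{collisions}}
```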

  20. Model Selection for the Multiple Model Adaptive Algorithm for In-Flight Simulation.

    DTIC Science & Technology

    1987-12-01

    of the two models, while the other model was given a probability of approximately zero. If the probabilities were exactly one and zero for the...Figures 6-103 through 6-107. From Figure 6-103, it can be seen that the probability of the model associated with the 10,000 ft, 0.35 Mach flight condition

  1. An active acoustic tripwire for simultaneous detection and localization of multiple underwater intruders.

    PubMed

    Folegot, Thomas; Martinelli, Giovanna; Guerrini, Piero; Stevenson, J Mark

    2008-11-01

    An algorithm allowing simultaneous detection and localization of multiple submerged targets crossing an acoustic tripwire based on forward scattering is described and then evaluated based upon data collected at sea. This paper quantifies the agreement between the theoretical performance and the results obtained from processing data gathered at sea for crossings at several depths and ranges. Targets crossing the acoustic field produce shadows on each side of the barrier, for specific sensors and for specific acoustic paths. In post-processing, a model is invoked to associate expected paths with the observed shadows. This process allows triangulation of the target's position inside the acoustic field. Precise localization is achieved by taking advantage of the multipath propagation structure of the received signal, together with the diversity of the source and receiver locations. Environmental robustness is demonstrated using simulations and can be explained by the use of an array of sources spatially distributed through the water column.

  2. Computer Algorithms for Measurement Control and Signal Processing of Transient Scattering Signatures

    DTIC Science & Technology

    1988-09-01

    CURVE * C Y2 IS THE BACKGROUND CURVE * C NSHIF IS THE NUMBER OF POINT TO SHIFT * C SET IS THE SUM OF THE POINT TO SHIFT * C IN ORDER TO ZERO PADDING ...reduces the spectral content in both the low and high frequency regimes. If the threshold is set to zero, a "naive" deconvolution results. This provides...side of equation 5.2 was close to zero, so it can be neglected. As a result, the expected power is equal to the variance. The signal plus noise power

  3. Zero cylinder coordinate system approach to image reconstruction in fan beam ICT

    NASA Astrophysics Data System (ADS)

    Yan, Yan-Chun; Xian, Wu; Hall, Ernest L.

    1992-11-01

    State-of-the-art transform algorithms produce excellent and efficient reconstructed images in most applications, especially in medical and industrial CT. Based on the Zero Cylinder Coordinate (ZCC) system presented in this paper, a new transform algorithm for image reconstruction in fan-beam industrial CT is suggested. It greatly reduces the computational cost of the backprojection, requiring only two INC instructions to calculate the weighting factor and the subcoordinate. A new backprojector is designed, whose assembly-line mechanism is simplified by the ZCC method. Finally, simulation results on a microcomputer are presented, showing that this method is effective and practical.

  4. A technique for pole-zero placement for dual-input control systems. [computer simulation of CH-47 helicopter longitudinal dynamics

    NASA Technical Reports Server (NTRS)

    Reid, G. F.

    1976-01-01

    A technique is presented for determining state variable feedback gains that will place both the poles and zeros of a selected transfer function of a dual-input control system at pre-determined locations in the s-plane. Leverrier's algorithm is used to determine the numerator and denominator coefficients of the closed-loop transfer function as functions of the feedback gains. The values of gain that match these coefficients to those of a pre-selected model are found by solving two systems of linear simultaneous equations. The algorithm has been used in a computer simulation of the CH-47 helicopter to control longitudinal dynamics.
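
    The coefficient-generation step can be sketched with the Faddeev-LeVerrier recursion, which produces the characteristic-polynomial (denominator) coefficients that the gain-matching equations are then written in; the gain-matching and numerator steps are omitted here. A sketch, checked against numpy.poly:

        import numpy as np

        def leverrier(A):
            # returns [1, c1, ..., cn] with det(sI - A) = s^n + c1 s^(n-1) + ... + cn
            n = A.shape[0]
            M = np.eye(n)
            coeffs = [1.0]
            for k in range(1, n + 1):
                c = -np.trace(A @ M) / k
                coeffs.append(c)
                M = A @ M + c * np.eye(n)
            return np.array(coeffs)

        A = np.array([[0., 1.], [-2., -3.]])
        print(leverrier(A))   # [1. 3. 2.]  i.e. s^2 + 3s + 2
        print(np.poly(A))     # same coefficients from numpy, as a check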

  5. Stability Analysis for Rotating Stall Dynamics in Axial Flow Compressors

    DTIC Science & Technology

    1999-01-01

    modes determines collectively the local stability of the compressor model. Explicit conditions are obtained for local stability of rotating stall which...critical modes determine the stability for rotating stall collectively. We point out that although in a special case our stability condition for...strict crossing assumption implies that the zero solution changes its stability as γ crosses γ_c. For instance, α'_k(γ_c) > 0 implies that the zero

  6. A review and comparison of Bayesian and likelihood-based inferences in beta regression and zero-or-one-inflated beta regression.

    PubMed

    Liu, Fang; Eugenio, Evercita C

    2018-04-01

    Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review on beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than ad hoc replacing them with values close to zero/one via simulation studies; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than MCMC algorithms used in the Bayesian inferences, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.
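
    For readers unfamiliar with the zoib likelihood, a minimal sketch of its mixture structure follows, with the covariate link functions omitted and constant inflation probabilities p0 and p1 assumed for brevity:

        import numpy as np
        from scipy.stats import beta

        def zoib_loglik(y, p0, p1, mu, phi):
            # point masses p0 at 0 and p1 at 1; Beta(mu*phi, (1-mu)*phi)
            # density on the open interval (0, 1) for the remaining mass
            a, b = mu * phi, (1.0 - mu) * phi
            ll = 0.0
            for yi in y:
                if yi == 0.0:
                    ll += np.log(p0)
                elif yi == 1.0:
                    ll += np.log(p1)
                else:
                    ll += np.log(1.0 - p0 - p1) + beta.logpdf(yi, a, b)
            return ll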

  7. Creating normograms of dural sinuses in healthy persons using computer-assisted detection for analysis and comparison of cross-section dural sinuses in the brain.

    PubMed

    Anconina, Reut; Zur, Dinah; Kesler, Anat; Lublinsky, Svetlana; Toledano, Ronen; Novack, Victor; Benkobich, Elya; Novoa, Rosa; Novic, Evelyne Farkash; Shelef, Ilan

    2017-06-01

    Dural sinuses vary in size and shape in many pathological conditions with abnormal intracranial pressure. Size and shape normograms of dural brain sinuses are not available. The creation of such normograms may enable computer-assisted comparison with pathologic exams and facilitate diagnoses. The purpose of this study was to quantitatively evaluate normal magnetic resonance venography (MRV) studies in order to create normograms of dural sinuses using a computerized algorithm for vessel cross-sectional analysis. This was a retrospective analysis of MRV studies of 30 healthy persons. Data were analyzed using a specially developed Matlab algorithm for vessel cross-sectional analysis. The cross-sectional area and shape measurements were evaluated to create normograms. Mean cross-sectional size was 53.27±13.31 for the right transverse sinus (TS), 46.87±12.57 for the left TS (p=0.089) and 36.65±12.38 for the superior sagittal sinus. Normograms were created. The distribution of cross-sectional areas along the vessels showed distinct patterns and a parallel course for the 25th, 50th (median) and 75th percentiles. In conclusion, using a novel computerized method for vessel cross-sectional analysis we were able to quantitatively characterize the dural sinuses of healthy persons and create normograms.

  8. Above-threshold scattering about a Feshbach resonance for ultracold atoms in an optical collider.

    PubMed

    Horvath, Milena S J; Thomas, Ryan; Tiesinga, Eite; Deb, Amita B; Kjærgaard, Niels

    2017-09-06

    Ultracold atomic gases have realized numerous paradigms of condensed matter physics, where control over interactions has crucially been afforded by tunable Feshbach resonances. So far, the characterization of these Feshbach resonances has almost exclusively relied on experiments in the threshold regime near zero energy. Here, we use a laser-based collider to probe a narrow magnetic Feshbach resonance of rubidium above threshold. By measuring the overall atomic loss from colliding clouds as a function of magnetic field, we track the energy-dependent resonance position. At higher energy, our collider scheme broadens the loss feature, making the identification of the narrow resonance challenging. However, we observe that the collisions give rise to shifts in the center-of-mass positions of outgoing clouds. The shifts cross zero at the resonance, and this allows us to accurately determine its location well above threshold. Our inferred resonance positions are in excellent agreement with theory. Studies on energy-dependent scattering of ultracold atoms were previously carried out near zero collision energies. Here, the authors observe a magnetic Feshbach resonance in ultracold Rb collisions at above-threshold energies, and their method can also be used to detect higher partial wave resonances.

  9. Safe trajectory estimation at a pedestrian crossing to assist visually impaired people.

    PubMed

    Alghamdi, Saleh; van Schyndel, Ron; Khalil, Ibrahim

    2012-01-01

    The aim of this paper is to present a service that assists blind people and people with low vision to cross the street independently. The presented approach provides the user with significant information such as detection of the pedestrian crossing signal from any point of view, notification when the pedestrian crossing signal light is green, detection of dynamic and fixed obstacles, prediction of the movement of fellow pedestrians, and information on objects which may intersect the user's path. Our approach is based on capturing multiple frames using a depth camera attached to the user's headgear. Currently a testbed system is built on a helmet and is connected to a laptop in the user's backpack. In this paper, we discuss the efficiency of using the Speeded-Up Robust Features (SURF) algorithm for object recognition for the purpose of assisting blind people. The system predicts the movement of objects of interest to provide the user with information on the safest path to navigate and information on the surrounding area. Evaluation of this approach on real video frame sequences yields 90% human detection and more than 80% recognition of other related objects.

  10. Cross Matching of VIIRS Boat Detection and Vessel Monitoring System Tracks

    NASA Astrophysics Data System (ADS)

    Hsu, F. C.; Elvidge, C.; Zhizhin, M. N.; Baugh, K.; Ghosh, T.

    2016-12-01

    One approach to commercial fishing is to use bright lights at night to attract catch. This is a widely used practice in East and Southeast Asia, but it can also be found in other fisheries. In some cases, the deployed lighting exceeds 100,000 watts. Such lighting is distinctive in the dark ocean and can even be seen from space with sensors such as the Visible Infrared Imaging Radiometer Suite Day/Night Band (VIIRS-DNB). We have developed a VIIRS Boat Detection (VBD) system, which outputs lists of boat locations in near real time. One of the standard methods fishery agencies use to collect geospatial data on fishing boats is to require boats to carry Vessel Monitoring System (VMS) beacons. We developed an algorithm to cross-match VBD data with VMS tracks. With this we are able to identify fishing boats that do not carry VMS beacons. In certain situations, this is an indicator of illegal fishing. The other application of this cross-matching is to define the VIIRS detection limits and to develop a calibration to estimate deployed wattage. Here we demonstrate results of cross-matching VBD and VMS for Indonesia as an example to showcase its potential.
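
    A naive version of such cross-matching pairs each VBD detection with any VMS ping that is close in both time and space; detections left unmatched are the candidate non-reporting vessels. The thresholds and the small-angle distance approximation below are illustrative assumptions, not the authors' matching rules:

        import numpy as np

        def cross_match(vbd, vms, max_km=2.0, max_min=30.0):
            # vbd, vms: arrays of rows (lat, lon, t_minutes); a real matcher
            # would interpolate VMS tracks rather than use raw pings
            matched = []
            for lat, lon, t in vbd:
                near_t = vms[np.abs(vms[:, 2] - t) <= max_min]
                if near_t.size == 0:
                    matched.append(False)
                    continue
                dlat = near_t[:, 0] - lat
                dlon = (near_t[:, 1] - lon) * np.cos(np.radians(lat))
                d_km = 111.0 * np.hypot(dlat, dlon)   # small-angle approximation
                matched.append(bool((d_km <= max_km).any()))
            return np.array(matched)   # False = lit boat with no VMS beacon nearby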

  11. DEsingle for detecting three types of differential expression in single-cell RNA-seq data.

    PubMed

    Miao, Zhun; Deng, Ke; Wang, Xiaowo; Zhang, Xuegong

    2018-04-24

    The excessive number of zeros in single-cell RNA-seq data includes "real" zeros due to the on-off nature of gene transcription in single cells and "dropout" zeros due to technical reasons. Existing differential expression (DE) analysis methods cannot distinguish these two types of zeros. We developed an R package, DEsingle, which employs a Zero-Inflated Negative Binomial model to estimate the proportion of real and dropout zeros and to define and detect three types of DE genes in single-cell RNA-seq data with higher accuracy. The R package DEsingle is freely available at https://github.com/miaozhun/DEsingle and is under consideration by Bioconductor. Contact: zhangxg@tsinghua.edu.cn. Supplementary data are available at Bioinformatics online.
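
    The core of such a model is a mixture that inflates the zero bin: with probability pi a count is a structural (dropout or off-state) zero, otherwise it follows a negative binomial. A SciPy sketch of the probability mass function, with the mean/dispersion parameterization an assumption for illustration rather than the package's code:

        import numpy as np
        from scipy.stats import nbinom

        def zinb_pmf(k, pi, mu, theta):
            # NB with mean mu and dispersion theta; scipy's nbinom uses the
            # (n, p) form, so convert: p = theta / (theta + mu)
            p = theta / (theta + mu)
            base = nbinom.pmf(k, theta, p)
            return np.where(k == 0, pi + (1 - pi) * base, (1 - pi) * base)

        print(zinb_pmf(np.arange(4), pi=0.3, mu=2.0, theta=1.5))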

  12. PROBABILISTIC CROSS-IDENTIFICATION IN CROWDED FIELDS AS AN ASSIGNMENT PROBLEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budavári, Tamás; Basu, Amitabh, E-mail: budavari@jhu.edu, E-mail: basu.amitabh@jhu.edu

    2016-10-01

    One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.

  13. Probabilistic Cross-identification in Crowded Fields as an Assignment Problem

    NASA Astrophysics Data System (ADS)

    Budavári, Tamás; Basu, Amitabh

    2016-10-01

    One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.
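
    In practice the catalog-level matching reduces to a standard linear assignment solve, for which SciPy ships a solver. A toy sketch, with a random matrix standing in for the real association log-likelihoods:

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(0)
        loglike = rng.normal(size=(5, 7))   # 5 detections vs 7 candidates
        # maximize total log-likelihood by minimizing its negative
        rows, cols = linear_sum_assignment(-loglike)
        for r, c in zip(rows, cols):
            print(f"detection {r} -> candidate {c} (loglike {loglike[r, c]:.2f})")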

  14. 77 FR 64571 - Self-Regulatory Organizations; National Stock Exchange, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-22

    ... Change To Amend its Rules To Clarify the Handling of Zero Displayed Reserve Orders During Crossed Markets...) clarify that the Exchange's trading system (the ``System'' \\3\\) will not execute a Zero Display Reserve.... Purpose The Exchange is proposing to amend its rules to clarify that the System will not execute Zero...

  15. Scheduling Algorithms for Maximizing Throughput with Zero-Forcing Beamforming in a MIMO Wireless System

    NASA Astrophysics Data System (ADS)

    Foronda, Augusto; Ohta, Chikara; Tamaki, Hisashi

    Dirty paper coding (DPC) is a strategy that achieves the capacity region of multiple-input multiple-output (MIMO) downlink channels, and a DPC scheduler is throughput optimal if users are selected according to their queue states and current rates. However, DPC is difficult to implement in practical systems. One solution, the zero-forcing beamforming (ZFBF) strategy, has been proposed to achieve the same asymptotic sum-rate capacity as DPC with an exhaustive search over the entire user set. Suboptimal user group selection schedulers with reduced complexity based on the ZFBF strategy (ZFBF-SUS) and the proportional fair (PF) scheduling algorithm (PF-ZFBF) have also been proposed to enhance throughput and fairness among users, respectively. However, they are not throughput optimal, and fairness and throughput decrease when user queue lengths differ owing to differing channel quality. Therefore, we propose two different scheduling algorithms: a throughput-optimal scheduling algorithm (ZFBF-TO) and a reduced-complexity scheduling algorithm (ZFBF-RC). Both are based on the ZFBF strategy and, at every time slot, must select users based on channel quality, queue length and orthogonality among users. Moreover, the proposed algorithms produce the rate allocation and power allocation for the selected users based on a modified water-filling method. We analyze the schedulers' complexity, and numerical results show that ZFBF-RC improves throughput and fairness compared to the ZFBF-SUS and PF-ZFBF scheduling algorithms.
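
    The ZFBF step itself is a pseudo-inverse of the stacked user channels, which forces each user's beam to be orthogonal to every other user's channel. A minimal sketch of just that step (user selection, rate and power allocation are omitted):

        import numpy as np

        def zfbf_weights(H):
            # H: users x antennas channel matrix; the pseudo-inverse nulls
            # inter-user interference, then columns are scaled to unit power
            W = np.linalg.pinv(H)
            return W / np.linalg.norm(W, axis=0, keepdims=True)

        H = (np.random.randn(3, 4) + 1j * np.random.randn(3, 4)) / np.sqrt(2)
        W = zfbf_weights(H)
        print(np.round(np.abs(H @ W), 6))   # ~diagonal: zero inter-user leakage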

  16. Sparse matrix methods based on orthogonality and conjugacy

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1973-01-01

    A matrix having a high percentage of zero elements is called sparse. In the solution of systems of linear equations or linear least squares problems involving large sparse matrices, significant savings in computational cost can be achieved by taking advantage of the sparsity. The conjugate gradient algorithm and a set of related algorithms are described.
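
    The conjugate gradient method touches the matrix only through matrix-vector products, which is exactly where sparsity pays off. A textbook sketch for symmetric positive-definite systems (not the report's original code):

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            # A: symmetric positive-definite operator supporting A @ x
            # (e.g., a scipy.sparse matrix); b: right-hand side (float array)
            x = np.zeros(len(b))
            r = b - A @ x
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x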

  17. A Hybrid Digital-Signature and Zero-Watermarking Approach for Authentication and Protection of Sensitive Electronic Documents

    PubMed Central

    Kabir, Muhammad N.; Alginahi, Yasser M.

    2014-01-01

    This paper addresses the problems and threats associated with verification of integrity, proof of authenticity, tamper detection, and copyright protection for digital-text content. Such issues were largely addressed in the literature for images, audio, and video, with only a few papers addressing the challenge of sensitive plain-text media under known constraints. Specifically, with text as the predominant online communication medium, it becomes crucial that techniques are deployed to protect such information. A number of digital-signature, hashing, and watermarking schemes have been proposed that essentially bind source data or embed invisible data in a cover media to achieve their goal. While many such complex schemes with resource redundancies are sufficient for offline and less-sensitive texts, this paper proposes a hybrid approach based on zero-watermarking and digital-signature-like manipulations for sensitive text documents in order to achieve content originality and integrity verification without physically modifying the cover text in any way. The proposed algorithm was implemented and shown to be robust against undetected content modifications and is capable of confirming proof of originality whilst detecting and locating deliberate/nondeliberate tampering. Additionally, enhancements in resource utilisation and reduced redundancies were achieved in comparison to traditional encryption-based approaches. Finally, analysis and remarks are made about the current state of the art, and future research issues are discussed under the given constraints. PMID:25254247

  18. Investigation of Zero Knowledge Proof Approaches Based on Graph Theory

    DTIC Science & Technology

    2011-02-01

    that appears frequently in the literature is a metaheuristic algorithm called the Pilot Method. The Pilot Method improves upon another heuristic...Annual ACM-SIAM Symposium on Discrete Algorithms. Miami: ACM, 2006. 1-10. Voß, S., and C. Duin. "Look Ahead Features in Metaheuristics." MIC2003...The Fifth Metaheuristics International Conference, 2003: 79-1 - 79-7. Woeginger, G.J. "Exact Algorithms for NP-Hard Problems: A Survey." Lecture

  19. A Methodology for Projecting U.S.-Flag Commercial Tanker Capacity

    DTIC Science & Technology

    1986-03-01

    total crude supply for the total US is less than the sum of the total crude supplies of the PADDs. The algorithm generating the output shown in tables...other PADDs. Accordingly, projected receipts for PADD V are zero, and in conjunction with the values for the variables that previously were...SHIPMENTS ALGORITHM This section presents the mathematics of the algorithm that generates the shipments projections for each PADD. The notation

  20. A Cross Structured Light Sensor and Stripe Segmentation Method for Visual Tracking of a Wall Climbing Robot

    PubMed Central

    Zhang, Liguo; Sun, Jianguo; Yin, Guisheng; Zhao, Jing; Han, Qilong

    2015-01-01

    In non-destructive testing (NDT) of metal welds, weld line tracking is usually performed outdoors, where structured light sources are disturbed by various noise sources, such as sunlight, shadows, and reflections from the weld line surface. In this paper, we design a cross structured light (CSL) sensor to detect the weld line and propose a robust laser stripe segmentation algorithm to overcome the noise in structured light images. An adaptive monochromatic space is applied to preprocess the image with ambient noise. In the monochromatic image, the laser stripe is recovered as a multichannel signal by minimum entropy deconvolution. Lastly, the stripe centre points are extracted from the image. In experiments, the CSL sensor and the proposed algorithm are applied to guide a wall climbing robot inspecting the weld line of a wind power tower. The experimental results show that the CSL sensor can capture the 3D information of the welds with high accuracy, and that the proposed algorithm contributes to weld line inspection and robot navigation. PMID:26110403

  1. Correlation coefficient based supervised locally linear embedding for pulmonary nodule recognition.

    PubMed

    Wu, Panpan; Xia, Kewen; Yu, Hengyong

    2016-11-01

    Dimensionality reduction techniques are developed to suppress the negative effects of the high-dimensional feature space of lung CT images on classification performance in computer-aided detection (CAD) systems for pulmonary nodule detection. An improved supervised locally linear embedding (SLLE) algorithm is proposed based on the concept of the correlation coefficient. The Spearman's rank correlation coefficient is introduced to adjust the distance metric in the SLLE algorithm to ensure that more suitable neighborhood points are identified, and thus to enhance the discriminating power of the embedded data. The proposed Spearman's rank correlation coefficient based SLLE (SC(2)SLLE) is implemented and validated in our pilot CAD system using a clinical dataset collected from the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). Particularly, a representative CAD system for solitary pulmonary nodule detection is designed and implemented. After a sequence of medical image processing steps, 64 nodules and 140 non-nodules are extracted, and 34 representative features are calculated. SC(2)SLLE, as well as the SLLE and LLE algorithms, is applied to reduce the dimensionality. Several quantitative measurements are also used to evaluate and compare the performances. Using a 5-fold cross-validation methodology, the proposed algorithm achieves 87.65% accuracy, 79.23% sensitivity, 91.43% specificity, and an 8.57% false positive rate, on average. Experimental results indicate that the proposed algorithm outperforms the original locally linear embedding and SLLE coupled with the support vector machine (SVM) classifier. Based on the preliminary results from a limited number of nodules in our dataset, this study demonstrates the great potential to improve the performance of a CAD system for nodule detection using the proposed SC(2)SLLE.

  2. Signals of Opportunity Navigation Using Wi-Fi Signals

    DTIC Science & Technology

    2011-03-24

    Identifier ... MVM Mean Value Method ... SDM Scaled Differential...the mean value (MVM) and scaled differential (SDM) methods. An error was logged if the UI correlation algorithm identified a packet index that did...Notable from this graph is that a window of 50 packets appears to provide zero errors for MVM and near-zero errors for SDM. Also notable is that a

  3. Polynomial time blackbox identity testers for depth-3 circuits : the field doesn't matter.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshadhri, Comandur; Saxena, Nitin

    Let C be a depth-3 circuit with n variables, degree d and top fanin k (called ΣΠΣ(k, d, n) circuits) over base field F. It is a major open problem to design a deterministic polynomial time blackbox algorithm that tests if C is identically zero. Klivans & Spielman (STOC 2001) observed that the problem is open even when k is a constant. This case has been subjected to a serious study over the past few years, starting from the work of Dvir & Shpilka (STOC 2005). We give the first polynomial time blackbox algorithm for this problem. Our algorithm runs in time poly(n)·d^k, regardless of the base field. The only field for which polynomial time algorithms were previously known is F = Q (Kayal & Saraf, FOCS 2009, and Saxena & Seshadhri, FOCS 2010). This is the first blackbox algorithm for depth-3 circuits that does not use the rank-based approaches of Karnin & Shpilka (CCC 2008). We prove an important tool for the study of depth-3 identities. We design a blackbox polynomial time transformation that reduces the number of variables in a ΣΠΣ(k, d, n) circuit to k variables, but preserves the identity structure. Polynomial identity testing (PIT) is a major open problem in theoretical computer science. The input is an arithmetic circuit that computes a polynomial p(x_1, x_2, ..., x_n) over a base field F. We wish to check if p is the zero polynomial, or in other words, is identically zero. We may be provided with an explicit circuit, or may only have blackbox access. In the latter case, we can only evaluate the polynomial p at various domain points. The main goal is to devise a deterministic blackbox polynomial time algorithm for PIT.

  4. Zero-crossing approach to high-resolution reconstruction in frequency-domain optical-coherence tomography.

    PubMed

    Krishnan, Sunder Ram; Seelamantula, Chandra Sekhar; Bouwens, Arno; Leutenegger, Marcel; Lasser, Theo

    2012-10-01

    We address the problem of high-resolution reconstruction in frequency-domain optical-coherence tomography (FDOCT). The traditional method employed uses the inverse discrete Fourier transform, which is limited in resolution due to the Heisenberg uncertainty principle. We propose a reconstruction technique based on zero-crossing (ZC) interval analysis. The motivation for our approach lies in the observation that, for a multilayered specimen, the backscattered signal may be expressed as a sum of sinusoids, and each sinusoid manifests as a peak in the FDOCT reconstruction. The successive ZC intervals of a sinusoid exhibit high consistency, with the intervals being inversely related to the frequency of the sinusoid. The statistics of the ZC intervals are used for detecting the frequencies present in the input signal. The noise robustness of the proposed technique is improved by using a cosine-modulated filter bank for separating the input into different frequency bands, and the ZC analysis is carried out on each band separately. The design of the filter bank requires the design of a prototype, which we accomplish using a Kaiser window approach. We show that the proposed method gives good results on synthesized and experimental data. The resolution is enhanced, and noise robustness is higher compared with the standard Fourier reconstruction.
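
    The observation driving the method, that successive zero-crossing intervals of a sinusoid are highly consistent and inversely related to its frequency, is easy to demonstrate. A single-band sketch (the paper's cosine-modulated filter bank and interval statistics are not reproduced):

        import numpy as np

        def zc_frequency(x, fs):
            # locate sign changes, refine each crossing by linear interpolation,
            # then convert the median crossing interval to a frequency estimate
            s = np.signbit(x)
            idx = np.flatnonzero(s[1:] != s[:-1])          # sample before each crossing
            t = (idx + x[idx] / (x[idx] - x[idx + 1])) / fs
            return 1.0 / (2.0 * np.median(np.diff(t)))     # crossings every 1/(2f)

        fs = 10_000.0
        t = np.arange(2048) / fs
        print(zc_frequency(np.sin(2 * np.pi * 440 * t), fs))   # ~440.0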

  5. Video segmentation for post-production

    NASA Astrophysics Data System (ADS)

    Wills, Ciaran

    2001-12-01

    Specialist post-production is an industry that has much to gain from the application of content-based video analysis techniques. However the types of material handled in specialist post-production, such as television commercials, pop music videos and special effects are quite different in nature from the typical broadcast material which many video analysis techniques are designed to work with; shots are short and highly dynamic, and the transitions are often novel or ambiguous. We address the problem of scene change detection and develop a new algorithm which tackles some of the common aspects of post-production material that cause difficulties for past algorithms, such as illumination changes and jump cuts. Operating in the compressed domain on Motion JPEG compressed video, our algorithm detects cuts and fades by analyzing each JPEG macroblock in the context of its temporal and spatial neighbors. Analyzing the DCT coefficients directly we can extract the mean color of a block and an approximate detail level. We can also perform an approximated cross-correlation between two blocks. The algorithm is part of a set of tools being developed to work with an automated asset management system designed specifically for use in post-production facilities.

  6. Modified algorithm for mineral identification in LWIR hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Yousefi, Bardia; Sojasi, Saeed; Liaigre, Kévin; Ibarra Castanedo, Clemente; Beaudoin, Georges; Huot, François; Maldague, Xavier P. V.; Chamberland, Martin

    2017-05-01

    The applications of hyperspectral infrared imagery in different fields of research are significant and growing. It is mainly used in remote sensing for target detection, vegetation detection, urban area categorization, astronomy and geological applications. The geological applications of this technology mainly consist of mineral identification in airborne or satellite imagery. We address a quantitative and qualitative assessment of mineral identification under laboratory conditions, striving to identify nine different mineral grains (Biotite, Diopside, Epidote, Goethite, Kyanite, Scheelite, Smithsonite, Tourmaline, Quartz). A hyperspectral camera in the Long Wave Infrared (LWIR, 7.7-11.8 μm) with a LW-macro lens providing a spatial resolution of 100 μm, an infragold plate, and a heating source are the instruments used in the experiment. The proposed algorithm clusters all the pixel spectra into different categories. The best representatives of each cluster are then chosen and compared with the ASTER spectral library of JPL/NASA through spectral comparison techniques, such as the Spectral Angle Mapper (SAM) and Normalized Cross Correlation (NCC). The results indicate significant computational efficiency (more than 20 times faster than previous algorithms) and show promising performance for mineral identification.
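
    Of the two comparison techniques named, SAM is the simpler: it scores a pixel spectrum against a library reference by the angle between them, which cancels overall illumination scale. A sketch:

        import numpy as np

        def spectral_angle(s, r):
            # angle between spectra s and r; smaller angle = better match
            cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
            return np.arccos(np.clip(cos, -1.0, 1.0))

        def best_match(pixel, library):
            # library: dict mapping mineral name -> reference spectrum
            return min(library, key=lambda name: spectral_angle(pixel, library[name]))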

  7. Production cross sections for Lee-Wick massive electromagnetic bosons and for spin-zero and spin-one W bosons at high energies.

    NASA Technical Reports Server (NTRS)

    Linsker, R.

    1972-01-01

    Production cross sections for three types of hypothetical particles are calculated in the presented paper. Several (Z, Z') cases were studied corresponding to elastic scattering off protons and neutrons (either free or embedded within a Fermi sea), coherent scattering off a nucleus, and inelastic scattering off a proton (in which case Z' denotes a nucleon resonance or hadronic system in the continuum). Detailed structure-function data are used to improve the accuracy of the inelastic scattering calculation. Results of calculations are given for beam energies between 50 and 10,000 GeV, and masses between 5 and 40 GeV for the massive Lee-Wick spin-1 boson. Cross sections were computed for resonant and semiweak processes. The production cross section of spin-zero weak intermediate bosons was found to be at least one order of magnitude smaller than for spin-1 weak bosons in nearly all regions of interest. The production cross section of spin-zero weak intermediate bosons for inelastic scattering off protons compares with that for elastic scattering in the regions of interest. In the case of massive spin-1 bosons and spin-1 weak intermediates, the main contribution to total production cross section off protons is elastic.

  8. Informationally Efficient Multi-User Communication

    DTIC Science & Technology

    2010-01-01

    DSM algorithms, the Optimal Spectrum Balancing (OSB) algorithm and the Iterative Spectrum Balancing (ISB) algorithm, were proposed to solve the...problem of maximization of a weighted rate-sum across all users [CYM06, YL06]. OSB has an exponential complexity in the number of users. ISB only has a...the duality gap min_{λ1,λ2} D(λ1, λ2) − max_{P1,P2} f(P1, P2) is not zero. Fig. 3.3 summarizes the three key steps of a dual method, the OSB algorithm

  9. Normalized gradient fields cross-correlation for automated detection of prostate in magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Fotin, Sergei V.; Yin, Yin; Periaswamy, Senthil; Kunz, Justin; Haldankar, Hrishikesh; Muradyan, Naira; Cornud, François; Turkbey, Baris; Choyke, Peter L.

    2012-02-01

    Fully automated prostate segmentation helps to address several problems in prostate cancer diagnosis and treatment: it can assist in objective evaluation of multiparametric MR imagery, provides a prostate contour for MR-ultrasound (or CT) image fusion for computer-assisted image-guided biopsy or therapy planning, may facilitate reporting, and enables direct prostate volume calculation. Among the challenges in automated analysis of MR images of the prostate are the variations of overall image intensities across scanners, the presence of a nonuniform multiplicative bias field within scans, and differences in acquisition setup. Furthermore, images acquired with an endorectal coil suffer from localized high-intensity artifacts at the posterior part of the prostate. In this work, a three-dimensional method for fast automated prostate detection based on normalized gradient fields cross-correlation, insensitive to intensity variations and coil-induced artifacts, is presented and evaluated. The components of the method, offline template learning and the localization algorithm, are described in detail. The method was validated on a dataset of 522 T2-weighted MR images acquired at the National Cancer Institute, USA, which was split into two halves for development and testing. In addition, a second dataset of 29 MR exams from the Centre d'Imagerie Médicale Tourville, France, was used to test the algorithm. The 95% confidence intervals for the mean Euclidean distance between automatically and manually identified prostate centroids were 4.06 +/- 0.33 mm and 3.10 +/- 0.43 mm for the first and second test datasets, respectively. Moreover, the algorithm provided the centroid within the true prostate volume in 100% of images from both datasets. The obtained results demonstrate the high utility of the detection method for fully automated prostate segmentation.
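
    Normalized gradient fields replace raw intensities with unit-length gradients, which is what buys the insensitivity to intensity variations and bias fields. A 2-D sketch of the representation and a simple correlation score (the paper's 3-D template-learning machinery is not reproduced; eps is an assumed softening constant):

        import numpy as np

        def normalized_gradient_field(img, eps=1e-3):
            # gradients scaled to (soft) unit length: insensitive to
            # intensity scale and slowly varying bias
            gy, gx = np.gradient(img.astype(float))
            mag = np.sqrt(gx**2 + gy**2 + eps**2)
            return gx / mag, gy / mag

        def ngf_score(fixed, template):
            # mean squared inner product of unit gradients; 1 = aligned edges
            fx, fy = normalized_gradient_field(fixed)
            tx, ty = normalized_gradient_field(template)
            return np.mean((fx * tx + fy * ty) ** 2)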

  10. The Cross-Correlation and Reshuffling Tests in Discerning Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Schultz, Ryan; Telesca, Luciano

    2018-05-01

    In recent years, cases of newly emergent induced seismicity clusters have increased seismic hazard and risk in locations with social, environmental, and economic consequence. Thus, the need for a quantitative and robust means of discerning induced seismicity has become a critical concern. This paper reviews a Matlab-based algorithm designed to quantify the statistical confidence between two time-series datasets. Similar to prior approaches, our method utilizes the cross-correlation to delineate the strength and lag of correlated signals. In addition, the use of surrogate reshuffling tests allows for dynamic testing against statistical confidence intervals of anticipated spurious correlations. We demonstrate the robust nature of our algorithm in a suite of synthetic tests that determine the limits of accurate signal detection in the presence of noise and sub-sampling. Overall, this routine has considerable merit in delineating the strength of correlated signals, one application of which is the discernment of induced seismicity from natural seismicity.
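
    The reshuffling idea can be sketched compactly: permuting one series destroys any genuine temporal association, so the distribution of peak correlations over many permutations provides the confidence level against which the observed peak is judged. An illustrative Python version (not the authors' Matlab code); equal-length series are assumed:

        import numpy as np

        def reshuffle_test(a, b, n_surr=1000, q=0.95, rng=None):
            if rng is None:
                rng = np.random.default_rng()
            def peak(x, y):
                c = np.correlate(x - x.mean(), y - y.mean(), mode="full")
                return np.max(np.abs(c)) / (len(x) * x.std() * y.std())
            obs = peak(a, b)
            surr = np.array([peak(rng.permutation(a), b) for _ in range(n_surr)])
            thresh = np.quantile(surr, q)              # spurious-correlation level
            return obs, thresh, obs > thresh           # True = significant coupling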

  11. Towards cross-lingual alerting for bursty epidemic events.

    PubMed

    Collier, Nigel

    2011-10-06

    Online news reports are increasingly becoming a source for event-based early warning systems that detect natural disasters. Harnessing the massive volume of information available from multilingual newswire presents as many challenges as opportunities, due to the patterns of reporting complex spatio-temporal events. In this article we study the problem of utilising correlated event reports across languages. We track the evolution of 16 disease outbreaks using 5 temporal aberration detection algorithms on text-mined events classified according to disease and outbreak country. Using ProMED reports as a silver standard, comparative analysis of news data for 13 languages over a 129 day trial period showed improved sensitivity, F1 and timeliness across most models using cross-lingual events. We report a detailed case study analysis for cholera in Angola in 2010 which highlights the challenges faced in correlating news events with the silver standard. The results show that automated health surveillance using multilingual text mining has the potential to turn low-value news into high-value alerts if informed choices are used to govern the selection of models and data sources. An implementation of the C2 alerting algorithm using multilingual news is available at the BioCaster portal http://born.nii.ac.jp/?page=globalroundup.

  12. Refining the detection of the zero crossing for the three-gluon vertex in symmetric and asymmetric momentum subtraction schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boucaud, Ph.; De Soto, F.; Rodriguez-Quintero, J.

    This article reports a detailed study of the three-gluon vertex in four-dimensional SU(3) Yang-Mills theory, employing lattice simulations with large physical volumes and high statistics. A meticulous scrutiny of the so-called symmetric and asymmetric kinematical configurations is performed, and it is shown that the associated form factor changes sign within a given range of momenta. The lattice results are compared to the model-independent predictions of Schwinger-Dyson equations, and very good agreement between the two is found.

  13. Refining the detection of the zero crossing for the three-gluon vertex in symmetric and asymmetric momentum subtraction schemes

    DOE PAGES

    Boucaud, Ph.; De Soto, F.; Rodriguez-Quintero, J.; ...

    2017-06-14

    This article reports a detailed study of the three-gluon vertex in four-dimensional SU(3) Yang-Mills theory, employing lattice simulations with large physical volumes and high statistics. A meticulous scrutiny of the so-called symmetric and asymmetric kinematical configurations is performed, and it is shown that the associated form factor changes sign within a given range of momenta. The lattice results are compared to the model-independent predictions of Schwinger-Dyson equations, and very good agreement between the two is found.

  14. Rapid earthquake detection through GPU-Based template matching

    NASA Astrophysics Data System (ADS)

    Mu, Dawei; Lee, En-Jui; Chen, Po

    2017-12-01

    The template-matching algorithm (TMA) has been widely adopted for improving the reliability of earthquake detection. The TMA is based on calculating the normalized cross-correlation coefficient (NCC) between a collection of selected template waveforms and the continuous waveform recordings of seismic instruments. In realistic applications, the computational cost of the TMA is much higher than that of traditional techniques. In this study, we provide an analysis of the TMA and show how the GPU architecture provides an almost ideal environment for accelerating the TMA and NCC-based pattern recognition algorithms in general. So far, our best-performing GPU code has achieved a speedup factor of more than 800 with respect to a common sequential CPU code. We demonstrate the performance of our GPU code using seismic waveform recordings from the ML 6.6 Meinong earthquake sequence in Taiwan.
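
    The kernel being accelerated is plain sliding-window normalized cross-correlation between a template and the continuous trace. A reference NumPy version for orientation (the paper's contribution, running this kernel on the GPU, is not shown here):

        import numpy as np

        def ncc(template, trace):
            # sliding NCC; values near 1 indicate a repeating (detected) event
            m = len(template)
            t = (template - template.mean()) / (template.std() * m)
            out = np.empty(len(trace) - m + 1)
            for i in range(len(out)):
                w = trace[i:i + m]
                sd = w.std()
                out[i] = 0.0 if sd == 0 else np.dot(t, (w - w.mean()) / sd)
            return out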

  15. Contrast Invariant Interest Point Detection by Zero-Norm LoG Filter.

    PubMed

    Zhenwei Miao; Xudong Jiang; Kim-Hui Yap

    2016-01-01

    The Laplacian of Gaussian (LoG) filter is widely used in interest point detection. However, low-contrast image structures, though stable and significant, are often submerged by high-contrast ones in the response image of the LoG filter, and hence are difficult to detect. To solve this problem, we derive a generalized LoG filter and propose a zero-norm LoG filter. The response of the zero-norm LoG filter is proportional to the weighted number of bright/dark pixels in a local region, which makes the filter invariant to image contrast. Based on the zero-norm LoG filter, we develop an interest point detector to extract local structures from images. Compared with contrast-dependent detectors, such as the popular scale invariant feature transform detector, the proposed detector is robust to illumination changes and abrupt variations in images. Experiments on benchmark databases demonstrate the superior performance of the proposed zero-norm LoG detector in terms of the repeatability and matching score of the detected points, as well as the image recognition rate, under different conditions.

  16. Evidence of Absence software

    USGS Publications Warehouse

    Dalthorp, Daniel; Huso, Manuela M. P.; Dail, David; Kenyon, Jessica

    2014-01-01

    Evidence of Absence software (EoA) is a user-friendly application used for estimating bird and bat fatalities at wind farms and designing search protocols. The software is particularly useful in addressing whether the number of fatalities has exceeded a given threshold and what search parameters are needed to give assurance that thresholds were not exceeded. The software is applicable even when zero carcasses have been found in searches. Depending on the effectiveness of the searches, such an absence of evidence of mortality may or may not be strong evidence that few fatalities occurred. Under a search protocol in which carcasses are detected with nearly 100 percent certainty, finding zero carcasses would be convincing evidence that overall mortality rate was near zero. By contrast, with a less effective search protocol with low probability of detecting a carcass, finding zero carcasses does not rule out the possibility that large numbers of animals were killed but not detected in the searches. EoA uses information about the search process and scavenging rates to estimate detection probabilities to determine a maximum credible number of fatalities, even when zero or few carcasses are observed.

  17. User-guided automated segmentation of time-series ultrasound images for measuring vasoreactivity of the brachial artery induced by flow mediation

    NASA Astrophysics Data System (ADS)

    Sehgal, Chandra M.; Kao, Yen H.; Cary, Ted W.; Arger, Peter H.; Mohler, Emile R.

    2005-04-01

    Endothelial dysfunction in response to vasoactive stimuli is closely associated with diseases such as atherosclerosis, hypertension and congestive heart failure. The current method of using ultrasound to image the brachial artery along the longitudinal axis is insensitive for measuring the small vasodilatation that occurs in response to flow mediation. The goal of this study is to overcome this limitation by using cross-sectional imaging of the brachial artery in conjunction with the User-Guided Automated Boundary Detection (UGABD) algorithm for extracting arterial boundaries. High-resolution ultrasound imaging was performed on rigid plastic tubing, on elastic rubber tubing phantoms with steady and pulsatile flow, and on the brachial artery of a healthy volunteer undergoing reactive hyperemia. The area of cross section of time-series images was analyzed by UGABD by propagating the boundary from one frame to the next. The UGABD results were compared by linear correlation with those obtained by manual tracing. UGABD measured the cross-sectional area of the phantom tubing to within 5% of the true area. The algorithm correctly detected pulsatile vasomotion in phantoms and in the brachial artery. A comparison of area measurements made using UGABD with those made by manual tracings yielded a correlation of 0.9 and 0.8 for phantoms and arteries, respectively. The peak vasodilatation due to reactive hyperemia was two orders of magnitude greater in pixel count than that measured by longitudinal imaging. Cross-sectional imaging is more sensitive than longitudinal imaging for measuring flow-mediated dilatation of brachial artery, and thus may be more suitable for evaluating endothelial dysfunction.

  18. Primary decomposition of zero-dimensional ideals over finite fields

    NASA Astrophysics Data System (ADS)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

    A new algorithm is presented for computing primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method does not need primality testing nor any generic projection, instead it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. Also, it is shown how Groebner basis structure can be used to get partial primary decomposition without any root finding.

  19. Solving differential equations for Feynman integrals by expansions near singular points

    NASA Astrophysics Data System (ADS)

    Lee, Roman N.; Smirnov, Alexander V.; Smirnov, Vladimir A.

    2018-03-01

    We describe a strategy to solve differential equations for Feynman integrals by power series expansions near singular points and to obtain high-precision results for the corresponding master integrals. We consider Feynman integrals with two scales, i.e. non-trivially depending on one variable. The corresponding algorithm is oriented toward situations where a canonical form of the differential equations is impossible. We provide a computer code constructed with the help of our algorithm for a simple example of four-loop generalized sunset integrals with three equal non-zero masses and two zero masses. Our code gives values of the master integrals at any given point on the real axis with a required accuracy and a given order of expansion in the regularization parameter ɛ.

  20. Intermediary Variables and Algorithm Parameters for an Electronic Algorithm for Intravenous Insulin Infusion

    PubMed Central

    Braithwaite, Susan S.; Godara, Hemant; Song, Julie; Cairns, Bruce A.; Jones, Samuel W.; Umpierrez, Guillermo E.

    2009-01-01

    Background: Algorithms for intravenous insulin infusion may assign the infusion rate (IR) by a two-step process. First, the previous insulin infusion rate (IRprevious) and the rate of change of blood glucose (BG) from the previous iteration of the algorithm are used to estimate the maintenance rate (MR) of insulin infusion. Second, the insulin IR for the next iteration (IRnext) is assigned to be commensurate with the MR and the distance of the current blood glucose (BGcurrent) from target. With use of a specific set of algorithm parameter values, a family of iso-MR curves is created, each giving IR as a function of MR and BG.

    Method: To test the feasibility of estimating MR from the IRprevious and the previous rate of change of BG, historical hyperglycemic data points were used to compute the "maintenance rate cross step next estimate" (MRcsne). Historical cases had been treated with intravenous insulin infusion using a tabular protocol that estimated MR according to column-change rules. The mean IR on historical stable intervals (MRtrue), an estimate of the biologic value of MR, was compared to MRcsne during the hyperglycemic iteration immediately preceding the stable interval. Hypothetically calculated MRcsne-dependent IRnext was compared to IRnext assigned historically. An expanded theory of an algorithm is developed mathematically. Practical recommendations for computerization are proposed.

    Results: The MRtrue determined on each of 30 stable intervals and the MRcsne during the immediately preceding hyperglycemic iteration differed, having medians with interquartile ranges 2.7 (1.2–3.7) and 3.2 (1.5–4.6) units/h, respectively. However, these estimates of MR were strongly correlated (R2 = 0.88). During hyperglycemia at 941 time points the IRnext assigned historically and the hypothetically calculated MRcsne-dependent IRnext differed, having medians with interquartile ranges 4.0 (3.0–6.0) and 4.6 (3.0–6.8) units/h, respectively, but these paired values again were correlated (R2 = 0.87). This article describes a programmable algorithm for intravenous insulin infusion. The fundamental equation of the algorithm gives the relationship among IR; the biologic parameter MR; and two variables expressing an instantaneous rate of change of BG, one of which must be zero at any given point in time and the other positive, negative, or zero, namely the rate of change of BG from below target (rate of ascent) and the rate of change of BG from above target (rate of descent). In addition to user-definable parameters, three special algorithm parameters discoverable in nature are described: the maximum rate of the spontaneous ascent of blood glucose during nonhypoglycemia, the glucose per daily dose of insulin exogenously mediated, and the MR at given patient time points. User-assignable parameters will facilitate adaptation to different patient populations.

    Conclusions: An algorithm is described that estimates MR prior to the attainment of euglycemia and computes MR-dependent values for IRnext. Design features address glycemic variability, promote safety with respect to hypoglycemia, and define a method for specifying glycemic targets that are allowed to differ according to patient condition. PMID:20144334

  1. Unsupervised algorithms for intrusion detection and identification in wireless ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2009-05-01

    In previous work by the author, parameters across network protocol layers were selected as features in supervised algorithms that detect and identify certain intrusion attacks on wireless ad hoc sensor networks (WSNs) carrying multisensor data. The algorithms improved the residual performance of the intrusion prevention measures provided by any dynamic key-management schemes and trust models implemented among network nodes. The approach of this paper does not train algorithms on the signature of known attack traffic, but, instead, the approach is based on unsupervised anomaly detection techniques that learn the signature of normal network traffic. Unsupervised learning does not require the data to be labeled or to be purely of one type, i.e., normal or attack traffic. The approach can be augmented to add any security attributes and quantified trust levels, established during data exchanges among nodes, to the set of cross-layer features from the WSN protocols. A two-stage framework is introduced for the security algorithms to overcome the problems of input size and resource constraints. The first stage is an unsupervised clustering algorithm which reduces the payload of network data packets to a tractable size. The second stage is a traditional anomaly detection algorithm based on a variation of support vector machines (SVMs), whose efficiency is improved by the availability of data in the packet payload. In the first stage, selected algorithms are adapted to WSN platforms to meet system requirements for simple parallel distributed computation, distributed storage and data robustness. A set of mobile software agents, acting like an ant colony in securing the WSN, are distributed at the nodes to implement the algorithms. The agents move among the layers involved in the network response to the intrusions at each active node and trustworthy neighborhood, collecting parametric values and executing assigned decision tasks. This minimizes the need to move large amounts of audit-log data through resource-limited nodes and locates routines closer to that data. Performance of the unsupervised algorithms is evaluated against the network intrusions of black hole, flooding, Sybil and other denial-of-service attacks in simulations of published scenarios. Results for scenarios with intentionally malfunctioning sensors show the robustness of the two-stage approach to intrusion anomalies.
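
    A toy version of the two-stage framework, with stage one compressing traffic features to cluster centroids and stage two learning the signature of normal traffic and flagging departures from it, might look as follows. Here scikit-learn stands in for the WSN-adapted implementations, and all parameters are assumed for illustration:

        import numpy as np
        from sklearn.cluster import MiniBatchKMeans
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(0)
        normal = rng.random((5000, 16))      # stand-in for cross-layer features
        # stage 1: reduce the packet payload to a tractable set of centroids
        stage1 = MiniBatchKMeans(n_clusters=64, random_state=0).fit(normal)
        # stage 2: one-class SVM learns the signature of normal traffic
        stage2 = OneClassSVM(nu=0.05, kernel="rbf").fit(stage1.cluster_centers_)
        suspect = rng.random((10, 16))       # unlabeled traffic to score
        print(stage2.predict(suspect))       # -1 flags anomalous (possible attack)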

  2. A Study of the Zero-Lift Drag-Rise Characteristics of Wing-Body Combinations Near the Speed of Sound

    NASA Technical Reports Server (NTRS)

    Whitcomb, Richard T

    1956-01-01

    Comparisons have been made of the shock phenomena and drag-rise increments for representative wing and central-body combinations with those for bodies of revolution having the same axial developments of cross-sectional areas normal to the airstream. On the basis of these comparisons, it is concluded that near the speed of sound the zero-lift drag rise of a low-aspect-ratio thin-wing and body combination is primarily dependent on the axial development of the cross-sectional areas normal to the airstream. It follows that the drag rise for any such configuration is approximately the same as that for any other with the same development of cross-sectional areas. Investigations have also been made of representative wing-body combinations with the body so indented that the axial developments of cross-sectional areas for the combinations were the same as that for the original body alone. Such indentations greatly reduced or eliminated the zero-lift drag-rise increments associated with the wings near the speed of sound.

  3. Effective channel estimation and efficient symbol detection for multi-input multi-output underwater acoustic communications

    NASA Astrophysics Data System (ADS)

    Ling, Jun

    Achieving reliable underwater acoustic communications (UAC) has long been recognized as a challenging problem owing to the scarce bandwidth available and the reverberant spread in both time and frequency domains. To pursue high data rates, we consider a multi-input multi-output (MIMO) UAC system, and our focus is placed on two main issues regarding a MIMO UAC system: (1) channel estimation, which involves the design of the training sequences and the development of a reliable channel estimation algorithm, and (2) symbol detection, which requires interference cancelation schemes due to simultaneous transmission from multiple transducers. To enhance channel estimation performance, we present a cyclic approach for designing training sequences with good auto- and cross-correlation properties, and a channel estimation algorithm called the iterative adaptive approach (IAA). Sparse channel estimates can be obtained by combining IAA with the Bayesian information criterion (BIC). Moreover, we present sparse learning via iterative minimization (SLIM) and demonstrate that SLIM gives similar performance to IAA but at a much lower computational cost. Furthermore, an extension of the SLIM algorithm is introduced to estimate the sparse and frequency modulated acoustic channels. The extended algorithm is referred to as generalization of SLIM (GoSLIM). Regarding symbol detection, a linear minimum mean-squared error based detection scheme, called RELAX-BLAST, which is a combination of vertical Bell Labs layered space-time (V-BLAST) algorithm and the cyclic principle of the RELAX algorithm, is presented and it is shown that RELAX-BLAST outperforms V-BLAST. We show that RELAX-BLAST can be implemented efficiently by making use of the conjugate gradient method and diagonalization properties of circulant matrices. This fast implementation approach requires only simple fast Fourier transform operations and facilitates parallel implementations. The effectiveness of the proposed MIMO schemes is verified by both computer simulations and experimental results obtained by analyzing the measurements acquired in multiple in-water experiments.

  4. Learning optimal embedded cascades.

    PubMed

    Saberian, Mohammad Javad; Vasconcelos, Nuno

    2012-10-01

    The problem of automatic and optimal design of embedded object detector cascades is considered. Two main challenges are identified: optimization of the cascade configuration and optimization of individual cascade stages, so as to achieve the best tradeoff between classification accuracy and speed, under a detection rate constraint. Two novel boosting algorithms are proposed to address these problems. The first, RCBoost, formulates boosting as a constrained optimization problem which is solved with a barrier penalty method. The constraint is the target detection rate, which is met at all iterations of the boosting process. This enables the design of embedded cascades of known configuration without extensive cross validation or heuristics. The second, ECBoost, searches over cascade configurations to achieve the optimal tradeoff between classification risk and speed. The two algorithms are combined into an overall boosting procedure, RCECBoost, which optimizes both the cascade configuration and its stages under a detection rate constraint, in a fully automated manner. Extensive experiments in face, car, pedestrian, and panda detection show that the resulting detectors achieve an accuracy versus speed tradeoff superior to those of previous methods.

  5. Design of infrasound-detection system via adaptive LMSTDE algorithm

    NASA Technical Reports Server (NTRS)

    Khalaf, C. S.; Stoughton, J. W.

    1984-01-01

    A proposed solution to an aviation safety problem is based on passive detection of turbulent weather phenomena through their infrasonic emission. This thesis describes a system design that is adequate for detection and bearing evaluation of infrasounds. An array of four sensors, with the appropriate hardware, is used for the detection part. Bearing evaluation is based on estimates of time delays between sensor outputs. The generalized cross correlation (GCC), as the conventional time-delay estimation (TDE) method, is first reviewed. An adaptive TDE approach, using the least mean square (LMS) algorithm, is then discussed. A comparison between the two techniques is made and the advantages of the adaptive approach are listed. The behavior of the GCC, as a Roth processor, is examined for the anticipated signals. It is shown that the Roth processor has the desired effect of sharpening the peak of the correlation function. It is also shown that the LMSTDE technique is an equivalent implementation of the Roth processor in the time domain. A LMSTDE lead-lag model, with a variable stability coefficient and a convergence criterion, is designed.
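
    As a rough illustration of the adaptive LMSTDE idea, the sketch below adapts an FIR filter between two sensor channels with the LMS rule and reads the time delay off the dominant filter tap. The tap count, step size, and synthetic 7-sample delay are assumptions for the example, not values from the thesis:

    ```python
    import numpy as np

    def lms_tde(x, y, n_taps=32, mu=0.01):
        """Estimate the delay of y relative to x by LMS-adapting an FIR
        filter from x to y; the index of the strongest tap is the delay."""
        w = np.zeros(n_taps)
        for n in range(n_taps - 1, len(x)):
            u = x[n - n_taps + 1:n + 1][::-1]  # x[n], x[n-1], ..., newest first
            e = y[n] - w @ u                    # prediction error
            w += 2 * mu * e * u                 # LMS weight update
        return int(np.argmax(np.abs(w)))

    # Synthetic test: y is x delayed by 7 samples plus sensor noise.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    y = np.roll(x, 7) + 0.1 * rng.standard_normal(5000)
    print(lms_tde(x, y))   # expected: 7
    ```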

  6. Adaptive sequential Bayesian classification using Page's test

    NASA Astrophysics Data System (ADS)

    Lynch, Robert S., Jr.; Willett, Peter K.

    2002-03-01

    In this paper, the previously introduced Mean-Field Bayesian Data Reduction Algorithm is extended for adaptive sequential hypothesis testing utilizing Page's test. In general, Page's test is well understood as a method of detecting a permanent change in distribution associated with a sequence of observations. However, the relationship between detecting a change in distribution utilizing Page's test with that of classification and feature fusion is not well understood. Thus, the contribution of this work is based on developing a method of classifying an unlabeled vector of fused features (i.e., detect a change to an active statistical state) as quickly as possible given an acceptable mean time between false alerts. In this case, the developed classification test can be thought of as equivalent to performing a sequential probability ratio test repeatedly until a class is decided, with the lower log-threshold of each test being set to zero and the upper log-threshold being determined by the expected distance between false alerts. It is of interest to estimate the delay (or, related stopping time) to a classification decision (the number of time samples it takes to classify the target), and the mean time between false alerts, as a function of feature selection and fusion by the Mean-Field Bayesian Data Reduction Algorithm. Results are demonstrated by plotting the delay to declaring the target class versus the mean time between false alerts, and are shown using both different numbers of simulated training data and different numbers of relevant features for each class.
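
    At its core, the repeated sequential test described above is Page's CUSUM recursion with the lower log-threshold clamped at zero. A minimal sketch under an assumed Gaussian mean-shift model; the threshold h and the unit mean shift are illustrative choices, not the paper's:

    ```python
    import numpy as np

    def pages_test(x, llr, h):
        """Page's test: accumulate log-likelihood ratios, clip at the zero
        lower log-threshold, and alarm once the upper threshold h is crossed."""
        s = 0.0
        for n, xn in enumerate(x):
            s = max(0.0, s + llr(xn))
            if s >= h:
                return n          # stopping time: change (target class) declared
        return None               # no change detected

    # Example: detect a mean shift from 0 to 1 in unit-variance Gaussian noise.
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])
    llr = lambda v: v - 0.5       # log N(v; 1, 1) / N(v; 0, 1) = v - 1/2
    print(pages_test(x, llr, h=10.0))   # expected shortly after sample 200
    ```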

  7. Improvement of Alternative Crop Phenology Detection Algorithms using MODIS NDVI Time Series Data in US Corn Belt Region

    NASA Astrophysics Data System (ADS)

    Lee, J.; Kang, S.; Seo, B.; Lee, K.

    2017-12-01

    Predicting crop phenology is important for understanding crop development and growth processes and for improving the accuracy of crop models. Remote sensing offers a feasible tool for monitoring spatio-temporal patterns of crop phenology at regional and continental scales. Various methods have been developed to determine the timing of crop phenological stages using spectral vegetation indices (e.g. NDVI and EVI) derived from satellite data. In our study, four alternative detection methods were compared for identifying crop phenological stages (i.e. the emergence and harvesting dates) using high-quality NDVI time series data derived from MODIS. We also investigated factors associated with crop development rate. Temperature and photoperiod are the two main factors that influence a crop's growth pattern as expressed in the VI data; here, only the effect of temperature on crop development rate was considered. The temperature response function of the Wang-Engel (WE) model was used, which simulates crop development using nonlinear response functions that range from zero to one. The comparison was carried out at the state level over 14 years (2003-2016) in Iowa and Illinois, USA, where the phenology dates for both corn and soybean were estimated using the four methods. Weekly crop progress reports produced by the USDA NASS were used to validate the temperature-driven phenology detection algorithms. All methods showed substantial uncertainty, but the threshold method showed relatively better agreement with the state-level data for soybean phenology.

  8. Generation of dark hollow beam via coherent combination based on adaptive optics.

    PubMed

    Zheng, Yi; Wang, Xiaohua; Shen, Feng; Li, Xinyang

    2010-12-20

    A novel method for generating a dark hollow beam (DHB) is proposed and studied both theoretically and experimentally. A coherent combination technique for laser arrays is implemented based on adaptive optics (AO). A beam arraying structure and an active segmented mirror are designed and described. Piston errors are extracted by a zero-order interference detection system with the help of a custom-made photodetector array. An algorithm called the extremum approach is adopted to calculate the feedback control signals. A dynamic piston error is introduced by LiNbO3 to test the capability of the AO servo. In closed loop, a stable and clear DHB is obtained. The experimental results confirm the feasibility of the concept.

  9. Automated Cross-Sectional Measurement Method of Intracranial Dural Venous Sinuses.

    PubMed

    Lublinsky, S; Friedman, A; Kesler, A; Zur, D; Anconina, R; Shelef, I

    2016-03-01

    MRV is an important blood vessel imaging and diagnostic tool for the evaluation of stenosis, occlusions, or aneurysms. However, an accurate image-processing tool for vessel comparison is unavailable. The purpose of this study was to develop and test an automated technique for vessel cross-sectional analysis. An algorithm for vessel cross-sectional analysis was developed that included 7 main steps: 1) image registration, 2) masking, 3) segmentation, 4) skeletonization, 5) cross-sectional planes, 6) clustering, and 7) cross-sectional analysis. Phantom models were used to validate the technique. The method was also tested on a control subject and a patient with idiopathic intracranial hypertension (4 large sinuses tested: right and left transverse sinuses, superior sagittal sinus, and straight sinus). The cross-sectional area and shape measurements were evaluated before and after lumbar puncture in patients with idiopathic intracranial hypertension. The vessel-analysis algorithm had a high degree of stability, with <3% of cross-sections manually corrected. All investigated principal cranial blood sinuses had a significant cross-sectional area increase after lumbar puncture (P ≤ .05). The average triangularity of the transverse sinuses was increased, and the mean circularity of the sinuses was decreased by 6% ± 12% after lumbar puncture. Comparison of phantom and real data showed that all computed errors were <1 voxel unit, which confirmed that the method provides a very accurate solution. In this article, we present a novel automated imaging method for cross-sectional vessel analysis. The method can provide efficient quantitative detection of abnormalities in the dural sinuses. © 2016 by American Journal of Neuroradiology.

  10. Detection of Pathological Voice Using Cepstrum Vectors: A Deep Learning Approach.

    PubMed

    Fang, Shih-Hau; Tsao, Yu; Hsiao, Min-Jing; Chen, Ji-Ying; Lai, Ying-Hui; Lin, Feng-Chuan; Wang, Chi-Te

    2018-03-19

    Computerized detection of voice disorders has attracted considerable academic and clinical interest in the hope of providing an effective screening method for voice diseases before endoscopic confirmation. This study proposes a deep-learning-based approach to detect pathological voice and examines its performance and utility compared with other automatic classification algorithms. This study retrospectively collected 60 normal voice samples and 402 pathological voice samples of 8 common clinical voice disorders in a voice clinic of a tertiary teaching hospital. We extracted Mel-frequency cepstral coefficients from 3-second samples of a sustained vowel. The performances of three machine learning algorithms, namely the deep neural network (DNN), support vector machine, and Gaussian mixture model, were evaluated based on fivefold cross-validation. Collected cases from the voice disorder database of MEEI (Massachusetts Eye and Ear Infirmary) were used to verify the performance of the classification mechanisms. The experimental results demonstrated that the DNN outperforms the Gaussian mixture model and the support vector machine. Its accuracy in detecting voice pathologies reached 94.26% and 90.52% in male and female subjects, based on three representative Mel-frequency cepstral coefficient features. When applied to the MEEI database for validation, the DNN also achieved a higher accuracy (99.32%) than the other two classification algorithms. By stacking several layers of neurons with optimized weights, the proposed DNN algorithm can fully utilize the acoustic features and efficiently differentiate between normal and pathological voice samples. Based on this pilot study, future research may proceed to explore more applications of DNNs from laboratory and clinical perspectives. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  11. Improving and Assessing Planet Sensitivity of the GPI Exoplanet Survey with a Forward Model Matched Filter

    NASA Astrophysics Data System (ADS)

    Ruffio, Jean-Baptiste; Macintosh, Bruce; Wang, Jason J.; Pueyo, Laurent; Nielsen, Eric L.; De Rosa, Robert J.; Czekala, Ian; Marley, Mark S.; Arriaga, Pauline; Bailey, Vanessa P.; Barman, Travis; Bulger, Joanna; Chilcote, Jeffrey; Cotten, Tara; Doyon, Rene; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Gerard, Benjamin L.; Goodsell, Stephen J.; Graham, James R.; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn; Larkin, James E.; Maire, Jérôme; Marchis, Franck; Marois, Christian; Metchev, Stanimir; Millar-Blanchaer, Maxwell A.; Morzinski, Katie M.; Oppenheimer, Rebecca; Palmer, David; Patience, Jennifer; Perrin, Marshall; Poyneer, Lisa; Rajan, Abhijith; Rameau, Julien; Rantakyrö, Fredrik T.; Savransky, Dmitry; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane; Wolff, Schuyler

    2017-06-01

    We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loève image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectral templates and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanet with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI's typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale, accounting for both planet completeness and false-positive rate. We show that the new forward-model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.

  12. Single photon laser altimeter data processing, analysis and experimental validation

    NASA Astrophysics Data System (ADS)

    Vacek, Michael; Peca, Marek; Michalek, Vojtech; Prochazka, Ivan

    2015-10-01

    Spaceborne laser altimeters are common instruments on board rendezvous spacecraft. This manuscript deals with altimeters using a single photon approach, which belongs to the family of time-of-flight range measurements. Moreover, the single photon receiver part of the altimeter may be utilized as an Earth-to-spacecraft link enabling one-way ranging, time transfer and data transfer. Single photon altimeters evaluate the actual altitude through repetitive detection of single photons of the reflected laser pulses. We propose a single photon altimeter signal processing and data mining algorithm based on a Poisson statistic filter (histogram method) and a modified Kalman filter, providing all common altimetry products (altitude, slope, background photon flux and albedo). The Kalman filter is extended with background noise filtering, adaptation to a varying slope, and a non-causal extension for abrupt slope changes. Moreover, the algorithm partially removes the major drawback of a single photon altitude reading, namely that photon detection statistics must be gathered. The developed algorithm deduces the actual altitude on the basis of a single photon detection, thus being optimal in the sense that each detected signal photon carrying altitude information is tracked and no altitude information is lost. The algorithm was tested on simulated datasets and partially cross-checked with experimental data collected using the developed single photon altimeter breadboard based on a microchip laser with a pulse energy on the order of a microjoule and a repetition rate of several kilohertz. We demonstrated that such an altimeter configuration may be utilized for landing on or hovering above a small body (asteroid, comet).

  13. Research on Ratio of Dosage of Drugs in Traditional Chinese Prescriptions by Data Mining.

    PubMed

    Yu, Xing-Wen; Gong, Qing-Yue; Hu, Kong-Fa; Mao, Wen-Jing; Zhang, Wei-Ming

    2017-01-01

    Maximizing the effectiveness of prescriptions and minimizing the adverse effects of drugs is a key component of patient health care. In the practice of traditional Chinese medicine (TCM), it is important to provide clinicians a reference for the dosing of prescribed drugs. The traditional Cheng-Church biclustering algorithm (CC) is optimized, and the TCM prescription dose data are analyzed using the optimized algorithm. Based on an analysis of 212 prescriptions related to TCM treatment of kidney diseases, the study generated 87 prescription dose sub-matrices, each representing the reference value of the doses of drugs in different recipes. The optimized CC algorithm can effectively eliminate the interference of zeros in the original dose matrix of TCM prescriptions and avoid zeros appearing in the output sub-matrices. This makes it possible to effectively analyze the reference value of drugs in different prescriptions related to kidney diseases, so as to provide a valuable reference for clinicians to use drugs rationally.

  14. JOURNAL CLUB: Plagiarism in Manuscripts Submitted to the AJR: Development of an Optimal Screening Algorithm and Management Pathways.

    PubMed

    Taylor, Donna B

    2017-04-01

    The objective of this study was to investigate the incidence of plagiarism in a sample of manuscripts submitted to the AJR using CrossCheck, develop an algorithm to identify significant plagiarism, and formulate management pathways. A sample of 110 of 1610 (6.8%) manuscripts submitted to the AJR in 2014 in the categories of Original Research or Review were analyzed using CrossCheck and manual assessment. The overall similarity index (OSI), the highest similarity score from a single source, whether duplication was from a single or multiple origins, the journal section, and the presence or absence of referencing of the source were recorded. The criteria outlined by the International Committee of Medical Journal Editors were the reference standard for identifying manuscripts containing plagiarism. Statistical analysis was used to develop a screening algorithm to maximize sensitivity and specificity for the detection of plagiarism. Criteria for defining the severity of plagiarism and management pathways based on the severity of the plagiarism were determined. Twelve manuscripts (10.9%) contained plagiarism. Nine had an OSI excluding quotations and references of less than 20%. In seven, the highest similarity score from a single source was less than 10%. In nine, the highest similarity score from a single source came from the work of the same author or authors. Common sections for duplication were the Materials and Methods, Discussion, and abstract. Referencing of the original source was lacking in 11. Plagiarism was undetected at submission in five of these 12 articles; two had been accepted for publication. The most effective screening algorithm was to average the OSI including quotations and references and the highest similarity score from a single source and to submit manuscripts with an average value of more than 12% for further review. The current methods for detecting plagiarism are suboptimal. A new screening algorithm is proposed.
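
    The screening rule arrived at above is simple arithmetic, sketched below with the quantities named in the abstract (the function and variable names are mine):

    ```python
    def needs_review(osi_incl_quotes, top_single_source, threshold=12.0):
        """Flag a manuscript for further review when the average of the OSI
        (including quotations and references) and the highest single-source
        similarity score exceeds the 12% threshold from the study."""
        return (osi_incl_quotes + top_single_source) / 2.0 > threshold

    # A manuscript with OSI 18% and a 9% top single source averages 13.5%.
    print(needs_review(18.0, 9.0))   # True -> send for further review
    ```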

  15. Spin wave interference in YIG cross junction

    DOE PAGES

    Balinskiy, M.; Gutierrez, D.; Chiang, H.; ...

    2017-01-17

    This work is aimed at studying the interference between backward volume magnetostatic spin waves and magnetostatic surface spin waves in a magnetic cross junction. These two types of magnetostatic waves possess different dispersion with zero frequency overlap in infinite magnetic films. However, the interference may be observed in finite structures due to the effect of magnetic shape anisotropy. We report experimental data on spin wave interference in a micrometer-size Y3Fe2(FeO4)3 cross junction. There are four micro antennas fabricated at the edges of the cross arms. Two of these antennas, located on the orthogonal arms, are used for spin wave generation, and the other two antennas are used for inductive voltage detection. The phase difference between the input signals is controlled by the phase shifter. Prominent spin wave interference is observed at the selected combination of operational frequency and bias magnetic field. The maximum On/Off ratio exceeds 30 dB at room temperature. The obtained results are important for a variety of magnetic devices based on spin wave interference.

  16. Spin structure of the 'Forward' nucleon charge-exchange reaction n + p {yields} p + n and the deuteron charge-exchange breakup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyuboshitz, V. L., E-mail: Valery.Lyuboshitz@jinr.ru; Lyuboshitz, V. V.

    2011-02-15

    The structure of the nucleon charge-exchange process n + p → p + n is investigated based on the isotopic invariance of nucleon-nucleon scattering. Using the operator of permutation of the spin projections of the neutron and proton, the connection between the spin matrices describing the amplitude of the nucleon charge-exchange process at zero angle and the amplitude of the elastic scattering of the neutron on the proton in the 'backward' direction has been considered. Due to the optical theorem, the spin-independent part of the differential cross section of the process n + p → p + n at zero angle for unpolarized particles is expressed through the difference of total cross sections of unpolarized proton-proton and neutron-proton scattering. Meantime, the spin-dependent part of this cross section is proportional to the differential cross section of the deuteron charge-exchange breakup d + p → (pp) + n at zero angle at the deuteron momentum k_d = 2k_n (k_n is the initial neutron momentum). Analysis shows that, assuming the real part of the spin-independent term of the 'forward' amplitude of the process n + p → p + n to be smaller than or of the same order as the imaginary part, in the wide range of neutron laboratory momenta k_n > 700 MeV/c the main contribution to the differential cross section of the process n + p → p + n at zero angle is provided by the spin-dependent term.

  17. Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visual light communications

    NASA Astrophysics Data System (ADS)

    Qian, Xuewen; Deng, Honggui; He, Hailang

    2017-10-01

    Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and heavily degrade communication performance. Traditional compressive-sensing-based channel estimation algorithms cannot be employed in this kind of situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least-squares (LS) algorithm in both bit error ratio (BER) and frequency-response estimation.

  18. Image restoration by minimizing zero norm of wavelet frame coefficients

    NASA Astrophysics Data System (ADS)

    Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue

    2016-11-01

    In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
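
    For orientation, plain proximal iterative hard thresholding (the non-extrapolated core of the EPIHT scheme) alternates a gradient step on the data term with the hard-thresholding proximal map of the ℓ0 penalty. A small synthetic sketch; the step size, λ, and problem sizes are chosen for the example, not taken from the paper:

    ```python
    import numpy as np

    def iht(A, b, lam, step, n_iter=300):
        """Proximal iterative hard thresholding for
        min 0.5 * ||A x - b||^2 + lam * ||x||_0 (basic, no extrapolation)."""
        x = np.zeros(A.shape[1])
        thr = np.sqrt(2.0 * lam * step)        # hard-threshold level of the l0 prox
        for _ in range(n_iter):
            x = x - step * A.T @ (A @ x - b)   # gradient step on the data term
            x[np.abs(x) < thr] = 0.0           # hard thresholding
        return x

    # Recover a sparse vector from noisy random measurements.
    rng = np.random.default_rng(2)
    A = rng.standard_normal((80, 200)) / np.sqrt(80)
    x_true = np.zeros(200)
    x_true[[5, 50, 120]] = [2.0, -3.0, 1.5]
    b = A @ x_true + 0.01 * rng.standard_normal(80)
    print(np.flatnonzero(iht(A, b, lam=0.05, step=0.1)))   # typically [5 50 120]
    ```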

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbara, E. de; Marti, G. V.; Capurro, O. A.

    The detection efficiency of a time-of-flight system based on two micro-channel plate (MCP) time-zero detectors plus a conventional silicon surface barrier detector was obtained from heavy ion elastic recoil measurements (this ToF spectrometer is mainly devoted to measurements of the total fusion cross section of weakly bound projectiles on different mass-target systems). In this work we have used beams of 7Li, 16O, 32S and 35Cl to study the mass region of interest for its application to measurements of fusion cross sections in the 6,7Li + 27Al systems at energies around and above the Coulomb barrier (0.8 V_B <= E <= 2.0 V_B). As the efficiency of a ToF spectrometer is strongly dependent on the energy and mass of the detected particles, we have covered a wide range of scattered particle energies with a high degree of accuracy at the lowest energies. The different experimental efficiency curves obtained in that way were compared with theoretical electronic stopping power curves on carbon foils and were applied.

  20. Distribution-free Inference of Zero-inflated Binomial Data for Longitudinal Studies.

    PubMed

    He, H; Wang, W J; Hu, J; Gallop, R; Crits-Christoph, P; Xia, Y L

    2015-10-01

    Count responses with structural zeros are very common in medical and psychosocial research, especially in alcohol and HIV research, and the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are widely used for modeling such outcomes. However, as alcohol drinking outcomes such as days of drinking are counts within a given period, their distributions are bounded above by an upper limit (the total days in the period) and thus inherently follow a binomial or zero-inflated binomial (ZIB) distribution, rather than a Poisson or ZIP distribution, in the presence of structural zeros. In this paper, we develop a new semiparametric approach for modeling ZIB-like count responses for cross-sectional as well as longitudinal data. We illustrate this approach with both simulated and real study data.
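
    A zero-inflated binomial outcome is easy to state generatively, as the sketch below illustrates; the mixing weight, success probability, and 28-day period are invented for the example:

    ```python
    import numpy as np

    def sample_zib(n_obs, m, p, rho, rng=None):
        """Zero-inflated binomial draws: with probability rho the subject is a
        structural zero (e.g. an abstainer), otherwise the count is Binomial(m, p),
        where m is the number of days in the observation period."""
        rng = rng or np.random.default_rng()
        counts = rng.binomial(m, p, size=n_obs)
        counts[rng.random(n_obs) < rho] = 0
        return counts

    # 28-day drinking outcomes: 40% structural zeros, drinkers drink w.p. 0.3/day.
    y = sample_zib(1000, m=28, p=0.3, rho=0.4, rng=np.random.default_rng(3))
    print((y == 0).mean())   # ~0.4, far above the plain binomial's (1 - 0.3)**28
    ```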

  1. Effect of the losses in the vocal tract on determination of the area function.

    PubMed

    Gülmezoğlu, M Bilginer; Barkana, Atalay

    2003-01-01

    In this work, the cross-sectional areas of the vocal tract are determined for the lossy and lossless cases by using the pole-zero models obtained from the electrical equivalent circuit model of the vocal tract and the system identification method. The cross-sectional areas are used to compare the lossy and lossless cases. In the lossy case, the internal losses due to wall vibration, heat conduction, air friction and viscosity are considered; that is, the complex poles and zeros obtained from the models are used directly, whereas in the lossless case only the imaginary parts of these poles and zeros are used. The vocal tract shapes obtained for the lossy case are close to the actual ones.

  2. Multifractal detrending moving-average cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2011-07-01

    There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of their multifractal nature. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performance, outperforming the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performance, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q<0 and underperforms when q>0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0.5, as expected, and the MFXDFA algorithm has the second best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and the MFXDFA algorithms fail to extract a rational multifractal nature.

  3. Optical rangefinding applications using communications modulation technique

    NASA Astrophysics Data System (ADS)

    Caplan, William D.; Morcom, Christopher John

    2010-10-01

    A novel range detection technique combines optical pulse modulation patterns with signal cross-correlation to produce an accurate range estimate from low power signals. The cross-correlation peak is analyzed by a post-processing algorithm such that the phase delay is proportional to the range to target. This technique produces a stable range estimate from noisy signals. The advantage is higher accuracy obtained with relatively low optical power transmitted. The technique is useful for low cost, low power and low mass sensors suitable for tactical use. The signal coding technique allows applications including IFF and battlefield identification systems.

  4. Automatic Classification of Sub-Techniques in Classical Cross-Country Skiing Using a Machine Learning Algorithm on Micro-Sensor Data

    PubMed Central

    Seeberg, Trine M.; Tjønnås, Johannes; Haugnes, Pål; Sandbakk, Øyvind

    2017-01-01

    The automatic classification of sub-techniques in classical cross-country skiing provides unique possibilities for analyzing the biomechanical aspects of outdoor skiing. This is currently possible due to the miniaturization and flexibility of wearable inertial measurement units (IMUs) that allow researchers to bring the laboratory to the field. In this study, we aimed to optimize the accuracy of the automatic classification of classical cross-country skiing sub-techniques by using two IMUs attached to the skier’s arm and chest together with a machine learning algorithm. The novelty of our approach is the reliable detection of individual cycles using a gyroscope on the skier’s arm, while a neural network machine learning algorithm robustly classifies each cycle to a sub-technique using sensor data from an accelerometer on the chest. In this study, 24 datasets from 10 different participants were separated into training, validation and test data. Overall, we achieved a classification accuracy of 93.9% on the test data. Furthermore, we illustrate how an accurate classification of sub-techniques can be combined with data from standard sports equipment including position, altitude, speed and heart rate measuring systems. Combining this information has the potential to provide novel insight into physiological and biomechanical aspects valuable to coaches, athletes and researchers. PMID:29283421
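
    The abstract does not spell out the cycle-detection rule, but segmenting an arm gyroscope trace at its zero crossings with a minimum-period guard is one plausible reading; the sketch below is illustrative only, with a made-up sampling rate and threshold:

    ```python
    import numpy as np

    def cycle_boundaries(gyro, fs, min_period=0.4):
        """Segment a 1-D arm gyroscope signal into movement cycles at its
        negative-to-positive zero crossings, enforcing a minimum cycle length
        to reject noise-induced crossings."""
        z = np.flatnonzero((gyro[:-1] < 0) & (gyro[1:] >= 0)) + 1
        gap = int(min_period * fs)
        kept = [z[0]] if len(z) else []
        for i in z[1:]:
            if i - kept[-1] >= gap:     # drop spurious crossings near a kept one
                kept.append(i)
        return np.array(kept)

    # Synthetic arm swing at ~1 Hz sampled at 100 Hz with noise.
    fs = 100
    t = np.arange(0, 10, 1 / fs)
    gyro = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(4).standard_normal(t.size)
    print(len(cycle_boundaries(gyro, fs)))   # roughly 10 cycles
    ```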

  5. Scoring and staging systems using cox linear regression modeling and recursive partitioning.

    PubMed

    Lee, J W; Um, S H; Lee, J B; Mun, J; Cho, H

    2006-01-01

    Scoring and staging systems are used to determine the order and class of data according to predictors. Systems used for medical data, such as the Child-Turcotte-Pugh scoring and staging systems for ordering and classifying patients with liver disease, are often derived strictly from physicians' experience and intuition. We construct objective and data-based scoring/staging systems using statistical methods. We consider Cox linear regression modeling and recursive partitioning techniques for censored survival data. In particular, to obtain a target number of stages we propose cross-validation and amalgamation algorithms. We also propose an algorithm for constructing scoring and staging systems by integrating local Cox linear regression models into recursive partitioning, so that we can retain the merits of both methods such as superior predictive accuracy, ease of use, and detection of interactions between predictors. The staging system construction algorithms are compared by cross-validation evaluation of real data. The data-based cross-validation comparison shows that Cox linear regression modeling is somewhat better than recursive partitioning when there are only continuous predictors, while recursive partitioning is better when there are significant categorical predictors. The proposed local Cox linear recursive partitioning has better predictive accuracy than Cox linear modeling and simple recursive partitioning. This study indicates that integrating local linear modeling into recursive partitioning can significantly improve prediction accuracy in constructing scoring and staging systems.

  6. SU-E-T-497: Semi-Automated in Vivo Radiochromic Film Dosimetry Using a Novel Image Processing Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyhan, M; Yue, N

    Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5 x 1.3 cm^2). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings for orientation. Calibration is applied to the mean pixel values from the ROIs and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. Validation of the algorithm was determined by comparing in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n = 420 scanned films. Bland-Altman analysis, a paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2-886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of -0.28 cGy and a 95% confidence interval of (-6.1 cGy, 5.5 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistical differences between the two techniques, p = 0.98. Linear regression with a forced zero intercept demonstrated that Automatic = 0.997 x Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand-drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in Matlab on an Intel Core 2 Duo processor. Conclusion: An automated image processing algorithm has been developed and validated, which will help minimize user interaction and processing time of radiochromic film used for in vivo dosimetry.
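
    The thresholding-plus-erosion ROI step can be sketched with scipy.ndimage; the threshold value, erosion count, and synthetic image below are assumptions for illustration, not the authors' parameters:

    ```python
    import numpy as np
    from scipy import ndimage

    def film_rois(img, thresh, n_erosions=2):
        """Locate film pieces in a scanned image: threshold the dark (exposed)
        film, erode to strip edges and orientation markings, then label the
        remaining connected regions and return their mean pixel values."""
        mask = ndimage.binary_erosion(img < thresh, iterations=n_erosions)
        labels, n = ndimage.label(mask)
        means = ndimage.mean(img, labels, index=np.arange(1, n + 1))
        return labels, means

    # Synthetic scan: bright background with one dark film strip.
    img = np.full((50, 50), 200.0)
    img[10:20, 5:30] = 80.0
    labels, means = film_rois(img, thresh=120)
    print(means)   # mean pixel value per ROI; calibration would map this to dose
    ```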

  7. Estimation of anomaly location and size using electrical impedance tomography.

    PubMed

    Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun; Woo, Eung Je; Cho, Young Gu

    2003-01-01

    We developed a new algorithm that estimates locations and sizes of anomalies in electrically conducting medium based on electrical impedance tomography (EIT) technique. When only the boundary current and voltage measurements are available, it is not practically feasible to reconstruct accurate high-resolution cross-sectional conductivity or resistivity images of a subject. In this paper, we focus our attention on the estimation of locations and sizes of anomalies with different conductivity values compared with the background tissues. We showed the performance of the algorithm from experimental results using a 32-channel EIT system and saline phantom. With about 1.73% measurement error in boundary current-voltage data, we found that the minimal size (area) of the detectable anomaly is about 0.72% of the size (area) of the phantom. Potential applications include the monitoring of impedance related physiological events and bubble detection in two-phase flow. Since this new algorithm requires neither any forward solver nor time-consuming minimization process, it is fast enough for various real-time applications in medicine and nondestructive testing.

  8. Detection of Nitrogen Content in Rubber Leaves Using Near-Infrared (NIR) Spectroscopy with Correlation-Based Successive Projections Algorithm (SPA).

    PubMed

    Tang, Rongnian; Chen, Xupeng; Li, Chuang

    2018-05-01

    Near-infrared spectroscopy is an efficient, low-cost technology that has potential as an accurate method for detecting the nitrogen content of natural rubber leaves. The successive projections algorithm (SPA) is a widely used variable selection method for multivariate calibration, which uses projection operations to select a variable subset with minimum multi-collinearity. However, due to the fluctuation of correlation between variables, high collinearity may still exist among non-adjacent variables of the subset obtained by basic SPA. Based on an analysis of the correlation matrix of the spectral data, this paper proposes a correlation-based SPA (CB-SPA) that applies the successive projections algorithm within regions of consistent correlation. The results show that CB-SPA can select variable subsets with more valuable variables and less multi-collinearity. Meanwhile, models established on the CB-SPA subset outperform those on basic SPA subsets in predicting nitrogen content in terms of both cross-validation and external prediction. Moreover, CB-SPA is assured to be more efficient, as the time cost of its selection procedure is one-twelfth that of basic SPA.

  9. Traffic Noise Ground Attenuation Algorithm Evaluation

    NASA Astrophysics Data System (ADS)

    Herman, Lloyd Allen

    The Federal Highway Administration traffic noise prediction program, STAMINA 2.0, was evaluated for its accuracy. In addition, the ground attenuation algorithm used in the Ontario ORNAMENT method was evaluated to determine its potential to improve these predictions. Field measurements of sound levels were made at 41 sites on I-440 in Nashville, Tennessee in order to both study noise barrier effectiveness and to evaluate STAMINA 2.0 and the performance of the ORNAMENT ground attenuation algorithm. The measurement sites, which contain large variations in terrain, included several cross sections. Further, all sites contain some type of barrier, natural or constructed, which could more fully expose the strength and weaknesses of the ground attenuation algorithms. The noise barrier evaluation was accomplished in accordance with American National Standard Methods for Determination of Insertion Loss of Outdoor Noise Barriers which resulted in an evaluation of this standard. The entire 7.2 mile length of I-440 was modeled using STAMINA 2.0. A multiple run procedure was developed to emulate the results that would be obtained if the ORNAMENT algorithm was incorporated into STAMINA 2.0. Finally, the predicted noise levels based on STAMINA 2.0 and STAMINA with the ORNAMENT ground attenuation algorithm were compared with each other and with the field measurements. It was found that STAMINA 2.0 overpredicted noise levels by an average of over 2 dB for the receivers on I-440, whereas, the STAMINA with ORNAMENT ground attenuation algorithm overpredicted noise levels by an average of less than 0.5 dB. The mean errors for the two predictions were found to be statistically different from each other, and the mean error for the prediction with the ORNAMENT ground attenuation algorithm was not found to be statistically different from zero. The STAMINA 2.0 program predicts little, if any, ground attenuation for receivers at typical first-row distances from highways where noise barriers are used. The ORNAMENT ground attenuation algorithm, which recognizes and better compensates for the presence of obstacles in the propagation path of a sound wave, predicted significant amounts of ground attenuation for most sites.

  10. A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval.

    PubMed

    Cai, Jia; Tang, Yi

    2018-02-01

    Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. Its kernel generalization, namely kernel CCA, is proposed to describe nonlinear relationships between two variables. Although kernel CCA can achieve dimensionality reduction for the high-dimensional data feature selection problem, it also suffers from the so-called over-fitting phenomenon. In this paper, we consider a new kernel CCA algorithm via the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed, (2) theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number, (3) a lower bound which addresses the minimum number of iterations is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results imply the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
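
    The randomized Kaczmarz iteration at the heart of the proposed solver projects the current iterate onto one randomly sampled equation per step. A plain linear-system sketch of the Strohmer-Vershynin variant (shown outside the kernel CCA setting for simplicity):

    ```python
    import numpy as np

    def randomized_kaczmarz(A, b, n_iter=2000, rng=None):
        """Solve the consistent system A x = b by projecting onto one row per
        step, rows sampled with probability proportional to squared norm."""
        rng = rng or np.random.default_rng()
        norms2 = np.einsum('ij,ij->i', A, A)
        probs = norms2 / norms2.sum()
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            i = rng.choice(len(b), p=probs)
            x += (b[i] - A[i] @ x) / norms2[i] * A[i]   # project onto row i
        return x

    rng = np.random.default_rng(5)
    A = rng.standard_normal((300, 50))
    x_true = rng.standard_normal(50)
    x_hat = randomized_kaczmarz(A, A @ x_true, rng=rng)
    print(np.linalg.norm(x_hat - x_true))   # small after enough iterations
    ```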

  11. An adaptive clustering algorithm for image matching based on corner feature

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-04-01

    Traditional image matching algorithms struggle to balance real-time performance and accuracy. To address this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method is based on the similarity of the matching vector pairs, and adaptive clustering is performed on the matching point pairs. Harris corner detection is carried out first, the feature points of the reference image and the perceived image are extracted, and the feature points of the two images are initially matched using the Normalized Cross Correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to reduce ineffective operations and improve the matching speed and robustness. Finally, the Random Sample Consensus (RANSAC) algorithm is used to match the matching points after clustering. The experimental results show that the proposed algorithm can effectively eliminate most wrong matching points while the correct matching points are retained, improving the accuracy of RANSAC matching and reducing the computational load of the whole matching process.
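
    The initial matching step compares mean-centered patches around detected corners with normalized cross correlation. A minimal sketch of the NCC score itself; the patch size and any acceptance threshold (e.g. keeping pairs with NCC > 0.9) are illustrative:

    ```python
    import numpy as np

    def ncc(patch_a, patch_b):
        """Normalized cross correlation of two equal-size patches: subtract
        each mean, correlate, and normalize, giving a score in [-1, 1]."""
        a = patch_a - patch_a.mean()
        b = patch_b - patch_b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    rng = np.random.default_rng(6)
    p = rng.standard_normal((11, 11))
    print(ncc(p, 2.0 * p + 1.0))   # 1.0: NCC is invariant to gain and offset
    ```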

  12. Automated Algorithm to Detect Changes in Geostationary Satellites Configuration and Cross-Tagging

    DTIC Science & Technology

    2015-10-18

    Color Photometry Catalog (GCPC) and Geo Observations with Latitudinal Diversity Simultaneously (GOLDS) data sets are used to simulate configuration... stabilized geostationary satellites in the Geostationary Observations with Latitudinal Diversity Simultaneously (GOLDS) and GEO satellite Color... 107.3° W. The GOLDS campaign was conducted in two phases, both led by AFRL’s Space Vehicles Directorate with the participation of a consortium

  13. Laser Covariance Vibrometry for Unsymmetrical Mode Detection

    DTIC Science & Technology

    2006-09-01

    surface roughness. Results show that the remote sensing spectra adequately match the structural vibration, including non-imaging spatially... the speckle. profile (cross-section), is an air turbulence effect ignored in this work that will affect both the sensed vibration phase change and... like spike impulse. Chapter three describes optical processing issues. This chapter delineates the image propagation algorithms used for the work

  14. Moving human full body and body parts detection, tracking, and applications on human activity estimation, walking pattern and face recognition

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2016-05-01

    We have developed a new way for detection and tracking of human full-body and body-parts with color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme has been developed for dealing with body size changes, illumination condition changes, and cross camera parameter changes. Tests with the PETS 2009 and 2014 datasets show that we can obtain high probability of detection and low probability of false alarm for full-body. Test results indicate that our human full-body detection method can considerably outperform the current state-of-the-art methods in both detection performance and computational complexity. Furthermore, in this paper, we have developed several methods using color features for detection and tracking of human body-parts (arms, legs, torso, and head, etc.). For example, we have developed a human skin color sub-patch segmentation algorithm by first conducting a RGB to YIQ transformation and then applying a Subtractive I/Q image Fusion with morphological operations. With this method, we can reliably detect and track human skin color related body-parts such as face, neck, arms, and legs. Reliable body-parts (e.g. head) detection allows us to continuously track the individual person even in the case that multiple closely spaced persons are merged. Accordingly, we have developed a new algorithm to split a merged detection blob back to individual detections based on the detected head positions. Detected body-parts also allow us to extract important local constellation features of the body-parts positions and angles related to the full-body. These features are useful for human walking gait pattern recognition and human pose (e.g. standing or falling down) estimation for potential abnormal behavior and accidental event detection, as evidenced with our experimental tests. Furthermore, based on the reliable head (face) tacking, we have applied a super-resolution algorithm to enhance the face resolution for improved human face recognition performance.
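
    The skin-color step (RGB to YIQ conversion followed by subtractive I/Q fusion) can be sketched as below. The NTSC transform matrix is standard, but the threshold and the exact fusion rule are assumptions on my part; the paper additionally applies morphological operations not shown here:

    ```python
    import numpy as np

    # Standard NTSC RGB -> YIQ transform (rows give Y, I, Q).
    RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                        [0.596, -0.274, -0.322],
                        [0.211, -0.523,  0.312]])

    def skin_mask(rgb, thresh=0.05):
        """Rough skin-color mask: convert to YIQ and keep pixels whose
        I (red-cyan) channel clearly exceeds the Q channel (subtractive
        I/Q fusion; the threshold is illustrative)."""
        yiq = rgb.astype(float) / 255.0 @ RGB2YIQ.T
        return (yiq[..., 1] - yiq[..., 2]) > thresh

    pixel = np.array([[[200, 120, 90]]], dtype=np.uint8)   # a skin-like tone
    print(skin_mask(pixel))   # [[ True]]
    ```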

  15. Continuous detection and decoding of dexterous finger flexions with implantable myoelectric sensors.

    PubMed

    Baker, Justin J; Scheme, Erik; Englehart, Kevin; Hutchinson, Douglas T; Greger, Bradley

    2010-08-01

    A rhesus monkey was trained to perform individuated and combined finger flexions of the thumb, index, and middle finger. Nine implantable myoelectric sensors (IMES) were then surgically implanted into the finger muscles of the monkey's forearm, without any adverse effects over two years postimplantation. Using an inductive link, EMG was wirelessly recorded from the IMES as the monkey performed a finger flexion task. The EMG from the different IMES implants showed very little cross correlation. An offline parallel linear discriminant analysis (LDA) based algorithm was used to decode finger activity based on features extracted from continuously presented frames of recorded EMG. The offline parallel LDA was run on intraday sessions as well as on sessions where the algorithm was trained on one day and tested on following days. The performance of the algorithm was evaluated continuously by comparing classification output by the algorithm to the current state of the finger switches. The algorithm detected and classified seven different finger movements, including individual and combined finger flexions, and a no-movement state (chance performance = 12.5%). When the algorithm was trained and tested on data collected the same day, the average performance was 43.8 ± 3.6% (n = 10). When the training-testing separation period was five months, the average performance of the algorithm was 46.5 ± 3.4% (n = 8). These results demonstrated that using EMG recorded and wirelessly transmitted by IMES offers a promising approach for providing intuitive, dexterous control of artificial limbs where human patients have sufficient, functional residual muscle following amputation.
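
    A plain (non-parallel) LDA classifier over frames of standard EMG features gives the flavor of the decoding pipeline; the frame length, the feature set (mean absolute value and waveform length), and the random stand-in data are assumptions, not the paper's exact choices:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def frame_features(emg, frame=200, step=50):
        """Per-frame mean absolute value and waveform length for each channel,
        two classic EMG features, computed over sliding windows."""
        feats = []
        for start in range(0, emg.shape[0] - frame + 1, step):
            w = emg[start:start + frame]                  # frame x channels
            mav = np.abs(w).mean(axis=0)
            wl = np.abs(np.diff(w, axis=0)).sum(axis=0)
            feats.append(np.concatenate([mav, wl]))
        return np.array(feats)

    # Stand-in data: 9 IMES channels, 7 movement classes with random labels.
    rng = np.random.default_rng(7)
    emg = rng.standard_normal((2000, 9))
    X = frame_features(emg)
    y = rng.integers(0, 7, size=len(X))
    clf = LinearDiscriminantAnalysis().fit(X, y)
    print(clf.predict(X[:5]))
    ```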

  16. Mathematical algorithm for the automatic recognition of intestinal parasites.

    PubMed

    Alva, Alicia; Cangalaya, Carla; Quiliano, Miguel; Krebs, Casey; Gilman, Robert H; Sheen, Patricia; Zimic, Mirko

    2017-01-01

    Parasitic infections are generally diagnosed by professionals trained to recognize the morphological characteristics of the eggs in microscopic images of fecal smears. However, this laboratory diagnosis requires medical specialists, who are lacking in many of the areas where these infections are most prevalent. In response to this public health issue, we developed software based on pattern recognition analysis of microscopic digital images of fecal smears, capable of automatically recognizing and diagnosing common human intestinal parasites. To this end, we selected 229, 124, 217, and 229 objects from microscopic images of fecal smears positive for Taenia sp., Trichuris trichiura, Diphyllobothrium latum, and Fasciola hepatica, respectively. Representative photographs were selected by a parasitologist. We then implemented our algorithm in the open source program SCILAB. The algorithm processes the image by first converting it to gray-scale, then applies a fourteen-step filtering process, and produces a skeletonized and tri-colored image. The features extracted fall into two general categories: geometric characteristics and brightness descriptions. Individual characteristics were quantified and evaluated with a logistic regression to model their ability to correctly identify each parasite separately. Subsequently, all algorithms were evaluated for false-positive cross-reactivity with the other parasites studied, excepting Taenia sp., which shares very few morphological characteristics with the others. The principal result showed that our algorithm reached sensitivities between 99.10%-100% and specificities between 98.13%-98.38% for detecting each parasite separately. We did not find any cross-positivity in the algorithms for the three parasites evaluated. In conclusion, the results demonstrated the capacity of our computer algorithm to automatically recognize and diagnose Taenia sp., Trichuris trichiura, Diphyllobothrium latum, and Fasciola hepatica with high sensitivity and specificity.

  17. Mathematical algorithm for the automatic recognition of intestinal parasites

    PubMed Central

    Alva, Alicia; Cangalaya, Carla; Quiliano, Miguel; Krebs, Casey; Gilman, Robert H.; Sheen, Patricia; Zimic, Mirko

    2017-01-01

    Parasitic infections are generally diagnosed by professionals trained to recognize the morphological characteristics of the eggs in microscopic images of fecal smears. However, this laboratory diagnosis requires medical specialists, who are lacking in many of the areas where these infections are most prevalent. In response to this public health issue, we developed software based on pattern recognition analysis of microscopic digital images of fecal smears, capable of automatically recognizing and diagnosing common human intestinal parasites. To this end, we selected 229, 124, 217, and 229 objects from microscopic images of fecal smears positive for Taenia sp., Trichuris trichiura, Diphyllobothrium latum, and Fasciola hepatica, respectively. Representative photographs were selected by a parasitologist. We then implemented our algorithm in the open source program SCILAB. The algorithm processes the image by first converting it to gray-scale, then applies a fourteen-step filtering process, and produces a skeletonized and tri-colored image. The features extracted fall into two general categories: geometric characteristics and brightness descriptions. Individual characteristics were quantified and evaluated with a logistic regression to model their ability to correctly identify each parasite separately. Subsequently, all algorithms were evaluated for false-positive cross-reactivity with the other parasites studied, excepting Taenia sp., which shares very few morphological characteristics with the others. The principal result showed that our algorithm reached sensitivities between 99.10%-100% and specificities between 98.13%-98.38% for detecting each parasite separately. We did not find any cross-positivity in the algorithms for the three parasites evaluated. In conclusion, the results demonstrated the capacity of our computer algorithm to automatically recognize and diagnose Taenia sp., Trichuris trichiura, Diphyllobothrium latum, and Fasciola hepatica with high sensitivity and specificity. PMID:28410387

  18. Measurement invariance of the Belief in a Zero-Sum Game scale across 36 countries.

    PubMed

    Różycka-Tran, Joanna; Jurek, Paweł; Olech, Michał; Piotrowski, Jarosław; Żemojtel-Piotrowska, Magdalena

    2017-11-28

    In this paper, we examined the psychometric properties, cross-cultural validity, and replicability (i.e. measurement invariance) of the Belief in a Zero-Sum Game (BZSG) scale, which measures antagonistic beliefs about interpersonal relations over scarce resources. The factorial structure of the BZSG scale was investigated in student samples from 36 countries (N = 9907), using separate confirmatory factor analyses (CFAs) for each country. The cross-cultural validation of the scale was based on multigroup confirmatory factor analysis (MGCFA). The results confirmed that the scale had a one-factor structure in all countries, for which configural and metric invariance between countries was confirmed. As a zero-sum belief about social relations perceived as antagonistic, BZSG is an important factor related to, for example, social and international relations, attitudes toward immigrants, or well-being. The paper proposes different uses of the BZSG scale for cross-cultural studies in different fields of psychology: social, political, or economic. © 2017 International Union of Psychological Science.

  19. Congestion control and routing over satellite networks

    NASA Astrophysics Data System (ADS)

    Cao, Jinhua

    Satellite networks and transmissions find their application in fields of computer communications, telephone communications, television broadcasting, transportation, space situational awareness systems and so on. This thesis mainly focuses on two networking issues affecting satellite networking: network congestion control and network routing optimization. Congestion, which leads to long queueing delays, packet losses or both, is a networking problem that has drawn the attention of many researchers. The goal of congestion control mechanisms is to ensure high bandwidth utilization while avoiding network congestion by regulating the rate at which traffic sources inject packets into a network. In this thesis, we propose a stable congestion controller using data-driven, safe switching control theory to improve the dynamic performance of satellite Transmission Control Protocol/Active Queue Management (TCP/AQM) networks. First, the stable region of the Proportional-Integral (PI) parameters for a nominal model is explored. Then, a PI controller, whose parameters are adaptively tuned by switching among members of a given candidate set, using observed plant data, is presented and compared with some classical AQM policy examples, such as Random Early Detection (RED) and fixed PI control. A new cost detectable switching law with an interval cost function switching algorithm, which improves the performance and also saves the computational cost, is developed and compared with a law commonly used in the switching control literature. Finite-gain stability of the system is proved. A fuzzy logic PI controller is incorporated as a special candidate to achieve good performance at all nominal points with the available set of candidate controllers. Simulations are presented to validate the theory. An efficient routing algorithm plays a key role in optimizing network resources. In this thesis, we briefly analyze Low Earth Orbit (LEO) satellite networks, review the Cross Entropy (CE) method and then develop a novel on-demand routing system named Cross Entropy Accelerated Ant Routing System (CEAARS) for regular constellation LEO satellite networks. By implementing simulations on an Iridium-like satellite network, we compare the proposed CEAARS algorithm with the two approaches to adaptive routing protocols on the Internet: distance-vector (DV) and link-state (LS), as well as with the original Cross Entropy Ant Routing System (CEARS). DV algorithms are based on the distributed Bellman-Ford algorithm, and LS algorithms are implementations of Dijkstra's single-source shortest-path algorithm. The results show that CEAARS not only remarkably improves the convergence speed of achieving optimal or suboptimal paths, but also reduces the number of overhead ants (management packets).

  20. A fragile zero watermarking scheme to detect and characterize malicious modifications in database relations.

    PubMed

    Khan, Aihab; Husain, Syed Afaq

    2013-01-01

We put forward a fragile zero watermarking scheme to detect and characterize malicious modifications made to a database relation. Most of the existing watermarking schemes for relational databases introduce intentional errors or permanent distortions as marks into the database original content. These distortions inevitably degrade the data quality and data usability as the integrity of the relational database is violated. Moreover, these fragile schemes can detect malicious data modifications but do not characterize the tampering attack, that is, the nature of the tampering. The proposed fragile scheme is based on a zero watermarking approach to detect malicious modifications made to a database relation. In zero watermarking, the watermark is generated (constructed) from the contents of the original data rather than by introducing permanent distortions as marks into the data. As a result, the proposed scheme is distortion-free; thus, it also resolves the inherent conflict between security and imperceptibility. The proposed scheme also characterizes the malicious data modifications to quantify the nature of tampering attacks. Experimental results show that even minor malicious modifications made to a database relation can be detected and characterized successfully.
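
    The sketch below illustrates the zero-watermarking idea in miniature: the watermark is a grid of hashes derived from the relation's own content and registered externally, so the data itself is never distorted, and recomputing and comparing localizes tampered cells. It is a toy illustration under assumed names (zero_watermark, detect_tampering), not the paper's actual construction (Python).

        import hashlib

        def zero_watermark(rows):
            """Per-cell content hashes acting as the registered watermark."""
            return [[hashlib.sha256(str(v).encode()).hexdigest()[:8] for v in row]
                    for row in rows]

        def detect_tampering(rows, watermark):
            """Recompute and compare; return (row, column) positions that differ."""
            current = zero_watermark(rows)
            return [(i, j)
                    for i, row in enumerate(current)
                    for j, h in enumerate(row)
                    if h != watermark[i][j]]

        original = [("alice", 5000), ("bob", 6200)]
        wm = zero_watermark(original)          # registered with a trusted party
        tampered = [("alice", 5000), ("bob", 9999)]
        print(detect_tampering(tampered, wm))  # -> [(1, 1)]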

  1. Application of fast Fourier transform cross-correlation and mass spectrometry data for accurate alignment of chromatograms.

    PubMed

    Zheng, Yi-Bao; Zhang, Zhi-Min; Liang, Yi-Zeng; Zhan, De-Jian; Huang, Jian-Hua; Yun, Yong-Huan; Xie, Hua-Lin

    2013-04-19

Chromatography has been established as one of the most important analytical methods in the modern analytical laboratory. However, preprocessing of the chromatograms, especially peak alignment, is usually a time-consuming task prior to extracting useful information from the datasets because of the small unavoidable differences in the experimental conditions caused by minor changes and drift. Most alignment algorithms are performed on reduced datasets using only the detected peaks in the chromatograms, which means a loss of data and introduces the problem of extracting peak data from the chromatographic profiles. These disadvantages can be overcome by using the full chromatographic information that is generated from hyphenated chromatographic instruments. A new alignment algorithm called CAMS (Chromatogram Alignment via Mass Spectra) is presented here to correct the retention time shifts among chromatograms accurately and rapidly. In this report, peaks of each chromatogram were detected based on the Continuous Wavelet Transform (CWT) with the Haar wavelet and were aligned against the reference chromatogram via the correlation of mass spectra. The aligning procedure was accelerated by Fast Fourier Transform cross-correlation (FFT cross-correlation). This approach has been compared with several well-known alignment methods on real chromatographic datasets, which demonstrates that CAMS can preserve the shape of peaks and achieve a high quality alignment result. Furthermore, the CAMS method was implemented in the Matlab language and is available as an open source package at http://www.github.com/matchcoder/CAMS. Copyright © 2013. Published by Elsevier B.V.
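
    The core speed-up in such alignment comes from computing the cross-correlation between a reference and a sample chromatogram with FFTs. The sketch below shows only that shift-estimation kernel on assumed synthetic peaks, not the full CAMS pipeline with CWT peak detection and mass-spectra matching (Python).

        import numpy as np

        def fft_shift_estimate(reference, signal):
            """Lag (in sampling points) maximizing the cross-correlation,
            computed in O(n log n) via FFTs. Negative lag: `signal` lags."""
            nfft = 1 << (len(reference) + len(signal)).bit_length()
            R = np.fft.rfft(reference, nfft)
            S = np.fft.rfft(signal, nfft)
            xcorr = np.fft.irfft(R * np.conj(S), nfft)
            lags = np.arange(nfft)
            lags[lags > nfft // 2] -= nfft   # wrap-around indices -> negative lags
            return lags[np.argmax(xcorr)]

        t = np.linspace(0, 10, 1000)
        ref = np.exp(-((t - 4.0) ** 2) / 0.05)      # reference peak at t = 4.0
        sample = np.exp(-((t - 4.3) ** 2) / 0.05)   # same peak, drifted later
        lag = fft_shift_estimate(ref, sample)
        print(lag)                                  # -> -30 sampling points
        aligned = np.roll(sample, lag)              # shift back onto the reference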

  2. A switching formation strategy for obstacle avoidance of a multi-robot system based on robot priority model.

    PubMed

    Dai, Yanyan; Kim, YoonGu; Wee, SungGil; Lee, DongHa; Lee, SukGyu

    2015-05-01

This paper describes a switching formation strategy for multi-robots with velocity constraints to avoid and cross obstacles. In the strategy, a leader robot plans a safe path using the geometric obstacle avoidance control method (GOACM). By calculating new desired distances and bearing angles with respect to the leader robot, the follower robots switch into a safe formation. To ensure collision avoidance, a novel robot priority model, based on the desired distance and bearing angle between the leader and follower robots, is designed for the obstacle avoidance process. The adaptive tracking control algorithm guarantees that the trajectory and velocity tracking errors converge to zero. Demonstrating the validity of the proposed methods, simulation and experimental results show that multi-robots effectively form and switch formation while avoiding obstacles without collisions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Virtual Tool Mark Generation for Efficient Striation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekstrand, Laura; Zhang, Song; Grieve, Taylor

    2014-02-16

This study introduces a tool mark analysis approach based upon 3D scans of screwdriver tip and marked plate surfaces at the micrometer scale from an optical microscope. An open-source 3D graphics software package is utilized to simulate the marking process as the projection of the tip's geometry in the direction of tool travel. The edge of this projection becomes a virtual tool mark that is compared to cross-sections of the marked plate geometry using the statistical likelihood algorithm introduced by Chumbley et al. In a study with both sides of six screwdriver tips and 34 corresponding marks, the method distinguished known matches from known nonmatches with zero false-positive matches and two false-negative matches. For matches, it could predict the correct marking angle within ±5–10°. Individual comparisons could be made in seconds on a desktop computer, suggesting that the method could save time for examiners.

  4. A novel no-reference objective stereoscopic video quality assessment method based on visual saliency analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin

    2017-07-01

This paper proposes a no-reference objective stereoscopic video quality assessment method, with the motivation of making objective results agree closely with subjective assessment. We believe that image regions with different degrees of visual salience should not carry the same weights in an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions of strong, general, and weak saliency. In addition, local feature information such as blockiness, zero-crossing, and depth is extracted and combined in a mathematical model to calculate a quality assessment score. Regions with different degrees of salience are assigned different weights in the mathematical model. Experimental results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.
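
    Of the local features listed above, the zero-crossing feature is easy to make concrete: the zero-crossing rate of the horizontal difference signal is a standard proxy for local activity (blur lowers it). The sketch below computes it and combines per-saliency-region scores with fixed weights; the weights are illustrative assumptions, not the paper's trained model (Python).

        import numpy as np

        def zero_crossing_rate(frame):
            """ZCR of the horizontal difference signal of a 2-D grayscale frame."""
            d = np.diff(frame.astype(float), axis=1)    # horizontal gradient
            return np.mean(d[:, :-1] * d[:, 1:] < 0)    # sign changes between neighbours

        def weighted_score(features_by_region):
            """Combine per-saliency-region scores with fixed (assumed) weights."""
            weights = {"strong": 0.6, "general": 0.3, "weak": 0.1}
            return sum(weights[r] * f for r, f in features_by_region.items())

        rng = np.random.default_rng(0)
        textured = rng.normal(size=(64, 64))            # high-activity region
        flat = np.ones((64, 64))                        # blurred/flat region
        print(zero_crossing_rate(textured), zero_crossing_rate(flat))  # ~0.5 vs 0.0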

  5. Crossing the other side of the algorithm: a challenging case of adrenal Cushing's syndrome.

    PubMed

    Antonio, Imelda Digna Soberano; Sandoval, Mark Anthony Santiago; Lantion-Ang, Frances Lina

    2011-12-01

The diagnosis of endogenous Cushing's syndrome and its aetiology involves documenting the hypercortisolism and then determining whether that hypercortisolism is adrenocorticotropic hormone-dependent (ACTH-dependent) or not. Hence, following the algorithm, an undetectable ACTH level points to an adrenal Cushing's syndrome, while a detectable or elevated ACTH level points to either a pituitary or ectopic Cushing's syndrome. The authors present a case of florid adrenal Cushing's syndrome initially presenting with a normal ACTH level, which led to the investigation for an ACTH-secreting tumour. Adding to the confusion, an MRI showed an intrasellar focus. Knowledge of how ACTH-dependent (versus ACTH-independent) Cushing's syndrome manifests clinically, supported by results of repeat laboratory tests, led to the true diagnosis. This case illustrates that a detectable ACTH does not rule out an adrenal Cushing's syndrome, nor does a positive pituitary imaging confirm Cushing's disease.

  6. Automatic detection of motor unit innervation zones of the external anal sphincter by multichannel surface EMG.

    PubMed

    Ullah, Khalil; Cescon, Corrado; Afsharipour, Babak; Merletti, Roberto

    2014-12-01

A method to automatically detect the location of innervation zones (IZs) from 16-channel surface EMG (sEMG) recordings of the external anal sphincter (EAS) muscle is presented, intended to guide episiotomy during child delivery. The new algorithm (2DCorr) is applied to individual motor unit action potential (MUAP) templates and is based on bidimensional cross-correlation between the interpolated image of each MUAP template and two images obtained by flipping the original upside-down (around a horizontal axis) and left-right (around a vertical axis). The method was tested on 640 simulated MUAP templates of the sphincter muscle and compared with previously developed algorithms (Radon Transform, RT; Template Match, TM). Experimental signals were detected from the EAS of 150 subjects using an intra-anal probe with 16 equally spaced circumferential electrodes. The results of the three algorithms were compared with the actual IZ location (simulated signals) and with the IZ location provided by visual analysis (VA) (experimental signals). For simulated signals, the interquartile error range (IQR) between the estimated and the actual locations of the IZ was 0.20, 0.23, 0.42, and 2.32 interelectrode distances (IED) for the VA, 2DCorr, RT and TM methods, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
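
    The flip-and-correlate idea can be sketched compactly: a MUAP propagates in both directions away from the innervation zone, so its spatio-temporal image is roughly mirror-symmetric about the IZ channel, and correlating the image with a flipped copy mirrored about each candidate channel scores that symmetry. The sketch below is a one-axis toy version (channel flip only, wrap-around ignored), not the interpolated bidimensional implementation of 2DCorr (Python).

        import numpy as np

        def iz_channel(muap):
            """muap: (channels, samples) template. Channel about which the
            channel-flipped image best matches the original."""
            n_ch = muap.shape[0]
            flipped = np.flipud(muap)               # reverse channel order
            scores = []
            for c in range(n_ch):
                mirrored = np.roll(flipped, 2 * c - (n_ch - 1), axis=0)
                scores.append(np.sum(muap * mirrored))
            return int(np.argmax(scores))

        t = np.linspace(-1, 1, 50)
        wave = np.exp(-t ** 2 / 0.05)
        # Waves propagate away from channel 6: delay grows with distance from it
        muap = np.array([np.roll(wave, abs(ch - 6) * 2) for ch in range(16)])
        print(iz_channel(muap))                     # -> 6 (the simulated IZ)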

  7. Diagnostic power of diffuse reflectance spectroscopy for targeted detection of breast lesions with microcalcifications

    PubMed Central

    Soares, Jaqueline S.; Barman, Ishan; Dingari, Narahara Chari; Volynskaya, Zoya; Liu, Wendy; Klein, Nina; Plecha, Donna; Dasari, Ramachandra R.; Fitzmaurice, Maryann

    2013-01-01

    Microcalcifications geographically target the location of abnormalities within the breast and are of critical importance in breast cancer diagnosis. However, despite stereotactic guidance, core needle biopsy fails to retrieve microcalcifications in up to 15% of patients. Here, we introduce an approach based on diffuse reflectance spectroscopy for detection of microcalcifications that focuses on variations in optical absorption stemming from the calcified clusters and the associated cross-linking molecules. In this study, diffuse reflectance spectra are acquired ex vivo from 203 sites in fresh biopsy tissue cores from 23 patients undergoing stereotactic breast needle biopsies. By correlating the spectra with the corresponding radiographic and histologic assessment, we have developed a support vector machine-derived decision algorithm, which shows high diagnostic power (positive predictive value and negative predictive value of 97% and 88%, respectively) for diagnosis of lesions with microcalcifications. We further show that these results are robust and not due to any spurious correlations. We attribute our findings to the presence of proteins (such as elastin), and desmosine and isodesmosine cross-linkers in the microcalcifications. It is important to note that the performance of the diffuse reflectance decision algorithm is comparable to one derived from the corresponding Raman spectra, and the considerably higher intensity of the reflectance signal enables the detection of the targeted lesions in a fraction of the spectral acquisition time. Our findings create a unique landscape for spectroscopic validation of breast core needle biopsy for detection of microcalcifications that can substantially improve the likelihood of an adequate, diagnostic biopsy in the first attempt. PMID:23267090

  8. Electromagnetic scattering laws in Weyl systems.

    PubMed

    Zhou, Ming; Ying, Lei; Lu, Ling; Shi, Lei; Zi, Jian; Yu, Zongfu

    2017-11-09

    Wavelength determines the length scale of the cross section when electromagnetic waves are scattered by an electrically small object. The cross section diverges for resonant scattering, and diminishes for non-resonant scattering, when wavelength approaches infinity. This scattering law explains the colour of the sky as well as the strength of a mobile phone signal. We show that such wavelength scaling comes from the conical dispersion of free space at zero frequency. Emerging Weyl systems, offering similar dispersion at non-zero frequencies, lead to new laws of electromagnetic scattering that allow cross sections to be decoupled from the wavelength limit. Diverging and diminishing cross sections can be realized at any target wavelength in a Weyl system, providing the ability to tailor the strength of wave-matter interactions for radiofrequency and optical applications.

  9. Comparison of human and algorithmic target detection in passive infrared imagery

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.; Hutchinson, Meredith

    2003-09-01

We have designed an experiment that compares the performance of human observers and a scale-insensitive target detection algorithm that uses pixel level information for the detection of ground targets in passive infrared imagery. The test database contains targets near clutter whose detectability ranged from easy to very difficult. Results indicate that human observers detect more "easy-to-detect" targets, and with far fewer false alarms, than the algorithm. For "difficult-to-detect" targets, human and algorithm detection rates are considerably degraded, and algorithm false alarms excessive. Analysis of detections as a function of observer confidence shows that algorithm confidence attribution does not correspond to human attribution, and does not adequately correlate with correct detections. The best target detection score for any human observer was 84%, as compared to 55% for the algorithm for the same false alarm rate. At 81%, the maximum detection score for the algorithm, the same human observer had 6 false alarms per frame as compared to 29 for the algorithm. Detector ROC curves and observer-confidence analysis benchmark the algorithm and provide insights into algorithm deficiencies and possible paths to improvement.

  10. Dynamic Vehicle Detection via the Use of Magnetic Field Sensors

    PubMed Central

    Markevicius, Vytautas; Navikas, Dangirutis; Zilys, Mindaugas; Andriukaitis, Darius; Valinevicius, Algimantas; Cepenas, Mindaugas

    2016-01-01

The vehicle detection process plays the key role in determining the success of intelligent transport management system solutions. The measurement of distortions of the Earth's magnetic field using magnetic field sensors served as the basis for designing a solution aimed at vehicle detection. Based on the results of process modeling and experimental testing of the relevant hypotheses, an algorithm for vehicle detection using state criteria was proposed. To evaluate the possibilities, as well as the pros and cons, of using anisotropic magnetoresistance (AMR) sensors in the transport flow control process, we performed a series of experiments with various vehicles of different series from several car manufacturers. A comparison of 12 selected methods, based either on determining the peak signal values and their concurrence in time while calculating the delay, or on measuring the cross-correlation of these signals, was carried out. It was established that the relative error can be minimized via the Z-component cross-correlation and Kz-criterion cross-correlation methods. The average relative error of vehicle speed determination in the best case did not exceed 1.5% when the distance between sensors was set to 2 m. PMID:26797615
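
    The cross-correlation speed estimate reduces to one line of signal processing: two sensors a known distance apart record the same magnetic signature with a time offset, the lag of maximum cross-correlation gives the transit time, and speed = spacing / delay. The sketch below assumes a sampling rate and synthetic signatures; the 2 m spacing follows the abstract (Python).

        import numpy as np

        def vehicle_speed(sig_a, sig_b, fs, spacing_m):
            """Speed from the lag of peak cross-correlation between two sensors."""
            xcorr = np.correlate(sig_b, sig_a, mode="full")
            lag = np.argmax(xcorr) - (len(sig_a) - 1)  # samples sig_b trails sig_a
            return spacing_m / (lag / fs)

        fs = 1000.0                                    # Hz (assumed)
        t = np.arange(0, 1.0, 1 / fs)
        at_a = np.exp(-((t - 0.3) ** 2) / 1e-4)        # passes sensor A at 0.3 s
        at_b = np.exp(-((t - 0.4) ** 2) / 1e-4)        # passes sensor B at 0.4 s
        print(vehicle_speed(at_a, at_b, fs, spacing_m=2.0))  # -> 20.0 m/s (72 km/h)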

  11. Computational study of scattering of a zero-order Bessel beam by large nonspherical homogeneous particles with the multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Minglin; Wu, Yueqian; Sheng, Xinqing; Ren, Kuan Fang

    2017-12-01

Computation of the scattering of shaped beams by large nonspherical particles is a challenge in both the optics and electromagnetics domains, since it concerns many research fields. In this paper, we report our new progress in the numerical computation of scattering diagrams. Our algorithm permits calculating the scattering of a particle as large as 110 wavelengths, or 700 in size parameter. The particle can be transparent or absorbing, of arbitrary shape, smooth or with a sharp surface, such as Chebyshev particles or ice crystals. To illustrate the capacity of the algorithm, a zero-order Bessel beam is taken as the incident beam, and the scattering of ellipsoidal and Chebyshev particles is taken as an example. Some special phenomena have been revealed and examined. The scattering problem is formulated with the combined tangential formulation and solved iteratively with the aid of the multilevel fast multipole algorithm, which is well parallelized with the message passing interface on a distributed-memory computer platform using a hybrid partitioning strategy. The numerical predictions are compared with the results of the rigorous method for a spherical particle to validate the accuracy of the approach. The scattering diagrams of large ellipsoidal particles with various parameters are examined. The effects of the aspect ratio, the half-cone angle of the incident zero-order Bessel beam, and the off-axis distance on the scattered intensity are studied. Scattering by an asymmetric Chebyshev particle with size parameter larger than 700 is also given to show the capability of the method for computing scattering by arbitrarily shaped particles.

  12. Classifying Volcanic Activity Using an Empirical Decision Making Algorithm

    NASA Astrophysics Data System (ADS)

    Junek, W. N.; Jones, W. L.; Woods, M. T.

    2012-12-01

Detection and classification of developing volcanic activity is vital to eruption forecasting. Timely information regarding an impending eruption would aid civil authorities in determining the proper response to a developing crisis. In this presentation, volcanic activity is characterized using an event tree classifier and a suite of empirical statistical models derived through logistic regression. Forecasts are reported in terms of the United States Geological Survey (USGS) volcano alert level system. The algorithm employs multidisciplinary data (e.g., seismic, GPS, InSAR) acquired by various volcano monitoring systems and source modeling information to forecast the likelihood that an eruption, with a volcanic explosivity index (VEI) > 1, will occur within a quantitatively constrained area. Logistic models are constructed from a sparse and geographically diverse dataset assembled from a collection of historic volcanic unrest episodes. Bootstrapping techniques are applied to the training data to allow for the estimation of robust logistic model coefficients. Cross validation produced a series of receiver operating characteristic (ROC) curves with areas ranging between 0.78 and 0.81, which indicates the algorithm has good predictive capabilities. The ROC curves also allowed for the determination of a false positive rate and optimum detection point for each stage of the algorithm. Forecasts for historic volcanic unrest episodes in North America and Iceland were computed and are consistent with the actual outcome of the events.

  13. [The endpoint detection of cough signal in continuous speech].

    PubMed

    Yang, Guoqing; Mo, Hongqiang; Li, Wen; Lian, Lianfang; Zheng, Zeguang

    2010-06-01

The endpoint detection of cough signals in continuous speech was researched in order to improve the efficiency and accuracy of manual recognition or computer-based automatic recognition. First, the short-time zero-crossing rate (ZCR) is used to identify suspicious coughs, and the short-time energy threshold is obtained based on the acoustic characteristics of coughs. Then, the short-time energy is combined with the short-time ZCR to implement the endpoint detection of coughs in continuous speech. To evaluate the effect of the method, the actual number of coughs in each recording was first identified by two experienced doctors using a graphical user interface (GUI). Second, the recordings were analyzed by the automatic endpoint detection program under Matlab 7.0. Finally, the comparison between these two results showed that the error rate of undetected coughs is 2.18%, while 98.13% of noise, silence and speech were removed. The method of setting the short-time energy threshold is robust. The endpoint detection program can remove most speech and noise, thus maintaining a low error rate.
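
    The two frame features driving this detector are easy to state precisely: short-time energy (sum of squared samples per frame) and short-time zero-crossing rate (fraction of adjacent sample pairs changing sign). The sketch below flags frames exceeding both thresholds as candidate cough frames; the threshold values are illustrative assumptions, not the paper's calibrated ones (Python).

        import numpy as np

        def frame_features(x, frame_len=256, hop=128):
            """Short-time energy and zero-crossing rate per frame of a 1-D signal."""
            energies, zcrs = [], []
            for start in range(0, len(x) - frame_len, hop):
                frame = x[start:start + frame_len]
                energies.append(np.sum(frame ** 2))
                zcrs.append(np.mean(np.signbit(frame[:-1]) != np.signbit(frame[1:])))
            return np.array(energies), np.array(zcrs)

        def candidate_frames(x, energy_thresh, zcr_thresh):
            """Frames exceeding both thresholds are candidate cough frames."""
            energy, zcr = frame_features(x)
            return np.flatnonzero((energy > energy_thresh) & (zcr > zcr_thresh))

        fs = 8000
        t = np.arange(0, 1.0, 1 / fs)
        x = 0.01 * np.random.default_rng(0).standard_normal(len(t))
        x[2000:3000] += np.sin(2 * np.pi * 400 * t[:1000]) * np.hanning(1000)  # burst
        print(candidate_frames(x, energy_thresh=5.0, zcr_thresh=0.05))
        # -> frame indices covering the burst around samples 2000-3000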

  14. Recent advances and remaining challenges for the spectroscopic detection of explosive threats.

    PubMed

    Fountain, Augustus W; Christesen, Steven D; Moon, Raphael P; Guicheteau, Jason A; Emmons, Erik D

    2014-01-01

    In 2010, the U.S. Army initiated a program through the Edgewood Chemical Biological Center to identify viable spectroscopic signatures of explosives and initiate environmental persistence, fate, and transport studies for trace residues. These studies were ultimately designed to integrate these signatures into algorithms and experimentally evaluate sensor performance for explosives and precursor materials in existing chemical point and standoff detection systems. Accurate and validated optical cross sections and signatures are critical in benchmarking spectroscopic-based sensors. This program has provided important information for the scientists and engineers currently developing trace-detection solutions to the homemade explosive problem. With this information, the sensitivity of spectroscopic methods for explosives detection can now be quantitatively evaluated before the sensor is deployed and tested.

  15. On the zero-crossing of the three-gluon Green's function from lattice simulations

    NASA Astrophysics Data System (ADS)

    Athenodorou, Andreas; Boucaud, Philippe; de Soto, Feliciano; Rodríguez-Quintero, José; Zafeiropoulos, Savvas

    2018-03-01

We report on some efforts recently made in order to gain a better understanding of some IR properties of the 3-point gluon Green's function by exploiting results from large-volume quenched lattice simulations. These lattice results have been obtained by using both the tree-level Symanzik and the standard Wilson action, with the aim of assessing the possible impact of effects presumably resulting from a particular choice for the discretization of the action. The main resulting feature is the existence of a negative logarithmic divergence at zero momentum, which pulls the 3-gluon form factors down at low momenta and, consequently, yields a zero-crossing at a given deep IR momentum. The results can be correctly explained by analyzing the relevant Dyson-Schwinger equations with appropriate truncation schemes.

  16. Distributed convex optimisation with event-triggered communication in networked systems

    NASA Astrophysics Data System (ADS)

    Liu, Jiayun; Chen, Weisheng

    2016-12-01

This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication, in which communication and control updates occur only at discrete instants when a predefined condition is satisfied. Thus, compared with time-driven distributed optimisation algorithms, the proposed algorithm has the advantages of lower energy consumption and lower communication cost. Based on Lyapunov approaches, we show that the proposed algorithm makes the system states converge exponentially fast to the solution of the problem and that Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.

  17. Simultaneous determination of dextromethorphan HBr and bromhexine HCl in tablets by first-derivative spectrophotometry.

    PubMed

    Tantishaiyakul, V; Poeaknapo, C; Sribun, P; Sirisuppanon, K

    1998-06-01

A rapid, simple and direct assay procedure based on first-derivative spectrophotometry, using zero-crossing and peak-to-base measurements at 234 and 324 nm, respectively, has been developed for the specific determination of dextromethorphan HBr and bromhexine HCl in tablets. Calibration graphs were linear, with correlation coefficients of 0.9999 for both analytes. The limits of detection were 0.033 and 0.103 microgram ml-1 for dextromethorphan HBr and bromhexine HCl, respectively. An HPLC method was developed as the reference method. The results obtained by first-derivative spectrophotometry were in good agreement with those found by the HPLC method.
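
    The zero-crossing principle used here can be shown numerically: each analyte is read from the first-derivative amplitude at a wavelength where the other component's derivative crosses zero, so its contribution vanishes there. The spectra, band centres, and concentrations below are synthetic stand-ins, not the drug spectra of the paper (Python).

        import numpy as np

        wl = np.linspace(200, 400, 2001)                  # wavelength grid, nm

        def band(centre, width):
            return np.exp(-((wl - centre) ** 2) / (2 * width ** 2))

        spec_a, spec_b = band(250, 15), band(300, 20)     # unit-concentration spectra
        mixture = 0.8 * spec_a + 1.3 * spec_b             # concentrations to recover
        d_mix = np.gradient(mixture, wl)                  # first derivative dA/d(lambda)

        # B's derivative crosses zero at its band centre (300 nm), so the mixture
        # derivative there depends on component A alone.
        i300 = np.argmin(np.abs(wl - 300))
        conc_a = d_mix[i300] / np.gradient(spec_a, wl)[i300]
        print(round(conc_a, 3))                           # -> 0.8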

  18. Autonomous Rovers for Polar Science Campaigns

    NASA Astrophysics Data System (ADS)

    Lever, J. H.; Ray, L. E.; Williams, R. M.; Morlock, A. M.; Burzynski, A. M.

    2012-12-01

    We have developed and deployed two over-snow autonomous rovers able to conduct remote science campaigns on Polar ice sheets. Yeti is an 80-kg, four-wheel-drive (4WD) battery-powered robot with 3 - 4 hr endurance, and Cool Robot is a 60-kg 4WD solar-powered robot with unlimited endurance during Polar summers. Both robots navigate using GPS waypoint-following to execute pre-planned courses autonomously, and they can each carry or tow 20 - 160 kg instrument payloads over typically firm Polar snowfields. In 2008 - 12, we deployed Yeti to conduct autonomous ground-penetrating radar (GPR) surveys to detect hidden crevasses to help establish safe routes for overland resupply of research stations at South Pole, Antarctica, and Summit, Greenland. We also deployed Yeti with GPR at South Pole in 2011 to identify the locations of potentially hazardous buried buildings from the original 1950's-era station. Autonomous surveys remove personnel from safety risks posed during manual GPR surveys by undetected crevasses or buried buildings. Furthermore, autonomous surveys can yield higher quality and more comprehensive data than manual ones: Yeti's low ground pressure (20 kPa) allows it to cross thinly bridged crevasses or other voids without interrupting a survey, and well-defined survey grids allow repeated detection of buried voids to improve detection reliability and map their extent. To improve survey efficiency, we have automated the mapping of detected hazards, currently identified via post-survey manual review of the GPR data. Additionally, we are developing machine-learning algorithms to detect crevasses autonomously in real time, with reliability potentially higher than manual real-time detection. These algorithms will enable the rover to relay crevasse locations to a base station for near real-time mapping and decision-making. We deployed Cool Robot at Summit Station in 2005 to verify its mobility and power budget over Polar snowfields. Using solar power, this zero-emissions rover could travel more than 500 km per week during Polar summers and provide 100 - 200 W to power instrument payloads to help investigate the atmosphere, magnetosphere, glaciology and sub-glacial geology in Antarctica and Greenland. We are currently upgrading Cool Robot's navigation and solar-power systems and will deploy it during 2013 to map the emissions footprint around Summit Station to demonstrate its potential to execute long-endurance Polar science campaigns. These rovers could assist science traverses to chart safe routes into the interior of Antarctica and Greenland or conduct autonomous, remote science campaigns to extend spatial and temporal coverage for data collection. Our goals include 1,000 - 2,000-km summertime traverses of Antarctica and Greenland, safe navigation through 0.5-m amplitude sastrugi fields, survival in blizzards, and rover-network adaptation to research events of opportunity. We are seeking Polar scientists interested in autonomous, mobile data collection and can adapt the rovers to meet their requirements.

  19. Grasping rigid objects in zero-g

    NASA Astrophysics Data System (ADS)

    Anderson, Greg D.

    1993-12-01

    The extra vehicular activity helper/retriever (EVAHR) is a prototype for an autonomous free- flying robotic astronaut helper. The ability to grasp a moving object is a fundamental skill required for any autonomous free-flyer. This paper discusses an algorithm that couples resolved acceleration control with potential field based obstacle avoidance to enable a manipulator to track and capture a rigid object in (imperfect) zero-g while avoiding joint limits, singular configurations, and unintentional impacts between the manipulator and the environment.

  20. Shielded loaded bowtie antenna incorporating the presence of paving structure for improved GPR pipe detection

    NASA Astrophysics Data System (ADS)

    Seyfried, Daniel; Jansen, Ronald; Schoebel, Joerg

    2014-12-01

In civil engineering, Ground Penetrating Radar (GPR) is becoming an increasingly valuable tool for nondestructive testing and exploration of the underground. For example, detecting existing utility pipe networks prior to construction works, or locating damaged spots beneath a paved street, are highly advantageous applications. However, different surface conditions, as well as the ground bounce reflection and antenna cross-talk, may seriously affect the detection capability of the entire radar system. Therefore, proper antenna design is an essential part of obtaining radar data of high quality. In this paper we redesign a given loaded bowtie antenna in order to reduce strong and unwanted signal contributions such as the ground bounce reflection and antenna cross-talk. During the optimization process we also review all parameters of our existing antenna in order to maximize the energy transferred into the ground. The entire process is described in this paper, incorporating appropriate simulations along with measurements on our GPR test site, where we buried different types of pipes and cables for testing and developing radar hardware and software algorithms under quasi-real conditions.

  1. Preliminary analysis of cross beam data from the Gun Barrel Hill site

    NASA Technical Reports Server (NTRS)

    Sandborn, V. A.; Bice, A. R.; Cliff, W. C.; Hablutzel, B. C.

    1974-01-01

Preliminary evaluation of cross beam data taken at the Gun Barrel Hill test site of ESSA is presented. The evaluation is made using the analog Princeton Time Correlator. A study of the frequency bandwidth limitations of the Princeton Time Correlator is made. Based on the bandwidth limitations, it is possible to demonstrate that nearly identical correlation is obtained for frequencies from 0.01 to 3.9 hertz. Difficulty is encountered in that maxima in the correlation curves do not occur at zero time lag for zero beam separations.

  2. Improving and Assessing Planet Sensitivity of the GPI Exoplanet Survey with a Forward Model Matched Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruffio, Jean-Baptiste; Macintosh, Bruce; Nielsen, Eric L.

We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loève image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectra template and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale accounting for both planet completeness and false-positive rate. We show that the new forward model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.

  3. Experimental and environmental factors affect spurious detection of ecological thresholds

    USGS Publications Warehouse

    Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.

    2012-01-01

    Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.

  4. An automatic P‐Phase arrival‐time picker

    USGS Publications Warehouse

    Kalkan, Erol

    2016-01-01

Presented is a new approach for picking P-phase arrival time in single-component acceleration or broadband velocity records without requiring detection interval or threshold settings. The algorithm PPHASEPICKER transforms the signal into a response domain of a single-degree-of-freedom (SDOF) oscillator with viscous damping and then tracks the rate of change of dissipated damping energy to pick P-wave phases. The SDOF oscillator has a short natural period and a correspondingly high resonant frequency, which is higher than most frequencies in a seismic wave. It also has a high damping ratio (60% of critical). At this damping level, the frequency response approaches the Butterworth maximally flat magnitude filter, and phase angles are preserved. The relative input energy imparted to the oscillator by the input signal is converted to elastic strain energy and then dissipated by the damping element as damping energy. The damping energy yields a smooth envelope over time; it is zero in the beginning of the signal, zero or near zero before the P-phase arrival, and builds up rapidly with the P wave. Because the damping energy function changes considerably at the onset of the P wave, it is used as a metric to track and pick the P-phase arrival time. The PPHASEPICKER detects P-phase onset using the histogram method. Its performance is compared with picking techniques using short-term-average to long-term-average ratio, and a picking method that finds the first P-phase arrival time using the Akaike information criterion. A large set of records with various intensities and signal-to-noise ratios is used for testing the PPHASEPICKER, and it is demonstrated that PPHASEPICKER is able to more accurately pick the onset of genuine signals against the background noise and to correctly distinguish between whether the first arrival is a P wave (emergent or impulsive) or whether the signal is from a faulty sensor.
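
    The damping-energy metric can be sketched in a few lines: drive a heavily damped SDOF oscillator with the record and accumulate the power dissipated by the damper; the cumulative energy stays near zero through pre-event noise and climbs sharply at the P onset. The integrator below uses a simplified semi-implicit Euler step and an assumed 0.1 s oscillator period (longer than the picker's short natural period, for numerical stability at this sampling rate); it illustrates the metric, not the PPHASEPICKER implementation (Python).

        import numpy as np

        def damping_energy(acc, dt, Tn=0.1, zeta=0.6):
            """Cumulative energy dissipated by the damper of an SDOF oscillator
            (u'' + 2*zeta*wn*u' + wn**2*u = -acc), per unit mass."""
            wn = 2 * np.pi / Tn
            u = v = e = 0.0
            energy = np.empty(len(acc))
            for i, a in enumerate(acc):
                v += dt * (-a - 2 * zeta * wn * v - wn ** 2 * u)
                u += dt * v
                e += 2 * zeta * wn * v * v * dt   # damper dissipation power * dt
                energy[i] = e
            return energy

        dt = 0.001
        x = 1e-4 * np.random.default_rng(1).standard_normal(2000)
        x[800:] += np.sin(2 * np.pi * 5 * np.arange(1200) * dt)   # "P arrival"
        e = damping_energy(x, dt)
        step = np.diff(e)
        print(np.argmax(step > 0.5 * step.max()))   # onset flagged shortly after 800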

  5. Morphological inversion of complex diffusion

    NASA Astrophysics Data System (ADS)

    Nguyen, V. A. T.; Vural, D. C.

    2017-09-01

    Epidemics, neural cascades, power failures, and many other phenomena can be described by a diffusion process on a network. To identify the causal origins of a spread, it is often necessary to identify the triggering initial node. Here, we define a new morphological operator and use it to detect the origin of a diffusive front, given the final state of a complex network. Our method performs better than algorithms based on distance (closeness) and Jordan centrality. More importantly, our method is applicable regardless of the specifics of the forward model, and therefore can be applied to a wide range of systems such as identifying the patient zero in an epidemic, pinpointing the neuron that triggers a cascade, identifying the original malfunction that causes a catastrophic infrastructure failure, and inferring the ancestral species from which a heterogeneous population evolves.

  6. Rational-spline approximation with automatic tension adjustment

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Kerr, P. A.

    1984-01-01

    An algorithm for weighted least-squares approximation with rational splines is presented. A rational spline is a cubic function containing a distinct tension parameter for each interval defined by two consecutive knots. For zero tension, the rational spline is identical to a cubic spline; for very large tension, the rational spline is a linear function. The approximation algorithm incorporates an algorithm which automatically adjusts the tension on each interval to fulfill a user-specified criterion. Finally, an example is presented comparing results of the rational spline with those of the cubic spline.

  7. Application of a fast sorting algorithm to the assignment of mass spectrometric cross-linking data.

    PubMed

    Petrotchenko, Evgeniy V; Borchers, Christoph H

    2014-09-01

    Cross-linking combined with MS involves enzymatic digestion of cross-linked proteins and identifying cross-linked peptides. Assignment of cross-linked peptide masses requires a search of all possible binary combinations of peptides from the cross-linked proteins' sequences, which becomes impractical with increasing complexity of the protein system and/or if digestion enzyme specificity is relaxed. Here, we describe the application of a fast sorting algorithm to search large sequence databases for cross-linked peptide assignments based on mass. This same algorithm has been used previously for assigning disulfide-bridged peptides (Choi et al., ), but has not previously been applied to cross-linking studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
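
    One standard way to make the binary-combination search fast, sketched under assumed names below, is to sort the candidate peptide masses once and then sweep two pointers from opposite ends, enumerating pairs whose sum (plus the cross-linker mass) matches the observed precursor within tolerance. This shows the sorted-search idea generically; it is not necessarily the exact algorithm of the paper (Python).

        def crosslink_pairs(peptide_masses, observed_mass, linker_mass, tol=0.01):
            """Pairs (m1, m2) with m1 + m2 + linker_mass ~ observed_mass.
            O(n log n) sort plus O(n) two-pointer sweep; near-duplicate masses
            would need a small window scan, omitted for brevity."""
            masses = sorted(peptide_masses)
            target = observed_mass - linker_mass
            lo, hi = 0, len(masses) - 1
            hits = []
            while lo <= hi:
                s = masses[lo] + masses[hi]
                if abs(s - target) <= tol:
                    hits.append((masses[lo], masses[hi]))
                    hi -= 1
                elif s < target:
                    lo += 1
                else:
                    hi -= 1
            return hits

        # Two peptides of 800.4 and 1200.6 Da joined by a 138.07 Da linker
        print(crosslink_pairs([500.2, 800.4, 950.5, 1200.6], 2139.07, 138.07))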

  8. A Novel Arc Fault Detector for Early Detection of Electrical Fires

    PubMed Central

    Yang, Kai; Zhang, Rencheng; Yang, Jianhong; Liu, Canhua; Chen, Shouhong; Zhang, Fujiang

    2016-01-01

    Arc faults can produce very high temperatures and can easily ignite combustible materials; thus, they represent one of the most important causes of electrical fires. The application of arc fault detection, as an emerging early fire detection technology, is required by the National Electrical Code to reduce the occurrence of electrical fires. However, the concealment, randomness and diversity of arc faults make them difficult to detect. To improve the accuracy of arc fault detection, a novel arc fault detector (AFD) is developed in this study. First, an experimental arc fault platform is built to study electrical fires. A high-frequency transducer and a current transducer are used to measure typical load signals of arc faults and normal states. After the common features of these signals are studied, high-frequency energy and current variations are extracted as an input eigenvector for use by an arc fault detection algorithm. Then, the detection algorithm based on a weighted least squares support vector machine is designed and successfully applied in a microprocessor. Finally, an AFD is developed. The test results show that the AFD can detect arc faults in a timely manner and interrupt the circuit power supply before electrical fires can occur. The AFD is not influenced by cross talk or transient processes, and the detection accuracy is very high. Hence, the AFD can be installed in low-voltage circuits to monitor circuit states in real-time to facilitate the early detection of electrical fires. PMID:27070618

  9. Characterizing multiscale variability of zero intermittency in spatial rainfall

    NASA Technical Reports Server (NTRS)

    Kumar, Praveen; Foufoula-Georgiou, Efi

    1994-01-01

In this paper the authors study how zero intermittency in spatial rainfall, as described by the fraction of area covered by rainfall, changes with the spatial scale of rainfall measurement or representation. A statistical measure of intermittency that describes the size distribution of 'voids' (nonrainy areas imbedded inside rainy areas) as a function of scale is also introduced. Morphological algorithms are proposed for reconstructing rainfall intermittency at fine scales given the intermittency at coarser scales. These algorithms are envisioned to be useful in hydroclimatological studies where the rainfall spatial variability at the subgrid scale needs to be reconstructed from the results of synoptic- or mesoscale meteorological numerical models. The developed methodologies are demonstrated and tested using data from a severe springtime midlatitude squall line and a mild midlatitude winter storm monitored by a meteorological radar in Norman, Oklahoma.

  10. [Algorithm of locally adaptive region growing based on multi-template matching applied to automated detection of hemorrhages].

    PubMed

    Gao, Wei-Wei; Shen, Jian-Xin; Wang, Yu-Liang; Liang, Chun; Zuo, Jing

    2013-02-01

In order to automatically detect hemorrhages in fundus images and develop an automated diabetic retinopathy screening system, a novel algorithm named locally adaptive region growing based on multi-template matching was established and studied. First, the spectral signature of the major anatomical structures in the fundus was studied, so that the right channel among the RGB channels could be selected for each segmentation object. Second, the fundus image was preprocessed by means of HSV brightness correction and contrast limited adaptive histogram equalization (CLAHE). Then, seeds for region growing were found by removing the optic disc and vessels from the image resulting from normalized cross-correlation (NCC) template matching on the preprocessed image with several templates. Finally, locally adaptive region growing segmentation was used to find the exact contours of the hemorrhages, completing the automated detection of the lesions. The approach was tested on 90 fundus images of different resolution with variable color, brightness and quality. Results suggest that the approach can quickly and effectively detect hemorrhages in fundus images, and that it is stable and robust. As a result, the approach can meet clinical demands.

  11. The Principle of the Micro-Electronic Neural Bridge and a Prototype System Design.

    PubMed

    Huang, Zong-Hao; Wang, Zhi-Gong; Lu, Xiao-Ying; Li, Wen-Yuan; Zhou, Yu-Xuan; Shen, Xiao-Yan; Zhao, Xin-Tai

    2016-01-01

The micro-electronic neural bridge (MENB) aims to rebuild lost motor function in paralyzed humans by routing movement-related signals from the brain, around the damaged part of the spinal cord, to external effectors. This study focused on the prototype system design of the MENB, including the principle of the MENB, the design of the neural signal detecting circuit and the functional electrical stimulation (FES) circuit, and the spike detecting and sorting algorithm. In this study, we developed a novel improved amplitude-threshold spike detecting method based on a variable forward difference threshold, used in both the training and bridging phases. The discrete wavelet transform (DWT), a new level-wise feature coefficient selection method based on the Lilliefors test, and k-means clustering based on the Mahalanobis distance were used for spike sorting. A real-time online spike detecting and sorting algorithm based on DWT and Euclidean distance was also implemented for the bridging phase. Tested on the data sets available at Caltech, in the training phase the average sensitivity, specificity, and clustering accuracy are 99.43%, 97.83%, and 95.45%, respectively. Validated by the three-fold cross-validation method, the average sensitivity, specificity, and classification accuracy are 99.43%, 97.70%, and 96.46%, respectively.
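
    The detection stage can be illustrated with a generic amplitude-threshold sketch. The paper's variable forward-difference threshold is not reproduced here; instead the common median-based noise estimate (sigma ~ median(|x|)/0.6745) sets the threshold, which is enough to show the structure of threshold crossing plus a refractory gap (Python).

        import numpy as np

        def detect_spikes(x, k=5.0, refractory=30):
            """First indices where |x| exceeds k*sigma, merging crossings
            closer together than `refractory` samples."""
            sigma = np.median(np.abs(x)) / 0.6745   # robust noise estimate
            above = np.flatnonzero(np.abs(x) > k * sigma)
            spikes, last = [], -refractory
            for i in above:
                if i - last >= refractory:
                    spikes.append(i)
                last = i
            return np.array(spikes)

        rng = np.random.default_rng(3)
        x = rng.normal(0, 1, 10000)
        for pos in (1000, 4000, 7500):
            x[pos:pos + 5] += 8.0                   # three injected spikes
        print(detect_spikes(x))                     # -> spikes at/near [1000 4000 7500]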

  12. Reducing cross-sectional data using a genetic algorithm method and effects on cross-section geometry and steady-flow profiles

    USGS Publications Warehouse

    Berenbrock, Charles E.

    2015-01-01

    The effects of reduced cross-sectional data points on steady-flow profiles were also determined. Thirty-five cross sections of the original steady-flow model of the Kootenai River were used. These two methods were tested for all cross sections with each cross section resolution reduced to 10, 20 and 30 data points, that is, six tests were completed for each of the thirty-five cross sections. Generally, differences from the original water-surface elevation were smaller as the number of data points in reduced cross sections increased, but this was not always the case, especially in the braided reach. Differences were smaller for reduced cross sections developed by the genetic algorithm method than the standard algorithm method.

  13. CDRD and PNPR passive microwave precipitation retrieval algorithms: verification study over Africa and Southern Atlantic

    NASA Astrophysics Data System (ADS)

    Panegrossi, Giulia; Casella, Daniele; Cinzia Marra, Anna; Petracca, Marco; Sanò, Paolo; Dietrich, Stefano

    2015-04-01

The ongoing NASA/JAXA Global Precipitation Measurement mission (GPM) requires the full exploitation of the complete constellation of passive microwave (PMW) radiometers orbiting around the globe for global precipitation monitoring. In this context the coherence of the estimates of precipitation using different passive microwave radiometers is a crucial need. We have developed two different passive microwave precipitation retrieval algorithms: one is the Cloud Dynamics Radiation Database algorithm (CDRD), a physically based Bayesian algorithm for conically scanning radiometers (i.e., DMSP SSMIS); the other one is the Passive microwave Neural network Precipitation Retrieval (PNPR) algorithm for cross-track scanning radiometers (i.e., NOAA and MetOp-A/B AMSU-A/MHS, and NPP Suomi ATMS). The algorithms, originally created for application over Europe and the Mediterranean basin, and used operationally within the EUMETSAT Satellite Application Facility on Support to Operational Hydrology and Water Management (H-SAF, http://hsaf.meteoam.it), have been recently modified and extended to Africa and the Southern Atlantic for application to the MSG full disk area. The two algorithms are based on the same physical foundation, i.e., the same cloud-radiation model simulations as a priori information in the Bayesian solver and as training dataset in the neural network approach, and they also use similar procedures for identification of frozen background surface, detection of snowfall, and determination of a pixel-based quality index of the surface precipitation retrievals. In addition, similar procedures for the screening of non-precipitating pixels are used. A novel algorithm for the detection of precipitation in tropical/sub-tropical areas has been developed. The precipitation detection algorithm shows a small rate of false alarms (also over arid/desert regions), a superior detection capability in comparison with other widely used screening algorithms, and it is applicable to all available PMW radiometers in the GPM constellation of satellites (including NPP Suomi ATMS, and GMI). Three years of SSMIS and AMSU/MHS data have been considered to carry out a verification study over Africa of the retrievals from the CDRD and PNPR algorithms. The precipitation products from the TRMM Precipitation Radar (PR) (TRMM products 2A25 and 2A23) have been used as ground truth. The results of this study, aimed at assessing the accuracy of the precipitation retrievals in different climatic regions and precipitation regimes, will be presented. Particular emphasis will be given to the analysis of the level of coherence of the precipitation estimates and patterns between the two algorithms exploiting different radiometers. Recent developments aimed at the full exploitation of the GPM constellation of satellites for optimal precipitation/drought monitoring will also be presented.

  14. Pre-correction of distorted Bessel-Gauss beams without wavefront detection

    NASA Astrophysics Data System (ADS)

    Fu, Shiyao; Wang, Tonglu; Zhang, Zheyuan; Zhai, Yanwang; Gao, Chunqing

    2017-12-01

By exploiting the rapid phase retrieval provided by the Gerchberg-Saxton algorithm, we experimentally demonstrate a scheme to correct, with good performance, distorted Bessel-Gauss beams resulting from inhomogeneous media such as a weakly turbulent atmosphere. A probe Gaussian beam is employed and propagates coaxially with the Bessel-Gauss modes through the turbulence. No wavefront sensor is used; instead, a matrix detector captures the probe Gaussian beam, and the correction phase mask is computed by feeding this probe beam into the Gerchberg-Saxton algorithm. The experimental results indicate that both single and multiplexed BG beams can be corrected well, in terms of improved mode purity and mitigated interchannel cross talk.
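
    The Gerchberg-Saxton core is compact enough to sketch: iterate between the detector plane and the far field, each time keeping the computed phase and replacing the amplitude with the measured one; the recovered phase then yields the correction mask. The 1-D toy below recovers a phase up to the usual piston/conjugation ambiguities and is only a sketch of the algorithm's structure, not the experimental pipeline (Python).

        import numpy as np

        def gerchberg_saxton(amp_near, amp_far, iterations=200):
            """Estimate the near-field phase from two measured amplitudes."""
            field = amp_near.astype(complex)             # flat initial phase
            for _ in range(iterations):
                far = np.fft.fft(field)
                far = amp_far * np.exp(1j * np.angle(far))       # impose far amplitude
                field = np.fft.ifft(far)
                field = amp_near * np.exp(1j * np.angle(field))  # impose near amplitude
            return np.angle(field)

        rng = np.random.default_rng(2)
        true_phase = np.cumsum(rng.normal(0, 0.05, 256))  # smooth "turbulence" phase
        amp = np.exp(-np.linspace(-2, 2, 256) ** 2)       # Gaussian probe amplitude
        far_amp = np.abs(np.fft.fft(amp * np.exp(1j * true_phase)))
        est = gerchberg_saxton(amp, far_amp)
        # Residual far-field amplitude mismatch shrinks as the loop converges:
        print(np.mean(np.abs(np.abs(np.fft.fft(amp * np.exp(1j * est))) - far_amp)))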

  15. Look Again: An Investigation of False Positive Detections in Combat Models

    DTIC Science & Technology

    2008-06-01

those states (Macmillan & Creelman, 1991). Denoted by d', sensitivity is scaled between zero and one, with an infallible observer having a d' equal to... Macmillan & Creelman, 1991), and is also scaled between zero and one. Varying either the observer's sensitivity or bias, or both, changes his... Graphics Based Target Detection Model, Master of Science, Naval Postgraduate School, September. Macmillan, N. A., & Creelman, C. D., 1991, Detection Theory

  16. Detection and Localization of Subsurface Two-Dimensional Metallic Objects

    NASA Astrophysics Data System (ADS)

    Meschino, S.; Pajewski, L.; Schettini, G.

    2009-04-01

    "Roma Tre" University, Applied Electronics Dept.v. Vasca Navale 84, 00146 Rome, Italy Non-invasive identification of buried objects in the near-field of a receiver array is a subject of great interest, due to its application to the remote sensing of the earth's subsurface, to the detection of landmines, pipes, conduits, to the archaeological site characterization, and more. In this work, we present a Sub-Array Processing (SAP) approach for the detection and localization of subsurface perfectly-conducting circular cylinders. We consider a plane wave illuminating the region of interest, which is assumed to be a homogeneous, unlossy medium of unknown permittivity containing one or more targets. In a first step, we partition the receiver array so that the field scattered from the targets result to be locally plane at each sub-array. Then, we apply a Direction of Arrival (DOA) technique to obtain a set of angles for each locally plane wave, and triangulate these directions obtaining a collection of crossing crowding in the expected object locations [1]. We compare several DOA algorithms such as the traditional Bartlett and Capon Beamforming, the Pisarenko Harmonic Decomposition (PHD), the Minimum-Norm method, the Multiple Signal Classification (MUSIC) and the Estimation of Signal Parameters via Rotational Techinque (ESPRIT) [2]. In a second stage, we develop a statistical Poisson based model to manage the crossing pattern in order to extract the probable target's centre position. In particular, if the crossings are Poisson distributed, it is possible to feature two different distribution parameters [3]. These two parameters perform two density rate for the crossings, so that we can previously divide the crossing pattern in a certain number of equal-size windows and we can collect the windows of the crossing pattern with low rate parameters (that probably are background windows) and remove them. In this way we can consider only the high rate parameter windows (that most probably locate the target) and extract the center position of the object. We also consider some other localization-connected aspects. For example how to obtain a likely estimation of the soil permittivity and of the cylinders radius. Finally, when multiple objects are present, we refine our localization procedure by performing a Clustering Analysis of the crossing pattern. In particular, we apply the K-means algorithm to extract the coordinates of the objects centroids and the clusters extension. References [1] Şahin A., Miller L., "Object Detection Using High Resolution Near-Field Array Processing", IEEE Trans. on Geoscience and Remote Sensing, vol.39, no.1, Jan. 2001, pp. 136-141. [2] Gross F.B., "Smart Antennas for Wireless Communications", Mc.Graw-Hill 2005. [3] Hoaglin D.C., "A Poisonnes Plot", The American Statistician, vol.34, no.3 August 1980, pp.146-149.

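    The triangulation step after the per-sub-array DOA estimation is plain geometry: each sub-array contributes a bearing ray, and pairs of rays intersect at candidate target positions whose crossings crowd around the true scatterer. The sketch below shows only that ray-intersection step with assumed coordinates; the DOA estimation itself (Capon, MUSIC, ESPRIT, ...) and the Poisson windowing are not reproduced (Python).

        import numpy as np

        def bearing_crossing(p1, theta1, p2, theta2):
            """Intersection of rays p_i + t_i * (cos theta_i, sin theta_i)."""
            d1 = np.array([np.cos(theta1), np.sin(theta1)])
            d2 = np.array([np.cos(theta2), np.sin(theta2)])
            A = np.column_stack([d1, -d2])
            t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
            return np.asarray(p1, float) + t[0] * d1

        # Two sub-arrays on the surface sighting a buried target at (2.0, -1.5)
        print(bearing_crossing((0.0, 0.0), np.arctan2(-1.5, 2.0),
                               (4.0, 0.0), np.arctan2(-1.5, -2.0)))  # -> [ 2.  -1.5]
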
  17. Clinical study of quantitative diagnosis of early cervical cancer based on the classification of acetowhitening kinetics

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Cheung, Tak-Hong; Yim, So-Fan; Qu, Jianan Y.

    2010-03-01

    A quantitative colposcopic imaging system for the diagnosis of early cervical cancer is evaluated in a clinical study. This imaging technology based on 3-D active stereo vision and motion tracking extracts diagnostic information from the kinetics of acetowhitening process measured from the cervix of human subjects in vivo. Acetowhitening kinetics measured from 137 cervical sites of 57 subjects are analyzed and classified using multivariate statistical algorithms. Cross-validation methods are used to evaluate the performance of the diagnostic algorithms. The results show that an algorithm for screening precancer produced 95% sensitivity (SE) and 96% specificity (SP) for discriminating normal and human papillomavirus (HPV)-infected tissues from cervical intraepithelial neoplasia (CIN) lesions. For a diagnostic algorithm, 91% SE and 90% SP are achieved for discriminating normal tissue, HPV infected tissue, and low-grade CIN lesions from high-grade CIN lesions. The results demonstrate that the quantitative colposcopic imaging system could provide objective screening and diagnostic information for early detection of cervical cancer.

  18. Distortion correction and cross-talk compensation algorithm for use with an imaging spectrometer based spatially resolved diffuse reflectance system

    NASA Astrophysics Data System (ADS)

    Cappon, Derek J.; Farrell, Thomas J.; Fang, Qiyin; Hayward, Joseph E.

    2016-12-01

    Optical spectroscopy of human tissue has been widely applied within the field of biomedical optics to allow rapid, in vivo characterization and analysis of the tissue. When designing an instrument of this type, an imaging spectrometer is often employed to allow for simultaneous analysis of distinct signals. This is especially important when performing spatially resolved diffuse reflectance spectroscopy. In this article, an algorithm is presented that allows for the automated processing of 2-dimensional images acquired from an imaging spectrometer. The algorithm automatically defines distinct spectrometer tracks and adaptively compensates for distortion introduced by optical components in the imaging chain. Crosstalk resulting from the overlap of adjacent spectrometer tracks in the image is detected and subtracted from each signal. The algorithm's performance is demonstrated in the processing of spatially resolved diffuse reflectance spectra recovered from an Intralipid and ink liquid phantom and is shown to increase the range of wavelengths over which usable data can be recovered.

  19. Surgical motion characterization in simulated needle insertion procedures

    NASA Astrophysics Data System (ADS)

    Holden, Matthew S.; Ungi, Tamas; Sargent, Derek; McGraw, Robert C.; Fichtinger, Gabor

    2012-02-01

    PURPOSE: Evaluation of surgical performance in image-guided needle insertions is of emerging interest, to both promote patient safety and improve the efficiency and effectiveness of training. The purpose of this study was to determine if a Markov model-based algorithm can more accurately segment a needle-based surgical procedure into its five constituent tasks than a simple threshold-based algorithm. METHODS: Simulated needle trajectories were generated with known ground-truth segmentation by a synthetic procedural data generator, with random noise added to each degree of freedom of motion. The respective learning algorithms were trained, and then tested on different procedures to determine task segmentation accuracy. In the threshold-based algorithm, a change in tasks was detected when the needle crossed a position/velocity threshold. In the Markov model-based algorithm, task segmentation was performed by identifying the sequence of Markov models most likely to have produced the series of observations. RESULTS: For amplitudes of translational noise greater than 0.01 mm, the Markov model-based algorithm was significantly more accurate in task segmentation than the threshold-based algorithm (82.3% vs. 49.9%, p<0.001 for amplitude 10.0 mm). For amplitudes less than 0.01 mm, the two algorithms produced results that were not significantly different. CONCLUSION: Task segmentation of simulated needle insertion procedures was improved by using a Markov model-based algorithm as opposed to a threshold-based algorithm for procedures involving translational noise.
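
    As an illustration of the contrast being tested, the sketch below decodes a synthetic 1-D motion signal with a left-right hidden Markov model via the Viterbi algorithm and compares it against a memoryless nearest-level classifier. The three-task setup, Gaussian emissions, and all parameters are invented for the example (the paper uses five tasks and full needle trajectories).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true = np.repeat([0, 1, 2], 100)          # three synthetic tasks in sequence
    means = np.array([0.0, 5.0, 1.0])         # emission mean per task
    obs = means[true] + rng.normal(0.0, 1.0, true.size)

    # Memoryless baseline: label each sample by its nearest emission level.
    nearest = np.abs(obs[:, None] - means[None, :]).argmin(axis=1)

    # Viterbi decoding under a left-right HMM with unit-variance Gaussians.
    logA = np.log(np.array([[0.99, 0.01, 0.00],
                            [0.00, 0.99, 0.01],
                            [0.00, 0.00, 1.00]]) + 1e-300)
    logB = -0.5 * (obs[:, None] - means[None, :]) ** 2   # log-likelihood + const
    delta = logB[0] + np.log(np.array([1.0, 1e-300, 1e-300]))
    psi = np.zeros((obs.size, 3), dtype=int)
    for t in range(1, obs.size):
        cand = delta[:, None] + logA          # cand[i, j]: best score via i -> j
        psi[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + logB[t]
    path = [int(delta.argmax())]
    for t in range(obs.size - 1, 0, -1):      # backtrack the most likely sequence
        path.append(psi[t][path[-1]])
    hmm_seg = np.array(path[::-1])
    print("HMM accuracy:", (hmm_seg == true).mean(),
          "nearest-level:", (nearest == true).mean())
    ```

    Because tasks 0 and 2 have overlapping emission levels, the memoryless classifier confuses them sample by sample, while the left-right transition structure lets the Viterbi decoder recover the task sequence; this is the kind of advantage the study quantifies.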

  20. Privacy protection versus cluster detection in spatial epidemiology.

    PubMed

    Olson, Karen L; Grannis, Shaun J; Mandl, Kenneth D

    2006-11-01

    Patient data that include precise locations can reveal patients' identities, whereas data aggregated into administrative regions may preserve privacy and confidentiality. We investigated the effect of varying degrees of address precision (exact latitude and longitude vs. the center points of zip codes or census tracts) on the detection of spatial clusters of cases. We simulated disease outbreaks by adding supplementary spatially clustered emergency department visits to authentic hospital emergency department syndromic surveillance data. We identified clusters with a spatial scan statistic and evaluated detection rate and accuracy. More clusters were identified, and clusters were more accurately detected, when exact locations were used; that is, these clusters contained at least half of the simulated points and involved few additional emergency department visits. These results were especially apparent when the synthetic clustered points crossed administrative boundaries and fell into multiple zip codes or census tracts. The spatial cluster detection algorithm performed better when addresses were analyzed as exact locations than when they were analyzed as the center points of zip codes or census tracts, particularly when the clustered points crossed administrative boundaries. Use of precise addresses offers improved performance, but this practice must be weighed against privacy concerns in the establishment of public health data exchange policies.

  1. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems

    PubMed Central

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images in the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and the tip-tilt terms in the Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. The algorithm first performs a spatial 2-D cross-correlation of the misaligned images, reducing the offset to within 1 or 2 pixels and narrowing the search range for alignment. It then eliminates the need for subpixel fine alignment, achieving adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm in improving the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
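
    The coarse step of such a correction can be pictured with an FFT-based 2-D cross-correlation that recovers the integer-pixel offset between two frames. The sketch below is a generic illustration on synthetic, cyclically shifted images, not the paper's algorithm, whose subpixel residual is instead absorbed into tip-tilt terms of the out-of-focus OTF.

    ```python
    import numpy as np

    def integer_offset(a, b):
        """Return (dy, dx) that best aligns b to a, via 2-D cross-correlation."""
        F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
        cc = np.fft.ifft2(F).real
        dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
        # map wrap-around indices to signed shifts
        if dy > a.shape[0] // 2: dy -= a.shape[0]
        if dx > a.shape[1] // 2: dx -= a.shape[1]
        return int(dy), int(dx)

    img = np.random.default_rng(1).random((64, 64))
    shifted = np.roll(np.roll(img, 3, axis=0), -2, axis=1)
    print(integer_offset(shifted, img))   # -> (3, -2)
    ```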

  2. Non-Linear Cosmological Power Spectra in Real and Redshift Space

    NASA Technical Reports Server (NTRS)

    Taylor, A. N.; Hamilton, A. J. S.

    1996-01-01

    We present an expression for the non-linear evolution of the cosmological power spectrum based on Lagrangian trajectories. This is simplified using the Zel'dovich approximation to trace particle displacements, assuming Gaussian initial conditions. The model is found to exhibit the transfer of power from large to small scales expected in self-gravitating fields. Some exact solutions are found for power-law initial spectra. We have extended this analysis into redshift space and found a solution for the non-linear, anisotropic redshift-space power spectrum in the limit of plane-parallel redshift distortions. The quadrupole-to-monopole ratio is calculated for the case of power-law initial spectra. We find that the shape of this ratio depends on the shape of the initial spectrum, but when scaled to linear theory depends only weakly on the redshift-space distortion parameter, beta. The point of zero-crossing of the quadrupole, k_0, is found to obey a simple scaling relation, and we calculate this scale in the Zel'dovich approximation. This model is found to be in good agreement with a series of N-body simulations on scales down to the zero-crossing of the quadrupole, although the wavenumber at zero-crossing is underestimated. These results are applied to the quadrupole-to-monopole ratio found in the merged QDOT plus 1.2-Jy IRAS redshift survey. Using a likelihood technique we have estimated that the distortion parameter is constrained to be beta greater than 0.5 at the 95 percent level. Our results are fairly insensitive to the local primordial spectral slope, but the likelihood analysis suggests n = -2 in the translinear regime. The zero-crossing scale of the quadrupole is k_0 = 0.5 +/- 0.1 h Mpc^-1, and from this we infer that the amplitude of clustering is sigma_8 = 0.7 +/- 0.05. We suggest that the success of this model is due to non-linear redshift-space effects arising from infall onto caustics and is not dominated by virialized cluster cores. The latter should start to dominate on scales below the zero-crossing of the quadrupole, where our model breaks down.
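
    For context, the linear-theory benchmark the abstract scales against is the Kaiser plane-parallel result P_s(k, mu) = (1 + beta mu^2)^2 P_r(k), whose quadrupole-to-monopole ratio is scale independent. That standard relation (quoted here for orientation, not derived in the abstract) reads:

    ```latex
    \frac{P_2(k)}{P_0(k)} \;=\; \frac{\frac{4}{3}\beta + \frac{4}{7}\beta^{2}}{1 + \frac{2}{3}\beta + \frac{1}{5}\beta^{2}}
    ```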

  3. Zero-Time Renal Transplant Biopsies: A Comprehensive Review.

    PubMed

    Naesens, Maarten

    2016-07-01

    Zero-time kidney biopsies, obtained at time of transplantation, are performed in many transplant centers worldwide. Decisions on kidney discard, kidney allocation, and choice of peritransplant and posttransplant treatment are sometimes based on the histological information obtained from these biopsies. This comprehensive review evaluates the practical considerations of performing zero-time biopsies, the predictive performance of zero-time histology and composite histological scores, and the clinical utility of these biopsies. The predictive performance of individual histological lesions and of composite scores for posttransplant outcome is at best moderate. No single histological lesion or composite score is sufficiently robust to be included in algorithms for kidney discard. Dual kidney transplantation has been based on histological assessment of zero-time biopsies and improves outcome in individual patients, but the waitlist effects of this strategy remain obscure. Zero-time biopsies are valuable for clinical and translational research purposes, providing insight in risk factors for posttransplant events, and as baseline for comparison with posttransplant histology. The molecular phenotype of zero-time biopsies yields novel therapeutic targets for improvement of donor selection, peritransplant management and kidney preservation. It remains however highly unclear whether the molecular expression variation in zero-time biopsies could become a better predictor for posttransplant outcome than donor/recipient baseline demographic factors.

  4. Fidelity-Based Ant Colony Algorithm with Q-learning of Quantum System

    NASA Astrophysics Data System (ADS)

    Liao, Qin; Guo, Ying; Tu, Yifeng; Zhang, Hang

    2018-03-01

    Quantum ant colony algorithm (ACA) has potential applications in quantum information processing, such as solutions of the traveling salesman problem, the zero-one knapsack problem, the robot route planning problem, and so on. To shorten the search time of the ACA, we suggest the fidelity-based ant colony algorithm (FACA) for the control of quantum systems. Motivated by the structure of the Q-learning algorithm, we demonstrate the combination of a FACA with the Q-learning algorithm and suggest the design of a fidelity-based ant colony algorithm with Q-learning to improve the performance of the FACA in a spin-1/2 quantum system. The numerical simulation results show that the FACA with Q-learning can efficiently avoid becoming trapped in locally optimal policies and speeds up the convergence of the quantum system.

  5. Fidelity-Based Ant Colony Algorithm with Q-learning of Quantum System

    NASA Astrophysics Data System (ADS)

    Liao, Qin; Guo, Ying; Tu, Yifeng; Zhang, Hang

    2017-12-01

    Quantum ant colony algorithm (ACA) has potential applications in quantum information processing, such as solutions of the traveling salesman problem, the zero-one knapsack problem, the robot route planning problem, and so on. To shorten the search time of the ACA, we suggest the fidelity-based ant colony algorithm (FACA) for the control of quantum systems. Motivated by the structure of the Q-learning algorithm, we demonstrate the combination of a FACA with the Q-learning algorithm and suggest the design of a fidelity-based ant colony algorithm with Q-learning to improve the performance of the FACA in a spin-1/2 quantum system. The numerical simulation results show that the FACA with Q-learning can efficiently avoid becoming trapped in locally optimal policies and speeds up the convergence of the quantum system.

  6. Designing clutter rejection filters with complex coefficients for airborne pulsed Doppler weather radar

    NASA Technical Reports Server (NTRS)

    Jamora, Dennis A.

    1993-01-01

    Ground clutter interference is a major problem for airborne pulse Doppler radar operating at low altitudes in a look-down mode. With Doppler zero set at the aircraft ground speed, ground clutter rejection filtering is typically accomplished using a high-pass filter with real-valued coefficients and a stopband notch centered at zero Doppler. Clutter spectra from the NASA Wind Shear Flight Experiments of 1991-1992 show that the dominant clutter mode can be located away from zero Doppler, particularly at short ranges dominated by sidelobe returns. The use of digital notch filters with complex-valued coefficients, so that the stopband notch can be located at any Doppler frequency, is investigated. Several clutter mode tracking algorithms are considered to estimate the Doppler frequency location of the dominant clutter mode. From the examination of night data, when a dominant clutter mode away from zero Doppler is present, complex filtering is able to significantly increase clutter rejection over the use of a notch filter centered at zero Doppler.
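
    The idea of a notch with complex-valued coefficients can be sketched with a first-order filter whose zero sits on the unit circle at an arbitrary Doppler frequency. Everything below (sampling rate, clutter and target frequencies, pole radius) is an illustrative assumption rather than the paper's design.

    ```python
    import numpy as np
    from scipy.signal import lfilter

    def complex_notch(x, f0, fs, r=0.95):
        """Notch at f0 Hz: zero on the unit circle at f0, nearby pole at radius r."""
        w0 = 2 * np.pi * f0 / fs
        b = [1.0, -np.exp(1j * w0)]          # zero at e^{j w0} nulls f0
        a = [1.0, -r * np.exp(1j * w0)]      # pole narrows the notch
        return lfilter(b, a, x)

    fs = 1000.0
    t = np.arange(2048) / fs
    clutter = np.exp(2j * np.pi * 120.0 * t)           # dominant mode at +120 Hz
    target = 0.1 * np.exp(2j * np.pi * -300.0 * t)     # weak return at -300 Hz
    y = complex_notch(clutter + target, f0=120.0, fs=fs)
    print(np.abs(y[-512:]).mean())   # residual dominated by the target (~0.1)
    ```

    The complex coefficients make the frequency response asymmetric about zero Doppler, which is exactly what lets the notch track a clutter mode offset from zero.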

  7. Radiometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strickland, J. I.

    1985-07-02

    A radiometer of the switched type has an R.F. switch connecting a detector selectively either to an antenna whose temperature (in terms of noise energy) is to be determined, or to a reference temperature, i.e., a resistive termination. The detector output is passed through an amplifier whose gain is switched between positive and negative values (for example +1 and -1) synchronously with the R.F. switch. The output of the switched-gain amplifier is integrated to produce a rising voltage when the gain is positive and a falling one when it is negative. When it is positive, the detector is connected to the antenna. By means of a zero-crossing detector, a counter is started when this voltage crosses zero. After a fixed period, the R.F. switch and switched-gain amplifier are reversed by the counter to cause the voltage to fall in accordance with the temperature of the resistive termination. The zero-crossing detector and a counter measure the time interval until the voltage again crosses zero, such time interval being compared to the fixed period to provide a comparison of the unknown and reference temperatures independent of the gain of the detector, which is a valuable improvement over prior radiometers. Also, by measuring time rather than voltage, the arrangement facilitates providing a digital output more suitable for storage and transmission of the data than the analog output of prior radiometers. The instrument, which is relatively simple, rugged and compact, lends itself well to unattended use in monitoring the effect of rain storms on transmission in the 11.7 to 12.2 GHz band employed for satellite communication.
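
    The time-interval comparison at the heart of this scheme is easy to emulate: below, a sampled integrator ramp is generated, its zero crossings are located by sign change plus linear interpolation, and the ratio of the second interval to the fixed period recovers the ratio of the two simulated temperatures. Slopes, periods, and the sampling rate are invented for the sketch.

    ```python
    import numpy as np

    def zero_crossings(v, fs):
        """Times (s) where the sampled signal v crosses zero, by interpolation."""
        idx = np.where(np.diff(np.signbit(v)))[0]
        return (idx + v[idx] / (v[idx] - v[idx + 1])) / fs

    fs = 1e6
    T = 2e-3                 # fixed charging period after the first zero crossing
    s_ant, s_ref = 1.0, 0.4  # integrator slopes ~ antenna / reference temperature
    t = np.arange(int(10e-3 * fs)) / fs
    t1 = 1e-3                # ramp starts below zero, crosses at t1
    v = np.where(t < t1 + T, s_ant * (t - t1),
                 s_ant * T - s_ref * (t - t1 - T))
    zc = zero_crossings(v, fs)
    print((zc[1] - zc[0] - T) / T)   # ~2.5 = s_ant / s_ref, the temperature ratio
    ```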

  8. 2D Sub-Pixel Disparity Measurement Using QPEC / Medicis

    NASA Astrophysics Data System (ADS)

    Cournet, M.; Giros, A.; Dumas, L.; Delvit, J. M.; Greslou, D.; Languille, F.; Blanchet, G.; May, S.; Michel, J.

    2016-06-01

    In the frame of its earth observation missions, CNES created a library called QPEC, and one of its launchers, called Medicis. QPEC / Medicis is a sub-pixel two-dimensional stereo matching algorithm that works on an image pair. This tool is a block matching algorithm, which means that it is based on a local method. Moreover, it does not regularize the results found. It proposes several matching costs, such as the Zero-mean Normalised Cross-Correlation or statistical measures (the Mutual Information being one of them), and different match validation flags. QPEC / Medicis is able to compute a two-dimensional dense disparity map with subpixel precision. Hence, it is more versatile than the disparity estimation methods found in the computer vision literature, which often assume an epipolar geometry. CNES uses Medicis, among other applications, during the in-orbit image quality commissioning of earth observation satellites. For instance, the Pléiades-HR 1A & 1B and the Sentinel-2 geometric calibrations are based on this block matching algorithm. Over the years, it has become a common tool in ground segments for in-flight monitoring purposes. For these two kinds of applications, the two-dimensional search and the local sub-pixel measure without regularization can be essential. This tool is also used to automatically generate digital elevation models, a task for which it was not initially designed. This paper deals with the QPEC / Medicis algorithm. It also presents some of its CNES applications (in-orbit commissioning, in-flight monitoring, and digital elevation model generation). Medicis software is distributed outside CNES as well. This paper finally describes some of these external applications using Medicis, such as ground displacement measurement, or intra-oral scanning in the dental domain.
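
    Of the matching costs listed, the Zero-mean Normalised Cross-Correlation is simple enough to write out. The sketch below scores an 8x8 template against every window of a synthetic image by exhaustive search; it is a plain-NumPy illustration, not CNES code, and the image, template position, and sizes are arbitrary.

    ```python
    import numpy as np

    def zncc(a, b):
        """ZNCC between two equally sized blocks; 1.0 means a perfect match."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    rng = np.random.default_rng(2)
    left = rng.random((32, 32))
    patch = left[10:18, 12:20]                       # 8x8 template
    scores = [[zncc(patch, left[i:i + 8, j:j + 8])   # exhaustive block search
               for j in range(25)] for i in range(25)]
    print(np.unravel_index(np.argmax(scores), (25, 25)))   # -> (10, 12)
    ```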

  9. Automated bow shock and radiation belt edge identification methods and their application for Cluster, THEMIS/ARTEMIS and Van Allen Probes data

    NASA Astrophysics Data System (ADS)

    Facsko, Gabor; Sibeck, David; Balogh, Tamas; Kis, Arpad; Wesztergom, Viktor

    2017-04-01

    The bow shock and the outer rim of the outer radiation belt are detected automatically by our algorithm, developed as a part of the Boundary Layer Identification Code Cluster Active Archive project. The radiation belt positions are determined from energized electron measurements working properly onboard all Cluster spacecraft. For bow shock identification we use magnetometer data and, when available, ion plasma instrument data. In addition, electrostatic wave instrument electron density, spacecraft potential measurements and wake indicator auxiliary data are also used, so the events can be identified by all Cluster probes in a highly redundant way, as the magnetometer and these instruments are still operational on all spacecraft. The capability and performance of the bow shock identification algorithm were tested using known bow shock crossings determined manually from January 29 to February 3, 2002. In this verification, 70% of the bow shock crossings were identified automatically. The method is highly flexible and can be applied to observations from various spacecraft. These tools have now been applied to Time History of Events and Macroscale Interactions during Substorms (THEMIS)/Acceleration, Reconnection, Turbulence, and Electrodynamics of the Moon's Interaction with the Sun (ARTEMIS) magnetic field, plasma and spacecraft potential observations to identify bow shock crossings; and to Van Allen Probes supra-thermal electron observations to identify the edges of the radiation belt. The outcomes of the algorithms are checked manually and the parameters used for bow shock identification are refined.

  10. On the zero-crossing of the three-gluon Green's function from lattice simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athenodorou, Andreas; Boucaud, Philippe; de Soto, Feliciano

    We report on some efforts recently made in order to gain a better understanding of some IR properties of the 3-point gluon Green's function by exploiting results from large-volume quenched lattice simulations. These lattice results have been obtained by using both the tree-level Symanzik and the standard Wilson action, with the aim of assessing the possible impact of effects presumably resulting from a particular choice for the discretization of the action. The main resulting feature is the existence of a negative logarithmic divergence at zero momentum, which pulls the 3-gluon form factors down at low momenta and, consequently, yields a zero-crossing at a given deep IR momentum. The results can be correctly explained by analyzing the relevant Dyson-Schwinger equations and appropriate truncation schemes.

  11. Impurity bound states in mesoscopic topological superconducting loops

    NASA Astrophysics Data System (ADS)

    Jin, Yan-Yan; Zha, Guo-Qiao; Zhou, Shi-Ping

    2018-06-01

    We study numerically the effect induced by magnetic impurities in topological s-wave superconducting loops with spin-orbit interaction, based on spin-generalized Bogoliubov-de Gennes equations. In the case of a single magnetic impurity, it is found that the midgap bound states can cross the Fermi level at an appropriate impurity strength, and the circulating spin current jumps at the crossing point. The evolution of the zero-energy mode can be effectively tuned by the site at which the single magnetic impurity is located. As for the effect of many magnetic impurities, two independent midway or edge impurities cannot lead to the overlap of zero modes. Multiple zero-energy modes can be effectively realized by embedding a single Josephson junction with impurity scattering into the system, and the spin current displays an oscillatory feature as the layer thickness increases.

  12. Tsunami Detection by High-Frequency Radar Beyond the Continental Shelf

    NASA Astrophysics Data System (ADS)

    Grilli, Stéphan T.; Grosdidier, Samuel; Guérin, Charles-Antoine

    2016-12-01

    Where coastal tsunami hazard is governed by near-field sources, such as submarine mass failures or meteo-tsunamis, tsunami propagation times may be too small for a detection based on deep or shallow water buoys. To offer sufficient warning time, it has been proposed to implement early warning systems relying on high-frequency (HF) radar remote sensing, which can provide a dense spatial coverage as far offshore as 200-300 km (e.g., for Diginext Ltd.'s Stradivarius radar). Shore-based HF radars have been used to measure nearshore currents (e.g., the CODAR SeaSonde® system; http://www.codar.com/) by inverting the Doppler spectral shifts that these currents cause on ocean waves at the Bragg frequency. Both modeling work and an analysis of radar data following the Tohoku 2011 tsunami have shown that, given proper detection algorithms, such radars could be used to detect tsunami-induced currents and issue a warning. However, long wave physics is such that tsunami currents will only rise above noise and background currents (i.e., be at least 10-15 cm/s), and become detectable, in fairly shallow water, which would limit the direct detection of tsunami currents by HF radar to nearshore areas, unless there is a very wide shallow shelf. Here, we use numerical simulations of both HF radar remote sensing and tsunami propagation to develop and validate a new type of tsunami detection algorithm that does not have these limitations. To simulate the radar backscattered signal, we develop a numerical model including second-order effects in both wind waves and radar signal, with the wave angular frequency being modulated by a time-varying surface current combining tsunami and background currents. In each "radar cell", the model represents wind waves with random phases and amplitudes extracted from a specified (wind speed dependent) energy density frequency spectrum, and includes effects of random environmental noise and background current; phases, noise, and background current are extracted from independent Gaussian distributions. The principle of the new algorithm is to compute correlations of HF radar signals measured/simulated in many pairs of distant "cells" located along the same tsunami wave ray, shifted in time by the tsunami propagation time between these cell locations; both rays and travel times are easily obtained as a function of long wave phase speed and local bathymetry. It is expected that, in the presence of a tsunami current, correlations computed as a function of range and an additional time lag will show a narrow elevated peak near the zero time lag, whereas no pattern in the correlation will be observed in the absence of a tsunami current; this is because surface waves and background current are uncorrelated between pairs of cells, particularly when time-shifted by the long-wave propagation time. This change in correlation pattern can be used as a threshold for tsunami detection. To validate the algorithm, we first identify key features of tsunami propagation in the Western Mediterranean Basin, where Stradivarius is deployed, by way of direct numerical simulations with a long wave model. Then, for the purpose of validating the algorithm, we only model HF radar detection for idealized tsunami wave trains and bathymetry, but verify that such idealized case studies capture well the salient tsunami wave physics.
    Results show that, in the presence of strong background currents, the proposed method still allows detecting a tsunami with currents as low as 0.05 m/s, whereas a standard direct inversion based on radar signal Doppler spectra fails to reproduce tsunami currents weaker than 0.15-0.2 m/s. Hence, the new algorithm allows detecting tsunami arrival in deeper water, beyond the shelf and further away from the coast, thus providing an earlier warning. Because the standard detection of tsunami currents works well at short range, we envision that, in a field situation, the new algorithm could complement the standard approach of direct near-field detection by providing a warning that a tsunami is approaching, at larger range and in greater depth. This warning would then be confirmed at shorter range by a direct inversion of tsunami currents, from which the magnitude of the tsunami would also be estimated. Hence, both algorithms would be complementary. In future work, the algorithm will be applied to actual tsunami case studies performed using a state-of-the-art long wave model, such as briefly presented here for the Mediterranean Basin.
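
    The correlation statistic at the core of the algorithm can be mimicked in a few lines: two synthetic cell records share a common signal delayed by a known travel time dt, and the correlation, computed after applying that shift plus a scan over an extra lag tau, peaks near tau = 0 only when the shared tsunami-like component is present. All signal parameters here are invented for the illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, dt = 4096, 300                        # samples; travel time between cells
    common = rng.normal(size=n + dt)         # tsunami-like modulation on the ray
    cell_u = common[dt:] + rng.normal(scale=2.0, size=n)   # upstream cell
    cell_d = common[:n] + rng.normal(scale=2.0, size=n)    # downstream, dt later

    def shifted_corr(u, d, dt, tau):
        """Correlate u[i] with d[i + dt + tau] over the overlapping span."""
        s = dt + tau
        return np.corrcoef(u[:len(u) - s], d[s:])[0, 1]

    taus = np.arange(-50, 51)
    corr = [shifted_corr(cell_u, cell_d, dt, int(tau)) for tau in taus]
    print(int(taus[np.argmax(corr)]))   # peak near 0 flags the tsunami component
    ```

    Without the common component, the two records are independent and the correlation stays at the noise floor for every lag, which is what makes the peak usable as a detection threshold.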

  13. Automated detection and cataloging of global explosive volcanism using the International Monitoring System infrasound network

    NASA Astrophysics Data System (ADS)

    Matoza, Robin S.; Green, David N.; Le Pichon, Alexis; Shearer, Peter M.; Fee, David; Mialle, Pierrick; Ceranna, Lars

    2017-04-01

    We experiment with a new method to search systematically through multiyear data from the International Monitoring System (IMS) infrasound network to identify explosive volcanic eruption signals originating anywhere on Earth. Detecting, quantifying, and cataloging the global occurrence of explosive volcanism helps toward several goals in Earth sciences and has direct applications in volcanic hazard mitigation. We combine infrasound signal association across multiple stations with source location using a brute-force, grid-search, cross-bearings approach. The algorithm corrects for a background prior rate of coherent unwanted infrasound signals (clutter) in a global grid, without needing to screen array processing detection lists from individual stations prior to association. We develop the algorithm using case studies of explosive eruptions: 2008 Kasatochi, Alaska; 2009 Sarychev Peak, Kurile Islands; and 2010 Eyjafjallajökull, Iceland. We apply the method to global IMS infrasound data from 2005-2010 to construct a preliminary acoustic catalog that emphasizes sustained explosive volcanic activity (long-duration signals or sequences of impulsive transients lasting hours to days). This work represents a step toward the goal of integrating IMS infrasound data products into global volcanic eruption early warning and notification systems. Additionally, a better understanding of volcanic signal detection and location with the IMS helps improve operational event detection, discrimination, and association capabilities.

  14. Estimation of TOA based MUSIC algorithm and cross correlation algorithm of appropriate interval

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Liu, Jun; Zhou, Yineng; Huang, Jiyan

    2017-03-01

    Localization of a mobile station (MS) has now gained considerable attention due to its wide applications in military, environmental, health, and commercial systems. Phase angle and encoded data of the MSK system model are two critical parameters in the time-of-arrival (TOA) localization technique; nevertheless, precise values of the phase angle and encoded data are generally not easy to obtain. To match the actual situation, we consider the case in which the phase angle and encoded data are unknown. In this paper, a novel TOA localization method, which combines the MUSIC algorithm and the cross-correlation algorithm over an appropriate interval, is proposed. Simulations show that the proposed method has better performance than either the MUSIC algorithm or the cross-correlation algorithm applied over the whole interval.
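
    For orientation, the cross-correlation half of such a TOA estimator reduces to locating the peak of the correlation between the received signal and a known template. The sketch below uses a generic baseband chip sequence (not the paper's MSK model) with an invented sampling rate and delay.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    fs = 1e6
    template = rng.choice([-1.0, 1.0], size=256)       # known transmitted chips
    delay = 73                                          # true TOA in samples
    rx = np.zeros(1024)
    rx[delay:delay + template.size] = template
    rx += rng.normal(scale=0.5, size=rx.size)           # channel noise

    corr = np.correlate(rx, template, mode="valid")     # slide template over rx
    toa_samples = int(np.argmax(corr))
    print(toa_samples, toa_samples / fs)                # -> 73, 7.3e-05 s
    ```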

  15. Rethinking the advantage of zero-HLA mismatches in unrelated living donor kidney transplantation: implications on kidney paired donation.

    PubMed

    Casey, Michael Jin; Wen, Xuerong; Rehman, Shehzad; Santos, Alfonso H; Andreoni, Kenneth A

    2015-04-01

    The OPTN/UNOS Kidney Paired Donation (KPD) Pilot Program allocates priority to zero-HLA mismatches. However, in unrelated living donor kidney transplants (LDKT), the same donor source used in KPD, no study has shown whether zero-HLA mismatches provide any advantage over >0 HLA mismatches. We hypothesize that zero-HLA mismatches among unrelated LDKT do not benefit graft survival. This retrospective SRTR database study analyzed LDKT recipients from 1987 to 2012. Among unrelated LDKT, subjects with zero-HLA mismatches were compared to a 1:1-5 matched (by donor age ±1 year and year of transplantation) control cohort with >0 HLA mismatches. The primary endpoint was death-censored graft survival. Among 32,654 unrelated LDKT recipients, 83 had zero-HLA mismatches and were matched to 407 controls with >0 HLA mismatches. Kaplan-Meier analyses of death-censored graft and patient survival showed no difference between the study and control cohorts. In multivariate marginal Cox models, zero-HLA mismatches showed no benefit in death-censored graft survival (HR = 1.46, 95% CI 0.78-2.73) or patient survival (HR = 1.43, 95% CI 0.68-3.01). Our data suggest that in unrelated LDKT, zero-HLA mismatches may not offer any survival advantage. Therefore, further study of zero-HLA mismatching is needed to validate its place in the OPTN/UNOS KPD Pilot Program allocation algorithm. © 2014 Steunstichting ESOT.

  16. Analysis of dead zone sources in a closed-loop fiber optic gyroscope.

    PubMed

    Chong, Kyoung-Ho; Choi, Woo-Seok; Chong, Kil-To

    2016-01-01

    Analysis of the dead zone is an area of intensive study in closed-loop fiber optic gyroscopes. In a dead zone, a gyroscope cannot detect any rotation and produces a zero bias. In this study, an analysis of dead zone sources is performed in simulation and experiments. In general, the problem is mainly due to electrical cross-coupling and phase modulation drift. Electrical cross-coupling is caused by interference between the modulation voltage and the photodetector. The cross-coupled signal produces a spurious gyro bias and leads to a dead zone if it is larger than the input rate. Phase modulation drift, another dead zone source, is due to electrode contamination, the piezoelectric effect of the LiNbO3 substrate, or organic fouling. This modulation drift lasts for a short or long period of time, like a lead-lag filter response, and produces gyro bias error, noise spikes, or a dead zone. For a more detailed analysis, the cross-coupling effect and modulation phase drift are modeled as filters and simulated in both the open-loop and closed-loop modes. The sources of the dead zone are clearly analyzed in the simulation and experimental results.

  17. A method of immediate detection of objects with a near-zero apparent motion in series of CCD-frames

    NASA Astrophysics Data System (ADS)

    Savanevych, V. E.; Khlamov, S. V.; Vavilova, I. B.; Briukhovetskyi, A. B.; Pohorelov, A. V.; Mkrtichian, D. E.; Kudak, V. I.; Pakuliak, L. K.; Dikov, E. N.; Melnik, R. G.; Vlasenko, V. P.; Reichart, D. E.

    2018-01-01

    The paper deals with a computational method for the detection of Solar System minor bodies (SSOs) whose inter-frame shifts in series of CCD-frames during the observation are commensurate with the errors in measuring their positions. These objects have velocities of apparent motion between CCD-frames not exceeding three rms errors (3σ) of the measurements of their positions. About 15% of objects have a near-zero apparent motion in CCD-frames, including objects beyond Jupiter's orbit as well as asteroids heading straight for the Earth. The proposed method for detecting an object's near-zero apparent motion in series of CCD-frames is based on the Fisher f-criterion instead of the traditional decision rules based on the maximum likelihood criterion. We analyzed the quality indicators of detection of near-zero apparent motion applying statistical and in situ modeling techniques, in terms of the conditional probability of true detection of objects with a near-zero apparent motion. The efficiency of the method, implemented as a plugin for the Collection Light Technology (CoLiTec) software for automated asteroid and comet detection, has been demonstrated. Among the objects discovered with this plugin was the sungrazing comet C/2012 S1 (ISON). Within 26 min of observation, the comet's image moved by three pixels across a series of four CCD-frames (the velocity of its apparent motion at the moment of discovery was 0.8 pixels per CCD-frame; the image size on the frame was about five pixels). Subsequent verification on observations of asteroids with a near-zero apparent motion, conducted with small telescopes, confirmed the efficiency of the method even in bad conditions (strong backlight from the full Moon). We therefore recommend applying the proposed method to series of observations with four or more frames.

  18. Focusing ISAR Images using Fast Adaptive Time-Frequency and 3D Motion Detection on Simulated and Experimental Radar Data

    DTIC Science & Technology

    2005-06-01

    Only fragments of this record survive extraction: part of an acronym list ("...Time Fourier Transform; WVD Wigner-Ville Distribution; GA Genetic Algorithm; PSO Particle Swarm Optimization; JEM Jet Engine Modulation; CPI ..."), an excerpt noting that "... of the Wigner-Ville Distribution (WVD), cross-terms appear in the time-frequency image. As shown in Figure 9, which is a WVD of range bin 31 of ...", and the caption of Figure 9, "Wigner-Ville Distribution of Unfocused Range Bin 31 (After [3] and [5])".

  19. A statistical model of false negative and false positive detection of phase singularities.

    PubMed

    Jacquemet, Vincent

    2017-10-01

    The complexity of cardiac fibrillation dynamics can be assessed by analyzing the distribution of phase singularities (PSs) observed using mapping systems. Interelectrode distance, however, limits the accuracy of PS detection. To investigate in a theoretical framework the PS false negative and false positive rates in relation to the characteristics of the mapping system and fibrillation dynamics, we propose a statistical model of phase maps with controllable number and locations of PSs. In this model, phase maps are generated from randomly distributed PSs with physiologically plausible directions of rotation. Noise and distortion of the phase are added. PSs are detected using topological charge contour integrals on regular grids of varying resolutions. Over 100 × 10^6 realizations of the random field process are used to estimate average false negative and false positive rates using a Monte-Carlo approach. The false detection rates are shown to depend on the average distance between neighboring PSs expressed in units of interelectrode distance, following approximately a power law with exponents in the range of 1.14 to 2 for false negatives and around 2.8 for false positives. In the presence of noise or distortion of phase, false detection rates at high resolution tend to a non-zero noise-dependent lower bound. This model provides an easy-to-implement tool for benchmarking PS detection algorithms over a broad range of configurations with multiple PSs.
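
    The topological-charge detector used here has a compact discrete form: wrapped phase differences are summed around each 2x2 plaquette of the grid, and a sum of ±2π marks a singularity. The sketch below applies it to a single synthetic PS; the grid size and PS location are arbitrary.

    ```python
    import numpy as np

    def wrap(d):
        """Wrap phase differences into (-pi, pi]."""
        return (d + np.pi) % (2 * np.pi) - np.pi

    def topological_charge(phase):
        """Charge of each 2x2 plaquette of a 2-D phase map (radians)."""
        d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # along the top edge
        d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # down the right edge
        d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # along the bottom edge
        d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # up the left edge
        return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

    y, x = np.mgrid[0:64, 0:64]
    phase = np.arctan2(y - 31.3, x - 40.7)            # one PS near (31.3, 40.7)
    charge = topological_charge(phase)
    print(np.argwhere(charge != 0))                   # -> plaquette (31, 40)
    ```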

  20. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms

    PubMed Central

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications, such as those in medical care and industrial processes. In medical care, this technique makes it possible to display important patient information graphically, which can supplement and help the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one of the ways of achieving real-time image processing. Edge detection is an early stage in most image processing methods for the extraction of features and object segments from a raw image. The Canny method, the Sobel and Prewitt filters, and the Roberts’ Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified further to run in a fully parallel manner; this has been achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2–100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms. PMID:28487831

  1. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms.

    PubMed

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications, such as those in medical care and industrial processes. In medical care, this technique makes it possible to display important patient information graphically, which can supplement and help the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one of the ways of achieving real-time image processing. Edge detection is an early stage in most image processing methods for the extraction of features and object segments from a raw image. The Canny method, the Sobel and Prewitt filters, and the Roberts' Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified further to run in a fully parallel manner; this has been achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2-100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms.
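
    For reference, the serial form of one of the kernels these papers accelerate, the Sobel gradient operator with a global threshold, fits in a few NumPy/SciPy lines. The CUDA implementations in the papers differ; the threshold and test image below are arbitrary.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T

    def sobel_edges(img, thresh=0.5):
        """Gradient magnitude via Sobel kernels, then a simple global threshold."""
        gx = convolve(img, kx)
        gy = convolve(img, ky)
        mag = np.hypot(gx, gy)
        return mag > thresh * mag.max()

    img = np.zeros((64, 64))
    img[:, 32:] = 1.0                              # vertical step edge
    print(np.argwhere(sobel_edges(img))[:3])       # columns 31-32 are flagged
    ```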

  2. Chaotic CDMA watermarking algorithm for digital image in FRFT domain

    NASA Astrophysics Data System (ADS)

    Liu, Weizhong; Yang, Wentao; Feng, Zhuoming; Zou, Xuecheng

    2007-11-01

    A digital image watermarking algorithm based on the fractional Fourier transform (FRFT) domain is presented, utilizing a chaotic CDMA technique. As a popular and typical transmission technique, CDMA has many advantages, such as privacy, anti-jamming capability, and low power spectral density, which can provide robustness against image distortions and malicious attempts to remove or tamper with the watermark. A super-hybrid chaotic map, with good auto-correlation and cross-correlation characteristics, is adopted to produce many quasi-orthogonal codes (QOCs) that can replace the periodic PN-code used in traditional CDMA systems. The watermark data is divided into segments, each corresponding to a different chaotic QOC; these are modulated into CDMA watermark data and embedded into the low-frequency amplitude coefficients of the FRFT domain of the cover image. During watermark detection, each chaotic QOC extracts its corresponding watermark segment by calculating the correlation coefficient between the QOC and the watermarked data of the detected image. The CDMA technique not only enhances the robustness of the watermark but also compresses the data of the modulated watermark. Experimental results show that the watermarking algorithm performs well in three respects: imperceptibility, robustness against attacks, and security.

  3. High density DNA microarrays: algorithms and biomedical applications.

    PubMed

    Liu, Wei-Min

    2004-08-01

    DNA microarrays are devices capable of detecting the identity and abundance of numerous DNA or RNA segments in samples. They are used for analyzing gene expression, identifying genetic markers, and detecting mutations on a genomic scale. The fundamental chemical mechanism of DNA microarrays is the hybridization between probes and targets due to the hydrogen bonds of nucleotide base pairing. Since cross-hybridization is inevitable, and probes or targets may form undesirable secondary or tertiary structures, the microarray data contain noise and depend on experimental conditions. It is crucial to apply proper statistical algorithms to obtain useful signals from noisy data. After obtaining the signals of a large number of probes, we need to derive biomedical information such as the existence of a transcript in a cell, the difference in expression levels of a gene in multiple samples, and the type of a genetic marker. Furthermore, after the expression levels of thousands of genes or the genotypes of thousands of single nucleotide polymorphisms are determined, it is usually important to find a small number of genes or markers that are related to a disease, individual reactions to drugs, or other phenotypes. All these applications need careful data analyses and reliable algorithms.

  4. Automatic detection of atrial fibrillation in cardiac vibration signals.

    PubMed

    Brueser, C; Diesel, J; Zink, M D H; Winter, S; Schauerte, P; Leonhardt, S

    2013-01-01

    We present a study on the feasibility of the automatic detection of atrial fibrillation (AF) from cardiac vibration signals (ballistocardiograms/BCGs) recorded by unobtrusive bed-mounted sensors. The proposed system is intended as a screening and monitoring tool in home-healthcare applications and not as a replacement for ECG-based methods used in clinical environments. Based on BCG data recorded in a study with 10 AF patients, we evaluate and rank seven popular machine learning algorithms (naive Bayes, linear and quadratic discriminant analysis, support vector machines, random forests, as well as bagged and boosted trees) for their performance in separating 30 s long BCG epochs into one of three classes: sinus rhythm, atrial fibrillation, and artifact. For each algorithm, feature subsets of a set of statistical time-frequency-domain and time-domain features were selected based on the mutual information between features and class labels as well as first- and second-order interactions among features. The classifiers were evaluated on a set of 856 epochs by means of 10-fold cross-validation. The best algorithm (random forests) achieved a Matthews correlation coefficient, mean sensitivity, and mean specificity of 0.921, 0.938, and 0.982, respectively.
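
    The evaluation protocol described (three classes, 856 epochs, 10-fold cross-validation, Matthews correlation coefficient) translates directly into a few lines of scikit-learn. The features below are synthetic stand-ins, since the study's BCG data are not public.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import matthews_corrcoef

    rng = np.random.default_rng(5)
    X = rng.normal(size=(856, 12))            # 856 epochs, 12 synthetic features
    y = rng.integers(0, 3, size=856)          # sinus rhythm / AF / artifact
    X[y == 1, 0] += 2.0                       # make one class partly separable

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    pred = cross_val_predict(clf, X, y, cv=10)   # 10-fold cross-validation
    print(matthews_corrcoef(y, pred))
    ```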

  5. More about solar g modes

    NASA Astrophysics Data System (ADS)

    Fossat, E.; Schmider, F. X.

    2018-04-01

    Context. The detection of asymptotic solar g-mode parameters was the main goal of the GOLF instrument onboard the SOHO space observatory. This detection has recently been reported and has identified a rapid mean rotation of the solar core, with a one-week period, nearly four times faster than all the rest of the solar body, from the surface to the bottom of the radiative zone. Aims. We present here the detection of more g modes of higher degree, and a more precise estimation of all their parameters, which will have to be exploited as additional constraints in modeling the solar core. Methods. Having identified the period equidistance and the splitting of a large number of asymptotic g modes of degrees 1 and 2, we test a model of the frequencies of these modes by cross-correlating it with the power spectrum from which they were detected. It shows a high correlation peak at zero lag, indicating that the model, while hidden, is present in the real spectrum. The model parameters can then be adjusted to optimize the position (at exactly zero lag) and the height of this correlation peak. The same method is then extended to the search for modes of degrees 3 and 4, which were not detected in the previous analysis. Results. g-mode parameters are optimally measured in similar frequency bandwidths, ranging from 7 to 8 μHz at one end and all close to 30 μHz at the other end, for degrees 1 to 4. They include the four asymptotic period equidistances, the slight departure from equidistance of the detected periods for l = 1 and l = 2, the measured amplitudes, as functions of the degree and the tesseral order, and the splittings that will possibly constrain the estimated sharpness of the transition between the one-week mean rotation of the core and the almost four-week rotation of the radiative envelope. The g-mode periods themselves are crucial inputs for the helioseismic investigation of the solar core structure.

  6. Effect of thermal modification on rheological properties of polyethylene blends

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siriprumpoonthum, Monchai; Nobukawa, Shogo; Yamaguchi, Masayuki, E-mail: m-yama@jaist.ac.jp

    2014-03-15

    We examined the effects of thermal modification under a flow field on the rheological properties of linear low-density polyethylene (LLDPE) with high molecular weight, low-density polyethylene (LDPE), and their blends, without thermal stabilizer. Although structural changes during processing are not detected by size exclusion chromatography or nuclear magnetic resonance spectroscopy, the linear viscoelastic properties changed greatly, especially for the LLDPE. A cross-linking reaction took place, leading, presumably, to star-shaped long-chain branches. Consequently, the modified LLDPE, having high zero-shear viscosity, became a thermorheologically complex melt. Moreover, it should be noted that the drawdown force, defined as the uniaxial elongational force at a constant draw ratio, was significantly enhanced for the blends. Enhancement of the elongational viscosity was also detected. The drawdown force and elongational viscosity are more pronounced for the thermally modified blend than for the blend of thermally modified pure components. Intermolecular cross-linking reactions between LDPE and LLDPE, yielding polymers with more than two branch points per chain, result in marked strain-hardening in the elongational viscosity behavior even at small strain. The recovery curve of the oscillatory modulus after shear modification is further evidence of a branched structure.

  7. Mitigating fluorescence spectral overlap in wide-field endoscopic imaging

    PubMed Central

    Hou, Vivian; Nelson, Leonard Y.; Seibel, Eric J.

    2013-01-01

    The number of molecular species suitable for multispectral fluorescence imaging is limited due to the overlap of the emission spectra of indicator fluorophores, e.g., dyes and nanoparticles. To remove fluorophore emission cross-talk in wide-field multispectral fluorescence molecular imaging, we evaluate three different solutions: (1) image stitching, (2) concurrent imaging with a cross-talk ratio subtraction algorithm, and (3) frame-sequential imaging. A phantom with fluorophore emission cross-talk is fabricated, and a 1.2-mm ultrathin scanning fiber endoscope (SFE) is used to test and compare these approaches. Results show that fluorophore emission cross-talk can be successfully avoided or significantly reduced. In the near term, the concurrent imaging method of wide-field multispectral fluorescence SFE is viable for early-stage cancer detection and localization in vivo. Furthermore, a means to enhance the exogenous fluorescence target-to-background ratio by reducing the tissue autofluorescence background is demonstrated. PMID:23966226

  8. Features of Cross-Correlation Analysis in a Data-Driven Approach for Structural Damage Assessment

    PubMed Central

    Camacho Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis

    2018-01-01

    This work discusses the advantage of using cross-correlation analysis in a data-driven approach based on principal component analysis (PCA) and piezodiagnostics to obtain successful diagnosis of events in structural health monitoring (SHM). In this sense, the identification of noisy data and outliers, as well as the management of data cleansing stages, can be facilitated through the implementation of a preprocessing stage based on cross-correlation functions. Additionally, this work demonstrates an improvement in damage detection when cross-correlation is included as part of the whole damage assessment approach. The proposed methodology is validated by processing data measurements from piezoelectric devices (PZTs), which are used in a piezodiagnostics approach based on PCA and baseline modeling. Thus, the influence of the cross-correlation analysis used in the preprocessing stage is evaluated for damage detection by means of statistical plots and self-organizing maps. Three laboratory specimens were used as test structures in order to demonstrate the validity of the methodology: (i) a carbon steel pipe section with leak and added-mass damage types, (ii) an aircraft wing specimen, and (iii) a blade of a commercial aircraft turbine, where damage is introduced as added mass. As the main concluding remark, the suitability of cross-correlation features combined with a PCA-based piezodiagnostic approach for achieving a more robust damage assessment algorithm is verified for SHM tasks. PMID:29762505

  9. Features of Cross-Correlation Analysis in a Data-Driven Approach for Structural Damage Assessment.

    PubMed

    Camacho Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis; Quiroga, Jabid

    2018-05-15

    This work discusses the advantage of using cross-correlation analysis in a data-driven approach based on principal component analysis (PCA) and piezodiagnostics to obtain successful diagnosis of events in structural health monitoring (SHM). In this sense, the identification of noisy data and outliers, as well as the management of data cleansing stages, can be facilitated through the implementation of a preprocessing stage based on cross-correlation functions. Additionally, this work demonstrates an improvement in damage detection when cross-correlation is included as part of the whole damage assessment approach. The proposed methodology is validated by processing data measurements from piezoelectric devices (PZTs), which are used in a piezodiagnostics approach based on PCA and baseline modeling. Thus, the influence of the cross-correlation analysis used in the preprocessing stage is evaluated for damage detection by means of statistical plots and self-organizing maps. Three laboratory specimens were used as test structures in order to demonstrate the validity of the methodology: (i) a carbon steel pipe section with leak and added-mass damage types, (ii) an aircraft wing specimen, and (iii) a blade of a commercial aircraft turbine, where damage is introduced as added mass. As the main concluding remark, the suitability of cross-correlation features combined with a PCA-based piezodiagnostic approach for achieving a more robust damage assessment algorithm is verified for SHM tasks.
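
    A minimal end-to-end version of this chain, cross-correlation features, a PCA baseline fitted on healthy records, and a squared-prediction-error (Q) statistic for detection, is sketched below on synthetic signals. The feature length, number of components, and 3-sigma threshold are assumptions of the sketch, not the papers' settings.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(6)

    def xcorr_features(rec, ref, nlags=32):
        """Central lags of the normalised cross-correlation of rec with ref."""
        c = np.correlate(rec - rec.mean(), ref - ref.mean(), mode="full")
        mid = c.size // 2
        return c[mid - nlags: mid + nlags + 1] / (rec.std() * ref.std() * rec.size)

    ref = rng.normal(size=1024)                    # actuation signal
    healthy = [xcorr_features(np.roll(ref, 5) + 0.1 * rng.normal(size=1024), ref)
               for _ in range(40)]
    damaged = xcorr_features(0.6 * np.roll(ref, 9) + 0.1 * rng.normal(size=1024), ref)

    pca = PCA(n_components=3).fit(healthy)         # baseline model of healthy state
    def q_stat(v):
        r = v - pca.inverse_transform(pca.transform([v]))[0]
        return (r ** 2).sum()                      # residual off the baseline subspace

    base = [q_stat(h) for h in healthy]
    print(q_stat(damaged) > np.mean(base) + 3 * np.std(base))   # -> True
    ```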

  10. Search for Dark Matter Interactions using Ionization Yield in Liquid Xenon

    NASA Astrophysics Data System (ADS)

    Uvarov, Sergey

    Cosmological observations overwhelmingly support the existence of dark matter, which constitutes 87% of the universe's total mass. Weakly Interacting Massive Particles (WIMPs) are a prime candidate for dark matter, and the Large Underground Xenon (LUX) experiment aims at direct detection of WIMP-nucleon interactions. The LUX detector is a dual-phase xenon time-projection chamber housed 4,850 feet underground at the Sanford Underground Research Facility in Lead, South Dakota. We present the ionization-only analysis of the LUX 2013 WIMP search data. In the 1.04 × 10^4 kg-day exposure, thirty events were observed, compared with 24.8 expected from radioactive backgrounds. We employ a cut-and-count method to set a one-sided 90% C.L. upper limit on spin-independent WIMP-nucleon cross-sections. A zero charge yield for nuclear recoils below 0.7 keV is assumed in the upper limit calculation. This ionization-only analysis excludes an unexplored region of WIMP-nucleon cross-section parameter space for low-mass WIMPs, achieving a 1.56 × 10^-43 cm^2 WIMP-nucleon cross-section exclusion for a 5.1 GeV/c^2 WIMP.

  11. Observation of enhanced zero-degree binary encounter electron production with decreasing charge-state q in 30 MeV O^{q+} + O_2 collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zouros, T.J.M.; Wong, K.L.; Hidmi, H.I.

    We have measured binary encounter electron (BEe) production in collisions of 30 MeV O^{q+} projectiles (q = 4-8) with O_2 targets. The measured double differential BEe cross-sections are found to increase with decreasing charge state q, in agreement with similar previously reported zero-degree investigations for H_2 and He targets. However, measurements for the same system at 25° show the opposite trend: BEe cross-sections decrease slightly with decreasing charge state.

  12. Zero-crossing detector with sub-microsecond jitter and crosstalk

    NASA Technical Reports Server (NTRS)

    Dick, G. John; Kuhnle, Paul F.; Sydnor, Richard L.

    1990-01-01

    A zero-crossing detector (ZCD) was built and tested with a new circuit design which gives reduced time jitter compared to previous designs. With the new design, time jitter is reduced for the first time to a value which approaches that due to noise in the input amplifying stage. Additionally, with fiber-optic transmission of the output signal, crosstalk between units has been eliminated. The measured values are in good agreement with circuit noise calculations and approximately ten times lower than those for the ZCDs presently installed in the JPL test facility. Crosstalk between adjacent units was reduced even more than the jitter.

  13. Computing Determinants by Double-Crossing

    ERIC Educational Resources Information Center

    Leggett, Deanna; Perry, John; Torrence, Eve

    2011-01-01

    Dodgson's method of computing determinants is attractive, but fails if an interior entry of an intermediate matrix is zero. This paper reviews Dodgson's method and introduces a generalization, the double-crossing method, that provides a workaround for many interesting cases.
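
    For readers unfamiliar with Dodgson condensation, a bare-bones version is sketched below; the assert marks exactly the failure the paper addresses, a zero interior entry, which the double-crossing method is designed to work around (that workaround is not reproduced here).

    ```python
    import numpy as np

    def dodgson_det(A):
        """Dodgson condensation: repeatedly replace the matrix by its 2x2
        connected minors, dividing by the interior of the matrix from the
        previous step. Fails (assert) on a zero interior entry."""
        B = np.asarray(A, dtype=float)
        prev_interior = np.ones((B.shape[0] - 1, B.shape[1] - 1))
        while B.shape[0] > 1:
            minors = B[:-1, :-1] * B[1:, 1:] - B[:-1, 1:] * B[1:, :-1]
            assert np.all(prev_interior != 0), "zero interior entry: method fails"
            minors = minors / prev_interior
            prev_interior = B[1:-1, 1:-1]
            B = minors
        return B[0, 0]

    A = [[2, 1, 3], [1, 4, 1], [3, 1, 5]]
    print(dodgson_det(A), np.linalg.det(A))   # both 3.0 (up to rounding)
    ```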

  14. Effective interaction of electroweak-interacting dark matter with Higgs boson and its phenomenology

    NASA Astrophysics Data System (ADS)

    Hisano, Junji; Kobayashi, Daiki; Mori, Naoya; Senaha, Eibun

    2015-03-01

    We study the phenomenology of electroweak-interacting fermionic dark matter (DM) with a mass of O(100) GeV. Constructing the effective Lagrangian that describes the interactions between the Higgs boson and the SU(2)_L isospin multiplet fermion, we evaluate the electric dipole moment (EDM) of the electron, the signal strength of Higgs boson decay to two photons, and the spin-independent elastic-scattering cross section with the proton. As representative cases, we consider SU(2)_L triplet fermions with zero/nonzero hypercharges and an SU(2)_L doublet fermion. It is found that the electron EDM gives stringent constraints on those model parameter spaces. In the cases of the triplet fermion with zero hypercharge and the doublet fermion, the Higgs signal strength does not deviate from the standard model prediction by more than a few % once the current DM direct detection constraint is taken into account, even if the CP violation is suppressed. On the contrary, an O(10-20)% deviation may occur in the case of the triplet fermion with nonzero hypercharge. Our representative scenarios may be tested by future experiments.

  15. Path-Integral Monte Carlo Determination of the Fourth-Order Virial Coefficient for a Unitary Two-Component Fermi Gas with Zero-Range Interactions

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2016-06-01

    The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b_4 of such a strongly interacting Fermi gas using a customized ab initio path-integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b_4, our b_4 agrees within error bars with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly antisymmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions.

  16. Small massless excitations against a nontrivial background

    NASA Astrophysics Data System (ADS)

    Khariton, N. G.; Svetovoy, V. B.

    1994-03-01

    We propose a systematic approach for finding bosonic zero modes of nontrivial classical solutions in a gauge theory. The method allows us to find all the modes connected with the broken space-time and gauge symmetries. The ground state is supposed to be dependent on some space coordinates y_alpha and independent of the rest of the coordinates x_i. The main problem which is solved is how to construct the zero modes corresponding to the broken x_i-y_alpha rotations in vacuum and which boundary conditions specify them. It is found that the rotational modes are typically singular at the origin or at infinity, but their energy remains finite. They behave as massless vector fields in x space. We analyze local and global symmetries affecting the zero modes. An algorithm for constructing the zero mode excitations is formulated. The main results are illustrated in the Abelian Higgs model with the string background.

  17. Experimental evaluation of the certification-trail method

    NASA Technical Reports Server (NTRS)

    Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.; Itoh, Mamoru; Smith, Warren W.; Kay, Jonathan S.

    1993-01-01

    Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. A comprehensive attempt to assess experimentally the performance and overall value of the method is reported. The method is applied to algorithms for the following problems: Huffman tree, shortest path, minimum spanning tree, sorting, and convex hull. Our results reveal many cases in which an approach using certification trails allows for significantly faster overall program execution time than a basic time-redundancy approach. Algorithms for the answer-validation problem for abstract data types were also examined. This kind of problem provides a basis for applying the certification-trail method to wide classes of algorithms. Answer-validation solutions for two types of priority queues were implemented and analyzed. In both cases, the algorithm which performs answer-validation is substantially faster than the original algorithm for computing the answer. Next, a probabilistic model and analysis enabling comparison between the certification-trail method and the time-redundancy approach are presented. The analysis reveals some substantial and sometimes surprising advantages for the certification-trail method. Finally, the work our group performed on the design and implementation of fault-injection testbeds for experimental analysis of the certification-trail technique is discussed. This work employs two distinct methodologies: software fault injection (modification of instruction, data, and stack segments of programs on a Sun SPARCstation ELC and on an IBM 386 PC) and hardware fault injection (control, address, and data lines of a Motorola MC68000-based target system pulsed at logical zero/one values). Our results indicate the viability of the certification-trail technique. It is also believed that the tools developed provide a solid base for additional exploration.
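
    As a hedged illustration of the certification-trail idea on one of the problem classes listed above (sorting; the function names are our own, not the paper's): the first execution emits the answer together with a trail, here the sorting permutation, and the second execution validates the answer in linear time instead of recomputing it.

    ```python
    def sort_with_trail(xs):
        """First execution: compute the answer plus a certification trail
        (here, the sorting permutation)."""
        perm = sorted(range(len(xs)), key=xs.__getitem__)
        return [xs[i] for i in perm], perm

    def validate(xs, result, perm):
        """Second execution: O(n) answer validation using the trail,
        instead of re-sorting from scratch."""
        n = len(xs)
        if len(result) != n or len(perm) != n:
            return False
        seen = [False] * n
        for p in perm:                     # perm must be a bijection on 0..n-1
            if not (0 <= p < n) or seen[p]:
                return False
            seen[p] = True
        for k in range(n):                 # result must match perm and be ordered
            if result[k] != xs[perm[k]]:
                return False
            if k > 0 and result[k - 1] > result[k]:
                return False
        return True

    data = [5, 3, 8, 1, 9, 2]
    answer, trail = sort_with_trail(data)
    assert validate(data, answer, trail)   # disagreement would signal a fault
    ```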

  18. DARIS (Deformation Analysis Using Recursive Interferometric Systems): A New Algorithm for Displacement Measurements Through SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Redavid, Antonio; Bovenga, Fabio

    2010-03-01

    In the present work we describe a new, alternative repeat-pass interferometry algorithm designed and developed with two aims: i) to increase robustness with respect to noise by increasing the number of differential interferograms, and consequently the information redundancy; ii) to guarantee high performance in the detection of non-linear deformation without the need to specify a particular kinematic model as input. The starting point is a previous paper [4] dedicated to the optimization of InSAR coregistration by finding an ad hoc path between the images which minimizes the expected total decorrelation, as in the SBAS-like approaches [3]. The main difference with respect to the PS-like algorithms [1], [2] is the use of couples of images which can potentially show high spatial coherence and which are neglected by standard PSI processing. The present work gives a detailed description of the algorithm's processing steps, as well as the results obtained by processing simulated InSAR data in order to evaluate the algorithm's performance. Moreover, the algorithm has also been applied to a real test case in Poland, to study the subsidence affecting the Wieliczka Salt Mine. A cross validation with respect to the SPINUA PSI-like algorithm [5] has been carried out by comparing the resultant displacement fields.
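
    The DARIS algorithm itself is not spelled out in the abstract, but aim i), redundancy from a network of interferograms, can be illustrated with the generic small-baseline inversion step: each interferogram observes the displacement difference between two acquisitions, and the whole network is inverted by least squares so that noise on any single pair is averaged down. The sketch below uses invented acquisitions, pairs, and noise levels.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical network: 5 acquisitions, 7 interferometric pairs (i, j), i < j
    pairs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
    true_disp = np.array([0.0, 0.3, 0.9, 1.2, 2.0])   # cm; d[0] is the reference

    # Each interferogram observes d[j] - d[i], plus phase noise
    obs = np.array([true_disp[j] - true_disp[i] for i, j in pairs])
    obs += rng.normal(0.0, 0.05, len(pairs))

    # Design matrix over the unknowns d[1..4] (d[0] fixed to zero)
    A = np.zeros((len(pairs), len(true_disp) - 1))
    for r, (i, j) in enumerate(pairs):
        A[r, j - 1] += 1.0
        if i > 0:
            A[r, i - 1] -= 1.0

    est, *_ = np.linalg.lstsq(A, obs, rcond=None)
    print("estimated:", np.round(est, 3))   # close to true_disp[1:]
    ```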

  19. Implementation of an F-statistic all-sky search for continuous gravitational waves in Virgo VSR1 data

    NASA Astrophysics Data System (ADS)

    Aasi, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Accadia, T.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Affeldt, C.; Agathos, M.; Aggarwal, N.; Aguiar, O. D.; Ain, A.; Ajith, P.; Alemic, A.; Allen, B.; Allocca, A.; Amariutei, D.; Andersen, M.; Anderson, R.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C.; Areeda, J.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Austin, L.; Aylott, B. E.; Babak, S.; Baker, P. T.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barbet, M.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Bauchrowitz, J.; Bauer, Th S.; Behnke, B.; Bejger, M.; Beker, M. G.; Belczynski, C.; Bell, A. S.; Bell, C.; Bergmann, G.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Beyersdorf, P. T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biscans, S.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bloemen, S.; Blom, M.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bond, C.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Borkowski, K.; Boschi, V.; Bose, Sukanta; Bosi, L.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brückner, F.; Buchman, S.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burman, R.; Buskulic, D.; Buy, C.; Cadonati, L.; Cagnoli, G.; Calderón Bustillo, J.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Castiglia, A.; Caudill, S.; Cavalier, F.; Cavalieri, R.; Celerier, C.; Cella, G.; Cepeda, C.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chao, S.; Charlton, P.; Chassande Mottin, E.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chu, Q.; Chua, S. S. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P. F.; Colla, A.; Collette, C.; Colombini, M.; Cominsky, L.; Conte, A.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corpuz, A.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coughlin, S.; Coulon, J. P.; Countryman, S.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Dal Canton, T.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daveloza, H.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Dayanga, T.; Debreczeni, G.; Degallaix, J.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Virgilio, A.; Donath, A.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorosh, O.; Dossa, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dwyer, S.; Eberle, T.; Edo, T.; Edwards, M.; Effler, A.; Eggenstein, H.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Endrőczi, G.; Essick, R.; Etzel, T.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fehrmann, H.; Fejer, M. M.; Feldbaum, D.; Feroz, F.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Fournier, J. 
D.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gair, J.; Gammaitoni, L.; Gaonkar, S.; Garufi, F.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, C.; Gleason, J.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gordon, N.; Gorodetsky, M. L.; Gossan, S.; Goßler, S.; Gouaty, R.; Gräf, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Groot, P.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hammer, D.; Hammond, G.; Hanke, M.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hart, M.; Hartman, M. T.; Haster, C. J.; Haughian, K.; Heidmann, A.; Heintze, M.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Heptonstall, A. W.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Hooper, S.; Hopkins, P.; Hosken, D. J.; Hough, J.; Howell, E. J.; Hu, Y.; Huerta, E.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh, M.; Huynh Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Iyer, B. R.; Izumi, K.; Jacobson, M.; James, E.; Jang, H.; Jaranowski, P.; Ji, Y.; Jiménez Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karlen, J.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Keiser, G. M.; Keitel, D.; Kelley, D. B.; Kells, W.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, C.; Kim, K.; Kim, N.; Kim, N. G.; Kim, Y. M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Koehlenbeck, S.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kremin, A.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, A.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Kwee, P.; Landry, M.; Lantz, B.; Larson, S.; Lasky, P. D.; Lawrie, C.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, J.; Leonardi, M.; Leong, J. R.; Le Roux, A.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B.; Lewis, J.; Li, T. G. F.; Libbrecht, K.; Libson, A.; Lin, A. C.; Littenberg, T. B.; Litvine, V.; Lockerbie, N. A.; Lockett, V.; Lodhia, D.; Loew, K.; Logue, J.; Lombardi, A. L.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Lubinski, M. J.; Lück, H.; Luijten, E.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macarthur, J.; Macdonald, E. P.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magana Sandoval, F.; Mageswaran, M.; Maglione, C.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Manca, G. M.; Mandel, I.; Mandic, V.; Mangano, V.; Mangini, N.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Martinelli, L.; Martynov, D.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; McLin, K.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Meinders, M.; Melatos, A.; Mendell, G.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyers, P.; Miao, H.; Michel, C.; Mikhailov, E. 
E.; Milano, L.; Milde, S.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Moesta, P.; Mohan, M.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morriss, S. R.; Mossavi, K.; Mours, B.; Lowry, C. M. Mow; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nagy, M. F.; Nanda Kumar, D.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nelemans, G.; Neri, I.; Neri, M.; Newton, G.; Nguyen, T.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Ochsner, E.; O'Dell, J.; Oelker, E.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oppermann, P.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Padilla, C.; Pai, A.; Palashov, O.; Palomba, C.; Pan, H.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Paris, H.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Penn, S.; Perreca, A.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pietka, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poeld, J.; Poggiani, R.; Poteomkin, A.; Powell, J.; Prasad, J.; Premachandra, S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Qin, J.; Quetschke, V.; Quintero, E.; Quiroga, G.; Quitzow James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Raja, S.; Rajalakshmi, G.; Rakhmanov, M.; Ramet, C.; Ramirez, K.; Rapagnani, P.; Raymond, V.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Reid, S.; Reitze, D. H.; Rhoades, E.; Ricci, F.; Riles, K.; Robertson, N. A.; Robinet, F.; Rocchi, A.; Rodruck, M.; Rolland, L.; Rollins, J. G.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sanders, J. R.; Sannibale, V.; Santiago Prieto, I.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R.; Scheuer, J.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siellez, K.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Singh, R.; Sintes, A. M.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M.; Smith, R. J. E.; Smith Lefebvre, N. D.; Son, E. J.; Sorazu, B.; Souradeep, T.; Sperandio, L.; Staley, A.; Stebbins, J.; Steinlechner, J.; Steinlechner, S.; Stephens, B. C.; Steplewski, S.; Stevenson, S.; Stone, R.; Stops, D.; Strain, K. A.; Straniero, N.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thirugnanasambandam, M. P.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Urbanek, K.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; vanden Brand, J. F. J.; VanDen Broeck, C.; vander Putten, S.; vander Sluys, M. V.; van Heijningen, J.; van Veggel, A. A.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. 
J.; Venkateswara, K.; Verkindt, D.; Verma, S. S.; Vetrano, F.; Viceré, A.; Finley, R. Vincent; Vinet, J. Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Vousden, W. D.; Vyachanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Wang, M.; Wang, X.; Ward, R. L.; Was, M.; Weaver, B.; Wei, L. W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Wiesner, K.; Wilkinson, C.; Williams, K.; Williams, L.; Williams, R.; Williams, T.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yancey, C. C.; Yang, H.; Yang, Z.; Yoshida, S.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J. P.; Zhang, Fan; Zhang, L.; Zhao, C.; Zhu, X. J.; Zucker, M. E.; Zuraw, S.; Zweizig, J.

    2014-08-01

    We present an implementation of the F-statistic to carry out the first search in data from the Virgo laser interferometric gravitational wave detector for periodic gravitational waves from a priori unknown, isolated rotating neutron stars. We searched a frequency f0 range from 100 Hz to 1 kHz and a frequency-dependent spindown f1 range from -1.6(f0/100 Hz) × 10^-9 Hz s^-1 to zero. A large part of this frequency-spindown space was unexplored by any of the all-sky searches published so far. Our method consisted of a coherent search over two-day periods using the F-statistic, followed by a search for coincidences among the candidates from the two-day segments. We have introduced a number of novel techniques and algorithms that allow the use of the fast Fourier transform (FFT) algorithm in the coherent part of the search, resulting in a fifty-fold speed-up in the computation of the F-statistic with respect to the algorithm used in the other pipelines. No significant gravitational wave signal was found. The sensitivity of the search was estimated by injecting signals into the data. In the most sensitive parts of the detector band, more than 90% of signals with dimensionless gravitational-wave amplitude greater than 5 × 10^-24 would have been detected.
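
    The actual F-statistic maximizes a matched filter over amplitude parameters and handles Doppler modulation, which a short example cannot reproduce; the sketch below only illustrates the FFT-based coherent stage, recovering a weak monochromatic signal buried in noise on a whole frequency grid at once (all values invented, and the coherent span is scaled down from two days to keep the array small).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    fs, T = 512.0, 64.0                  # hypothetical sample rate / coherent span
    t = np.arange(0.0, T, 1.0 / fs)
    f_sig, h0 = 123.40625, 0.05          # signal placed on an FFT bin; h0 << noise
    x = rng.normal(0.0, 1.0, t.size) + h0 * np.cos(2 * np.pi * f_sig * t)

    # Coherent stage: a single FFT evaluates the detection statistic on the
    # whole frequency grid at once (the kind of speed-up the abstract refers to)
    power = np.abs(np.fft.rfft(x)) ** 2 / t.size
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    print(f"loudest bin: {freqs[np.argmax(power)]:.5f} Hz")   # recovers f_sig
    ```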

  20. Highly sensitive surface-enhanced Raman scattering substrate made from superaligned carbon nanotubes.

    PubMed

    Sun, Yinghui; Liu, Kai; Miao, Jiao; Wang, Zheyao; Tian, Baozhong; Zhang, Lina; Li, Qunqing; Fan, Shoushan; Jiang, Kaili

    2010-05-12

    Surface-enhanced Raman scattering (SERS) has attracted wide attention because it can enhance a normally weak Raman signal by several orders of magnitude and facilitate the sensitive detection of molecules. Conventional SERS substrates are constructed by placing metal nanoparticles on a planar surface. Here we show that, if the planar surface is replaced by a unique nanoporous surface, the enhancement effect can be dramatically improved. The nanoporous surface can be easily fabricated in batches and at low cost by cross-stacking superaligned carbon nanotube films. The as-prepared transparent and freestanding SERS substrate is capable of detecting ambient trinitrotoluene vapor, showing much higher Raman enhancement than ordinary planar substrates because of the extremely large surface area and the unique zero-dimensional-at-one-dimensional nanostructure. These results not only provide a new approach to ultrasensitive SERS substrates, but are also helpful for improving the fundamental understanding of SERS phenomena.
